lagurus wrote:
I don't think you need to switch from low to high resolution. MMAL offers a splitter, which can split one video input stream into two output streams: one can then be used to write the full H.264 video, and the other (after being resized to 320x240) for image processing.

jbeale wrote:
Excellent writeup, thanks for doing this! The MMAL stuff seemed very opaque up till now, but your page looks very clear on first read-through.
Given that the ARMv6 is rather limited in speed compared with the GPU, I'm curious about the possibility of doing some simple processing on low-res video, maybe 320x240 at video rates, and then, once certain criteria are satisfied, switching to full-resolution H.264-encoded video which is just passed through into a file. Then, after a timeout, changing back to lower-resolution frames for processing again. Do you have a feeling for how quickly you can change video parameters on the fly and see a valid stream with the new settings? Can it happen within a single frame time or two, or would it need much longer for setup?
You MIGHT be able to do that with the resize component. The HW resizer can, IIRC, do colourspace conversion as well. But don't quote me on it.

wibble82 wrote:
Do you know if there's a converter component in MMAL to get into the RGB space? Documentation on it seems minimal at best!
That sounds great, so I just need to figure out how to implement it. I am comfortable with C and somewhat with C++, but not so much with GPU stuff. Is there a general introduction to MMAL programming online somewhere? I found this http://www.jvcref.com/files/PI/document ... ml#example but is there anything more?

lagurus wrote:
I don't think you need to switch from low to high resolution. MMAL offers a splitter, which can split one video input stream into two output streams: one can then be used to write the full H.264 video, and the other (after being resized to 320x240) for image processing.
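For what it's worth, the splitter pipeline lagurus describes might be wired up roughly like this. This is only an untested sketch, not a working implementation: it assumes the camera, H.264 encoder, and resizer components have already been created and configured elsewhere, error handling on the connection calls is omitted, and it will only build on a Pi with the /opt/vc userland headers installed.

```c
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"
#include "interface/mmal/util/mmal_connection.h"

/* Sketch: feed the camera's video port into a splitter, then send one
   splitter output to the H.264 encoder (full resolution) and another to
   the resizer (down to 320x240 for CPU-side analysis). */
static MMAL_STATUS_T setup_splitter(MMAL_COMPONENT_T *camera,
                                    MMAL_COMPONENT_T *encoder,
                                    MMAL_COMPONENT_T *resizer)
{
    MMAL_COMPONENT_T *splitter = NULL;
    MMAL_CONNECTION_T *cam_conn = NULL, *enc_conn = NULL, *rsz_conn = NULL;

    /* "vc.splitter" is the stock video splitter component */
    MMAL_STATUS_T status =
        mmal_component_create(MMAL_COMPONENT_DEFAULT_VIDEO_SPLITTER, &splitter);
    if (status != MMAL_SUCCESS)
        return status;

    /* camera output[1] is the video port in the usual raspicam setup */
    mmal_connection_create(&cam_conn, camera->output[1], splitter->input[0],
                           MMAL_CONNECTION_FLAG_TUNNELLING);
    mmal_connection_create(&enc_conn, splitter->output[0], encoder->input[0],
                           MMAL_CONNECTION_FLAG_TUNNELLING);
    mmal_connection_create(&rsz_conn, splitter->output[1], resizer->input[0],
                           MMAL_CONNECTION_FLAG_TUNNELLING);

    mmal_connection_enable(cam_conn);
    mmal_connection_enable(enc_conn);
    mmal_connection_enable(rsz_conn);
    return MMAL_SUCCESS;
}
```

In practice you'd also want to make sure each splitter output's format matches its input before enabling the connections; the raspivid source in the userland repo shows the full dance.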
That's the same as what is in the mmal.h from the userland sources, except nicely formatted and actually legible.

jbeale wrote:
I found this http://www.jvcref.com/files/PI/document ... ml#example but is there anything more?
Code:

pi@raspberrypi:~/picamdemo$ make
[ 25%] Building CXX object CMakeFiles/picamdemo.dir/picam.cpp.o
In file included from /opt/vc/include/interface/vcos/vcos_assert.h:149:0,
                 from /opt/vc/include/interface/vcos/vcos.h:114,
                 from /opt/vc/include/interface/vmcs_host/vc_dispmanx.h:33,
                 from /opt/vc/include/bcm_host.h:46,
                 from /home/pi/picamdemo/mmalincludes.h:9,
                 from /home/pi/picamdemo/camera.h:3,
                 from /home/pi/picamdemo/picam.cpp:3:
/opt/vc/include/interface/vcos/vcos_types.h:38:33: fatal error: vcos_platform_types.h: No such file or directory
compilation terminated.
make: *** [CMakeFiles/picamdemo.dir/picam.cpp.o] Error 1
make: *** [CMakeFiles/picamdemo.dir/all] Error 2
make: *** [all] Error 2
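If anyone else hits this: that error usually just means the platform-specific VCOS directory is missing from the include path, since vcos_platform_types.h lives under /opt/vc/include/interface/vcos/pthreads in the stock userland install. Something along these lines in CMakeLists.txt (paths assume the standard /opt/vc layout; untested against this particular project) normally cures it:

```cmake
# Assumes the stock Raspberry Pi userland install under /opt/vc
include_directories(
    /opt/vc/include
    /opt/vc/include/interface/vcos/pthreads
    /opt/vc/include/interface/vmcs_host/linux
)
```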
Looking forward to it (you aren't already using the GPU? Or you mean passing the data to OpenGL without using the CPU?)

wibble82 wrote:
Enjoy! Oh, and fyi, the first version of the GPU-accelerated API should be ready by Monday!
Hi,
I have your 16-image demo http://robotblogging.blogspot.com/2013/ ... ng-on.html running, which is really great. I'm eager to figure out how to do some more advanced processing, like the image cascade to calculate variance and estimate focus. I know what the code would look like with a bunch of nested for-loops in C on the CPU, but I feel like I need some remedial reading to figure out the GPU stuff.
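The nested-loop CPU version jbeale mentions is simple enough to sketch. This is a minimal example of my own (the function name and the 8-bit grayscale layout are assumptions, not from wibble82's demo); higher variance means more contrast, which is a crude but usable focus metric:

```c
#include <stddef.h>

/* Variance of an 8-bit grayscale frame, computed with plain nested loops.
 * Uses the identity Var(X) = E[X^2] - E[X]^2 so one pass suffices. */
double image_variance(const unsigned char *pixels, int width, int height)
{
    long long sum = 0, sum_sq = 0;
    long long n = (long long)width * height;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p = pixels[(size_t)y * width + x];
            sum += p;
            sum_sq += (long long)p * p;
        }
    }
    double mean = (double)sum / n;
    return (double)sum_sq / n - mean * mean;
}

/* Example: a flat 4x4 frame of value 128 gives variance 0.0 (no detail),
 * while a 4x4 0/255 checkerboard gives 16256.25 (maximal contrast). */
```

The GPU version of the same reduction is what the image-cascade approach is for: repeatedly downsampling sums on the GPU instead of looping over every pixel on the ARM.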