Thanks for the keyboard input details.
I usually connect to the RPi over ethernet,
and hence save the output as .h264 to a file or stream it using rtsp -
is this possible currently?
My raspivid-based attempts failed, e.g. -o is not accepted.
IIRC raspivid simply sends its data to stdout,
but as I have no idea about the GPU side...
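(For reference, the usual way stock raspivid streams over ethernet is exactly the stdout trick mentioned above: write the raw H.264 stream to stdout and pipe it into a network tool. A typical sketch, where the IP address and port are placeholders for your own setup:)

```shell
# On the Pi: capture 720p H.264 indefinitely (-t 0) and write the raw
# stream to stdout (-o -), piping it over the network with netcat.
# 192.168.1.10 and 5000 are placeholder address/port values.
raspivid -t 0 -w 1280 -h 720 -fps 25 -o - | nc 192.168.1.10 5000

# On the PC: listen on the same port and hand the stream to a player
# (mplayer shown here; other players that accept raw H.264 work too).
nc -l -p 5000 | mplayer -fps 25 -cache 1024 -
```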
Of course I appreciate the intention of providing an API,
but a fuller explanation would be even more useful,
in the sense that one could develop a true understanding.
In my cursory run I got ~7fps;
the CPU was not running anything else beyond the system.
That seems slow - why would this be?
Very much looking forward to your further blog posts.
Sorry - just to clarify, I wasn't sure if you were looking at the gpu camera demo (http://robotblogging.blogspot.co.uk/201 ... ng-on.html) or the original api demo (http://robotblogging.blogspot.co.uk/201 ... i-for.html). The controls I mentioned were for the gpu demo - the simple api one has no controls and just shows the most minimal use of the api.
If you want the details of how it works internally, it is documented over the 6 'pi eye' posts beforehand. However it's a complex system, so there's no simple explanation. Basically, access to the camera is via the multimedia abstraction layer (mmal), which is a fairly low-level interface to the OMX layer, which is the core of the raspberry pi's multimedia system. It is fairly well explained across those posts though, and I made an effort to comment all the mmal code in camera.cpp, which should help. Sadly there's no simple explanation for a complex system though.
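(As a rough illustration of what the mmal layer looks like at the code level - this is a simplified sketch, not the actual code from camera.cpp - opening the camera component boils down to something like the following, assuming the userland mmal headers from /opt/vc are available:)

```c
// Minimal MMAL camera setup sketch. Names follow the real MMAL API;
// error handling and format configuration are trimmed for illustration.
#include <stdio.h>
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"

int main(void)
{
    MMAL_COMPONENT_T *camera = NULL;

    // Ask MMAL for the camera component ("vc.ril.camera" under the hood).
    MMAL_STATUS_T status =
        mmal_component_create(MMAL_COMPONENT_DEFAULT_CAMERA, &camera);
    if (status != MMAL_SUCCESS) {
        fprintf(stderr, "failed to create camera component\n");
        return 1;
    }

    // camera->output[0..2] are the preview, video and still ports; a
    // real app would set their formats, enable them, and attach buffer
    // callbacks here before starting capture.
    printf("camera component has %u output ports\n", camera->output_num);

    mmal_component_destroy(camera);
    return 0;
}
```

(This only runs on a Pi with the VideoCore libraries installed, of course.)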
It doesn't stream to h264 - the purpose of it was to output to memory so it could be used in-app. I'm afraid you'll need a tv to see the results! I too run over ethernet, but I also have the pi plugged into the tv, so I can work on my pc but see the results on screen. The difficulty with h264 is that you'd have to pump the output back into mmal and through an h264 encoder, which would be very troublesome indeed! There is example code in there somewhere to save textures to a png file, but it's very slow so wouldn't work in real time.
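(To give a feel for why pumping the output back through an h264 encoder is troublesome: you'd have to create the mmal video-encoder component yourself and hand it every frame as a buffer. A hedged sketch of just the component setup - names follow the real MMAL API, but the per-frame buffer plumbing that makes this painful is omitted:)

```c
// Sketch of creating the hardware H.264 encoder component via MMAL.
#include <stdio.h>
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"

int main(void)
{
    MMAL_COMPONENT_T *encoder = NULL;

    // "vc.ril.video_encode" - the hardware video encoder component.
    if (mmal_component_create(MMAL_COMPONENT_DEFAULT_VIDEO_ENCODER,
                              &encoder) != MMAL_SUCCESS) {
        fprintf(stderr, "failed to create encoder component\n");
        return 1;
    }

    // Request H.264 on the output port. A real app would also copy the
    // input port format, set the bitrate, commit both formats, enable
    // the ports, and then shuttle every captured frame into the input
    // port by hand - that per-frame work is the troublesome part.
    encoder->output[0]->format->encoding = MMAL_ENCODING_H264;
    if (mmal_port_format_commit(encoder->output[0]) != MMAL_SUCCESS) {
        fprintf(stderr, "failed to commit H.264 output format\n");
    }

    mmal_component_destroy(encoder);
    return 0;
}
```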
I could believe the gpu demo would see about 7fps, as it's doing all 16 images at once! If you hit 'd' to toggle the actual image-processing demo you should see a much better frame rate. Similarly, if you use it for your own project and aren't doing 16 filters per frame, it should run at a good rate. The api demo gives me 15fps at 720p (as it ships), and hits 30fps at half res. The next version is coming along soon though, and will run much faster as it is entirely gpu based.