I am interested in knowing how this is actually accomplished at a low level. Every camera I have used only ever exposes one output stream, and you modify it by sending exposure, white balance, fps, etc. to the onboard controller. From all the documentation and the libmmal code, it sure looks like this camera has no brains and all the AWB, exposure, etc. is controlled by the mmal library.
Can someone give me the quick and dirty on how the low-level interface exposes two streams to userland?
