Looking through the "userland" repository and the "hello_pi" source code examples, I can't find anything that combines capturing video from the camera via MMAL with mapping the resulting frames as GLES textures.
The few "video into textures" examples I can find use OpenMAX IL rather than MMAL, and hello_videocube textures frames from the video decoder, not live camera data.
Is there a good example, or documentation, showing how to take an OPAQUE frame from the camera output and use it as a texture in GLES?