I have been working with the sources of raspistill and raspivid, trying to get images from the Raspberry Pi camera into OpenCV. I'm applying an OpenGL shader to preprocess the image, and then I use glReadPixels to pull the rendered image into OpenCV. My code is still in development, of course, and I have a lot of questions, so if anyone out there could help me, please answer the ones you can.
1- What are the differences between the three output ports of the camera component?
2- Is 30 fps the maximum frame rate of the camera's preview port?
3- Which of the three ports' outputs is the best choice to pass to OpenGL?
4- Is it possible to make multiple shader passes? Can I render to a texture and then use that texture in another rendering pass with a different shader?
5- Is there a way to avoid using glReadPixels to get the rendered image into OpenCV (for example, by rendering to shared memory)?
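To clarify what I mean in question 4, here is the kind of two-pass setup I'm imagining, as pseudocode only. The FBO/texture names and the drawFullScreenQuad helper are hypothetical; I just want to know whether this pattern is workable on the Pi's GLES driver:

```
// Pass 1: run shaderA over the camera frame, rendering into texA
glBindFramebuffer(GL_FRAMEBUFFER, fboA);   // fboA has texA as its color attachment
glUseProgram(shaderA);
drawFullScreenQuad(cameraTexture);

// Pass 2: feed texA through shaderB to the screen (or a second FBO)
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(shaderB);
drawFullScreenQuad(texA);
```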