I am working on a project in which I use a Raspberry Pi 2 with the Raspberry Pi camera module and OpenCV C++ for gesture recognition. It already works quite nicely, but I have searched the web for hours to find information on how to achieve the best performance in terms of fps and, even more importantly, delay. Most articles and discussions I found are from 2013, so I think they are outdated.
At first I used raspicam_cv for grabbing frames; now I use v4l2. I have read a lot about hardware acceleration, Y-channel extraction, MMAL and so on, but I am not that experienced in programming, and before I waste too much time I thought someone could possibly steer me in the right direction.
What I need: my algorithm starts with a small (<640x480) grey-scale image, which is then converted into a binary image. So my questions are:
1.) How do I get a grey-scale image from the camera module into OpenCV C++ most efficiently? Do I somehow need to extract the Y channel from a YUV format? Or is there an even faster approach using a different format? And how do I get OpenCV to accept YUV or other formats? I have figured out that OpenCV overrides all the settings that I previously set with v4l2-ctl on the command line: if I set a certain format with v4l2 and then run my program, the "Format Video Capture" is set back to 'BGR3' afterwards (checked with v4l2-ctl --all). Also, set(CV_CAP_PROP_CONVERT_RGB, false) does not seem to be supported, as I get a "HIGHGUI ERROR: V4L: Property <unknown property string>(16) not supported by device" error.
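To illustrate what I mean in 1.): in the packed YUYV (YUY2) format, the luma bytes are every other byte of the buffer, so getting the grey-scale image should just be a strided copy. A minimal sketch over a plain byte buffer (no OpenCV, layout as in the V4L2 YUYV format description):

```cpp
#include <cstdint>
#include <vector>

// YUYV 4:2:2 packs pixels as Y0 U0 Y1 V0 | Y2 U1 Y3 V1 | ...
// so the grey-scale (luma) plane is simply every even byte.
std::vector<uint8_t> extractY(const std::vector<uint8_t>& yuyv,
                              int width, int height)
{
    std::vector<uint8_t> y(static_cast<std::size_t>(width) * height);
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] = yuyv[2 * i];          // skip the interleaved U/V bytes
    return y;
}
```

If I understand the OpenCV API correctly, the resulting buffer could then be wrapped without a copy via cv::Mat(height, width, CV_8UC1, y.data()) — but I am not sure whether this is the intended way, which is part of my question.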
2.) I figured out that, even though the v4l2-ctl settings are overridden, I am able to set the frame's width and height using OpenCV's Capture.set(). In terms of performance, are certain resolutions "faster/better" than others? Using v4l2-ctl --list-formats-ext I can see that the native resolution of the sensor is 2592x1944 pixels. So, in theory, it should be faster to capture at 259x194 (a tenth of the native resolution in each dimension) than at 320x240. Is that correct?
3.) In this http://www.pyimagesearch.com/2015/12/28 ... nd-opencv/ blog post, I read about increasing Raspberry Pi fps and decreasing I/O latency with Python and OpenCV by using a separate thread to grab frames from the camera module, so that the video-processing pipeline is never blocked. Has anyone tried something similar with C++ and OpenCV?
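What I imagine for 3.) is roughly the following. The camera read is replaced by a placeholder grabFrame() so the sketch is self-contained; in the real program it would presumably be a blocking cap.read() inside the loop, guarded by the same mutex:

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

using Frame = std::vector<uint8_t>;

// Placeholder for the real (blocking) camera read,
// e.g. cv::VideoCapture::read in the actual program.
static Frame grabFrame(int n) { return Frame(4, static_cast<uint8_t>(n)); }

// Grabs frames on a background thread so the processing loop always
// takes the most recent frame instead of blocking on the driver.
class FrameGrabber {
public:
    void start() {
        running_ = true;
        worker_ = std::thread([this] {
            int n = 0;
            while (running_) {
                Frame f = grabFrame(n++);        // blocking camera read
                std::lock_guard<std::mutex> lk(m_);
                latest_ = std::move(f);          // keep only the newest frame
            }
        });
    }
    void stop() {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }
    Frame latest() {
        std::lock_guard<std::mutex> lk(m_);
        return latest_;                          // copy out under the lock
    }
private:
    std::thread worker_;
    std::atomic<bool> running_{false};
    std::mutex m_;
    Frame latest_;
};
```

The processing loop would then call latest() once per iteration and never wait on the camera. Whether this actually helps on the Pi 2, or whether the V4L2 buffer queue already gives the same effect, is exactly what I would like to know.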
4.) I want to be able to set some parameters (especially the exposure time, ISO, etc.) manually. How can I do that? Do I need to change these settings in OpenCV or in v4l2-ctl?
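To clarify what I mean in 4.) by v4l2-ctl settings, this is the kind of invocation I have in mind (the control names are driver-dependent and the values below are only examples; --list-ctrls shows what a given driver actually exposes):

```shell
# List all controls the driver exposes, with their ranges
v4l2-ctl --list-ctrls

# Example: switch to manual exposure and set a value
# (names like auto_exposure / exposure_time_absolute vary by driver --
#  use whatever --list-ctrls actually reports)
v4l2-ctl -c auto_exposure=1
v4l2-ctl -c exposure_time_absolute=330
```

My worry is that, as described in 1.), OpenCV overrides such settings when the capture is opened, so I do not know whether setting them this way sticks.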
I am asking all this because I know that the Raspberry Pi camera module is capable of capturing 90 frames per second. Even though I know that this frame rate is not achievable together with video processing, I am pretty sure I can get closer to it. Any help is appreciated! Thank you very much in advance!