RaTTuS wrote:
> it's limited to 30fps currently

FFAMax wrote:
> Limitation in GPU firmware? Do you know any details of where this limitation is?

We have a set of register settings from Omnivision, but we need to match those up with register settings for the ISP. This is not trivial. My first attempt produced black frames, so I need to take another look when I have time.
jansendup wrote:
> So does setting MMAL_VIDEO_FORMAT_T.frame_rate > 30 simply not work?

No.
jansendup wrote:
> I thought it was open-sourced? https://github.com/raspberrypi/userland

The ARM-side code is all open source. This stuff is all in the GPU, which isn't. Basically, we have a bunch of register settings for the camera for the 60 and 90fps modes. These need to be matched with the same sort of timings in the first stage of the ISP (image system pipeline), which is the software/hardware pipeline for processing images. It's that matching that is going wrong somewhere, and although everything appears to be working, the frames are black.
It sounds difficult. Would the timing calculations involve internal GPU processes as well as calculated values to be written to ISP registers? If so, I take it the internal GPU process timings are the most difficult to calculate?
jansendup wrote:
> Thanks, that clarified a lot. I thought that by ISP you meant image sensor processor. So the compiled GPU code is then probably included in the Raspberry Pi firmware repository. Just for interest's sake, when you write code to be loaded on the GPU and executed, do you use an in-house language similar to that of OpenCL?

No, the GPU is all C, plus assembler when we need to use the vector units (the C compiler doesn't support vector coding). The compiled code is the start.elf file in the boot partition.
jansendup wrote:
> I didn't measure this with an oscilloscope, but it seems one has pretty fine control over the frame rate (measured with clock_gettime()). It seems very continuous and not that discrete. James, when you said "get a mode working" I thought that a mode translates to a discrete frame rate? I'm a bit confused. Would a mode be a frame rate range — for example, all frame rates that could be generated with the same pixel clock?

A mode means the set of timing parameters you program the camera and ISP with. So generally a mode has a max and min frame rate (the max would be the fastest the hardware could go using those settings; the min is more a software limitation), and you could have a frame rate anywhere between the max and min.
ghans wrote:
> The camera now supports a maximum of 90fps at VGA resolution.
> You won't get more.

I knew the part about the max fps of 90, but there were firmware issues where requesting the >30fps low-res frame rates didn't deliver them, as the OP mentioned, so I'm wondering if @jamesh has fixed those. His last post didn't say he had fixed the problem.
GauVeldt wrote:
> I knew the part about max fps of 90 but there were firmware issues where requesting the >30fps low-res framerates didn't get them as the OP had mentioned, so I'm wondering if @jamesh had fixed those. The last post from him didn't say he had fixed the problem.

You may need to watch out, as the sensor is based around a rolling shutter (http://en.wikipedia.org/wiki/Rolling_shutter): not all lines of the image are exposed at the same moment in time, so synchronising your LEDs to the sensor may not be feasible.
If 90 fps is now possible (at the corresponding resolution), then that's the advertised max and the problem has been resolved. That means I could get four LED shifts within an overall 22.5 fps frame rate using the 90 fps settings.
One way I could get more LEDs per shift is to centre them on differently coloured pads and have the code look for the neighbouring colour around an IR (bright-spot) LED point when detected. The neighbouring colour differentiates multiple IR points within the same camera frame: RGB would differentiate three, and adding CMY would differentiate six LED points per shift, for a theoretical maximum of 4*6 = 24 points at the overall 22.5 fps sensing rate. I could get more with more pad colours, but anything other than the primaries (RGB) and the secondary mixes (CMY) runs the risk of interference from the actor's attire and other background in the frame. I think for an amateur motion-tracking project (one that doesn't cost $1000s) 24 3D tracking points would be plenty.
I think I'm going to start a topic just on the feasibility of such a motion-tracking project.