SimplyOm
Posts: 5
Joined: Wed Jun 01, 2016 8:01 am

Pausing the shutter while capturing frames from VideoPort

Sat Jun 18, 2016 7:22 am

Hi all,

I am working on capturing frames in sync with a GPIO trigger. My requirement is that the picture should be captured either as soon as possible or with a minimum deviation from a previously fixed time delay; the acceptable deviation is as low as 4 ms. As I need it to be fast, I cannot use the picamera Python API, so I use C++ instead. At first I worked with the raspicam library, and then, after developing some basic understanding of MMAL, I wrote my own little program that grabs a frame when triggered. I was able to achieve ~50 fps in RGB mode.
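For context, the grab-on-trigger code is essentially the standard MMAL video-port callback pattern. A minimal sketch, assuming the usual userland headers and a buffer pool; `pool` and `video_callback` are placeholders for whatever your app actually uses:

```c
/* Minimal sketch of a video-port callback, assuming the userland MMAL
 * headers (interface/mmal/mmal.h etc.); `pool` is a placeholder for a
 * buffer pool created for this port. */
static MMAL_POOL_T *pool;

static void video_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buf)
{
    mmal_buffer_header_mem_lock(buf);
    /* buf->data now holds one frame; copy/convert it here if triggered */
    mmal_buffer_header_mem_unlock(buf);
    mmal_buffer_header_release(buf);

    /* hand a fresh buffer back so the port can keep streaming */
    if (port->is_enabled) {
        MMAL_BUFFER_HEADER_T *next = mmal_queue_get(pool->queue);
        if (next)
            mmal_port_send_buffer(port, next);
    }
}
```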

As far as I understand, when I grab any frame, I get the buffer header of the next frame, and then I can convert its data into matrix form to get the final image. But the problem I encountered is that there was a huge (at least in the millisecond range :P) variation in timing. Since my fps was restricted to 50, the next frame could be ready after anywhere from 2 ms to 25 ms. However, I found that the time difference between individual frames was pretty stable, with a standard deviation of 2 ms.

Hence, I thought of a solution: if I know when the last frame was captured (which I do) and can somehow pause the grabbing of the next frame until a predetermined time delay has passed, then I can have a well-controlled delayed sync between my trigger and the captured frame.

My question is: is it even possible to delay the capture of a frame, or to pause the shutter for a moment? As of now, I don't have much knowledge of the low-level workings of a camera, but if it's possible, I am ready to study more about OpenMAX or the camera driver and implement it. Apart from that, if there is some other solution that might solve my problem, please consider suggesting it.

Regards

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9069
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pausing the shutter while capturing frames from VideoPort

Sun Jun 19, 2016 6:47 pm

Sorry, not possible. The sensor is programmed with a framerate and told to start. It then churns out frames at the specified rate.

You can apply fine control to the frame rate though, so if you can pick up an error signal from your "master" camera, you can fine-tune it in steps of 1/256th of an fps.
Further discussion on that at viewtopic.php?f=43&t=48238&start=75 and https://github.com/waveform80/picamera/pull/279
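For reference, a sketch of what that fine tuning looks like via MMAL_PARAMETER_VIDEO_FRAME_RATE. Frame rates are rationals, so a denominator of 256 gives 1/256 fps steps; `video_port` is a placeholder for your camera's video port:

```c
/* Sketch only, assuming the userland MMAL headers; `video_port` is a
 * placeholder for the camera's video MMAL_PORT_T*. Frame rates are
 * rationals, so num/den = (30*256 + 1)/256 requests 30 + 1/256 fps. */
MMAL_PARAMETER_FRAME_RATE_T rate = {
    { MMAL_PARAMETER_VIDEO_FRAME_RATE, sizeof(rate) },
    { 30 * 256 + 1, 256 }
};
mmal_port_parameter_set(video_port, &rate.hdr);
```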
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

SimplyOm
Posts: 5
Joined: Wed Jun 01, 2016 8:01 am

Re: Pausing the shutter while capturing frames from VideoPort

Mon Jun 20, 2016 9:42 am

Thanks a lot for clarifying that!

I had also tried previously to synchronise the frame rate. I read all the posts above, and in most places you mentioned tweaking the MMAL_PARAMETER_VIDEO_FRAME_RATE parameter. I have already assigned it a constant value, but I am still seeing significant deviation. Data is attached below.

I started a timer when the camera component was initialised. I set the frame rate to 120 fps and the resolution to 320 x 240. Then I recorded the elapsed time each time the callback was called, for 500 ms, and calculated the mean and standard deviation of the time difference between successive callbacks.
Opening Camera...
FPS is set to 120
Elapsed time: 120.849000 milliseconds
Elapsed time: 139.299000 milliseconds
Elapsed time: 144.455000 milliseconds
Elapsed time: 149.237000 milliseconds
Elapsed time: 155.836000 milliseconds
...
...
...
Elapsed time: 402.773000 milliseconds
Elapsed time: 411.091000 milliseconds
Elapsed time: 424.402000 milliseconds
Elapsed time: 430.783000 milliseconds
Elapsed time: 439.008000 milliseconds
Elapsed time: 444.290000 milliseconds
Elapsed time: 452.538000 milliseconds
Elapsed time: 465.648000 milliseconds
Elapsed time: 471.793000 milliseconds
Elapsed time: 477.594000 milliseconds
Elapsed time: 488.502000 milliseconds
Elapsed time: 494.057000 milliseconds
Elapsed time: 507.011000 milliseconds
The mean and standard deviation were calculated to be
Mean = 8.17 ms
Standard Deviation = 3.2415 ms
As far as I understand, the callback is called each time a buffer is filled from the video port. So I see that the frames are not arriving at a fixed interval. Is there something I am misunderstanding?

Also, as a side issue, I am not able to get the presentation timestamp from the buffer headers. Whenever I read buffer_hdr->pts, I get MMAL_TIME_UNKNOWN. Is there something I am doing wrong?

Thanks a lot for your time.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9069
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pausing the shutter while capturing frames from VideoPort

Mon Jun 20, 2016 10:16 am

The callbacks are asynchronous to the actual sensor, as they depend on your client app returning buffers at the appropriate points, plus a small amount of jitter from scheduling within the GPU. The point at which the callback is made should not be relied on for any accuracy.

header->pts is the correct thing to look at. ethanol100 has just posted some changes to raspividyuv that give the option of logging the pts values in the same way raspivid can.
As a stab in the dark, I'd guess you haven't set the timestamping mode correctly, as in https://github.com/raspberrypi/userland ... id.c#L1462
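Roughly, the relevant fragment of that setup looks like this; sketch only, with the other camera-config fields elided and `camera` a placeholder for your component:

```c
/* Sketch, assuming the userland MMAL headers and <string.h>; `camera`
 * is a placeholder MMAL_COMPONENT_T*. The timestamp mode lives in the
 * camera config structure, applied to the camera's control port. */
MMAL_PARAMETER_CAMERA_CONFIG_T cam_config;
memset(&cam_config, 0, sizeof(cam_config));
cam_config.hdr.id = MMAL_PARAMETER_CAMERA_CONFIG;
cam_config.hdr.size = sizeof(cam_config);
/* ... fill in the resolution/preview fields as in your existing setup ... */
cam_config.use_stc_timestamp = MMAL_PARAM_TIMESTAMP_MODE_RAW_STC;
mmal_port_parameter_set(camera->control, &cam_config.hdr);
```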

SimplyOm
Posts: 5
Joined: Wed Jun 01, 2016 8:01 am

Re: Pausing the shutter while capturing frames from VideoPort

Tue Jun 21, 2016 7:11 pm

Alright, thanks a lot for the explanation of the randomness of the callback time.
Also, I had previously set my timestamp mode to MMAL_PARAM_TIMESTAMP_MODE_RESET_STC and it always returned MMAL_TIME_UNKNOWN, until at your suggestion I tried MMAL_PARAM_TIMESTAMP_MODE_RAW_STC. It magically started working. The frames are highly periodic, with precision down to a couple of microseconds! :D

I just need to know one more thing. I am using an IR sensor to trigger the grabbing of a frame. The trigger fires on the interrupt raised once a freely falling object reaches the level of the IR sensor. But the image I then receive from the buffer is usually from one or two frames before the trigger was sent (I can see the object well above the IR sensor level in the grabbed frame, even though grabbing was triggered after it reached the sensor level).
My understanding is that this is due to the delay of the conversion in the ISP, which makes me receive an older frame rather than the frame captured after I triggered. Is it supposed to happen that way, or am I doing something wrong? Also, I thought of removing this problem by comparing the PTS of the buffer with the time at which I triggered. Is there a better way of doing it?

Besides, I so loved the way you explained the working of video mode in seven simple steps here:
viewtopic.php?p=996767#p996767
You are really doing a great job :)

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9069
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pausing the shutter while capturing frames from VideoPort

Tue Jun 21, 2016 8:14 pm

SimplyOm wrote:Alright. Thanks a lot for the explanation of the randomness of the callback time.
Also, I had previously set my timestamp mode to MMAL_PARAM_TIMESTAMP_MODE_RESET_STC and it always returned me MMAL_TIME_UNKNOWN until you suggested to try out MMAL_PARAM_TIMESTAMP_MODE_RAW_STC. It magically started working. The frames are highly periodic with precision up to a couple of microseconds! :D
Hmm, MMAL_PARAM_TIMESTAMP_MODE_RESET_STC should work, but as you've noticed it isn't used by anything so regressions may have happened. I'll add it to the list of things to check.
SimplyOm wrote:I just need to know one more thing. I am using an IR sensor to trigger grabbing a frame. The trigger is enabled only on the interrupt after the freely falling object has reached the level of IR sensors. But then, the image that I receive from the buffer is usually one or two frames before the trigger was sent. (I can see the object way above the IR sensor level in the grabbed frame, although grabbing was triggered after it reaches level of sensor).
What I understand is it's due to the delay that takes place while conversion in the ISP that makes me receive an older frame and not the frame that was captured after I triggered. Is it supposed to happen that way or am I doing something wrong with it? Also, I thought of removing this problem by comparing the PTS of the buffer with the time at which I triggered. Is there a better way of doing it?
As per the explanation you liked, the sensor is churning away constantly. If you aren't consuming all the frames, then which one you get will be a little hit and miss. Depending on how the components are hooked up, the camera component may have already asked the lower layers for the frame and then be waiting to dispose of it. The system is set up for streaming data rather than triggered captures, and the behaviour is slightly undefined if you don't consume it all.

The latency through the ISP has to be below the frame time, as it keeps up with real time, so 33 ms at 30 fps. That excludes the actual exposure time, and do remember that the sensor is a rolling shutter, so at which point do you class the frame as having been triggered?

I think you have the right plan of reading MMAL_PARAMETER_SYSTEM_TIME at the point of your trigger and then matching it to the closest PTS (you know what the delta between frames will be from the frame rate). PTS is when the start of the frame is received by the SoC, i.e. the end of the exposure of the first line.
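The matching step itself needs no MMAL at all. A sketch in plain C, where `pts` holds the timestamps of recent buffers and `trigger_stc` is the system time read at the trigger (all microseconds on the same clock):

```c
#include <stdint.h>
#include <stddef.h>

/* Return the index of the buffered frame whose PTS is closest to the
 * STC value sampled at the trigger. All times in microseconds. */
size_t closest_pts(const int64_t *pts, size_t n, int64_t trigger_stc)
{
    size_t best = 0;
    int64_t best_err = INT64_MAX;
    for (size_t i = 0; i < n; i++) {
        int64_t err = pts[i] - trigger_stc;
        if (err < 0)
            err = -err;
        if (err < best_err) {
            best_err = err;
            best = i;
        }
    }
    return best;
}
```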

SimplyOm
Posts: 5
Joined: Wed Jun 01, 2016 8:01 am

Re: Pausing the shutter while capturing frames from VideoPort

Wed Jun 22, 2016 7:33 pm

6by9 wrote:Hmm, MMAL_PARAM_TIMESTAMP_MODE_RESET_STC should work, but as you've noticed it isn't used by anything so regressions may have happened. I'll add it to the list of things to check.
Hey! Sorry to have misstated the fact. MMAL_PARAM_TIMESTAMP_MODE_RESET_STC also worked flawlessly afterwards. I don't know; something else must have been wrong during my initial testing.
6by9 wrote:The latency through the ISP has to be below the frame time as it keeps up with real time, so 33ms for 30fps. That does exclude the actual exposure time, and do remember that the sensor is a rolling shutter, so at which point do you class the frame to have been triggered?
I don't know if I understood your statement correctly. Yes, I get that the sensor has a rolling shutter. I assume the trigger can arrive at any moment; at that moment, the sensor can still finish capturing the whole frame, and then I grab the next frame that has been passed through the ISP and stored in the buffer. More details below. Also, yes, I used MMAL_PARAMETER_SYSTEM_TIME, which was also accurate to a few microseconds, to compare the time difference between the trigger and the presentation timestamp of the obtained frame.
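For anyone following along, reading that system time is essentially a one-liner. Sketch assuming the userland MMAL headers, with `camera` a placeholder for the camera component:

```c
/* Sketch: sample the GPU system clock (STC) at the trigger moment,
 * assuming the userland MMAL headers; `camera` is a placeholder
 * MMAL_COMPONENT_T*. */
uint64_t trigger_stc = 0;
mmal_port_parameter_get_uint64(camera->control,
                               MMAL_PARAMETER_SYSTEM_TIME, &trigger_stc);
/* trigger_stc and buffer->pts are microseconds on the same clock */
```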

I made a log of the timing data for VGA and qVGA resolutions; you may go through it for reference. Each group of three lines contains the SoC time at which the trigger was sent, the presentation timestamp of the frame received after the trigger, and the lag after the trigger at which I grabbed the frame from the buffer, for consecutive frames.
qVGA : 120 fps : 8 millisecond between frames ::
Trigger Time: 1349570.725000 millisecond
Presentation time: 1349568.571000 millisecond
Time after trigger: 7.177031 millisecond

Trigger Time: 1349579.045000 millisecond
Presentation time: 1349576.866000 millisecond
Time after trigger: 11.121041 millisecond

Trigger Time: 1349590.267000 millisecond
Presentation time: 1349585.162000 millisecond
Time after trigger: 7.896719 millisecond
As you can see, the PTS of the grabbed frame was a few milliseconds before the trigger was sent. A more important observation is that the frame I received was always the one captured just before the trigger was sent.
But this observation failed miserably when the resolution was changed to VGA.
VGA : 120 fps : 8 millisecond between frames ::
Trigger Time: 1106741.335000 millisecond
Presentation time: 1106705.996000 millisecond
Time after trigger: 9.351041 millisecond

Trigger Time: 1106750.832000 millisecond
Presentation time: 1106730.883000 millisecond
Time after trigger: 13.827865 millisecond

Trigger Time: 1106765.250000 millisecond
Presentation time: 1106739.179000 millisecond
Time after trigger: 9.578125 millisecond

Trigger Time: 1106774.951000 millisecond
Presentation time: 1106747.475000 millisecond
Time after trigger: 9.701146 millisecond
Now you can clearly see the significant time difference between the trigger and the PTS of the frame I received. If you consider the first trigger at 1106741 ms, a more relevant frame would have been the third one, with PTS 1106739 ms. Similarly, for the second trigger, the last frame would have been far more appropriate.

Anyway, I just hope I didn't waste much of your time with those data logs; most probably you already knew about this particular behaviour. I also thought the data might illustrate the issue for other users with a similar problem who view this post. 8-)

Nevertheless, my main motive in presenting those stats was this: I believe most of the time delay lies in converting the raw Bayer data into YUV format. But I don't need the YUV of every frame that was captured, just the ones captured after the trigger. One thing I want to clarify is that, although in this data the triggers appear more or less regular, I did that only to establish a lower bound; my real triggers would be somewhere around 50~70 ms apart. This delay is also visible between the PTS of the first two frames (1106705 ms and 1106730 ms), where two frames in the middle were dropped because the ISP couldn't free a buffer while converting the first frame. So a more efficient approach would be to simply drop all the unnecessary frames while still in raw format, and only carry on with conversion into YUV for the useful frames I need after the trigger. This would ensure that the frame I receive is always within a maximum of 8 ms of the trigger. I hope you get what I mean.

Now, I know this is highly specific to my work (plus a few other users I found having problems syncing frames to a trigger), so it is kinda obvious that there hasn't been any provision for this functionality in the firmware. I just want to know: is it possible to drop frames while still in raw format, if I know in advance that a frame is going to be useless anyway? If so, what should my best approach be?
Again, thanks a lot for your time. :)
