Hmm, MMAL_PARAM_TIMESTAMP_MODE_RESET_STC should work, but as you've noticed it isn't used by anything so regressions may have happened. I'll add it to the list of things to check.
Hey! Sorry for the misstatement. MMAL_PARAM_TIMESTAMP_MODE_RESET_STC also worked flawlessly when I retested it afterwards; something else must have been wrong during my initial testing.
The latency through the ISP has to be below the frame time as it keeps up with real time, so 33ms for 30fps. That does exclude the actual exposure time, and do remember that the sensor is a rolling shutter, so at which point do you class the frame to have been triggered?
I'm not sure I correctly understood your statement. Yes, I get that the sensor has a rolling shutter. My assumption is that the frame can be triggered at any moment; at that moment the shutter can continue capturing the whole frame, and I then grab the next frame that has passed through the ISP and been stored in the buffer.
More details below. Also, yes, I used MMAL_PARAMETER_SYSTEM_TIME, which was accurate to microseconds, to compare the time difference between the trigger and the presentation timestamp of the obtained frame.
I logged the timing data for VGA and QVGA resolutions; you can go through it for reference. For each of three consecutive frames, the three lines give the SoC time at which the trigger was sent, the presentation timestamp (PTS) of the frame received after the trigger, and the lag after the trigger at which I grabbed the frame from the buffer.
QVGA : 120 fps : ~8.3 ms between frames ::
Trigger Time: 1349570.725000 millisecond
Presentation time: 1349568.571000 millisecond
Time after trigger: 7.177031 millisecond
Trigger Time: 1349579.045000 millisecond
Presentation time: 1349576.866000 millisecond
Time after trigger: 11.121041 millisecond
Trigger Time: 1349590.267000 millisecond
Presentation time: 1349585.162000 millisecond
Time after trigger: 7.896719 millisecond
As you can see, the PTS of the grabbed frame was a few milliseconds before the trigger was sent. More importantly, the frame I received was always one captured just before the trigger was sent.
But this observation failed miserably when the resolution was changed to VGA.
VGA : 120 fps : ~8.3 ms between frames ::
Trigger Time: 1106741.335000 millisecond
Presentation time: 1106705.996000 millisecond
Time after trigger: 9.351041 millisecond
Trigger Time: 1106750.832000 millisecond
Presentation time: 1106730.883000 millisecond
Time after trigger: 13.827865 millisecond
Trigger Time: 1106765.250000 millisecond
Presentation time: 1106739.179000 millisecond
Time after trigger: 9.578125 millisecond
Trigger Time: 1106774.951000 millisecond
Presentation time: 1106747.475000 millisecond
Time after trigger: 9.701146 millisecond
Now you can clearly see a significant time difference between the trigger and the PTS of the frame I received. For the first trigger at 1106741 ms, a more relevant frame would have been the third one, with PTS 1106739 ms; similarly, for the second trigger at 1106750 ms, the last frame (PTS 1106747 ms) would have been far more accurate.
Anyway, I hope I didn't waste too much of your time with these logs, since most probably you already knew about this particular behaviour. I also thought the data might illustrate the issue for other users with a similar problem who come across this post.
Nevertheless, my main point in presenting those stats is this: I believe most of the delay comes from converting the raw Bayer data into YUV. But I don't need the YUV of every captured frame, only of the ones captured after the trigger.
One thing I want to clarify: although the triggers in this data look more or less regular, I did that only to establish a lower bound. My real triggers would be somewhere around 50~70 ms apart. That gap is also visible between the PTS of the first two frames (1106705 ms and 1106730 ms), where two frames in between were dropped because the ISP couldn't free a buffer while converting the first frame. So a more efficient approach would be to drop all the unnecessary frames while they are still in raw format, and only carry on with YUV conversion for the useful frames after the trigger. That would ensure the frame I receive is always within at most ~8 ms of the trigger. I hope you see what I mean.
Now, I know this is highly specific to my work (though I've found a few other users having trouble syncing frames to a trigger), so it's understandable that there's no provision for this functionality in the firmware.
I just want to know: is it possible to drop frames while they are still in raw format, if I know in advance that a given frame is going to be useless anyway? If so, what would be my best approach?
Again, thanks a lot for your time.
