I'm putting together a streaming camera with video and audio using the Python MMAL library.
I'm using a Pi 3 and a third-party camera board which emulates the V2 camera. For the most part I have a working system: the camera feeds 1920x1080 video through the H.264 encoder component, and I then multiplex this with audio into MKV format before piping it to ffmpeg for streaming.
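For context, the piping stage looks roughly like this. This is a sketch, not my exact code: the RTMP URL is a made-up placeholder, and the muxing code that produces the MKV bytes is omitted.

```python
import subprocess

def build_ffmpeg_cmd(dest_url):
    """Build an ffmpeg command that reads MKV from stdin and streams it out.

    dest_url is a hypothetical streaming endpoint. Both codec copies assume
    the input is already H.264 plus encoded audio, so ffmpeg does no
    re-encoding here, just remuxing.
    """
    return [
        "ffmpeg",
        "-f", "matroska",   # input container arriving on stdin
        "-i", "pipe:0",     # read the muxed stream from stdin
        "-c", "copy",       # pass video and audio through untouched
        "-f", "flv",        # container expected by RTMP servers
        dest_url,
    ]

# Usage: open the process and write muxed MKV bytes to proc.stdin
# proc = subprocess.Popen(build_ffmpeg_cmd("rtmp://example.com/live/key"),
#                         stdin=subprocess.PIPE)
```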
The main issue I have faced is synchronising the bit/frame rates of the audio and video. Initially I thought the only problem was the slight difference in clock rates between the video and audio modules, but I then found gaps in both the audio and video streams. I have now resolved the audio stream, but a strange issue remains with the video stream.
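To quantify the clock-rate part: you can compare how much media time each stream thinks has elapsed from the raw counts alone. A small sketch (the numbers below are illustrative, not from my actual run):

```python
def stream_drift(video_frames, fps, audio_samples, sample_rate):
    """Return (video_seconds, audio_seconds, drift), where drift is how
    far the audio clock has run ahead of the video clock, in seconds."""
    video_s = video_frames / fps
    audio_s = audio_samples / sample_rate
    return video_s, audio_s, audio_s - video_s

# Example: 18000 frames at a nominal 30 fps vs 28_812_000 samples at 48 kHz
# -> video thinks 600 s have passed, audio thinks 600.25 s: 0.25 s of drift
```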
My pipeline links the Python MMAL camera component and MMAL encoder component to produce the H.264 stream - nominally 1920x1080 at 30 fps, with I-frames inserted every 60 frames. The encoder component then triggers a callback whenever a frame is ready for output. This works correctly 95+% of the time, with frames appearing at intervals of 1/30 s (+/- a few ms). However, my debugging has revealed that the gaps between some callbacks can be much greater - from 2 to 69 normal frame periods over a recent 10 minute test run.
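The gap measurement itself is straightforward. This is roughly how I'm detecting the stalls - a sketch that scans a recorded list of callback arrival times rather than the live callback itself:

```python
def find_gaps(times, fps=30.0, tolerance=0.5):
    """Scan callback arrival times (in seconds) and report stalls.

    Returns a list of (index, periods) pairs for every frame whose
    preceding interval exceeded (1 + tolerance) nominal frame periods;
    periods is that interval rounded to whole frame periods.
    """
    period = 1.0 / fps
    gaps = []
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        if dt > (1.0 + tolerance) * period:
            gaps.append((i, round(dt / period)))
    return gaps

# e.g. frames at 0, 1/30, 2/30, then a 5-period stall before the next:
# find_gaps([0, 1/30, 2/30, 7/30]) -> [(3, 5)]
```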
My initial thought was that there was a problem with the callback mechanism, but that can't be right: the encoder is still encoding correctly, with exactly 60 frames between successive I-frames. The problem must therefore lie either with the camera module itself or with the pipeline between the camera and the encoder.
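This is how I ruled the callback out: counting the frames between keyframe-flagged buffers. A sketch, assuming each callback has recorded a boolean for whether the encoder flagged that buffer as a keyframe:

```python
def keyframe_spacings(is_keyframe):
    """Given a per-frame list of booleans (True where the encoder flagged
    a keyframe), return the frame counts between consecutive keyframes."""
    positions = [i for i, flagged in enumerate(is_keyframe) if flagged]
    return [b - a for a, b in zip(positions, positions[1:])]

# With a GOP of 60, every spacing should be exactly 60 - which is what I
# observe even across the stalls, so the encoder itself is keeping up.
flags = [i % 60 == 0 for i in range(240)]
# keyframe_spacings(flags) -> [60, 60, 60]
```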
I wondered originally whether this was a problem with low light and long exposure times, but since it only happens on a few percent of frames I think I can discount that theory.
Does anyone know of a solution to this problem, or do I have to find a way to live with the variation?