DanR wrote: Wed Aug 08, 2018 9:48 am
Am I right in thinking the data path is that the ADV7280 data goes over MIPI into V4L2, so I need to set up:
MMAL isp -> MMAL renderer -> MMAL Encoder
Where the encoder can be three encoders: one for MMAL_ENCODING_BMP for still captures, and two for MMAL_ENCODING_H264, one of which will record video based on the GPIO trigger while the other supplies the RTSP server component.
Does that sound about right? Can 3 encoder components be connected to one renderer component?
The data is coming in via V4L2, so it needs some manipulation to get it into MMAL. That is what the code in buffer_export is doing, followed by the loop that calls VIDIOC_DQBUF, finds the matching MMAL buffer header, and passes it into MMAL. When MMAL returns the buffer via isp_ip_cb, the app returns it to V4L2 via video_queue_buffer and VIDIOC_QBUF.
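Very roughly, the shape of that loop is something like this (a minimal sketch, not the actual code; the headers[] array and stashing the V4L2 index in user_data are assumptions for the example):

Code: Select all

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include "interface/mmal/mmal.h"

extern int v4l2_fd;                      /* the /dev/videoN fd */
extern MMAL_PORT_T *isp_input;           /* vc.ril.isp input port */
extern MMAL_BUFFER_HEADER_T *headers[];  /* one MMAL header per V4L2 index */

static void feed_loop(void)
{
   struct v4l2_buffer buf;

   for (;;) {
      memset(&buf, 0, sizeof(buf));
      buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      buf.memory = V4L2_MEMORY_MMAP;
      if (ioctl(v4l2_fd, VIDIOC_DQBUF, &buf) < 0)  /* wait for a filled frame */
         break;

      /* find the matching MMAL header and pass it into MMAL */
      MMAL_BUFFER_HEADER_T *mmal_buf = headers[buf.index];
      mmal_buf->length = buf.bytesused;
      mmal_buf->user_data = (void *)(intptr_t)buf.index;
      mmal_port_send_buffer(isp_input, mmal_buf);
   }
}

/* ISP has finished with the buffer: give it back to V4L2 */
static void isp_ip_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   struct v4l2_buffer buf;
   (void)port;
   memset(&buf, 0, sizeof(buf));
   buf.index = (uint32_t)(intptr_t)buffer->user_data;
   buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
   buf.memory = V4L2_MEMORY_MMAP;
   ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
}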
UYVY is only accepted by the ISP, so the data first needs to be passed to vc.ril.isp to be converted into something more useful. (Actually, in the latest firmware it is also supported by video_encode.)
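Setting that conversion up is just a component create plus a format commit on each port; a minimal sketch, assuming a 720x480 input from the ADV7280 and I420 out (both assumptions for the example):

Code: Select all

#include "interface/mmal/mmal.h"

static MMAL_COMPONENT_T *setup_isp(void)
{
   MMAL_COMPONENT_T *isp;

   if (mmal_component_create("vc.ril.isp", &isp) != MMAL_SUCCESS)
      return NULL;

   /* input side: UYVY straight from V4L2 */
   MMAL_ES_FORMAT_T *fmt = isp->input[0]->format;
   fmt->encoding = MMAL_ENCODING_UYVY;
   fmt->es->video.width = 720;
   fmt->es->video.height = 480;
   fmt->es->video.crop.width = 720;
   fmt->es->video.crop.height = 480;
   if (mmal_port_format_commit(isp->input[0]) != MMAL_SUCCESS)
      return NULL;

   /* output side: same size, but a format the downstream components accept */
   mmal_format_copy(isp->output[0]->format, fmt);
   isp->output[0]->format->encoding = MMAL_ENCODING_I420;
   if (mmal_port_format_commit(isp->output[0]) != MMAL_SUCCESS)
      return NULL;

   return isp;
}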
video_render only has one input port and no output ports, so you can't take a buffer from it to pass on anywhere else.
What you can do is use a mmal_connection to a video_splitter component to clone the image data to multiple destinations. You can then hook image_encode to one port, video_encode to another, and video_render to a third. That's not ideal as it is doing a memcpy on all the buffers.
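For reference, that route is just a chain of tunnelled connections; a rough sketch, with component creation and format setup omitted:

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_connection.h"

/* isp -> splitter, then splitter outputs to image_encode, video_encode
   and video_render; the splitter is "vc.ril.video_splitter" */
static MMAL_STATUS_T connect_via_splitter(MMAL_COMPONENT_T *isp,
                                          MMAL_COMPONENT_T *splitter,
                                          MMAL_COMPONENT_T *image_enc,
                                          MMAL_COMPONENT_T *video_enc,
                                          MMAL_COMPONENT_T *render)
{
   MMAL_PORT_T *src[4] = { isp->output[0], splitter->output[0],
                           splitter->output[1], splitter->output[2] };
   MMAL_PORT_T *dst[4] = { splitter->input[0], image_enc->input[0],
                           video_enc->input[0], render->input[0] };
   MMAL_CONNECTION_T *conn[4];
   MMAL_STATUS_T st;
   int i;

   for (i = 0; i < 4; i++) {
      st = mmal_connection_create(&conn[i], src[i], dst[i],
                                  MMAL_CONNECTION_FLAG_TUNNELLING |
                                  MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
      if (st != MMAL_SUCCESS)
         return st;
      st = mmal_connection_enable(conn[i]);
      if (st != MMAL_SUCCESS)
         return st;
   }
   return MMAL_SUCCESS;
}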
The alternative is what yavta is doing in using mmal_buffer_header_replicate. The app is set up to take callbacks from the ISP output into the function isp_output_callback. As you can see in the source, depending on which components are enabled it calls mmal_buffer_header_replicate and sends the replicated buffer to render and/or encode. When the buffer is returned from render or encode via render_encoder_input_callback the replicated header is released, and when all replicated headers have been returned the source buffer is released back to the pool. The buffers_to_isp function then returns all available buffers to the ISP for filling.
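Stripped right down, that path looks something like this (simplified; the pool and port variables are illustrative rather than the actual yavta names):

Code: Select all

#include "interface/mmal/mmal.h"

extern MMAL_POOL_T *replicate_pool;   /* mmal_pool_create(n, 0): headers only */
extern MMAL_PORT_T *render_input, *encoder_input;

static void isp_output_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   MMAL_BUFFER_HEADER_T *rep;
   (void)port;

   if ((rep = mmal_queue_get(replicate_pool->queue)) != NULL) {
      mmal_buffer_header_replicate(rep, buffer);   /* shares the payload */
      mmal_port_send_buffer(render_input, rep);
   }
   if ((rep = mmal_queue_get(replicate_pool->queue)) != NULL) {
      mmal_buffer_header_replicate(rep, buffer);
      mmal_port_send_buffer(encoder_input, rep);
   }
   /* drop our reference; the payload is freed only when every replica is back */
   mmal_buffer_header_release(buffer);
}

/* render and video_encode input ports hand the replicas back here */
static void render_encoder_input_callback(MMAL_PORT_T *port,
                                          MMAL_BUFFER_HEADER_T *buffer)
{
   (void)port;
   mmal_buffer_header_release(buffer);   /* back to replicate_pool */
}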
The output of encode is handled by a separate callback (encoder_buffer_callback), which puts the buffer on another queue so that the data can be saved asynchronously. Blocking in the callbacks for a long time will lead to stalls in the pipeline.
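i.e. the callback itself does nothing but a queue put, and a worker thread does the slow file I/O (again illustrative names, and error handling omitted):

Code: Select all

#include <stdio.h>
#include "interface/mmal/mmal.h"

extern MMAL_QUEUE_T *save_queue;      /* from mmal_queue_create() */
extern MMAL_POOL_T *encoder_pool;     /* pool feeding the encoder output */
extern MMAL_PORT_T *encoder_output;

static void encoder_buffer_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   (void)port;
   mmal_queue_put(save_queue, buffer);   /* never block in here */
}

static void *writer_thread(void *arg)   /* start with pthread_create */
{
   FILE *fp = (FILE *)arg;
   MMAL_BUFFER_HEADER_T *buffer;

   for (;;) {
      buffer = mmal_queue_wait(save_queue);   /* blocks until data arrives */

      mmal_buffer_header_mem_lock(buffer);
      fwrite(buffer->data + buffer->offset, 1, buffer->length, fp);
      mmal_buffer_header_mem_unlock(buffer);
      mmal_buffer_header_release(buffer);

      /* recycle an empty header to the encoder output so it can keep going */
      MMAL_BUFFER_HEADER_T *empty = mmal_queue_get(encoder_pool->queue);
      if (empty)
         mmal_port_send_buffer(encoder_output, empty);
   }
   return NULL;
}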
You don't really need two independent video encoders for saving and RTSP unless you need differing bitrates or other parameters for them. H264 can be split at any IDR frame, and if you set the parameter MMAL_PARAMETER_VIDEO_ENCODE_INLINE_HEADER to MMAL_TRUE on the encoder output port then it will also spit out the H264 header bytes before every IDR frame.
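That's a single parameter set on the encoder output port:

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_util_params.h"

static MMAL_STATUS_T enable_inline_headers(MMAL_COMPONENT_T *encoder)
{
   /* emit the SPS/PPS header bytes before every IDR frame */
   return mmal_port_parameter_set_boolean(encoder->output[0],
                    MMAL_PARAMETER_VIDEO_ENCODE_INLINE_HEADER, MMAL_TRUE);
}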
Raspivid has a circular buffer mode that uses this technique and keeps a note of where the header bytes are in the stream. On trigger it saves from the header bytes forward to give the last N seconds of video.
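Heavily simplified, the bookkeeping amounts to something like this toy ring buffer (not raspivid's actual implementation; the sizes are arbitrary):

Code: Select all

#include <stddef.h>
#include <stdint.h>
#include "interface/mmal/mmal.h"

#define RING_SIZE (8u * 1024 * 1024)

static uint8_t ring[RING_SIZE];
static size_t write_pos;          /* next byte to be written */
static size_t last_header_pos;    /* start of the most recent SPS/PPS */

static void ring_store(MMAL_BUFFER_HEADER_T *buffer)
{
   size_t i;

   /* inline headers arrive flagged as config data: a clean restart point */
   if (buffer->flags & MMAL_BUFFER_HEADER_FLAG_CONFIG)
      last_header_pos = write_pos;

   mmal_buffer_header_mem_lock(buffer);
   for (i = 0; i < buffer->length; i++)
      ring[(write_pos + i) % RING_SIZE] = buffer->data[buffer->offset + i];
   mmal_buffer_header_mem_unlock(buffer);
   write_pos = (write_pos + buffer->length) % RING_SIZE;
}

/* On the GPIO trigger, save ring[] from last_header_pos forward to
   write_pos (handling the wrap) to get the last N seconds of video. */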
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.