By default any buffer is allocated in both GPU and ARM memory, so it needs double the amount of memory. The GPU must use a contiguous block of memory, whilst the ARM can fragment the buffer and map it via the MMU.
When the buffer is passed from GPU to ARM or vice versa, the VideoCore Host Interface (VCHI) will copy the data from one to the other - that's a hit on memory bandwidth too.
Zero copy uses the vcsm (VideoCore Shared Memory) service to map the GPU side buffer into the ARM memory. VCHI then needs to do no (zero) copying as the buffer is passed across, although it does require cache flushing/invalidation if mapped via the cache.
It only really makes sense on input or output ports (not control) which are using large buffers on the ARM side, so typically raw images (not MMAL_ENCODING_OPAQUE, which is only 128 bytes per buffer).
To use it,
- #include "mmal_util_params.h"
- #include "mmal_util.h"
- mmal_port_parameter_set_boolean(port, MMAL_PARAMETER_ZERO_COPY, MMAL_TRUE);
- use mmal_port_pool_create instead of mmal_pool_create when allocating pools. This allows the core to check this parameter and use an appropriate allocator.
That should be it.
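Putting those steps together, here's a minimal sketch of the setup. It assumes you already have a component and one of its input/output ports in hand; the function name `setup_zero_copy_pool` and the abbreviated error handling are mine, not from any existing app.

```c
/* Sketch: enable zero copy on a port and allocate its pool.
 * Assumes the userland MMAL headers are available. */
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_util.h"
#include "interface/mmal/util/mmal_util_params.h"

static MMAL_POOL_T *setup_zero_copy_pool(MMAL_PORT_T *port)
{
   MMAL_STATUS_T status;

   /* Ask the core to allocate this port's buffers via vcsm, so the
    * GPU-side buffer is mapped into ARM memory instead of copied. */
   status = mmal_port_parameter_set_boolean(port,
                                            MMAL_PARAMETER_ZERO_COPY,
                                            MMAL_TRUE);
   if (status != MMAL_SUCCESS)
      return NULL;

   /* mmal_port_pool_create (not mmal_pool_create) lets the core see
    * the ZERO_COPY parameter and use the appropriate allocator. */
   return mmal_port_pool_create(port, port->buffer_num, port->buffer_size);
}
```

Call this after the port format has been committed (so buffer_num/buffer_size are settled) but before enabling the port.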
If passing the buffers back to the GPU, do ensure you set ZERO_COPY on both the source and sink ports to avoid weirdness.
Be very careful if you ever call mmal_buffer_header_mem_lock/unlock - the core does magic to the buffer->data field to map it back and forth between a userspace pointer and VC buffer handle. Get it messed up and bad things will happen.
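To illustrate that lock/unlock dance, here's a sketch of CPU access to a zero-copy buffer inside a port callback. The function name `process_buffer` is illustrative; the key point is that `buffer->data` is only a valid userspace pointer between the lock and unlock calls.

```c
/* Sketch: CPU access to a zero-copy buffer. With ZERO_COPY set,
 * buffer->data holds a VC handle until locked; mem_lock swaps in a
 * userspace pointer, mem_unlock swaps the handle back. */
#include "interface/mmal/mmal.h"

static void process_buffer(MMAL_BUFFER_HEADER_T *buffer)
{
   if (mmal_buffer_header_mem_lock(buffer) != MMAL_SUCCESS)
      return;

   /* buffer->data is now a valid userspace pointer. */
   uint8_t *data = buffer->data + buffer->offset;
   /* ... read/write data[0..buffer->length-1] here ... */
   (void)data;

   /* Restore the VC handle before the buffer goes back to the GPU. */
   mmal_buffer_header_mem_unlock(buffer);
}
```

Never stash the pointer and use it after unlocking, and never send a still-locked buffer back to the GPU.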
I've been intending to switch raspistillyuv and raspividyuv over to zero copy for a while, but it hasn't been a high-priority task. My in-progress app for talking to the Toshiba TC358743 HDMI-to-CSI2 chip uses zero copy, as it was pulling full RGB24 frames back from the GPU at 30fps. It can still do that, but now also H264-encodes the stream.