Right now I have a working version that does very fast HW decoding of H.264 video (up to 50 fps), but the rendering part is not fast enough (around 24 fps).
I'm using OpenGL ES 3.0, and the program runs as a Wayland client on top of Weston, which uses the DRM backend.
The way I render each frame is:
- I use ffmpeg for decoding. My program works with either the MMAL decoder or the Video4Linux (V4L2) decoder.
- I receive 3 buffers containing the YUV-encoded image (one per plane).
- I copy those buffers into OpenGL textures using glTexSubImage2D.
- I convert YUV to RGB in the fragment shader for better performance.
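For reference, the conversion in my fragment shader is the standard BT.601 limited-range YUV→RGB math. Here is a minimal sketch of it (the exact coefficients are an assumption on my part; depending on the decoder's color range/matrix you may need full-range or BT.709 values), along with an equivalent CPU reference function I use for sanity checks:

```c
#include <assert.h>

/* GLSL ES 3.0 fragment shader doing the YUV->RGB conversion on the GPU.
 * One sampler per plane; BT.601 limited-range coefficients. */
static const char *frag_src =
    "#version 300 es\n"
    "precision mediump float;\n"
    "uniform sampler2D tex_y, tex_u, tex_v;\n"
    "in vec2 v_uv;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    float y = texture(tex_y, v_uv).r - 0.0625;\n" /* 16/256 */
    "    float u = texture(tex_u, v_uv).r - 0.5;\n"
    "    float v = texture(tex_v, v_uv).r - 0.5;\n"
    "    color = vec4(1.164 * y + 1.596 * v,\n"
    "                 1.164 * y - 0.392 * u - 0.813 * v,\n"
    "                 1.164 * y + 2.017 * u,\n"
    "                 1.0);\n"
    "}\n";

typedef struct { int r, g, b; } rgb_t;

/* Round to nearest and clamp to the 8-bit range. */
static int clamp255(float x)
{
    if (x < 0.0f)   return 0;
    if (x > 255.0f) return 255;
    return (int)(x + 0.5f);
}

/* CPU reference implementing the same BT.601 limited-range math
 * as the shader above, on 8-bit samples. */
static rgb_t yuv_to_rgb(int y, int u, int v)
{
    float yf = 1.164f * (float)(y - 16);
    float uf = (float)(u - 128);
    float vf = (float)(v - 128);
    rgb_t c = {
        clamp255(yf + 1.596f * vf),
        clamp255(yf - 0.392f * uf - 0.813f * vf),
        clamp255(yf + 2.017f * uf),
    };
    return c;
}
```

Limited-range video black (Y=16) maps to RGB 0 and video white (Y=235) to RGB 255 with this math, which is a quick way to verify the range assumption is right.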
If I'm correct, it should be possible to avoid the copying altogether by using DMA buffers. However, I've never worked with DMA buffers, and the information I've found on how to use them, let alone in conjunction with OpenGL, is very incomplete, so I'm still lost.
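From what I've pieced together so far (please correct me if this is wrong): the V4L2 decoder can export each decoded frame as a dma-buf fd via VIDIOC_EXPBUF, that fd can be wrapped in an EGLImage using the EGL_EXT_image_dma_buf_import extension, and the EGLImage can be bound to a texture with glEGLImageTargetTexture2DOES, so the GPU samples the decoder's buffer directly with no copy. Below is my sketch of building the eglCreateImageKHR attribute list for a 3-plane YUV420 frame; the constant values are copied from EGL/eglext.h and drm/drm_fourcc.h so the snippet stands alone, and the fds, offsets, and pitches would come from the decoder:

```c
#include <stddef.h>

/* Values copied from EGL/eglext.h and drm/drm_fourcc.h so this sketch
 * compiles standalone; a real program should include those headers. */
#define EGL_WIDTH                     0x3057
#define EGL_HEIGHT                    0x3056
#define EGL_LINUX_DRM_FOURCC_EXT      0x3271
#define EGL_DMA_BUF_PLANE0_FD_EXT     0x3272
#define EGL_DMA_BUF_PLANE0_OFFSET_EXT 0x3273
#define EGL_DMA_BUF_PLANE0_PITCH_EXT  0x3274
#define EGL_NONE                      0x3038
#define DRM_FORMAT_YUV420             0x32315559 /* fourcc 'YU12' */

/* Fill attribs (room for at least 25 ints) for eglCreateImageKHR with
 * target EGL_LINUX_DMA_BUF_EXT. Per-plane fd/offset/pitch come from the
 * decoder (e.g. VIDIOC_EXPBUF on the V4L2 capture buffers). The plane-N
 * attribute tokens are consecutive triples, hence the "+ 3 * p". */
static size_t build_yuv420_dmabuf_attribs(int *attribs,
                                          int width, int height,
                                          const int fd[3],
                                          const int offset[3],
                                          const int pitch[3])
{
    size_t n = 0;
    attribs[n++] = EGL_WIDTH;                attribs[n++] = width;
    attribs[n++] = EGL_HEIGHT;               attribs[n++] = height;
    attribs[n++] = EGL_LINUX_DRM_FOURCC_EXT; attribs[n++] = DRM_FORMAT_YUV420;
    for (int p = 0; p < 3; p++) {
        attribs[n++] = EGL_DMA_BUF_PLANE0_FD_EXT     + 3 * p; attribs[n++] = fd[p];
        attribs[n++] = EGL_DMA_BUF_PLANE0_OFFSET_EXT + 3 * p; attribs[n++] = offset[p];
        attribs[n++] = EGL_DMA_BUF_PLANE0_PITCH_EXT  + 3 * p; attribs[n++] = pitch[p];
    }
    attribs[n++] = EGL_NONE;
    return n;
}

/* Intended usage (needs a live EGL display/context; the KHR/OES entry
 * points are obtained through eglGetProcAddress):
 *
 *   EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
 *                                       EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
 *   glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
 *   glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, img);
 *
 * With GL_TEXTURE_EXTERNAL_OES the driver handles the YUV->RGB conversion,
 * and the shader samples it through a samplerExternalOES instead of three
 * separate planes. */
```

I haven't verified this on my target hardware yet, so I'd especially appreciate confirmation that this is the intended zero-copy path.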
Could anyone point me in the right direction? Am I at least on the right track? Is there any literature or other source with concrete examples of this?