For my open-source AirPlay mirroring receiver RPiPlay, I'm looking for a future-proof video and audio decoding and rendering API that
- Uses the video decoding hardware
- Supports different audio sinks such as HDMI/Analog/ALSA (DACs and USB audio interfaces)
- Could easily be adapted to H.265 should the need arise with a new AirPlay version
- (Possibly also works on generic desktop Linux systems, using whatever hardware acceleration is available)
In numerous places, I found recommendations for FFmpeg, which generally seems promising: its abstraction means the same code can work with a variety of input formats and decoding back-ends. However, I've completely failed to find enough information about its API, so I hope someone can answer the questions below regarding FFmpeg/libav on the Raspberry Pi.
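For context, here's my rough understanding of how the decoding part alone would work with the libav* API. Corrections are welcome; in particular, "h264_mmal" is only my guess at the hardware decoder name a Pi-enabled FFmpeg build provides:

```c
#include <libavcodec/avcodec.h>

/* My rough understanding of decoding with libavcodec -- possibly wrong.
 * "h264_mmal" is my guess at the Pi's hardware H.264 decoder; plain
 * "h264" should be the software fallback on generic Linux. */
static AVCodecContext *open_h264_decoder(void) {
    const AVCodec *codec = avcodec_find_decoder_by_name("h264_mmal");
    if (!codec)
        codec = avcodec_find_decoder(AV_CODEC_ID_H264); /* software fallback */
    if (!codec)
        return NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (ctx && avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}

/* Feed one H.264 packet, receive (at most) one decoded frame. */
static int decode(AVCodecContext *ctx, AVPacket *pkt, AVFrame *out) {
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;
    return avcodec_receive_frame(ctx, out); /* AVERROR(EAGAIN) = needs more input */
}
```

What I can't figure out is everything that comes after the decode call, which is what my questions are about: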
1) Can FFmpeg/libav be used not only for decoding, but also for rendering video and audio? Is there an API to set up a pipeline similar to what's possible with OpenMAX, where, once set up, I only need to feed H.264 frames into the pipeline, which then automatically renders the decoded frames to the screen? (The first sketch below illustrates the kind of API I mean.)
2) How does A/V sync work with FFmpeg/libav? It took me some time to figure out how to use OpenMAX's clock component for syncing the video and audio rendering, video clock as the reference and all, and I'm hoping a similar facility exists in FFmpeg/libav. (The second sketch below shows what I suspect I'd otherwise have to implement by hand.)
3) Does anyone know of simple sample projects that demonstrate the concepts from 1) and 2) with FFmpeg/libav? Among the flood of command-line tutorials, the only material I could find on the C API uses the library for decoding only and renders via OpenMAX on the Pi.
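To make question 1) concrete, this is the push-style API I'm hoping exists. Every name in this sketch is invented by me purely to illustrate the idea; as far as I know, none of these functions are real FFmpeg API:

```c
/* Purely hypothetical -- invented names, NOT real FFmpeg functions. This is
 * the OpenMAX-like pattern I'd love to find: set up a pipeline once, then
 * just push raw H.264 data and have it decoded and rendered automatically. */
void *pipeline = av_pipeline_create("h264 -> screen, aac -> hdmi audio");

for (;;) {
    uint8_t *nal;    /* next H.264 NAL unit from the AirPlay stream */
    size_t nal_len;
    int64_t pts;
    receive_airplay_frame(&nal, &nal_len, &pts); /* placeholder for my receiver code */
    av_pipeline_push(pipeline, nal, nal_len, pts); /* decode + render internally */
}
```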
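And for question 2), this is the kind of manual PTS-based sync loop I assume I'd have to write myself if no clock facility exists. render_video_frame() is a placeholder for whatever rendering path I end up with, and I'm only guessing at the overall approach:

```c
#include <libavutil/frame.h>
#include <libavutil/rational.h>
#include <libavutil/time.h>

extern void render_video_frame(const AVFrame *frame); /* placeholder renderer */

/* My guess at manual A/V sync: convert the frame's PTS into seconds and
 * sleep until a wall clock (started at playback begin) catches up. A real
 * implementation would presumably sync against the audio clock instead. */
static void present_frame(const AVFrame *frame, AVRational time_base,
                          int64_t playback_start_us) {
    double pts_sec = frame->pts * av_q2d(time_base);
    double now_sec = (av_gettime() - playback_start_us) / 1e6;
    double delay_sec = pts_sec - now_sec;
    if (delay_sec > 0)
        av_usleep((unsigned)(delay_sec * 1e6));
    render_video_frame(frame); /* late frames render immediately (or get dropped) */
}
```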
I'd highly appreciate any hints!