I think this is better handled on the ARM through libav/ffmpeg.
"Why libav/ffmpeg Is Not The Answer"
Issues with libav/ffmpeg:
* Results in duplication of media data that hurts both memory bandwidth and application memory footprint (both are scarce on embedded devices such as a Raspberry Pi): media is read into memory, demuxed and written to memory, then *copied to memory again* to fill an OMX buffer, so 1GB of media turns into 3GB. Ideally the OMX buffers would point at the original data, eliminating any copying, but the buffer-alignment requirements of codecs make this impractical in real-world scenarios.
* Introduces (lots of!) unnecessary complexity to a project, raising the barrier to entry: using libav/ffmpeg is not the goal; using media is. Compiling it (*see the licensing points below), binding libav/ffmpeg to higher-level languages, introducing a threaded data pump to process the data, and learning another API are all eliminated by providing an OMX demux component.
* API instability: libav/ffmpeg's API is a moving target; code written today may not work against the next version of the library. The internet is littered with libavcodec samples/tutorials that no longer work. OpenMAX code is more future-proof.
* License: GPL/LGPL (not everything is open source or runs on Linux). This is important enough that the Raspberry Pi source is dual-licensed.
* Patent/licensing issues with unused components: requires specialized builds to get 'just what you need'.
* Better to solve the problem than to "make it someone else's problem": an OMX demuxer solves the issue for everyone, instead of making everyone solve it.
* Possible Google Summer of Code project, "OMX.raspberrypi.demux" (education: learn OpenMAX and component authoring; provides a reference for others wanting to learn). There is no technical issue preventing this from residing entirely on the ARM side, in a different library from the vendor-supplied components: GPU/SIMD offers no substantial benefit to demuxing, which is mostly memory manipulation.
* It doesn't have to be done all at once: OMX.raspberrypi.demux.mp4, OMX.raspberrypi.demux.mkv (WebM), OMX.raspberrypi.demux.avi, OMX.raspberrypi.demux.ts
* Will provide the same advantages for RaspberryPI.Next (not a one-time-use scenario)
* Establishes the groundwork for future *audio* codecs, which are also being moved ARM side: e.g. OMX.raspberrypi.decode.mp3.
(As a side note, while it's understandable that the OMX vendor wishes to move audio decode ARM side for their product line, the single-core ARMv6 CPU used in the RPi does not offer a NEON SIMD unit like the current multicore ARMv7 offerings. Is GPU-accelerated OpenMAX *DL* (not a typo: *D*L) available for the SoC in the RPi?)
In conclusion, this proposal not only delivers performance benefits but also aligns with the Raspberry Pi Foundation's goal of education. It provides a path forward for OpenMAX on both current and future Raspberry Pi hardware, and is therefore deserving of consideration.
I would like to hear comments and suggestions on this, especially if anyone is interested in collaborating on a proof-of-concept or similar. Perhaps someone can recommend alternatives to libav/ffmpeg that could readily be adapted to write directly into OMX buffers.