Mark events should work, but as you say you need to have parsed the bitstream first.
You can't create your own OMX components running on the VPU, so any tunnelling to your own code would pull the data back to the ARM. If you're doing that, it's just as easy to handle the buffers yourself off the "normal" buffer events. Tunnelling via the ARM also disables the performance optimisations that are available when the tunnel remains on the VPU.
Personally I'd switch from IL to MMAL. With MMAL you can request that buffers out of the decoder be of type MMAL_ENCODING_OPAQUE, which are small buffers passing references to VPU objects. Those can be brought back to the ARM so your app can look at the headers, and then sent back down to the VPU for further processing/rendering. The mmal_connection util functions almost do what you want, but the callback only passes the connection handle and not the buffer details (which have been pushed into a queue). There's nothing stopping you from copying and amending that functionality, though. Most of the time I've used connections with the MMAL_CONNECTION_FLAG_TUNNELLING flag, which is a special flag telling MMAL to do all the callbacks on the VPU rather than the ARM. That minimises the latency, but means you can't get any callbacks on the ARM.
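As a rough sketch of the opaque-buffer/tunnelling setup described above (hypothetical, untested fragment: assumes a Raspberry Pi with the userland MMAL headers and libraries, error handling and cleanup trimmed for brevity):

```c
/* Sketch: request opaque output from the video decoder and connect it
 * to the video renderer with a tunnelled MMAL connection. */
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_connection.h"
#include "interface/mmal/util/mmal_default_components.h"

static void setup_opaque_tunnel(void)
{
   MMAL_COMPONENT_T *decoder = NULL, *renderer = NULL;
   MMAL_CONNECTION_T *conn = NULL;

   mmal_component_create(MMAL_COMPONENT_DEFAULT_VIDEO_DECODER, &decoder);
   mmal_component_create(MMAL_COMPONENT_DEFAULT_VIDEO_RENDERER, &renderer);

   /* Opaque encoding: small buffers carrying references to VPU image
    * objects rather than the pixel data itself. */
   MMAL_PORT_T *out = decoder->output[0];
   out->format->encoding = MMAL_ENCODING_OPAQUE;
   mmal_port_format_commit(out);

   /* Tunnelled connection: all callbacks happen on the VPU, minimising
    * latency, but the ARM never sees the buffers.  Drop the tunnelling
    * flag if you need to inspect buffers on the ARM before passing them
    * on. */
   mmal_connection_create(&conn, out, renderer->input[0],
                          MMAL_CONNECTION_FLAG_TUNNELLING |
                          MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
   mmal_connection_enable(conn);
}
```

Without MMAL_CONNECTION_FLAG_TUNNELLING the connection delivers buffers to an ARM-side callback (via the connection's queue), which is where you'd hook in your own header inspection before sending the buffer back down.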
https://github.com/raspberrypi/userland ... _basic_2.c
is a reasonable example of video decode using MMAL, although it only prints a message saying it has decoded the frame rather than rendering it (see line 341).
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.