(I guess the following is for dom and/or jamesh).
I'd like to use the R.Pi as a live transcoding system, using the GPU-accelerated codec blocks (through the OMX-IL layer provided by Broadcom as part of the VideoCore SDK). I would have one incoming 720p H264/AAC stream at ~2Mbps, and would like to re-encode it on the fly into two outgoing H264/AAC streams: one 480p at ~900Kbps and one 360p at ~500Kbps. The incoming stream would of course be properly constrained and framed to accommodate the hw decoder (Annex-B framing for AVC, ADTS prefixes for AAC, etc.). My questions are:
- Is the VC4 embedded in the R.Pi/BCM GPU powerful enough to handle such a configuration (one 720p decode plus simultaneous 480p and 360p encodes)?
- What about the memory needed for data buffers and tables? Is there enough on the board? (I'm already using a 128MB/128MB memory split, so 128MB is dedicated to the GPU.)
- Using OMX-IL, is it possible to instantiate the OMX.broadcom.video_encode component multiple times (at least twice in my case)? And what about OMX-IL tunnels between the OMX.broadcom.video_decode, OMX.broadcom.video_splitter and OMX.broadcom.video_encode components -- would those work as expected?
- I know that AVC decoding and encoding are available as accelerated OMX-IL components, but what about AAC? Is it also available accelerated, or do I have to do it in software on the ARM CPU (or perhaps not re-encode the audio at all and just remux it, if the CPU isn't powerful enough)?
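To make the third question concrete, here is roughly the IL client code I'd expect to write (untested sketch; the component names are from the Broadcom docs, but the port indices are my reading of the SDK headers and the error handling is stripped down, so treat the details as assumptions):

```c
/* Sketch: instantiate two encoder instances and tunnel
 * video_decode -> video_splitter -> 2x video_encode.
 * Assumed port indices: decoder out = 131, splitter in = 250,
 * splitter outs = 251/252, encoder in = 200. */
#include <OMX_Core.h>

static OMX_HANDLETYPE decode, splitter, encode_480, encode_360;
static OMX_CALLBACKTYPE callbacks; /* event/buffer handlers omitted */

OMX_ERRORTYPE build_pipeline(void)
{
    OMX_ERRORTYPE err;

    if ((err = OMX_Init()) != OMX_ErrorNone)
        return err;

    /* Can the encoder be instantiated twice like this? */
    err = OMX_GetHandle(&decode, "OMX.broadcom.video_decode",
                        NULL, &callbacks);
    if (err == OMX_ErrorNone)
        err = OMX_GetHandle(&splitter, "OMX.broadcom.video_splitter",
                            NULL, &callbacks);
    if (err == OMX_ErrorNone)
        err = OMX_GetHandle(&encode_480, "OMX.broadcom.video_encode",
                            NULL, &callbacks);
    if (err == OMX_ErrorNone)
        err = OMX_GetHandle(&encode_360, "OMX.broadcom.video_encode",
                            NULL, &callbacks);
    if (err != OMX_ErrorNone)
        return err;

    /* decoder output -> splitter input */
    err = OMX_SetupTunnel(decode, 131, splitter, 250);
    if (err != OMX_ErrorNone)
        return err;

    /* splitter outputs -> the two encoder inputs */
    err = OMX_SetupTunnel(splitter, 251, encode_480, 200);
    if (err == OMX_ErrorNone)
        err = OMX_SetupTunnel(splitter, 252, encode_360, 200);
    return err;
}
```

(After this the components would still need their port formats configured and to be moved through Idle to Executing, but the above is the part I'm unsure the firmware supports.)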
Thanks for your reply,