On what basis have you determined 50-60ms to be reasonable?
Not quite. Image buffers are written to SDRAM, so there can be any number of buffers between the source (camera) and the encoder. There is no timing requirement to achieve 30fps through the pipeline, as the source can just move on to filling the next set of buffers whilst the codec churns away.

kotnia wrote: ↑Thu Apr 05, 2018 7:35 pm
And to reanimate the topic a bit, I'd like to answer this question:
On what basis have you determined 50-60ms to be reasonable?
- The encoding routine can be executed in parallel with the ISP routine (with some shift, of course)
- otherwise, if those stages were sequential, an iteration would cost 76 ms, making it impossible to obtain 30 fps
Again, not quite. Each phase of encoding must be complete within the 33ms frame period, but the encoder can be pipelined. Predominantly this is the CABAC or CAVLC entropy encoding being separated out from the motion estimation and coding phase. The overall latency for a 1080P frame through the codec is around 40ms: ~32ms in motion estimation and ~8ms in entropy coding. (There are also two phases to the motion estimation, and I'm not sure whether the first-phase hardware block can be passed data piecemeal.)

kotnia wrote:
- The encoding routine has to take 33 ms on average (maybe with some jitter)
- again, it wouldn't be possible to encode 30 frames per second otherwise
Yes, in theory we could overlap the camera and the encode. The reality is that we don't, and the extra effort required to save ~15ms (after overheads) probably isn't worth expending. I'm not aware of any other SoCs that offer that much parallelism; if they use V4L2 then everything is frame based!

kotnia wrote:
- If MMAL_PARAMETER_MB_ROWS_PER_SLICE works fine, then the codec should be able to encode slices separately without waiting for the entire frame to complete
- we can even expect a Periodic Intra Refresh feature in a modern H264 codec; at least, there are many IP providers offering hardware codecs with those features
- the MMAL API allows periodic intra refresh to be configured
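On that last point: MMAL does expose an intra-refresh parameter. A sketch of how it is typically enabled (modelled on raspivid's code path; the `vc.ril.video_encode` component setup is assumed to exist elsewhere, and this is untested here):

```c
/* Hedged sketch only: enable cyclic periodic intra refresh on an
 * already-configured MMAL H264 encoder component. */
#include "interface/mmal/mmal.h"
#include "interface/mmal/mmal_parameters_video.h"

static MMAL_STATUS_T enable_cyclic_intra_refresh(MMAL_COMPONENT_T *encoder)
{
    MMAL_PARAMETER_VIDEO_INTRA_REFRESH_T param =
        {{MMAL_PARAMETER_VIDEO_INTRA_REFRESH, sizeof(param)}};

    /* Read the current settings first so the fields we don't touch
     * keep their firmware defaults. */
    mmal_port_parameter_get(encoder->output[0], &param.hdr);

    /* Refresh a band of macroblocks as intra on each frame instead of
     * emitting periodic full I-frames. */
    param.refresh_mode = MMAL_VIDEO_INTRA_REFRESH_CYCLIC;

    return mmal_port_parameter_set(encoder->output[0], &param.hdr);
}
```

This only runs against the VideoCore firmware on a Pi, so treat it as illustration of the parameter rather than a verified snippet.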
That given, I don't see fundamental reasons why the complete pipeline in VideoCore can't look like this: