auvidea wrote:The only issue is that it only supports raspivid with an input resolution of 1080p30 at this time. Hopefully we get some support from the Raspberry Foundation to get other modes supported.
Er, hello! I'm probably the closest person to the Pi Foundation/Trading that might work with you on this. I'd posted after your last message, but you seem to have ignored it.
The support for that chip in the firmware is NOT officially maintained, as Pi Towers have never released it as a product. If it ever breaks, then don't expect much sympathy or a rapid response.
There is a route forward already open to you using components that will be supported (CSI-2 receiver peripheral, and you can use the ISP to resize/format convert now). I've already offered support on that within time constraints (I am not employed by Pi Towers). I am in the process of creating a userland app that creates the basic pipeline to receive the data, process it, and slap it on the screen. Video encode can follow.
I would like to sort it out to use the official kernel driver, but I am missing some information on how the pipeline gets set up. Please (anyone!) advise if you have any extra information. I'm happy to switch to private message or email for that, rather than having public discussions.
For those asking questions, the data is received from the Toshiba chip as RGB888, converted to YUV4:2:0 and potentially resized in the ISP, and that is fed to the encoder. So output is the same selection as for the normal camera - MJPEG or H264.
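To make the format conversion concrete, here is some illustrative arithmetic (not firmware code) for the per-frame sizes of the two formats mentioned, assuming 1920x1080 active pixels:

```python
# Per-frame sizes for the formats in the pipeline described above.
WIDTH, HEIGHT = 1920, 1080

def frame_bytes(bits_per_pixel):
    """Bytes per frame for a given average bits-per-pixel."""
    return WIDTH * HEIGHT * bits_per_pixel // 8

rgb888 = frame_bytes(24)   # 8 bits each for R, G, B from the Toshiba chip
yuv420 = frame_bytes(12)   # full-res Y plus quarter-res U and V, as fed to the encoder
print(f"RGB888 frame:   {rgb888:>9,} bytes")
print(f"YUV4:2:0 frame: {yuv420:>9,} bytes")
```

The ISP conversion halves the data per frame before it reaches the encoder, on top of being the format the encoder actually wants.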
1080p60 is roughly 3 Gbit/s in RGB888 (1920 x 1080 x 24 bits x 60fps, before blanking or protocol overhead). Each CSI-2 lane can carry 1Gbit/s of data, and the normal Pi only has 2 lanes exposed, so insufficient bandwidth there. The Compute Module does expose the 4 lane interface on the CAM1 interface. There is a YUV4:2:2 mode from the chip (16 bits/pixel) which would just squeeze into 2 lanes.
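A back-of-envelope check of those lane numbers (active pixels only, ignoring blanking and CSI-2 protocol overhead, so real figures are a bit higher):

```python
# Required throughput vs. CSI-2 lane capacity for 1080p60 in each format.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
LANE_GBPS = 1.0  # per the post: ~1 Gbit/s per CSI-2 lane

def gbits_per_sec(bits_per_pixel):
    """Raw active-pixel throughput in Gbit/s for the given pixel depth."""
    return WIDTH * HEIGHT * bits_per_pixel * FPS / 1e9

for name, bpp in [("RGB888", 24), ("YUV4:2:2", 16)]:
    need = gbits_per_sec(bpp)
    print(f"{name}: {need:.2f} Gbit/s -> {need / LANE_GBPS:.2f} lanes worth")
```

RGB888 needs about 3 lanes' worth, hence the 4-lane CAM1 port on the Compute Module; YUV4:2:2 comes in just under 2 Gbit/s, which is the "just squeezes into 2 lanes" case.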
I'm not sure what you then want to do with 1080p60 though - the H264 encoder supports a max of 1080p30.
The fun side of this chip is that most pipelines are expecting cameras, so the app sets the resolution. With this chip if you program in multiple resolutions via the EDID, then the source can change the resolution on you. Getting a framework to handle that nicely is not trivial. That is why there is only one resolution programmed into the EDID. If we can use the official kernel driver then that problem becomes theirs, not mine!
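A toy sketch (purely illustrative, nothing to do with the real firmware) of why source-driven mode changes are awkward for a camera-style pipeline, where everything downstream is sized for the mode the app chose:

```python
# Camera-style pipelines assume the app picks the mode once; an HDMI source
# with multiple EDID modes can change resolution underneath you at any time.
class Pipeline:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.rebuilds = 0

    def on_source_mode_change(self, width, height):
        if (width, height) != (self.width, self.height):
            # Buffers, ISP config and encoder are all sized for the old mode,
            # so everything downstream must be torn down and recreated.
            self.width, self.height = width, height
            self.rebuilds += 1

p = Pipeline(1920, 1080)
for mode in [(1920, 1080), (1280, 720), (1920, 1080)]:
    p.on_source_mode_change(*mode)
print(f"pipeline rebuilt {p.rebuilds} times")
```

Pinning a single mode in the EDID avoids the rebuild path entirely, which is exactly why only one resolution is programmed in.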
Adding in audio support is also entertaining - the chip can do I2S, or it can send it over the CSI-2 bus in an appropriately marked format. If done over CSI-2 then you've suddenly got to manage an audio device as well as video, at which point my head starts hurting with the Linux subsystems.