Reading these recent posts, I have the impression that one cannot simply connect the wires from an HDMI cable to the CSI port, but that you have to use a chip in between (is that the TC358743 referred to earlier in this thread? http://www.toshiba.com/taec/Catalog/Fam ... id=1779480)
I understand Eben wants you to work on many other things, but perhaps this one can be mostly "handed over" to the community if it does not require too much GPU firmware coding?
Does your current prototype require coding on the Toshiba chip? Coding in the GPU firmware?
My project would be about adding realtime augmented-reality insets, like telemetry info, to a video stream coming from a quadcopter's onboard camera. A future version of the quadcopter may carry a Pi and a PiCam, but my current quadcopter has its own camera that streams over a 5.8GHz radio TX as a composite PAL/NTSC signal, which I receive at the ground station and convert to HDMI with a cheap CVBS-to-HDMI converter. If I fed that HDMI stream into the Pi's CSI port via your converter, would I have access to an API that allows realtime insertion of text and vector graphics over that stream, with the result streamed out in realtime over the Pi's HDMI output?
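Just to illustrate the kind of compositing step I have in mind (this is only a sketch of the overlay math, not any actual Pi API; it assumes frames arrive as plain BGR arrays, e.g. from a V4L2 capture device, and the `composite` function and its parameters are my own hypothetical names):

```python
import numpy as np

def composite(frame, overlay, alpha):
    """Alpha-blend a telemetry overlay layer onto a video frame.

    frame, overlay: HxWx3 uint8 BGR arrays.
    alpha: HxW float array in [0, 1]; 1.0 where the overlay is fully
    opaque, 0.0 where the underlying video shows through.
    """
    a = alpha[..., None]  # broadcast the mask over the colour channels
    return (overlay * a + frame * (1.0 - a)).astype(np.uint8)

# Demo on a synthetic grey 720p frame with an opaque green telemetry
# box in the top-left corner (standing in for rendered text/vectors):
frame = np.full((720, 1280, 3), 64, dtype=np.uint8)
overlay = np.zeros_like(frame)
overlay[20:80, 20:300] = (0, 255, 0)   # green box where telemetry goes
alpha = np.zeros((720, 1280))
alpha[20:80, 20:300] = 1.0             # opaque only inside the box
out = composite(frame, overlay, alpha)
```

The real question is whether this per-frame blend can happen on the GPU side at video rate, or whether the CPU would have to touch every pixel as this sketch does.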
Thanks for running that interest poll, and apologies if I am raising yet more questions rather than answering yours.