brianm734 wrote: The pins are all functional on both connectors.

If that is true, I look forward to official confirmation of this, as it is contrary to my understanding of what has been posted here by Broadcom folks, for example jamesh: http://www.raspberrypi.org/phpBB3/viewt ... 85#p124506
I don't need to program the GPU. I can change its behaviour through the CPU from software or through the JTAG connector. There is no need to "program" anything - the pins are already assigned, all I need are drivers for the Linux distro I use.
mahjongg wrote: The camera when it arrives will come with GPU drivers specifically for it, and only it. It is very unlikely the CSI interface used will be able to support two cameras, maybe with a "CSI switch" if such a thing can be made, but not two cameras at the same time.

I asked Gert about this at some point, he gave an answer, let me find it...
brianm734 wrote: I don't see anything in those posts that contradicts my statements. The "GPU binary blob" is all software, controlled by the kernel drivers. The GPU is programmed and controlled through the CPU through the drivers you install.

See http://elinux.org/RPi_BCM2835_GPIOs and http://elinux.org/RPi_BCM2835_Pinout in conjunction with the other links already posted earlier. AIUI the CPU core doesn't have direct access to the SoC pins; it has to access them via the GPU core, and the GPU core determines which 'mode' the SoC pins are set up in. So in theory, if all the CSI and DSI pins were exposed as GPIO lines, you could 'bit-bang' the CSI and DSI protocols over GPIO, but I believe this would be *far* too slow to work (so it would need a GPU-side driver). But not all the relevant pins are exposed as GPIO anyway, so even at the very lowest level there's no way (until support gets added to the GPU firmware blob start.elf) to access these pins from the CPU side.
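To put rough numbers on why bit-banging CSI over GPIO would be far too slow, here's a back-of-the-envelope comparison. The rates below are illustrative assumptions, not measurements: an optimistic ARM-side GPIO toggle rate of a few MHz versus the raw bitrate even a modest sensor produces.

```python
# Rough feasibility check for bit-banging CSI over GPIO.
# All rates below are illustrative assumptions, not measurements.

GPIO_TOGGLE_HZ = 5_000_000      # optimistic ARM-side GPIO toggle rate (~5 MHz)

def frame_bitrate(width, height, bits_per_pixel, fps):
    """Raw bitrate a sensor produces, before any compression."""
    return width * height * bits_per_pixel * fps

# A modest 1280x720 Bayer stream at 10 bits/pixel, 30 fps:
needed = frame_bitrate(1280, 720, 10, 30)

print(f"needed:    {needed / 1e6:.1f} Mbit/s")
print(f"gpio max:  {GPIO_TOGGLE_HZ / 1e6:.1f} Mbit/s (1 bit per toggle)")
print(f"shortfall: {needed / GPIO_TOGGLE_HZ:.0f}x too slow")
```

Even with these generous assumptions the GPIO path is more than 50x short of the required throughput, before accounting for protocol framing or CPU load.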
adhdengineer wrote: Will it be possible to produce your own camera module and use the CSI port to stream raw video data? If not, why not?

No,
mahjongg wrote: No, because it will need very close cooperation with the GPU software, and unless your camera module is 110% the same as the Foundation's, the GPU software will hiccup and die on your module.

Surely it doesn't need that much cooperation with the GPU. The CSI interface is a fairly simple parallel bus with a few control lines. It shouldn't be that hard to set the GPU up to fire off a callback every time an incoming frame is completed, surely.
adhdengineer wrote: Surely it doesn't need that much cooperation with the GPU. The CSI interface is a fairly simple parallel bus with a few control lines. It shouldn't be that hard to set the GPU up to fire off a callback every time an incoming frame is completed, surely.

Egh. Meant serial, said parallel.
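For what it's worth, the "callback per completed frame" API imagined above is easy to sketch in isolation. This is a toy model only; the names (`CsiReceiver`, `on_frame`) are hypothetical, and no such ARM-side driver exists for the real CSI peripheral — the hard part is everything behind it.

```python
# Toy sketch of a "fire a callback per completed frame" API.
# CsiReceiver/on_frame are hypothetical names, not a real driver.

class CsiReceiver:
    def __init__(self):
        self._callbacks = []
        self._buffer = bytearray()

    def on_frame(self, callback):
        """Register a callback invoked with each completed frame."""
        self._callbacks.append(callback)

    def _receive(self, data, end_of_frame=False):
        """Simulates the receive path accumulating line data."""
        self._buffer.extend(data)
        if end_of_frame:
            frame, self._buffer = bytes(self._buffer), bytearray()
            for cb in self._callbacks:
                cb(frame)

rx = CsiReceiver()
frames = []
rx.on_frame(frames.append)
rx._receive(b"\x01\x02")
rx._receive(b"\x03", end_of_frame=True)
print(len(frames), frames[0])   # 1 b'\x01\x02\x03'
```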
adhdengineer wrote: ..... surely it doesn't need that much cooperation with the GPU. The CSI interface is a fairly simple parallel bus with a few control lines. It shouldn't be that hard to set the GPU up to fire off a callback every time an incoming frame is completed, surely.

It's like saying: "My motor connects to my wheels. Any motor has to connect to the wheels. So it must be trivial to take an arbitrary motor and drop it into my car."
gsh wrote: The CSI interface is not as simple as you suggest; for a start, the source of the clock for the interface is in the camera module, not the SoC. Therefore you need to communicate with the device to set up PLLs in the camera to actually start it sending us the data.

So implement that as one API call to configure it?
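To illustrate the kind of setup gsh is describing: the sensor derives its output clock from an external reference via an on-chip PLL, and the PLL dividers have to be programmed (typically over I2C) before any data can appear on the CSI lanes. The register addresses and field layout below are hypothetical, not taken from any real sensor datasheet.

```python
# Illustrative sketch of camera-side PLL setup. Register addresses and
# divider layout are hypothetical, not from a real sensor datasheet.

EXT_CLK_HZ = 24_000_000  # external clock fed to the sensor (assumed)

def pll_output_hz(ext_clk_hz, pre_div, multiplier, post_div):
    """Classic integer-PLL arithmetic: f_out = f_in / pre * mult / post."""
    return ext_clk_hz // pre_div * multiplier // post_div

def pll_init_sequence(pre_div, multiplier, post_div):
    """Hypothetical (register, value) I2C writes to bring the PLL up."""
    return [
        (0x0300, pre_div),     # PLL pre-divider
        (0x0301, multiplier),  # PLL feedback multiplier
        (0x0302, post_div),    # PLL post-divider
        (0x0100, 0x01),        # streaming on -- only now does CSI clock out
    ]

target = pll_output_hz(EXT_CLK_HZ, pre_div=2, multiplier=50, post_div=2)
print(f"pixel clock: {target / 1e6:.0f} MHz")   # pixel clock: 300 MHz
for reg, val in pll_init_sequence(2, 50, 2):
    print(f"i2c write reg=0x{reg:04x} val=0x{val:02x}")
```

The point being: even "one API call" has to hide a sensor-specific register sequence, which is why a driver for one module doesn't transfer to another.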
gsh wrote: Once we start to receive the data then you need to start doing some real work... First of all, camera sensors don't give you useful RGB data but Bayer data, and therefore need significant processing just to get a picture.

I'm sorry. De-Bayering needs "significant processing"? In what world?
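Both sides have a point here. A naive de-Bayer really is only a few lines — the sketch below does a nearest-neighbour demosaic of an RGGB mosaic at half resolution — but production ISPs use edge-aware interpolation plus the lens-shading/denoise/exposure stages gsh lists next, which is where the "significant processing" actually goes.

```python
# Naive nearest-neighbour de-Bayer for an RGGB mosaic: each 2x2 cell
# (R G / G B) collapses to one RGB pixel. Minimal demosaic only; real
# ISP pipelines use edge-aware interpolation and many more stages.

def debayer_rggb(mosaic):
    """mosaic: 2D list of raw values, RGGB pattern, even width/height.
    Returns a half-resolution image as rows of (r, g, b) tuples."""
    rgb = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[0]), 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) // 2  # average both greens
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

raw = [
    [10, 20, 30, 40],   # R G R G
    [22, 50, 38, 60],   # G B G B
]
print(debayer_rggb(raw))   # [[(10, 21, 50), (30, 39, 60)]]
```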
gsh wrote: Then you need to worry about lens shading, motion compensation, auto focus, auto exposure, dead pixel elimination, denoise algorithms (of which there are many).

You don't need lens shading if you plan to make it use interchangeable lenses, or even no lens (as I do).
gsh wrote: If you just want to talk to the CSI interface directly, there is no ARM-side driver to achieve that.

Which is my point: there should be. Then everything else should be built off that. You should not be implementing the entire camera interface as a monolithic block, but as a selection of individual parts that can be exposed externally and used individually.
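The modular design being argued for can be sketched as a pipeline of independently usable stages. Everything here is hypothetical stand-in code — the stage names and `pipeline` helper are illustrations of the design, not any real camera API.

```python
# Sketch of the modular design argued for above: each processing step is
# an independently usable stage, and a pipeline is just a composition of
# the stages you actually want. All names are hypothetical stand-ins.

def raw_capture(frame):          # each stand-in stage just tags the
    return frame                 # frame so composition is visible

def debayer(frame):
    return f"debayer({frame})"

def denoise(frame):
    return f"denoise({frame})"

def auto_exposure(frame):
    return f"ae({frame})"

def pipeline(*stages):
    """Compose stages left-to-right into one callable."""
    def run(frame):
        for stage in stages:
            frame = stage(frame)
        return frame
    return run

# Full pipeline for normal use...
full = pipeline(raw_capture, debayer, denoise, auto_exposure)
# ...or raw CSI access with no processing at all, built from the same parts.
raw_only = pipeline(raw_capture)

print(full("frame0"))       # ae(denoise(debayer(frame0)))
print(raw_only("frame0"))   # frame0
```

With this shape, "just give me the CSI data" is the degenerate one-stage pipeline rather than a feature the monolith has to be taught.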