Camera Interface Specs


by jamesh » Mon Feb 04, 2013 5:55 pm
Hardware_man wrote:JamesH,

I thought about what B1234 said again. If I use 720p mode on my camera (720 x 1280) at 30 fps, I just about make it at USB 2.0 speed:

720 x 1280 x 16 x 30 = 442.4 Mbps, and USB 2.0 is specified for 480 Mbps maximum. Maybe just enough overhead left for V sync, H sync, frame sync, etc. As the Pi is set up now, with simple high-level code commands, maybe in OpenMAX, can I “route” the USB to the input of the H.264 encoder?

Hardware_man


You could, but I doubt the Pi's USB system (even though USB claims 480 Mbps) would be fast enough. There is a LOT of overhead in USB 2.0.
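
For reference, the raw arithmetic here is easy to sanity-check in a few lines of C. This is a back-of-envelope sketch assuming 16 bits per pixel and ignoring blanking intervals; the real-world throughput figure in the comment is a commonly quoted ballpark, not a measurement:

    #include <stdio.h>

    int main(void)
    {
        /* Raw 720p30 bandwidth, assuming 16 bits per pixel (e.g. YUV422) */
        const double width = 1280, height = 720, bpp = 16, fps = 30;
        double raw_mbps = width * height * bpp * fps / 1e6;

        printf("raw video  : %.1f Mbit/s\n", raw_mbps);  /* 442.4 */

        /* USB 2.0 signals at 480 Mbit/s, but packet headers, bit stuffing
           and bus turnaround eat into that; sustained bulk throughput is
           typically quoted somewhere around 250-320 Mbit/s in practice. */
        printf("USB 2.0 bus: 480.0 Mbit/s before protocol overhead\n");
        return 0;
    }
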
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Mon Feb 04, 2013 6:21 pm
When the Foundation did the PCB, did they use controlled dielectric / controlled impedance techniques on the USB differential pair lines from the connectors to the hub and from the hub to the Broadcom chip to maintain the USB 90 ohm differential impedance?
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Mon Feb 04, 2013 9:10 pm
Hardware_man wrote:When the Foundation did the PCB, did they use controlled dielectric / controlled impedance techniques on the USB differential pair lines from the connectors to the hub and from the hub to the Broadcom chip to maintain the USB 90 ohm differential impedance?


No idea - none of that sentence means anything to me. However, it's not the wiring that's the issue - the 700 MHz processor is a bit slow at handling the sheer interrupt volume generated by USB. And there are some software issues in the driver that are being worked on. And the USB protocol has a lot of 'fluff' that greatly reduces the actual data bandwidth.

See the USB Redux thread for more information than you could ever want to have on the USB.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Gert van Loo » Mon Feb 04, 2013 9:23 pm
Hardware_man wrote:When the Foundation did the PCB, did they use controlled dielectric / controlled impedance techniques on the USB differential pair lines from the connectors to the hub and from the hub to the Broadcom chip to maintain the USB 90 ohm differential impedance?

Yes of course!
There was a small 'issue': we needed to add 0-ohm links to switch between model A and B, which is very difficult to do without creating a small impedance anomaly.
Raspberry Pi Engineer & Forum Moderator
Posts: 1983
Joined: Tue Aug 02, 2011 7:27 am
by Hardware_man » Tue Feb 05, 2013 3:19 pm
Jamesh,

Gert van Loo says that proper layout guidelines were followed with the routing of the USB differential pairs. Thus, it is reasonable to assume that if I use only one USB port and do not use the other USB port or the Ethernet, then I can expect to get the full USB 2.0 high speed 480 Mbps. And USB 2.0 does support a streaming data implementation (which I assume would have the least USB overhead). So the numbers are very close. And if it ended up that I had to throw away maybe 10 lines (only get 710 x 1280 instead of 720 x 1280), I could work with that.
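
(One check worth making on the "streaming" point: USB 2.0's streaming transfer type is isochronous, and the spec caps a single high-speed isochronous endpoint at 3 transactions of 1024 bytes per 125 µs microframe. A quick sketch of what that ceiling works out to:)

    #include <stdio.h>

    int main(void)
    {
        /* USB 2.0 high-bandwidth isochronous endpoint: at most 3 x 1024
           bytes per 125 us microframe (8000 microframes per second). */
        const double bytes_per_uframe = 3 * 1024;
        const double uframes_per_sec  = 8000;

        double mbps = bytes_per_uframe * uframes_per_sec * 8 / 1e6;
        printf("isochronous endpoint ceiling: %.1f Mbit/s\n", mbps); /* 196.6 */
        /* ...noticeably short of the ~442 Mbit/s the raw stream needs. */
        return 0;
    }
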

So then, to interface a Sony camera module, all my adaptor board needs to be is an LVDS-to-parallel converter and a microprocessor to take parallel data in and “spit” it out of the microprocessor’s USB port.

Then the question becomes: how well documented is the exact format the USB streaming data would need to be in to feed the H.264 encoder properly? Is this well defined in the OpenMAX documentation?

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Tue Feb 05, 2013 3:45 pm
Hardware_man wrote:Jamesh,

Gert van Loo says that proper layout guidelines were followed with the routing of the USB differential pairs. Thus, it is reasonable to assume that if I use only one USB port and do not use the other USB port or the Ethernet, then I can expect to get the full USB 2.0 high speed 480 Mbps. And USB 2.0 does support a streaming data implementation (which I assume would have the least USB overhead). So the numbers are very close. And if it ended up that I had to throw away maybe 10 lines (only get 710 x 1280 instead of 720 x 1280), I could work with that.

So then, to interface a Sony camera module, all my adaptor board needs to be is an LVDS-to-parallel converter and a microprocessor to take parallel data in and “spit” it out of the microprocessor’s USB port.

Then the question becomes: how well documented is the exact format the USB streaming data would need to be in to feed the H.264 encoder properly? Is this well defined in the OpenMAX documentation?

Hardware_man


Notwithstanding that I have never come across a USB implementation that got to its theoretical max, any problem is more likely to be the speed of the CPU. Consider how many cycles would be needed to get one pixel from the USB to where it needs to go (I don't know that figure, but it's going to be quite a few). Then multiply that by the resolution and framerate, and see if it's within the 700 MHz capability of the CPU. In effect about 25 MBytes/second is needed uncompressed, giving you about 28 cycles per byte to get the pixel data from USB to encoder. Not possible.
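
Putting rough numbers on that cycle budget (a sketch using the ~25 MBytes/s and 700 MHz figures above):

    #include <stdio.h>

    int main(void)
    {
        const double cpu_hz        = 700e6;  /* ARM11 clock */
        const double bytes_per_sec = 25e6;   /* ~25 MBytes/s of raw video */

        /* Every one of those cycles has to cover the USB interrupt path,
           a copy into a buffer, and handing the buffer to the GPU encoder. */
        printf("budget: %.0f CPU cycles per byte\n", cpu_hz / bytes_per_sec);
        return 0;
    }
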
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Tue Feb 05, 2013 4:18 pm
Well, it was a nice idea while it lasted. :cry:

I don’t know the internal structure of the Broadcom chip all that well. Broadcom would probably have to shoot me if I did. :D So if we are talking 16-bit bytes, then I agree with your numbers. I didn’t know if there was some kind of DMA “pipeline” between the USB and the encoder. But if every 16-bit byte has to go into the ALU and back out of the ALU on a bunch of move instructions to get from the USB to the encoder, then I understand why this won’t work.

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by Hardware_man » Tue Feb 05, 2013 6:21 pm
The only thing the CPU has to do is move data from USB to the H.264 input, and move data from the H.264 output to the SD card. I wouldn't ask the CPU to do anything else; shut down all other interrupts. Still impossible?
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Tue Feb 05, 2013 8:22 pm
Hardware_man wrote:The only thing the CPU has to do is move data from USB to the H.264 input, and move data from the H.264 output to the SD card. I wouldn't ask the CPU to do anything else; shut down all other interrupts. Still impossible?


Yes, impossible. USB runs under interrupt, so you have to take that into account. You also have all of Linux taking its share, unless you go bare metal (not fun). There is also getting the data to the encoder, which is on the GPU - that again takes quite a few cycles.

And in modern parlance, bytes are 8 bits. Words could be 2 or 4 or 8 bytes.

There is a reason there are such things as dedicated camera ports to allow massive throughput to where it's needed.

Or you could find a webcam that encodes straight to H.264, which reduces the bandwidth a lot. But the current USB implementation needs work, so high res is still a problem.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by sharix » Tue Feb 05, 2013 10:59 pm
Will it work on revision 1 (or 0?) boards?
Posts: 164
Joined: Thu Feb 16, 2012 11:29 am
Location: Slovenia
by jamesh » Tue Feb 05, 2013 11:04 pm
sharix wrote:Will it work on revision 1 (or 0?) boards?


Will what work? The Foundation camera module? Yes. I'm testing it on a very early 256MB board.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by sharix » Tue Feb 05, 2013 11:05 pm
Yes, the camera.
Someone asked this question in the comments of the blog article. Something to do with JTAG...
Thanks for clarifying.
Posts: 164
Joined: Thu Feb 16, 2012 11:29 am
Location: Slovenia
by redhawk » Tue Feb 05, 2013 11:07 pm
Would it be possible to stream the camera as a video feed on a web page??

Also could you set the resolution to 640x480 and perform digital zoom (using 640x480 pixels of a specific area of the camera sensor) ??

Richard S.
Posts: 3317
Joined: Sun Mar 04, 2012 2:13 pm
Location: ::1
by jackokring » Tue Feb 05, 2013 11:15 pm
I assume such a high data rate port as the camera port could be used as a stereo audio input port? Or even a video genlock and frame grabber input? I'd have more use for the audio myself; I've no need for more big brother cameras, and the state documentation and persecution project. Just fill in your details (which should not get on the internet) here on the school form, and ..... la, la, la ....

http://theoatmeal.com/story/eat_horses
Pi=B256R0USB CL4SD8GB Raspbian Stock. https://sites.google.com/site/rubikcompression/strictly-long https://dl.dropboxusercontent.com/u/1615413/Own%20Work/Leptronics.pdf https://groups.google.com/forum/#!topic/comp.compression/t22ct_BKi9w
Posts: 784
Joined: Tue Jul 31, 2012 8:27 am
Location: London, UK
by jamesh » Tue Feb 05, 2013 11:18 pm
redhawk wrote:Would it be possible to stream the camera as a video feed on a web page??

Also could you set the resolution to 640x480 and perform digital zoom (using 640x480 pixels of a specific area of the camera sensor) ??

Richard S.


You will be able to do that eventually. I'm having trouble getting the 640 (60 and 90 fps) modes to work at the moment, but OmniVision are very helpful. I'm just short of time to look at the problem. We have some image quality issues to sort out first - the driver needs some modification.

I also need to sort out some Linux side demo code on how to use it.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Fri Feb 08, 2013 3:33 pm
JamesH,

When the camera is available, will it be possible to do digital zoom?

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Fri Feb 08, 2013 4:33 pm
Hardware_man wrote:JamesH,

When the camera is available, will it be possible to do digital zoom?

Hardware_man


Should be, yes. Although I've not tried it yet, I see no reason why not. It works on other cameras.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Fri Feb 08, 2013 5:01 pm
Will it be reasonably easy to map control of digital zoom to a Pi serial port?
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by Alfadaz » Thu Feb 14, 2013 9:28 am
You will be able to control it via the GPIO.
Posts: 51
Joined: Tue May 22, 2012 10:18 am
Location: Cwmbran, S.Wales
by Hardware_man » Thu Feb 14, 2013 4:01 pm
Alfadaz,

Thank you for the reply. How will that work? Maybe 4 GPIO pins to control 16 levels of digital zoom, or 8 pins for 256 levels, or something like that?
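
(For illustration only, a minimal sketch of that 4-pin idea: poll four input pins through the Linux sysfs GPIO interface and combine them into a 4-bit zoom level. The pin numbers are made up, and nothing here is tied to any real camera API:)

    #include <stdio.h>

    /* Read one already-exported sysfs GPIO input, e.g. set up beforehand
       with "echo 17 > /sys/class/gpio/export". Returns 0 or 1. */
    static int read_gpio(int pin)
    {
        char path[64], c = '0';
        snprintf(path, sizeof path, "/sys/class/gpio/gpio%d/value", pin);
        FILE *f = fopen(path, "r");
        if (f) { c = (char)fgetc(f); fclose(f); }
        return c == '1';
    }

    int main(void)
    {
        const int pins[4] = {17, 18, 22, 23};   /* hypothetical wiring */
        int level = 0;

        /* Treat the four pins as a 4-bit word: 16 possible zoom levels. */
        for (int i = 0; i < 4; i++)
            level |= read_gpio(pins[i]) << i;

        printf("requested zoom level: %d of 15\n", level);
        return 0;
    }
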

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by geek01 » Fri Feb 15, 2013 5:48 pm
I just wanted to throw in my $0.02.

I believe you are vastly underestimating the market for a digital video-in solution, even without HDCP.

My interest would be based on making a TCP/IP KVM out of a raspberry pi based solution. The existing appliances sell for $300-400. There are open source KVM software packages, but the hardware was always the issue.

$35 - pi
$35 - dvi in (Theoretically)
$20 - case
$5 - cabling (probably dual USB cable that handles both power in and input out)

For $95 you now have a customizable TCP/IP KVM that can be plugged into a keyboard and mouse of its own to be configured.

I want this. I want it very badly. I know many people who work in my industry who would want it too.
Posts: 1
Joined: Fri Feb 15, 2013 5:36 pm
by Hardware_man » Fri Feb 15, 2013 7:49 pm
I would love that for 2 reasons:

1. I want the DVR that I have been talking about in this thread.

2. I worked for NTI for one week (they make KVM boxes). That was the only time in my many years that I walked away after one week hating the owner :D

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by camkego » Fri Feb 15, 2013 11:38 pm
Jamesh,

I'm a video software developer on Linux/Windows, but am ignorant of the Pi's SoC model for video compression.

Typically on systems I work on, raw video [YUV] is dumped into memory [i.e. DMA], and then sent to whatever H.264 or other codec by somehow queueing a pointer to the uncompressed buffer in memory.

Am I right in understanding that the CSI-2 data goes straight to the SoC GPU for H.264 compression, and does not pass through system memory first? And that, as a result, it is much harder to develop a generic H.264 encoder to be used with multiple different camera drivers or front-ends?

Thanks,
Cameron
Posts: 1
Joined: Fri Feb 15, 2013 11:12 pm
by jamesh » Sat Feb 16, 2013 10:04 am
camkego wrote:Jamesh,

I'm a video software developer on Linux/Windows, but am ignorant of the Pi's SoC model for video compression.

Typically on systems I work on, raw video [YUV] is dumped into memory [i.e. DMA], and then sent to whatever H.264 or other codec by somehow queueing a pointer to the uncompressed buffer in memory.

Am I right in understanding that the CSI-2 data goes straight to the SoC GPU for H.264 compression, and does not pass through system memory first? And that, as a result, it is much harder to develop a generic H.264 encoder to be used with multiple different camera drivers or front-ends?

Thanks,
Cameron


The datapath is (usually) CSI-2 straight into the GPU via the ISP. This results in an uncompressed YUV frame in GPU memory. This is then passed to the encoder, the output of which can stay on the GPU or be passed into Arm memory space (usually done using OpenMAX components and buffers). Note that there is only one chunk of memory - at boot it's all allocated to the GPU, which hands over a load to the Arm memory manager. To the GPU the memory is a flat address space; to the Arm it's managed via the memory manager. This disparate view of memory can mean getting data to and from the Arm may need a copy.

However, the encoder is in fact completely generic - it can take whatever data is thrown at it from wherever in the GPU, so it can be used with anything producing YUV frames (maybe some other formats too). So you can have some Arm-side code running that passes buffers to the encoder via the encoder component; OpenMAX (and MMAL - a Broadcom wrapper over OpenMAX) handles getting the data to the GPU and back. Same process for the decode components.

The difficulty with something like an HDMI input is writing a driver for the HDMI chip, and however that is attached. Everything else (bar some tweaks) should come out in the wash.
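
To make that Arm-side flow concrete, here is a heavily condensed sketch in the style of the hello_pi/hello_encode sample: create the Broadcom video_encode OpenMAX component (input port 200, output port 201), feed it YUV buffers, and collect H.264 on the way out. All error handling and the port configuration details are elided, and read_yuv_frame() is a hypothetical stand-in for whatever actually produces the frames:

    #include <stdio.h>
    #include "bcm_host.h"
    #include "ilclient.h"

    /* Stand-in for whatever produces raw frames (file, network, ...);
       returns the number of bytes written into dest. */
    extern int read_yuv_frame(unsigned char *dest);

    int main(void)
    {
        bcm_host_init();

        ILCLIENT_T *client = ilclient_init();
        OMX_Init();

        /* Create the hardware H.264 encoder component. */
        COMPONENT_T *enc = NULL;
        ilclient_create_component(client, &enc, "video_encode",
                                  ILCLIENT_DISABLE_ALL_PORTS |
                                  ILCLIENT_ENABLE_INPUT_BUFFERS |
                                  ILCLIENT_ENABLE_OUTPUT_BUFFERS);

        /* Elided: set frame size/format via OMX_IndexParamPortDefinition
           on port 200, select OMX_VIDEO_CodingAVC via
           OMX_IndexParamVideoPortFormat on port 201, set a bitrate. */

        ilclient_change_component_state(enc, OMX_StateIdle);
        ilclient_enable_port_buffers(enc, 200, NULL, NULL, NULL);
        ilclient_enable_port_buffers(enc, 201, NULL, NULL, NULL);
        ilclient_change_component_state(enc, OMX_StateExecuting);

        FILE *outf = fopen("test.h264", "wb");

        for (;;) {
            /* Hand one uncompressed frame to the GPU encoder... */
            OMX_BUFFERHEADERTYPE *in = ilclient_get_input_buffer(enc, 200, 1);
            in->nFilledLen = read_yuv_frame(in->pBuffer);
            OMX_EmptyThisBuffer(ILC_GET_HANDLE(enc), in);

            /* ...then take back an output buffer, save any H.264 bytes
               in it, and return it to the encoder to be refilled.
               (Buffer recycling and the codec-config flag are glossed
               over here; hello_encode shows the full dance.) */
            OMX_BUFFERHEADERTYPE *out = ilclient_get_output_buffer(enc, 201, 1);
            if (out->nFilledLen)
                fwrite(out->pBuffer, 1, out->nFilledLen, outf);
            OMX_FillThisBuffer(ILC_GET_HANDLE(enc), out);
        }
    }
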
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11541
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Sat Feb 16, 2013 6:27 pm
JamesH,

As I have said, I would convert HDMI (or maybe DVI or even analog component) to CSI-2 in an add-on hardware board. I was under the impression that the problem was getting through the ISP. Do I understand correctly that the ISP is the Image Signal Processor, and that this is where a camera CCD is tuned? The ISP was not designed with the expectation of a video source that is already tuned, so there is no existing “default” tuning that is essentially a “straight pass through”?

So you would have to manually “tune” the ISP for already-tuned video; you can’t just “call default tuning”. If I have this right, creating a “default tuning” would be as much work as tuning for any specific CCD, and this is the “driver” that you are talking about?

Is this about right?

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm