Camera Interface Specs


by jamesh » Fri Dec 21, 2012 9:17 am
Hardware_man wrote:Jamesh,

The only reason that I considered using the HDMI is that you thought that you already had a driver. But I would go back to my original idea of using the CSI-2 port. You tell me what works best for you.
I don’t know for a fact if your HDMI port is an input only, output only, or configurable to be either. I know for a fact the CSI-2 port is an input.
To avoid trying to make this all “automatic”, just provide a method where the user has to enter all of the configuration information:
• Resolution
• FPS
• Aspect ratio
• RGB or Y Cb Cr
• Etc
Also, let the user select the encoding algorithm, MPEG-2 or H.264 (MPEG-2 only works if you buy the licence key)
Encoding details: H.264 profile, level, etc.
But what only somebody with access to the non public sections of the Broadcom data sheet can do is route the video from the CSI-2 connector, through the VIP to the encoder and select the encoder parameters.
Then the GPU can generate the correct “container” file that specifies which encoder, profile, level, etc. was used.
Hardware_man


All the code to do the above is present: you can specify resolution, framerate, effects, etc. In fact, most of the settings you would find on a compact camera. You can use OpenMAX components to tunnel the capture/video around to storage, encode, display etc.
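For a flavour of what that tunnelling looks like, here's an untested sketch using the ilclient helpers from the public hello_pi examples. The component names and port numbers (71 = camera video out, 200 = encoder in, 201 = encoder out) are the commonly documented Broadcom values, so treat them as assumptions and verify against your firmware:

Code: Select all
/* Sketch only: tunnel the camera's video port straight into the H.264
 * encoder so frames never cross to the ARM side. Error handling and the
 * port-format/resolution setup a real pipeline needs are omitted.
 * Builds against the ilclient helper library from hello_pi in /opt/vc. */
#include <string.h>
#include "bcm_host.h"
#include "ilclient.h"

int main(void)
{
    bcm_host_init();

    ILCLIENT_T *client = ilclient_init();
    OMX_Init();

    COMPONENT_T *camera = NULL, *encoder = NULL;
    ilclient_create_component(client, &camera, "camera",
                              ILCLIENT_DISABLE_ALL_PORTS);
    ilclient_create_component(client, &encoder, "video_encode",
                              ILCLIENT_DISABLE_ALL_PORTS);

    /* Camera video output (port 71) -> encoder input (port 200). */
    TUNNEL_T tunnel[2];
    set_tunnel(&tunnel[0], camera, 71, encoder, 200);
    memset(&tunnel[1], 0, sizeof(tunnel[1]));   /* terminator for teardown */

    if (ilclient_setup_tunnel(&tunnel[0], 0, 0) != 0)
        return 1;

    ilclient_change_component_state(camera, OMX_StateExecuting);
    ilclient_change_component_state(encoder, OMX_StateExecuting);

    /* ...drain encoded buffers from encoder port 201 to the SD card... */
    return 0;
}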

What won't be there is the driver for the specific camera module used, plus tuning to actually make the output from the camera look half decent. That's the stuff that needs to be done for each new camera supported, and all that code is on the GPU. It also takes a long time to get right. Camera modules are always a PITA, the documentation is never sufficient, and the tuning takes ages, unless it's a camera module that has an internal ISP, which is passed through and bypasses the Videocore ISP.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Fri Dec 21, 2012 4:29 pm
This sounds like a lot more is already available than I thought.

As I said, our camera module is already tuned. So, to borrow from audio terminology, I need a driver that is tuned “flat”. And my video is in YCbCr; does this “flat” driver need to convert to RGB to go into the encoder?

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Fri Dec 21, 2012 4:52 pm
We can input YUV (amongst other formats), which is pretty close to YCbCr and we could probably convert later in the pipeline using a colour correction matrix to get it closer.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Fri Dec 21, 2012 7:50 pm
So what do we need to do next?
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Fri Dec 21, 2012 9:34 pm
No idea! The cost of doing this as a custom job would be quite considerable, and I doubt Broadcom would be in the slightest bit interested. So it would have to be the Foundation, and at the moment the Foundation doesn't do custom projects outside the educational remit. Being technically feasible doesn't necessarily mean it's financially feasible.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Sat Dec 22, 2012 5:35 pm
Dear Pi Foundation,

I’ll go back to my earlier proposal, except with a CSI-2 output.

I design the circuit and lay out a PCB to take analog component video in and output CSI-2. I will “sweeten” the deal: I will give the Foundation both the schematic and Gerber files. I’ll even throw in the BOM. The Foundation is free to do whatever they want with this: keep it proprietary, manufacture it, and/or post it all.

In exchange, the Foundation will provide the firmware and/or software to take this “through” to the H.264 encoder, and the software to use this as a DVR recording to the SD card. Again, the Foundation may keep this proprietary or include it in your normal firmware distribution.

What do you think?

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Sat Dec 22, 2012 6:29 pm
Hardware_man wrote:Dear Pi Foundation,

I’ll go back to my earlier proposal, except with a CSI-2 output.

I design the circuit and lay out a PCB to take analog component video in and output CSI-2. I will “sweeten” the deal: I will give the Foundation both the schematic and Gerber files. I’ll even throw in the BOM. The Foundation is free to do whatever they want with this: keep it proprietary, manufacture it, and/or post it all.

In exchange, the Foundation will provide the firmware and/or software to take this “through” to the H.264 encoder, and the software to use this as a DVR recording to the SD card. Again, the Foundation may keep this proprietary or include it in your normal firmware distribution.

What do you think?

Hardware_man


I have no idea whether the Foundation would be interested. Also, remember that the Foundation don't actually employ any engineers, so they would need to contract in someone to do the job, and finding people with camera driver experience on the Videocore would be difficult (since they all work at Broadcom). As an aside, I'd guess the software would cost in excess of £15k to develop if it were purely commercially driven.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by EdZ » Mon Dec 24, 2012 10:33 am
jamesh wrote:It also takes a long time to get right. Camera modules are always a PITA, the documentation is never sufficient, and the tuning takes ages, unless it's a camera module that has an internal ISP, which is passed through and bypasses the Videocore ISP.

But the OV5647 does have an internal ISP. Or is it not the right kind of ISP?
jamesh wrote:We can input YUV (amongst other formats), which is pretty close to YCbCr and we could probably convert later in the pipeline using a colour correction matrix to get it closer.
Is it definitely YUV (or Y'UV), or really Y'CbCr? Things should be in the digital domain, so either Broadcom are using terms incorrectly (unfortunately common), or using some weird conversion factors.
Posts: 11
Joined: Sat Dec 01, 2012 11:36 am
by jamesh » Mon Dec 24, 2012 11:41 am
EdZ wrote:
jamesh wrote:It also takes a long time to get right. Camera modules are always a PITA, the documentation is never sufficient, and the tuning takes ages, unless it's a camera module that has an internal ISP, which is passed through and bypasses the Videocore ISP.

But the OV5647 does have an internal ISP. Or is it not the right kind of ISP?
jamesh wrote:We can input YUV (amongst other formats), which is pretty close to YCbCr and we could probably convert later in the pipeline using a colour correction matrix to get it closer.
Is it definitely YUV (or Y'UV), or really Y'CbCr? Things should be in the digital domain, so either Broadcom are using terms incorrectly (unfortunately common), or using some weird conversion factors.


I'm not sure whether we use the OV sensor's internal ISP or not. I was under the impression we used our own tuning. The ISP bypass code is pretty recent, and the OV driver may predate it, in which case we use the VC4 ISP.

We use YUV (e.g. VC_IMAGE_YUV422YUYV, VC_IMAGE_YUV422YVYU, VC_IMAGE_YUV422UYVY, VC_IMAGE_YUV422VYUY). We could probably import Y'CbCr and use the gamma block to fix up the differences (so I've been told!). We support quite a few formats around the GPU and ISP, but not all formats can be used in all circumstances. Talking to a codec guy, he reckoned you can use YUV and Y'CbCr fairly interchangeably.
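For illustration, this is roughly the arithmetic a colour-correction stage would perform, written as plain CPU reference code; BT.601 studio-swing coefficients are assumed here, and the real conversion happens inside the ISP, not on the ARM:

Code: Select all
/* Sketch only: unpack one YUYV (YUV422) pixel pair and convert to RGB
 * with the BT.601 studio-swing matrix, in x1000 fixed point. Plain
 * reference code for what a colour-correction matrix stage would do. */
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* src points at 4 bytes: Y0 U Y1 V (VC_IMAGE_YUV422YUYV layout).
 * dst receives 6 bytes: R0 G0 B0 R1 G1 B1. */
void yuyv_pair_to_rgb(const uint8_t *src, uint8_t *dst)
{
    int u = src[1] - 128, v = src[3] - 128;

    for (int i = 0; i < 2; i++) {
        int y = (src[i * 2] - 16) * 1164;
        dst[i * 3 + 0] = clamp8((y + 1596 * v) / 1000);            /* R */
        dst[i * 3 + 1] = clamp8((y - 392 * u - 813 * v) / 1000);   /* G */
        dst[i * 3 + 2] = clamp8((y + 2017 * u) / 1000);            /* B */
    }
}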
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by CameraGuY » Mon Dec 24, 2012 4:13 pm
The 5647 is a RAW sensor; it doesn't have an ISP. You need to use your back-end ISP, or choose the 5640 or 5642, which are YUV sensors (ISP built in).
Posts: 5
Joined: Mon Dec 24, 2012 4:06 pm
by jamesh » Mon Dec 24, 2012 8:32 pm
CameraGuY wrote:The 5647 is a RAW sensor; it doesn't have an ISP. You need to use your back-end ISP, or choose the 5640 or 5642, which are YUV sensors (ISP built in).


Thanks, I'd assumed that was the case since that particular module driver does have a tuning in the GPU codebase.

Tried one today on a Raspi, but the image is slightly out of focus in video mode, which is odd, as it appears to be fixed focus.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by CameraGuY » Wed Dec 26, 2012 5:20 am
Try a few distances; if you still see the issue, try changing the focus of the lens. You should be able to change the focus, as the lens is only glued to the holder. Should be easy to play around with, given some gentleness.
Posts: 5
Joined: Mon Dec 24, 2012 4:06 pm
by chatraed » Tue Jan 01, 2013 1:14 am
Dear jamesh,

In an earlier post you mentioned a friend of yours who has already implemented HDMI to MIPI CSI conversion on a Videocore system. So you are one of my few remaining hopes for solving a big problem in the project I am starting.

Rather than grabbing video with the RasPi, I just want to make the board + LCD behave like a portable, battery-powered field monitor, mainly for videographers and video assistants. For this purpose, the board should fulfil the following requirements:
- provide HDMI input (HDCP keys not needed) and accept video resolutions up to 1080p 30fps (1080p60 desired, but not realistic for the board's capabilities);
- provide negligible latency between video stream acquisition and its playback on the LCD (aka camera preview), with no frames dropped;

Both parts are equally important for a video shooter. To give an example, a delay of 100ms is already very disturbing for a video assistant who is in charge of remote focus pulling using a remote monitor connected to the camera and a remote follow focus device. So, the latency should be around 15-20ms or smaller.

My first problem is converting the HDMI stream either into MIPI CSI-2 or into a parallel camera interface (not supported by the RasPi).

For hdmi2csi I really could not find any converter except the TC358743 (out of stock and not sold in small quantities). It seems the majority of CSI transmitter/receiver solutions are locked up as licensed IP. There is, though, a more classical way of deserializing the HDMI TMDS signal by converting it to RGB, using ordinary HDMI receiver ICs (e.g. the ADV7611). Honestly speaking, this is what makes me look at other ARM Cortex-A9 based boards, the ones providing a parallel camera interface. However, no application processor generally provides a full 24-bit RGB camera input; they usually accept the BT.601/BT.656/BT.1120 standards with a reduced bus width (8-16 lines instead of 24). Here, I suppose, a perfect match is needed between the HDMI receiver's output format and the processor's parallel input format. I still have to look for more documentation on the different parallel camera formats, bus widths and clock speeds, which may vary with the desired video resolution and framerate.
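To make the matching problem concrete, here is my understanding of how a BT.656 stream embeds its sync (a sketch only; the F/V/H bit positions are from my reading of the standard and should be double-checked against the spec):

Code: Select all
/* Sketch only: BT.656 timing reference codes are FF 00 00 XY, where XY
 * carries F (field), V (vertical blanking) and H (0 = SAV, start of
 * active video; 1 = EAV, end of active video). This is why the HDMI
 * receiver's output format has to match the processor's camera port
 * exactly. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

void scan_bt656(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 3 < len; i++) {
        if (buf[i] == 0xFF && buf[i + 1] == 0x00 && buf[i + 2] == 0x00) {
            uint8_t xy = buf[i + 3];
            int f = (xy >> 6) & 1;   /* field 1 / field 2 */
            int v = (xy >> 5) & 1;   /* inside vertical blanking? */
            int h = (xy >> 4) & 1;   /* 0 = SAV, 1 = EAV */
            printf("%s  field=%d vblank=%d at offset %zu\n",
                   h ? "EAV" : "SAV", f, v, i);
            i += 3;                  /* skip past the code */
        }
    }
}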

My second problem is the latency between what is captured on the camera interface and what is actually displayed on the board's LCD screen. Here, I suppose, a big part of the processing applied to a normal camera signal could be bypassed, as the HDMI video stream should require no tuning.

To sum up, would you be so kind as to answer the questions below? It would help me very much:
- what was the hdmi2csi solution used by the person you mentioned in the earlier posts?
- could you estimate the delay for the simple "capture-display" scenario on the RasPi, including only a basic resize of the image (e.g. 1080p to 720p) to fit the resolution of the attached LCD?
- do you think more processing can be done on the GPU on the fly without hurting the latency too much? For example, I would be interested in some specific processing effects that would highlight over/under-exposed image areas in real time.

Thank you and happy new year to everyone!
Posts: 5
Joined: Tue Dec 25, 2012 5:48 pm
by jamesh » Tue Jan 01, 2013 5:51 pm
1) HDMI in was implemented using the Videocore's custom host interface, not CSI-2 (I was wrong there). This is not accessible on the Raspi AFAIK. However, I see no reason why HDMI to CSI-2 cannot be done - I think Toshiba do a chip.
2) Latency was really rather good - about 15ms I think - and that was after being transmitted over a wireless link and decoded on a second Videocore device. Straight to display should be better.
3) Some effects can be done with little impact on throughput, mainly the standard built-in HW ISP stuff. If the effect requires a software stage (where we use the GPU's vector processors to process each frame), there is some impact. I cannot say how much; it depends on the processing you want to do.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by jamesh » Tue Jan 01, 2013 5:53 pm
CameraGuY wrote:Try a few distances; if you still see the issue, try changing the focus of the lens. You should be able to change the focus, as the lens is only glued to the holder. Should be easy to play around with, given some gentleness.


Wow, never thought of that....!

Not had any time over the break to try things out, and am now out of action for 10 days. Might have a chance to play, but not sure.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Tue Jan 01, 2013 6:40 pm
Chatraed wrote,

“what was the hdmi2csi solution used by the guy you were talking about in the upper posts?”

If I’m the guy you are talking about, I only mentioned using HDMI as the input because Jamesh thought he had a low-level driver to use the HDMI as an input and route it to the H.264 encoder. I was willing to convert my video into whatever hardware format the Pi board needed. But when Jamesh checked, he found he does not have the driver to use either the HDMI or the CSI-2 port as an input for already-tuned video.

Note: fully documented video chips like the TI TMS320DM365, which have a parallel video input, are fully programmable; firmware can specify what is MSB, what is LSB, etc.

As this documentation is not available for the video section of the Broadcom chip, we will never know.

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Tue Jan 01, 2013 7:40 pm
I think sometime this year the Foundation will be in a better position to look into things like this, but at the moment, patience is needed! We can't do everything at once. Lots of educational stuff to do, with limited resources.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by chatraed » Tue Jan 01, 2013 8:43 pm
Dear jamesh and Hardware_man,

I greatly appreciate your precious feedback and opinions.
jamesh, from what you say, if GPU processing of the camera signal is only done per (whole) frame and not per smaller frame unit (e.g. a line or block of pixels), then the minimum "capture->display" latency cannot be smaller than the acquisition time of one frame. For 1080p at 30fps, that time is 1000 ms / 30 ≈ 33 ms. Thinking this way, I cannot be sure of the 15ms delay you reported above.

Regards.
Posts: 5
Joined: Tue Dec 25, 2012 5:48 pm
by Hardware_man » Wed Jan 02, 2013 5:07 pm
Chatraed,

What adds delay is video compression. So if all you were doing was taking video in on the CSI-2 port and sending it out over Ethernet (if this implementation of Ethernet is fast enough), there would be very little delay.

But if you want to go through any of the video encoders to reduce the data rate, like MPEG2 or H.264, then the delay will primarily be a function of the encoding profile and level that you use.
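For illustration, through standard OpenMAX IL the profile/level selection would look something like this. A sketch only: I have no access to the Broadcom side; the ilclient usage is borrowed from the public hello_pi examples, and the encoder output port number (201) is the commonly documented Broadcom value:

Code: Select all
#include <string.h>
#include "ilclient.h"   /* pulls in the Broadcom OpenMAX IL headers */

/* Sketch only: select H.264 profile/level on the "video_encode"
 * component. Baseline profile forbids B-frames, which is one of the
 * big levers on encode latency. */
static OMX_ERRORTYPE set_avc_profile(COMPONENT_T *encoder)
{
    OMX_VIDEO_PARAM_PROFILELEVELTYPE pl;
    memset(&pl, 0, sizeof(pl));
    pl.nSize             = sizeof(pl);
    pl.nVersion.nVersion = OMX_VERSION;
    pl.nPortIndex        = 201;                        /* encoder output */
    pl.eProfile          = OMX_VIDEO_AVCProfileBaseline;
    pl.eLevel            = OMX_VIDEO_AVCLevel4;        /* 1080p30 budget */

    return OMX_SetParameter(ILC_GET_HANDLE(encoder),
                            OMX_IndexParamVideoProfileLevelCurrent, &pl);
}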

But I can’t get the Foundation to provide the low-level code to take video from the CSI-2 input to the H.264 encoder, so this is all theoretical.

It appears that all of the low-level code is provided to take encoded video in and run it through the video decoders - see Raspbmc.com. But this low-level code is not available for using the codecs' encoder algorithms, except with the proprietary Pi camera.

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by chatraed » Wed Jan 02, 2013 5:41 pm
Hi Hardware_man,

All you say is correct from a general point of view.
However, my question about delay was related to capturing an uncompressed (unencoded) video stream. As you know, the HDMI signal is uncompressed. My particular application is a plain capture->display use case: no encoding or decoding involved, no Ethernet/WiFi transmission.

Even in such a simple use case, the delay between video signal acquisition and its display on the LCD is certainly not zero. The 15ms reported by jamesh is not realistic to me, for the reasons stated above. On the TI community forum, somebody told me that on DaVinci video processors the captured data can be divided into slices, which means processing can be applied to smaller portions of the frame. That is good, because the latency can be lowered: processing is applied to a slice and the output is sent immediately to the display controller. To me, this looks like the only way to minimise the latency. If the RasPi's GPU can only process whole frames, that inherently adds a 33ms delay at a 30fps capture frame rate (see the toy calculation below).
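In numbers, a toy calculation assuming capture is the only cost:

Code: Select all
/* Sketch only: with whole-frame processing, the capture side alone
 * costs one full frame time; slicing the frame lets the display start
 * as soon as the first slice arrives. */
#include <stdio.h>

int main(void)
{
    double fps = 30.0;
    double frame_ms = 1000.0 / fps;   /* ~33.3 ms per frame at 30fps */

    for (int slices = 1; slices <= 8; slices *= 2)
        printf("%d slice(s): first pixels out after ~%.1f ms\n",
               slices, frame_ms / slices);
    return 0;
}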

Hope I made myself understood.
Best regards,
chatraed.

late edit: by "delay" I mean the time delta between the action in front of the camera and what is seen on the board's LCD.
Posts: 5
Joined: Tue Dec 25, 2012 5:48 pm
by Hardware_man » Sat Jan 05, 2013 6:37 pm
Jamesh,

Stepping back and looking at the “big picture”, consider this. You provide video out both as SD composite on an RCA connector and HD on an HDMI connector. Thus, any “off the shelf” monitor from the dusty, old black and white sitting in the basement (plus a Radio Shack modulator), to the brand new gazillion inch large screen TV will work with the Pi board. I assume that a large screen monitor manufacturer has to “tune” the video for the particular actual display just like you are tuning the Foundation’s camera. Tuning the monitor during design is probably as much work involving both art and science as tuning a camera.

When it comes to video in, the Foundation takes the opposite approach. You are tuning a proprietary camera. When I ask for a simple video input so that any “off the shelf” tuned camera will work, you tell me that is too much work for the Foundation. Isn’t this inconsistent with the Foundation’s mission statement of an open source, educational device when only your proprietary camera will work?

To borrow from audio terminology, the microphone designer has to EQ the microphone flat. The speaker designer has to EQ the speaker flat. The audio signal processor designer assumes flat audio at the input and assumes the power amp and speakers are flat. Wouldn’t it be odd if an audio compressor designer had to first worry about an EQ curve for the audio in?

I think the Foundation’s first priority should be universal camera in, universal monitor out. Then possibly manufacture a proprietary camera that comes with tuning for the Pi board and a proprietary display that comes with tuning for the Pi board.

Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by ghans » Sat Jan 05, 2013 8:50 pm
Hmm.
At no point did the Foundation say that open source was
the (only) way to accomplish their goals. They support it,
but they are also willing to compromise for their mission.
If I got that completely wrong, please somebody enlighten me :D

ghans
• Don't like the board ? Missing features ? Change to the prosilver theme ! You can find it in your settings.
• Don't like to search the forum BEFORE posting 'cos it's useless ? Try googling : yoursearchtermshere site:raspberrypi.org
Posts: 4505
Joined: Mon Dec 12, 2011 8:30 pm
Location: Germany
by jamesh » Sun Jan 06, 2013 1:12 am
Hardware_man wrote:Jamesh,

Stepping back and looking at the “big picture”, consider this. You provide video out both as SD composite on an RCA connector and HD on an HDMI connector. Thus, any “off the shelf” monitor from the dusty, old black and white sitting in the basement (plus a Radio Shack modulator), to the brand new gazillion inch large screen TV will work with the Pi board. I assume that a large screen monitor manufacturer has to “tune” the video for the particular actual display just like you are tuning the Foundation’s camera. Tuning the monitor during design is probably as much work involving both art and science as tuning a camera.

When it comes to video in, the Foundation takes the opposite approach. You are tuning a proprietary camera. When I ask for a simple video input so that any “off the shelf” tuned camera will work, you tell me that is too much work for the Foundation. Isn’t this inconsistent with the Foundation’s mission statement of an open source, educational device when only your proprietary camera will work?

To borrow from audio terminology, the microphone designer has to EQ the microphone flat. The speaker designer has to EQ the speaker flat. The audio signal processor designer assumes flat audio at the input and assumes the power amp and speakers are flat. Wouldn’t it be odd if an audio compressor designer had to first worry about an EQ curve for the audio in?

I think the Foundation’s first priority should be universal camera in, universal monitor out. Then possibly manufacture a proprietary camera that comes with tuning for the Pi board and a proprietary display that comes with tuning for the Pi board.

Hardware_man


Mostly nonsense, I'm afraid. Your argument referencing HDMI is a straw man. There is a predefined standard for digital out - HDMI (or DVI). There is no standard camera interface.

Universal camera in is impossible anyway. Almost every camera module is different. If it were possible, I wouldn't spend my entire working day trying to make cameras work properly. Nokia did attempt to standardise the camera interface with something called SMIA. It's a massive document describing almost everything to do with the camera and its setup, and yet we still needed to write a new camera driver every time they wanted a new camera, because they all worked in very slightly different ways. And of course, that's just Nokia; camera manufacturers don't need to adhere to their rules if they don't want to. It is NOT the Foundation's job to define a generic camera interface spec. And of course, that doesn't help with tuning, which is a different kettle of fish entirely.

Your definition of the Foundation's mission statement is also incorrect. See ghans' post above, which describes the approach actually taken.

The Foundation's first priority is education. We believe a simple camera board, with decent quality, is a good addition to that. Spending months trying to build a generic system that can handle any camera would be a complete waste of time and money - and, as I said, impossible anyway.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm
by Hardware_man » Sun Jan 06, 2013 8:54 pm
Now, in the “old” days, you could buy an SD camcorder with composite video out on an RCA jack. Is there no modern HD equivalent of this?
Hardware_man
Posts: 94
Joined: Tue Dec 04, 2012 6:28 pm
by jamesh » Sun Jan 06, 2013 9:33 pm
Hardware_man wrote:Now, in the “old” days, you could buy an SD camcorder with composite video out on an RCA jack. Is there no modern HD equivalent of this?
Hardware_man


Everyone uses HDMI nowadays, I thought. One connector for high-quality video and audio. Simplest way.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11686
Joined: Sat Jul 30, 2011 7:41 pm