Creating the camera board – part two

Liz: here’s the second and final part of David Plowman’s walk through the development of the Raspberry Pi camera board, which will be available to purchase in April. Before you go ahead and read this, check out David’s first post.

The Eye of the Beholder

That’s where beauty lies, so the saying goes. And for all the test charts, metrics and objective measurements that imaging engineers like to throw at their pictures, it’s perhaps sobering that the human eye – what people actually like – is the final arbiter of Image Quality (IQ). There has been much discussion, and no little research, on what makes for “good IQ”, but the consensus probably has it that while the micro aspects of IQ, such as sharpness, noise and detail, are very important, your eye turns first to the macro (in the sense of large-scale) image features – exposure and contrast, colours and colour balance.

We live in a grey world…

All camera modules respond differently to red, green and blue stimuli. Of itself this isn’t so problematic as the behaviour can be measured, calibrated and transformations applied to map the camera’s RGB response (which you saw in the final ugly image of my previous post!) onto our canonical (or standard) notion of RGB. It’s in coping with the different kinds of illumination that things get a little tricky. Let me explain.
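To give a flavour of what that mapping involves, here is a toy numpy sketch. The matrix below is invented purely for illustration; the real calibration is measured from test charts for each sensor and each illuminant, and certainly isn't these numbers.

    import numpy as np

    # Invented colour correction matrix (CCM), for illustration only. Each row
    # sums to 1 so that grey input stays grey after the transformation.
    CCM = np.array([[ 1.60, -0.45, -0.15],
                    [-0.30,  1.50, -0.20],
                    [-0.05, -0.55,  1.60]])

    def apply_ccm(camera_rgb):
        # camera_rgb: (height, width, 3) array of floats in [0, 1], in the
        # sensor's native RGB; the result approximates a standard RGB space.
        return np.clip(camera_rgb @ CCM.T, 0.0, 1.0)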

Imagine you’re looking at a sheet of white paper. That’s just the thing – it’s always white. If you’re outside on a sunny day, it’s white, and if you’re indoors in gloomy artificial lighting, it’s still white. Yet if you were objectively to measure the colour of the paper with your handy spectrometer, you’d find it wasn’t the same at all. In the first case your spectrometer will tell you the paper is quite blue, and in the second, that it’s very orange. The Human Visual System has adapted itself brilliantly over millions of years simply not to notice any difference, a phenomenon known as colour constancy.

No such luck with digital images, though. Here we have to correct for the ambient illumination to make the colours look “right”. Take a look at the two images below. (You’ll find it easier to judge the “right”-ness if you scroll so only one image is on the screen at a time.)

It’s a scene taken in the Science Park in Cambridge, outside the Broadcom offices. The top one looks fine, but the bottom one has a strong blue cast. This is precisely because the top one has been (in the jargon) white-balanced for an outdoor illuminant and the bottom one for an indoor illuminant. But how do we find the right white balance?

The simplest assumption that camera systems can make is that every scene is, on average, grey, and it works surprisingly well. It has some clear limitations too, of course. With the scene above, a “grey world” white balance would actually give a noticeable yellow cast because of the preponderance of blue sky skewing the average. So in reality more sophisticated algorithms are generally employed, which constrain the candidate illuminants to a known set (predominantly those a physicist would describe as being radiated by a black body, which includes sunlight and incandescent bulbs) and which key on colours other than merely grey (often specific memory colours, such as blue sky or skin tones).
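For the curious, a minimal numpy sketch of the grey-world idea looks something like the following. It is purely illustrative and bears no relation to the real ISP code, which runs on the GPU.

    import numpy as np

    def grey_world_white_balance(rgb):
        # rgb: (height, width, 3) array of floats in [0, 1].
        # Assume the scene averages to grey: scale the red and blue channels
        # so that all three channel means match the green channel's mean.
        means = rgb.reshape(-1, 3).mean(axis=0)
        gains = means[1] / means            # green gain comes out as 1.0
        return np.clip(rgb * gains, 0.0, 1.0)

Run on the Science Park scene above, the blue sky drags the average away from grey, the gains over-correct, and you get exactly the yellow cast just described.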

The devil is in the details

With our colours sorted out, we need to look at the micro aspects of our image tuning. On the Pi, fortunately, we don’t have to worry about focusing, which leaves the noise and sharpening filters within the ISP. Note that some amount of sharpening is essential, really, because of the inherent softening effect of the Bayer mosaic that we saw last time.

When it comes to tuning noise and detail, there are generally two camps. The first camp regards noise as ugly and tries very hard to eliminate it. The second camp thinks a certain amount of noise is tolerable (it can look a bit like film “grain”) in return for better details and a more natural (less processed) look to the image.

To see what I mean, take a look at the following three images. It’s a small crop from a picture of some objects on a mantelpiece, taken in very gloomy orange lighting, and the walls are even a murky pinkish colour too. Pretty challenging for any digital camera!

The top one has had practically no noise filtering applied to it at all. Actually it shows bags of detail, but I think most people would regard the noise as pretty heinous. The second image demonstrates the opposite approach. The noise has been exterminated with extreme prejudice, but out with the bathwater goes the baby – detail and a “natural” looking result. Though my examples are deliberately extreme, you can find the influence of both camps at work in mobile imaging devices today!

The final image shows where we’ve settled with the Pi – a happy medium, I hope, but it does remain, ultimately, a matter of taste. And de gustibus non est disputandum, after all!
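If you want to explore that taste for yourself, a toy sketch along the following lines (using scipy, and in no way related to the actual GPU pipeline) lets you slide between the two camps by varying a couple of parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoise_and_sharpen(luma, denoise_sigma=1.5, sharpen_amount=0.5):
        # luma: greyscale image as floats in [0, 1].
        # A larger denoise_sigma scrubs away more noise but also more detail;
        # the unsharp-mask step then boosts whatever high frequencies survive.
        smoothed = gaussian_filter(luma, sigma=denoise_sigma)
        detail = luma - gaussian_filter(luma, sigma=1.0)
        return np.clip(smoothed + sharpen_amount * detail, 0.0, 1.0)

Set sharpen_amount to zero and crank up denoise_sigma for something like the plasticky second image; do the opposite and you are back at the noisy first one.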

Happy snapping!

I’ve only grazed the surface of this subject – there are many more niggles and wrinkles that an imaging system has to iron out – but I’m hoping I’ve given you some sense of why a proper camera integration represents a significant commitment of time and effort. Whilst you’re all waiting for the boards finally to become available I’ll stick around on this website to answer any questions that I can.

My deep thanks, as ever, are due to those clever engineers at Broadcom who actually make this stuff work.

David Plowman, March 2013

50 comments

I quite like the second photo in the “noise” comparison. It gives the picture a “painting”-like effect. Are these values going to be adjustable via software or will they be fixed on the module? It would be pretty cool to be able to change these values in code.

keep up the good work!

In the previous camera article, JamesH said in the comments that the software he’s writing has a mode where you can extract the raw Bayer pattern, and then process/filter/manipulate it however you like.

Good. That extends the design intent of the Pi itself to be able to tinker with things the others seal up. Excellent move guys.

Yeah, it’s really great that the RasPi Foundation keeps things as “open” and flexible as they’re reasonably able to – offers up all kinds of interesting experimentation opportunities :)
You can definitely tell that these are engineers and not marketing/sales people! :-D

Actually there are some special effects modes that I’ve incorporated that give oil paint, watercolour or pastel effects (amongst others).

James, You’re a champ!

It is really cool to see how this stuff works!
This explains a lot about why pictures sometimes don’t look real, even though it seems they should be.

I’m really excited about this camera board and getting past the limited bandwidth of a USB Camera.

Any updates on a date in April?

Waiting impatiently for that camera module to be on sale!

Getting raw from the camera, and being able to control the amount of noise reduction would be good :)

Gordon77

I spent years in broadcast TV playing with just such compromises as these. My only wish is for us to be able to manually tweak all, or at least many, of the parameters – and it seems likely to be realised.

Quote: “My deep thanks, as ever, is due to those clever engineers at Broadcom who actually make this stuff work”
Can I add mine, and also thank you. Keep up the good work

can not wait~~!
I need source code to study~~!

The source code running on the GPU is not open source, so you won’t get access to that, I’m afraid. It’s proprietary Broadcom information. You will get all the source for the Linux-side apps.

The R-Pi team may not release the GPU-side code, but you can find open source examples of this kind of image noise reduction. Try a Google search for “Wavelet Denoise”.

This is all very interesting stuff. Keep ’em coming :)

Any chance we will get a /proc/ interface for manipulating the functional variables on this thing? e.g. to increase or reduce the noise reduction, say.

The current system does NOT allow access to the tuning parameters that David has been talking about, I’m afraid. At the moment they are hardcoded into the binary blob. There may be some mileage in the future in allowing some access, but it’s further down the list than, say, V4L drivers etc. And most people simply don’t have the requisite skill levels to be able to do much above what is already done. The files with the tuning parameters are thousands of lines long, with lots of numbers on each line! David has only scratched the surface of the options available.

The compromise settings shown in the example here seem like a perfectly reasonable choice. I have seen some cameras that go for the extreme noise reduction (plastic-y) setting, which I find disappointing visually. I believe many camera designers apply an adaptive level of noise reduction, which achieves the most detail possible in bright scenes while sacrificing some detail for smoothness in low light conditions. But if we can get RAW output, then the user can season all this to taste (at least for stills, with the ARM lacking the bandwidth to handle RAW video of any significant resolution).

The camera stack here is similar. In good light you’ll get much less denoise, increasing with gloominess to what you see above! The job of camera tuning involves making sure we have the best (or at least, a reasonable) set of parameters for all conditions…

Just to add to what David said here, other tuning parameters require a similar process, not just denoise – they vary according to the gain required. Which is why tuning takes such a long time – you don’t just tune for a single set of conditions, but for as many as you can.
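One crude way to picture that (with purely made-up numbers, and nothing like the format of the real tuning files) is a table of calibration points from sensor gain to filter strength, interpolated for the conditions in between:

    import numpy as np

    # Hypothetical calibration points: (analogue gain, denoise strength).
    GAINS     = np.array([1.0, 2.0, 4.0, 8.0])
    STRENGTHS = np.array([0.05, 0.15, 0.40, 0.80])

    def denoise_strength(analogue_gain):
        # Bright scenes need little gain and little denoise; gloomy scenes
        # need lots of both. Conditions between the calibration points get
        # a linearly interpolated value.
        return float(np.interp(analogue_gain, GAINS, STRENGTHS))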

Is there any news available about the 10 pre-production cameras? I’m really “blowing up from suspense”; we could really use it for our autonomous (RC) car school project @ campus Denayer. thx ;-)

Not yet – we had hundreds and hundreds of entries and we’re still working through boiling them down to the ten best! Gordon’s shortlisted about 60, and Clive and I are trying to get that list down to ten, so it shouldn’t be long now.

Oh okay thx, now you’re really making us nervous. Have fun reading through those ideas, I’m sure there will be lots of cool ones. ;-)

David,

Great write-up touching on some of the challenges. I’m afraid you’ve made it seem too easy and now everyone will want to try rolling their own! :)

Maybe this is what the Pi is all about. Thanks!

tai

“With our colours sorted out…” just fwiw, that phrase covers a very large amount of territory if my understanding is correct. Doing a minimal loss transformation from raw sensor data into your output color space is (as I understand it) fundamentally hard. That is, there is more than one way to do it, there are conflicting optimization criteria, and no method is “best” for every scene’s color gamut and every lighting spectral distribution.

In fact the colour transformations that we perform are adaptive based on light level, illuminant (e.g. outdoor/indoor) and of course customer preference (“I want grass to be *this* shade of green!”). So you’re right, I glossed over something that is rather less trivial than I made it sound (but that is true of much of the two articles!!).

Will there be a need for a new Raspbian Wheezy OS release to support the camera, or will an apt-get or similar download be necessary? For one thing, I’d like to know if a new camera-friendly kernel.img file (etc.) will be required.
Texy

You will need a new start.elf, and probably a new OS image as well, as there are some new libraries IIRC. I presume an apt-get will do most of the work for you. I don’t think kernel.img has changed. The majority of work is in the GPU blob and the Linux apps, not kernel space.

I find this interesting, and as someone with a bit of experience with colour film handling I can relate to many of the issues, especially colour balancing. That leads me on to my question: will the camera provide “raw” images as well as adjusted ones? As anyone who has worked with digital images will tell you, to properly manage your workflow it is better to start off with an unadulterated image and then apply changes as you go along, especially if the output is destined for different devices (screen, projector, print etc).

Does no-one ever read nowadays? This is covered in the posts just above where you posted your question!

James,
maybe you misunderstood me. There’s the raw Bayer pattern (i.e. with absolutely no correction), and then there’s output that has been corrected for the lens etc, but with no further sharpening, colour correction etc. That’s what I’m referring to. I can’t see anything in any of the messages above that answers that. Maybe you have better reading skills than I do. ;-)

Nope, there is no access to that I’m afraid. Raw Bayer or post ISP is what is currently available. I have done an app that does YUV420 out, but that has been processed.

Hi David

Is it possible for this camera to be set up as a slow-motion (high-speed) video camera with a bit of tweaking?

I am coming at this from a 12 year old’s perspective. I am afraid I know nothing about the subject but would buy one if there is a chance he could somehow use it for slow mo. He has been inspired by a proper slow mo camera and wants to have a go.

Many thanks. Richard

The possibility is there, but so far we have been unable to get the 60fps or 90fps modes working. It should work; we just need time to get it working.

Sorry, just saw your post now. Does that mean that the 15 and 45 fps modes are working?

In video mode you can select between 2 and 30fps. With stills, you just take them as fast as possible. I’ve not put in any timelapse code for captures though – that might be worth a go. Hmm, I’ll think about that.

Thanks James

Sounds promising : )

I’ve tried to get him interested in the pi for a while and this could do the trick!

The camera module is OV5647 http://www.ovt.com/products/sensor.php?id=66
which according to its data sheet supports the following maximum image transfer rates:
QSXGA (2592×1944): 15 fps (75,582,720 pixels per second)
1080p (1920×1080): 30 fps (62,208,000 pixels per second)
960p (1280×960): 45 fps (55,296,000 pixels per second)
720p (1280×720): 60 fps (55,296,000 pixels per second)
VGA (640×480): 90 fps (27,648,000 pixels per second)

Through software it is possible to play back a video file shot at a high frame rate at a lower frame rate, and I don’t see why software could not re-encode a high frame rate video file as a lower one.

At the moment I’m playing with a Logitech C250 webcam and OpenCV on my Raspberry Pi. What I want to do is some motion- and face-detection, but at the moment I only get a few (less than 5) FPS *without* the detection running.

Am I right in thinking that this camera module will work better for this application? I don’t think I’ll need a full 30fps (though 15fps might be nice); presumably the drivers you’re writing at the moment will mean the CPU is entirely free to run the OpenCV code, etc., and the GPU will handle all the image capture stuff?

Slightly off-topic, but for an excellent treatment of color perception, see these old Horizon episodes.

Ha, love the 80s technology :)

OK, how can I get the source code on the Linux side? For example, drivers in the kernel? Thank u~!

The source code for the apps will be released along with the camera module. All the rest of the Linux-side code is already available on GitHub.

Hi James, do you have any info on how we are going to access pictures and video coming from the camera?

And will it be possible for programs like motion to use the camera?

/Bo

The apps at the moment write to the SD card (or any file).

Is the camera sensitive to infra red, or is this blocked by an internal filter? (thinking about “night vision” applications)

I believe it has a non-removable internal filter.

Can the focus be adjusted by hand? If not, can you tell us the focal length?

Do you think it will be possible to get a bunch of Pis and cameras synced together?
And I mean perfectly subframe-synced.
Would be great for triangulation stuff like timeslice or motion capture.

I can not WAIT!!!

cheers,
Vincent

I would like to use the camera interface as a high speed data port. Is it possible to just take the raw data from the camera interface and pipe it to a file or other app on the CPU rather than processing it in the GPU in a proprietary driver?

I found a thread about this in the comments on the first page of this article. So… never mind.
