Correct, each different camera requires a driver (even though they are all CSI-2, that is simply an electrical spec, not a spec that allows a single driver to cope with multiple cameras). On top of that you then need to create a 'tuning' for that camera, which controls what is done to the data as it passes through the Videocore 4 ISP. These can take months for a competent team, as it's a long-winded process.

PaulBuxton wrote:
The approach that has been taken with the more recent V4L2 scheme has separate drivers for the camera sensor and the ISP (image signal processor). There are quite a few camera sensor drivers available already; the Pi will need an ISP driver to handle the conversion from raw camera data to something you can use (typically YUV or RGB data), which is probably done on the Videocore processor. As James has explained, this will need some tweaking for the specific sensor in use.

Android uses a CameraHal component, which is a user-mode driver. It is typically built on top of either a V4L2 driver or an OMX driver, but it could be completely proprietary if you really wanted (that would make it awkward to debug). I know that Broadcom are quite active in the OpenMAX working groups, so I suspect they have an OMX driver for their ISP.
Quite impressive. I'm not expecting quite so much from a Pi-cam. In fact, if I cared that much I'd update my Nikon. A bit more expensive than a Pi, but ...

jamesh wrote:
Here are some examples of how good the ISP is, including denoise, Burngate!
http://www.gsmarena.com/pureview_blind_ ... ew-773.php
jamesh wrote: Debayer
Do this the "obvious" way; leave colour correction to the matrix.
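For anyone wanting to play along at home, the "obvious" debayer can be sketched in a few lines of NumPy. This is purely an illustration (the function name and the half-resolution 2x2-binning shortcut are mine, not anything the Videocore ISP actually does); a real demosaic interpolates the missing colour samples at full resolution:

```python
import numpy as np

def debayer_naive(raw, pattern="RGGB"):
    """Collapse each 2x2 RGGB Bayer cell into one RGB pixel (half resolution).

    The crudest possible demosaic: no interpolation at all, and colour
    correction is deliberately left to a later matrix stage.
    """
    assert pattern == "RGGB", "only the RGGB layout is handled in this sketch"
    r  = raw[0::2, 0::2].astype(np.float32)   # red sites
    g1 = raw[0::2, 1::2].astype(np.float32)   # first green site
    g2 = raw[1::2, 0::2].astype(np.float32)   # second green site
    b  = raw[1::2, 1::2].astype(np.float32)   # blue sites
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```

A 4x4 raw frame becomes a 2x2 RGB image; the two greens in each cell are simply averaged.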
jamesh wrote: Defective pixel
This differs per camera, so it will need user configuration. IMHO, for a cheap device like the Pi and a cheap camera add-on, we can skip defective-pixel processing, allowing a "learning experience" for the public in how cameras work and how to fix defective pixels.
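If defective-pixel fixing really is left to the user as a learning exercise, it might look like this minimal sketch (the function name and the defect-map format are invented for illustration): given a user-supplied list of bad coordinates, replace each one with the median of its neighbours.

```python
import numpy as np

def fix_defective_pixels(img, bad_coords):
    """Replace each known-bad pixel with the median of its 3x3 neighbours.

    bad_coords is the per-camera defect map, a list of (row, col) pairs;
    this is the part that would otherwise need per-unit factory tuning.
    """
    out = img.astype(np.float32).copy()
    h, w = img.shape
    for r, c in bad_coords:
        r0, r1 = max(r - 1, 0), min(r + 2, h)
        c0, c1 = max(c - 1, 0), min(c + 2, w)
        patch = img[r0:r1, c0:c1].astype(np.float32).ravel().tolist()
        patch.remove(float(img[r, c]))   # exclude the defective value itself
        out[r, c] = np.median(patch)
    return out
```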
jamesh wrote: Denoise
A convolution filter? Size and actual values user-configurable?
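A user-configurable convolution filter of the kind asked about is straightforward; the kernel size and weights are the tunable parameters. A NumPy sketch only, not the ISP's actual denoise (which is far more sophisticated):

```python
import numpy as np

def denoise_convolve(img, kernel):
    """Plain 2-D convolution; kernel size and weights are entirely up to the
    user. Edges are handled by reflecting the image at the borders."""
    k = np.asarray(kernel, dtype=np.float32)
    k = k / k.sum()                       # normalise so brightness is preserved
    kh, kw = k.shape
    pad = ((kh // 2, kh // 2), (kw // 2, kw // 2))
    p = np.pad(img.astype(np.float32), pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float32)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

Calling it with `np.ones((3, 3))` gives a simple box blur; any other kernel (Gaussian, larger sizes) drops straight in.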
jamesh wrote: Vignette correction
Lens-dependent. Nice if the GPU can handle this. In essence it's just the scaling of the blue and red components compared to the green: two parameters. But they change according to focus position. Will the camera be able to focus?
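The two-parameter red/blue scaling described above can be sketched directly; `k_red` and `k_blue` are the two parameters (invented names), and as noted they would have to change with focus position. Illustrative only, not a real shading model:

```python
import numpy as np

def correct_shading(rgb, k_red, k_blue):
    """Two-parameter colour-shading sketch: boost red and blue towards the
    image corners relative to green. Gain rises with squared distance from
    the centre; in practice k_red / k_blue would vary with focus position."""
    h, w, _ = rgb.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # normalised squared radius: 0 at the centre, 1 at the corners
    r2 = (((x - cx) / cx) ** 2 + ((y - cy) / cy) ** 2) / 2.0
    out = rgb.astype(np.float32).copy()
    out[..., 0] *= 1.0 + k_red * r2    # red gain grows towards the corners
    out[..., 2] *= 1.0 + k_blue * r2   # blue gain likewise
    return out
```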
jamesh wrote: Defective pixel
We had that one before.
jamesh wrote: Sharpen
Does that really happen? After low-pass filtering the raw image, you put the high frequencies through a gain stage again? Hmm.
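Sharpening is commonly done in exactly that way: it's the classic unsharp mask. Low-pass the image, take the residual, and feed it through a gain stage. A sketch with a 3x3 box blur standing in for the low-pass filter (a real ISP would use something better behaved):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Classic unsharp mask: blur, take the high-frequency residual,
    then add it back scaled by a gain factor (amount)."""
    f = img.astype(np.float32)
    # cheap 3x3 box blur with edge replication as the low-pass stage
    p = np.pad(f, 1, mode="edge")
    blur = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    high = f - blur               # high-frequency residual
    return f + amount * high      # gain stage on the high frequencies
```

On a flat region the residual is zero, so nothing changes; at an edge the output overshoots, which is what makes the image look sharper.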
jamesh wrote: Gamma correction
I quite strongly think this can be handled by a nonlinear component in the 4x4 matrix. I'm not entirely sure here.
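For what it's worth, gamma can't literally live inside a linear matrix, since it's a per-channel nonlinearity; ISPs typically apply it as a lookup table after the colour matrix. A sketch of an 8-bit gamma LUT (names and defaults are mine):

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Build a gamma-encoding lookup table: out = in ** (1/gamma),
    quantised back to the same bit depth."""
    levels = 2 ** bits
    x = np.arange(levels, dtype=np.float32) / (levels - 1)
    y = np.round((x ** (1.0 / gamma)) * (levels - 1))
    return np.clip(y, 0, levels - 1).astype(np.uint8)

def apply_gamma(img_u8, lut):
    """Applying gamma is then a single table lookup per pixel."""
    return lut[img_u8]
```

The LUT form is also why gamma is cheap in hardware: one memory lookup per pixel, regardless of the curve's shape.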
jamesh wrote: Geometric distortion
I own a Sony DSC505, an older camera (it does close-ups really well). If you move in close, it has an enormous barrel distortion. So if Sony can ship an expensive product with big geometric distortion, why can't the RPF ship a cheap product? Oh, it'd be nice to "showcase" that the Broadcom GPU can do the de-barrel conversion in real time, but is it necessary to have this working BEFORE you release the camera to the public? Anyway, if you want a real-time corrected image, the 3D pipeline of the GPU will also allow you to do this: it's just a texture map on a sphere-like object. In fact, that's probably what actually happens.
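A one-parameter radial undistort is simple enough to sketch on the CPU; the GPU texture-mapping approach amounts to the same remap, just done in hardware. `k1` here is a made-up per-lens constant for illustration, not a value from any real camera tuning, and nearest-neighbour sampling keeps the sketch short:

```python
import numpy as np

def undistort_barrel(img, k1):
    """For every output pixel, sample the input at a radially scaled
    position r_src = r * (1 + k1 * r^2), nearest-neighbour."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    nx, ny = (x - cx) / cx, (y - cy) / cy      # normalised coordinates
    r2 = nx ** 2 + ny ** 2
    scale = 1.0 + k1 * r2                      # simple one-term radial model
    sx = np.clip(np.round(nx * scale * cx + cx), 0, w - 1).astype(int)
    sy = np.clip(np.round(ny * scale * cy + cy), 0, h - 1).astype(int)
    return img[sy, sx]
```

With `k1 = 0` the remap is the identity; positive `k1` pulls samples outward, straightening barrel-distorted lines.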
jamesh wrote: Resize
You're using this as an argument that it's difficult to tune. But even those with a fancy more-than-full-HD screen will have to scale down an 8 MP shot for display. Of course the GPU can do that in real time. But the scale factor must be user-adjustable, to allow people with smaller screens to watch their camera output too, right?
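The user-adjustable downscale is the easiest stage of all; a box average over an integer factor is enough to get an 8 MP frame onto a small screen. A 2-D (single-channel) sketch, with `factor` playing the role of the user-adjustable scale:

```python
import numpy as np

def downscale_box(img, factor):
    """Integer-factor box downscale: average each factor x factor block.
    Trailing rows/columns that don't fill a whole block are trimmed off."""
    h, w = img.shape[:2]
    h2, w2 = h // factor * factor, w // factor * factor
    trimmed = img[:h2, :w2].astype(np.float32)
    return trimmed.reshape(h2 // factor, factor,
                           w2 // factor, factor).mean(axis=(1, 3))
```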