Raw sensor access / CSI-2 receiver peripheral


by 6by9 » Fri May 01, 2015 9:33 pm
So various people have asked about supporting this or that random camera, or HDMI input. Those at Pi Towers have been investigating various options, but none have come to fruition yet - some raised several IP issues (something I really don't want to get involved in!), and others are impractical due to the effort involved in tuning the ISP for a new sensor.

I had a realisation that we could add a new MMAL (or IL if you really have to) component that just reads the data off the CSI2 bus and dumps it in the provided buffers. After a moderate amount of playing, I've got this working :-)

Firstly, this should currently be considered alpha code - it's working, but there are quite a few things that are only partially implemented and/or not tested. If people have a chance to play with it and don't find too many major holes in it, then I'll get Dom to release it officially, but still a beta.
Secondly, this is ONLY providing access to the raw data. Opening up the Image Sensor Pipeline (ISP) is NOT an option. There are no real processing options for Bayer data within the GPU, so that may limit which sensors are actually useful with this.
Thirdly, all data that the Foundation has from Omnivision for the OV5647 is under NDA, therefore I cannot discuss the details there.

So what have we got?
  • There's a test firmware on my github account (https://github.com/6by9/RPiTest/blob/master/rawcam/start_x.elf) that adds a new MMAL component ("vc.ril.rawcam"). It has one output port which will spit out the data received from the CSI-2 peripheral (known as Unicam). Please do a "sudo rpi-update" first, as it is built from the same top of tree as my changes. DO NOT RISK A CRITICAL PI SYSTEM WITH THIS FIRMWARE. (Update: the required firmware changes are now in the official release - no need for the special firmware.)
  • There's a modified userland (https://github.com/6by9/userland/tree/rawcam) that includes the new header changes, and a new, very simple, app called raspiraw. It saves every 15th frame as rawXXXX.raw and runs for 30 seconds. The saved data is the same format as the raw on the end of the JPEG that you get from "raspistill -raw", though you need to hack dcraw to get it to recognise the data. The code demonstrates the basic use of the component and includes code to start/stop the OV5647 streaming in the full 5MPix mode. The source does not include the GPIO manipulations required to be able to address the sensor, but there is a script "camera_i2c" that uses wiringPi to do that (I started doing it within the app, but that then required running it as root, and I didn't like that). You do need to jump through the hoops to enable /dev/i2c-0 first (see the "Interfacing" forum, but it should just be a matter of adding "dtparam=i2c_vc=on" to /boot/config.txt, and "sudo modprobe i2c-dev").
    The OV5647 register settings in that app are those captured and posted on https://www.raspberrypi.org/forums/view ... 25#p748855
  • I've made use of zero copy within MMAL. So the buffers are allocated from GPU memory, but mapped into ARM virtual address space. It should save a fair chunk of time otherwise spent just copying data around, which could be quite a burden when doing 5MPix15 or similar. This requires a quick tweak to /lib/udev/rules.d/10-local-rpi.rules adding the line: SUBSYSTEM=="vc-sm", GROUP="video", MODE="0660". (A minimal setup sketch follows this list.)
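
For anyone wanting a feel for the component before digging into raspiraw, here's a minimal setup sketch (error handling stripped; the encoding name, resolution and buffer count are my assumptions for the OV5647 in 5MPix mode - see raspiraw for the real sequence):
Code:
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_util.h"
#include "interface/mmal/util/mmal_util_params.h"

/* Called with each buffer of data received from the CSI-2 bus */
static void output_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   /* process/save buffer->data (buffer->length bytes), then recycle it.
      Real code re-sends the buffer to the port to keep the peripheral fed. */
   mmal_buffer_header_release(buffer);
}

int setup_rawcam(void)
{
   MMAL_COMPONENT_T *rawcam;
   MMAL_PORT_T *output;
   MMAL_POOL_T *pool;
   unsigned int i;

   mmal_component_create("vc.ril.rawcam", &rawcam);
   output = rawcam->output[0];

   /* Zero copy: buffers are allocated in GPU memory, mapped to the ARM */
   mmal_port_parameter_set_boolean(output, MMAL_PARAMETER_ZERO_COPY, MMAL_TRUE);

   /* Tell the driver the layout of the incoming data - it can't know otherwise */
   output->format->encoding = MMAL_ENCODING_BAYER_SBGGR10P; /* name as per the rawcam userland branch */
   output->format->es->video.width = 2592;
   output->format->es->video.height = 1944;
   output->format->es->video.crop.width = 2592;
   output->format->es->video.crop.height = 1944;
   mmal_port_format_commit(output);

   mmal_component_enable(rawcam);

   output->buffer_num = 8;   /* my guess - tune to taste */
   output->buffer_size = output->buffer_size_recommended;
   pool = mmal_port_pool_create(output, output->buffer_num, output->buffer_size);
   mmal_port_enable(output, output_cb);

   /* Hand all buffers to the port, then start the sensor streaming over I2C */
   for (i = 0; i < output->buffer_num; i++)
      mmal_port_send_buffer(output, mmal_queue_get(pool->queue));

   return 0;
}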

Hmm, that means we've just achieved the request in https://www.raspberrypi.org/forums/view ... 3&t=108287, and things like HDMI to CSI-2 receiver chips can now spit their data out into userland (although image packing formats may not be optimal, and the audio channel won't currently come through).


What is this not doing?
  • This is just reading the raw data out of the sensor. There is no AGC loop running, therefore you've got one fixed exposure time and analogue gain. Not going to be fixed as that is down to the individual sensor/image source.
  • The handling of the sensor non-image data path is not tested. You will find that you always get a pair of buffers back with the same timestamp. One has the MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO flag set and should be the non-image data. I have not tested this at all as yet, and the length will always come through as the full buffer size at the moment. (See the callback sketch after this list for telling the two buffers apart.)
  • The hardware peripheral has quite a few nifty tricks up its sleeve, such as decompressing DPCM data, or repacking data to an alternate bit depth. This has not been tested, but the relevant enums are there.
  • There are a bundle of timing registers and other setup values that the hardware takes. I haven't checked exactly what can and can't be divulged of the Broadcom hardware, so currently they are listed as parameters timing1-timing5, term1/term2, and cpi_timing1/cpi_timing2. I need to discuss with others whether these can be renamed to something more useful.
  • This hasn't been heavily tested. There is a bundle of extra logging on the VC side, so "sudo vcdbg log msg" should give a load of information if people hit problems.
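
Extending the callback from the earlier sketch, telling the two buffers apart looks something like this (the side-info path is untested, as per the caveats above):
Code:
/* Variant of output_cb distinguishing image data from non-image data */
static void output_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   if (buffer->flags & MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO)
   {
      /* Non-image (sensor metadata) buffer. The length currently always
         reports the full buffer size, so don't trust it yet. */
   }
   else
   {
      /* Image data: buffer->length bytes at buffer->data */
   }
   mmal_buffer_header_release(buffer);
}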

So it's a bank holiday weekend, those who are interested please have a play and report back. I will offer assistance where I can, but obviously I can't really help if you've hooked up an ABC123 sensor and it doesn't work, as I won't have one of those.
Longer term I do hope to find time to integrate this into the V4L2 soc-camera framework so that people can hopefully use a wider variety of sensors. The code for talking to MMAL from the kernel is already there in the bcm2835-v4l2 driver, and the demo code for the new component is linked here, so it doesn't have to be me who does that.

I think that just about covers it all for now. Please do report back if you play with this - hopefully it'll be useful to a fair few people, so I do want to improve it where needed.
Thanks to jbeale for following my daft requests and hooking an I2C analyser to the camera I2C, as that means I'm not breaking NDAs.

Further reading:
- The official CSI-2 spec is only available to MIPI Alliance members, but there is a copy of a spec on http://electronix.ru/forum/index.php?ac ... t&id=67362 which should give the gist of how it works. If you really start playing, then you'll have to understand how the image ID scheme works, image data packing, and the like.
- OV5647 docs - please don't ask us for them. There is a copy floating around on the net which Google will find you, and there are also discussions on the Freescale i.MX6 forums about writing a V4L2 driver for that platform, so information may be gleaned from there (https://community.freescale.com/thread/310786 and similar).

*edit*:
NB 1: As noted further down the thread, my scripts set up the GPIOs correctly for B+ and Pi 2 B boards (probably A+ too). If you are using an old A or B, please read further down to note the alternate GPIOs and I2C bus usage.
NB 2: This will NOT work with the new Pi display. The display also uses I2C-0 driven from the GPU, so adding in an ARM client of I2C-0 will cause issues. It may be possible to get the display to be recognised but not enable the touchscreen driver, but I haven't investigated the options there.
Last edited by 6by9 on Mon Nov 02, 2015 11:09 am, edited 2 times in total.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.
Raspberry Pi Engineer & Forum Moderator
by rpdom » Sat May 02, 2015 6:14 am
I don't understand a word of that, as imaging is not my field, but... I do realise that you have put a lot of work into expanding the possibilities for direct camera interfacing on the Pi and have opened up a whole new area for people to explore.

Thank you :)
by tchiwam » Sat May 02, 2015 11:52 am
OK, I got this!! Thank you - sadly I only got to this now; my new RPi2 + NoIR Cam will thank you soon.
by 6by9 » Sat May 02, 2015 6:39 pm
Things I forgot to say in my post last night.

The hardware needs the stride to be correct. Stride in the MMAL world comes via mmal_encoding_width_to_stride; under IL it is nStride in the port format. The data will be written with that stride, regardless of the line length of the incoming data, and the stride has to be a multiple of 32.
It will also wrap around the buffer if the buffer is too small to receive all the data presented within one frame on the CSI-2 bus.
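
As an illustration, here's a hand-rolled equivalent of what mmal_encoding_width_to_stride works out for a packed 10-bit Bayer format (my arithmetic, assuming the standard CSI-2 RAW10 packing of 4 pixels into 5 bytes):
Code:
/* Stride for a RAW10 line: 4 pixels pack into 5 bytes, and the result
   must be rounded up to a multiple of 32. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

unsigned int raw10_stride(unsigned int width)
{
   unsigned int line_bytes = (width * 10) / 8;  /* 2592 pixels -> 3240 bytes */
   return ALIGN_UP(line_bytes, 32);             /* 3240 -> 3264 */
}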

This driver has no knowledge of the incoming data layout other than via the data the client provides. If you change the sensor config to spit out something else and don't tell this driver, then expect it to do weird stuff (cropping the right hand edge off, wrapping and writing the bottom part of the image at the top, or leaving a large portion of the image unused).

I've just posted my mods to dcraw to https://github.com/6by9/RPiTest/tree/master/dcraw. There's also a built copy of the binary alongside the source; my hacks make it recognise all files as Pi raws.
by ethanol100 » Mon May 04, 2015 9:29 am
6by9 wrote:
  • The handling of the sensor non-image data path is not tested. You will find that you always get a pair of buffers back with the same timestamp. One has the MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO flag set and should be the non-image data. I have not tested this at all as yet, and the length will always come through as the full buffer size at the moment.

I think there is a small error in raspiraw.c on line 261:
Code:
if((buffer->flags&MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO) &&

should be
Code:
if(!(buffer->flags&MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO) &&


With this change I get the right buffer and can convert it using your version of dcraw.

A good example of how much work is done in the ISP to get a nice looking picture :-)
by 6by9 » Mon May 04, 2015 12:07 pm
ethanol100 wrote:
I think there is a small error in raspiraw.c on line 261 [...] With this change I get the right buffer and can convert it using your version of dcraw.

A good example of how much work is done in the ISP to get a nice looking picture :-)

Good catch. I'd cleaned it up from "buffer->flags == 0x04" in my initial hacked-together version. The brackets got added, but not the !
Fix pushed.

Yup, the ISP does a hell of a lot of work just to get reasonable AGC and white balance values. Add in crosstalk, lens shading, tone mapping, denoise, demosaic, and it's doing quite a bit of number crunching.
by 6by9 » Tue May 05, 2015 3:02 pm
Second blunder found - I've missed out the MMAL parameter mapping on the GPU, so all calls for mmal_port_parameter_set/get on the 4 new parameters will fail. Just fixing it, and will push an updated start_x.elf later.
I'll also try to get some testing done on the packing/unpacking stuff - it should make producing DNGs easier by converting them to 16 bit and just writing a header to the front.
by ThomasDK » Tue May 05, 2015 3:34 pm
I have been doing quite a bit of reverse engineering on the camera board over the last few months.

We are using the camera, without the Pi, in a university project. It will be used in a student cubesat satellite.

The Pi Cam was chosen as they are cheap, and readily available for purchase locally. This saves us weeks of waiting for something to arrive from China if a camera should die. Two Pi cams have been killed already - they are sensitive little buggers :D

Documentation isn't finished yet, but will be released open source when done.

As a start, here is the complete pinout for the camera:

Code:
1:    GND
2:    MIPI Data lane 0 N
3:    MIPI Data lane 0 P
4:    GND
5:    MIPI Data lane 1 N
6:    MIPI Data lane 1 P
7:    GND
8:    MIPI Clock N
9:    MIPI Clock P
10:   GND
11:   Sensor reset (active low)
12:   Sensor clock enable (active high)
13:   SCL
14:   SDA
15:   +3.3V
by 6by9 » Tue May 05, 2015 3:57 pm
ThomasDK wrote:As a start, here is the complete pinout for the camera: [pinout snipped]

Useful to collate (thank you), but I would say that that's available from the schematics at https://www.raspberrypi.org/documentati ... matics.pdf

Pin 12 is incorrect. It was originally intended to be a clock signal generated by the Pi for the sensor (hence the name of CAM_CLK, though renamed to CAM_GPIO1 on B+), but that had too many issues with EMC so it became just the drive for the LED on the board. The oscillator on the camera board is controlled by pin 11, the same as the sensor itself.
Some of the Pi camera clones have wired that up differently and do require pin 12 to be asserted to make it function.

I'm intrigued to know what device you paired the OV5647 up with, as there aren't too many devices with exposed CSI-2 out there.

Not an immediate issue, but you might want to be aware of https://www.raspberrypi.org/forums/view ... 5&p=731659 if you're making use of the Pi supply chain for your sensors.
by ThomasDK » Tue May 05, 2015 4:39 pm
6by9 wrote:Useful to collate (thank you), but I would say that that's available from the schematics [...] The oscillator on the camera board is controlled by pin 11, the same as the sensor itself.

True, they are on the schematic, but it doesn't tell you what they actually do ;)

Interesting - all 3 boards I have had my hands on have the oscillator powered on with the LED. No idea where the university buys them, but I believe they are from either Farnell or RS.

6by9 wrote:I'm intrigued to know what device you paired the OV5647 up with, as there aren't too many devices with exposed CSI-2 out there.

Not an immediate issue, but you might want to be aware of https://www.raspberrypi.org/forums/view ... 5&p=731659 if you're making use of the Pi supply chain for your sensors.

We are aware of the EOL, but as you say, this isn't an immediate issue.

The sensor is connected to an FPGA, using the free CSI bridge provided by Lattice. This decodes the data stream and provides pixel data, data type, etc. The image is stored in SDRAM, read out by a Cortex-M4 MCU, compressed, and then stored for later transmission to earth.
by 6by9 » Tue May 05, 2015 4:51 pm
ThomasDK wrote:Interesting, all 3 boards I have had my hands on have the oscillator power on with the LED. No idea where the university buys them, but I believe they are from either Farnell or RS.

Don't try setting "disable_camera_led=1" in /boot/config.txt then. It works fine with official boards and just disables the LED, but others have reported their camera stopping working if they do it with cameras from alternative vendors.

ThomasDK wrote:We are aware of the EOL, but as you say, this isn't an immediate issue.

As long as you are aware, seeing as you are investing time and effort.

ThomasDK wrote:The sensor is connected to an FPGA, using the free CSI bridge provided by Lattice. This decodes the data stream and provides pixel data, data type, etc. The image is stored in SDRAM, read out by a cortex m4 mcu, compressed, and then stored for later transmission to earth.

Cool. I've never really played with FPGAs, but you can fit a heck of a lot on them these days (during development they simulated about 50% of 2835 at a time in FPGAs, but those weren't small or low power, and only ran at about 4MHz instead of 250!)
by jbeale » Tue May 05, 2015 4:58 pm
Wow, thank you for this effort 6by9 ! This is one of those things that I thought would never happen.

So you need to decimate the raw data by 15x to enable writing to SD card. I understand a typical RPi SD card write speed is 10 MByte/sec - is that the value you assumed? I was hoping you could do a little better by streaming over ethernet instead of local file writing, but I guess that wouldn't be much faster if at all, comparing http://elinux.org/RPi_SD_cards with http://www.deckle.co.uk/blog/using-a-ra ... rformance/

Does this code have the ability to capture a full-resolution still with only one frame of latency from the trigger time? Maybe that was possible before using the v4l2 interface but using raspistill I always had about half a second or so of latency.
by 6by9 » Tue May 05, 2015 5:18 pm
jbeale wrote:Wow, thank you for this effort 6by9 ! This is one of those things that I thought would never happen.

It's not the perfect solution as you can't get directly at the peripheral, but it works for now.
jbeale wrote:So you need to decimate the raw data by 15x to enable writing to SD card. I understand a typical RPi SD card write speed is 10 MByte/sec, is that the value you assumed? I was hoping you could do a little bit better by streaming over ethernet instead of local file writing, but I guess that wouldn't be much faster if at all, comparing http://elinux.org/RPi_SD_cards with http://www.deckle.co.uk/blog/using-a-ra ... rformance/

It was a finger in the air number - I wanted to save some frames for processing, but didn't really care which ones. This really was hacked together code to exercise things - I claim no credits for cleanliness or elegance of the code!
jbeale wrote:Does this code have the ability to capture a full-resolution still with only one frame of latency from the trigger time? Maybe that was possible before using the v4l2 interface but using raspistill I always had about half a second or so of latency.

This is hooking directly into the receiver peripheral.
After the frame start interrupt is received we check to see if we have a new buffer to program into the peripheral for the next frame. If we managed to do the buffer swap, then at the frame end interrupt the frame callback will be triggered up to userspace. If we didn't manage to swap buffers, we hold on to the last buffer and overwrite the contents with the next frame.
So the delay from the last line finishing being exposed, being received, and delivered to the app is very small. Obviously with rolling shutter effects, the first line of the frame was exposed and received a while before.

Do bear in mind that the first frame that the OV5647 produces after starting streaming is always corrupt, so if you want minimal latency capturing then you want the camera streaming, and just grab the next buffer that comes back at your trigger point.
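
To make that concrete, here's a sketch of the "keep streaming, grab on trigger" approach (the trigger flag and save_frame() are hypothetical, and error handling is stripped):
Code:
#include <stdatomic.h>
#include <stdint.h>
#include "interface/mmal/mmal.h"

extern void save_frame(const uint8_t *data, uint32_t length); /* hypothetical saver */

static atomic_int trigger;   /* set to 1 from elsewhere at the trigger point */

static void output_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   /* The sensor is left streaming; frames are discarded until triggered,
      so the corrupt first frame and startup latency never get in the way. */
   if (!(buffer->flags & MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO) &&
       atomic_exchange(&trigger, 0))
   {
      save_frame(buffer->data, buffer->length);
   }
   mmal_buffer_header_release(buffer);
}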
by jbeale » Tue May 05, 2015 8:07 pm
From http://electronix.ru/forum/index.php?ac ... t&id=67362 "DRAFT MIPI Alliance Specification for CSI-2" at p. 24, line 531, Section 6, Camera Control Interface (CCI) "CCI shall support 400kHz operation and 7-bit Slave Addressing."

Right now, the RPi runs the CCI (I2C control bus to camera) at 100 kHz. No doubt NDA prevents saying anything about what any specific OV chip can or cannot do, but just for fun, is it possible to get the RPi I2C bus configured for fast mode (400 kHz) ?
by 6by9 » Tue May 05, 2015 9:48 pm
jbeale wrote:Right now, the RPi runs the CCI (I2C control bus to camera) at 100 kHz. No doubt NDA prevents saying anything about what any specific OV chip can or cannot do, but just for fun, is it possible to get the RPi I2C bus configured for fast mode (400 kHz)?

Assuming you're talking about doing it under Linux, it's all in the device tree config - https://github.com/raspberrypi/linux/bl ... -b.dts#L71
Code:
&i2c0 {
   pinctrl-names = "default";
   pinctrl-0 = <&i2c0_pins>;
   clock-frequency = <100000>;
};

You should be able to crank that up to 400000. Most other sensors we used did run at 400kHz, so I'm guessing it was scaled back for EMC reasons. The latency difference between the two speeds is pretty minimal anyway, so don't go expecting massive gains.
The Interfacing forum (https://www.raspberrypi.org/forums/viewforum.php?f=44) is the better bet for all things device tree and how to build and modify device tree stuff.
by 6by9 » Thu May 07, 2015 4:58 pm
6by9 wrote:Second blunder found - I've missed out the MMAL parameter mapping on the GPU, so all calls for mmal_port_parameter_set/get on the 4 new parameters will fail. Just fixing it, and will push an updated start_x.elf later.
I'll also try to get some testing done on the packing/unpacking stuff - it should make producing DNGs easier by converting them to 16 bit and just writing a header to the front.

Sorry, got delayed on this.
Push done, so all parameters should now be mapped correctly. As a bonus, image_encode will now accept YUYV / YVYU / UYVY and VYUY - they're the most commonly used formats for sensors spitting out YUV data instead of Bayer. It'll only support JPEG encoding, as there appear to be no YUV4:2:2 to RGB conversion functions written at the moment. I will look into feeding YUYV et al into video_encode, but there I need to sort out the extra chroma subsampling (it may be a case of throwing one line of chroma away, but that feels a little too dirty).

Not had a chance to check the packing/unpacking options as yet, but possibly soon.
by 6by9 » Fri May 08, 2015 4:43 pm
Unpacking tested and working for a sample size of one (RAW10 to RAW16). https://github.com/6by9/userland/tree/rawcam has been (rebased and) updated to include a #define RAW16; with that defined it spits out 16-bit raw.

It appears to be packed into the least significant bits, but I've not double checked that as I don't have any really good tools for analysing the raws (dcraw is doing some funny things with my data on over-saturated blocks going yellow or blue, but I think that is in the raw. Not having an AE loop is a right pain). I may have a word with a couple of my former colleagues to see if they remember what it is meant to do.
edit: Helps if you get the logic right when hacking dcraw and remember to shift one of the bytes. It seems to be packed MSByte first, and in the most significant bits of the word. There may be dithering being added to the bottom bits, as I don't see the bottom 6 bits always being clear, which would be my expectation when the actual source is only 10-bit.
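
For reference, here's a software version of that RAW10-to-RAW16 unpack, assuming the standard CSI-2 RAW10 layout (four bytes of high bits, then one byte carrying the four 2-bit remainders) and the MSB packing described above:
Code:
#include <stdint.h>
#include <stddef.h>

/* Unpack CSI-2 RAW10: every 5 bytes hold 4 pixels. */
void unpack_raw10(const uint8_t *in, uint16_t *out, size_t num_pixels)
{
   size_t i;
   int j;
   for (i = 0; i < num_pixels / 4; i++)
   {
      const uint8_t *p = &in[i * 5];
      for (j = 0; j < 4; j++)
      {
         uint16_t val = (uint16_t)((p[j] << 2) | ((p[4] >> (2 * j)) & 3));
         out[i * 4 + j] = (uint16_t)(val << 6); /* hardware appears to pack into the MSBs */
      }
   }
}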

I'm not going to test the encode and decode options - they should all work, but writing raws with DPCM is probably not too useful, and people can play if they find a sensor which has to transmit DPCM.
Next task - soc_camera host interface.

PLEASE WILL THOSE WHO HAVE TRIED THIS AND FIND IT USEFUL LET ME KNOW.
I have just encountered a couple of weird buffer management issues which are worrying and would be useful to get to the bottom of, but I don't want this to be wasted effort. If this functionality isn't useful to people, then I will drop it.
I also need to make the call as to when to throw the changes at Dom to go into the official firmware - that will be determined by user feedback.
by jbeale » Fri May 08, 2015 4:55 pm
I have not yet used this code, but I am very happy it exists and I expect to have an opportunity to test it soon-ish on a different sensor (e.g. not the OV5647). Unfortunately, I may not be able to say anything about it due to certain tiresome restrictions.
by 6by9 » Fri May 08, 2015 5:06 pm
jbeale wrote:I have not yet used this code, but I am very happy it exists and I expect to have an opportunity to test it soon-ish on a different sensor (e.g. not the OV5647). Unfortunately, I may not be able to say anything about it due to certain tiresome restrictions.

I fully understand not being able to disclose any details. I know of at least one other person/company interested in this who can't reveal any details of what they're up to.

There have been various people shouting, over the couple of years since the Pi was launched, that they wanted to be able to plug in this sensor or that module. Lots of noise, but I can't gauge how much they are actually doing themselves vs expecting it all dropped in their lap fully working. I'm not doing the latter, so how many does that leave, and is it then worth the effort to polish it fully?
by spikedrba » Fri May 08, 2015 5:48 pm
fwiw, as someone who has asked "for this and that sensor" in the past: this is largely over my head, maybe to the point that I'm not even understanding whether it's about using sensors other than Omnivision's, but using some of those Sony sensors/boards is at the very top of the list of what it would take to really produce a high quality security camera with the Pi.

I've been trying to learn as much as I can along the way, but it remains an effort relegated to a specific project with limited scope that should be completed over summer. I'm trying to tidy up and open source some of the stuff we cobbled together, but I don't know that I'd ever have the time to learn what it takes to contribute at a deeper level like this project requires.

thanks for all the effort put in so far, it's very much appreciated, whichever way you decide to go with this.
by jamesh » Fri May 08, 2015 7:27 pm
Worth noting that the effort to implement a new sensor is man-months, even with the work done by 6by9. There is getting the sensor programmed up correctly with the right timings, then all the tuning work to take the raw Bayer and make it into a picture that actually looks like it should.

So this is not a beginners' topic at all. It's not 'I've got this new sensor, now I can plug it in and get pictures out' - very far from it.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Please direct all questions to the forum, I do not do support via PM.
Raspberry Pi Engineer & Forum Moderator
by 6by9 » Fri May 08, 2015 8:39 pm
jamesh wrote:Worth noting that the effort to implement a new sensor is man-months, even with the work done by 6by9. There is getting the sensor programmed up correctly with the right timings, then all the tuning work to take the raw Bayer and make it into a picture that actually looks like it should.

So this is not a beginners' topic at all. It's not 'I've got this new sensor, now I can plug it in and get pictures out' - very far from it.

I can't think of a way to open up the ISP sensibly (I did discuss it with NP earlier this week), so Bayer sensors are pretty much out of the question except for when saving raw images and doing post processing (that then makes AE loops tricky).
The Freescale iMX6 IPU is what I'm comparing to, but even that doesn't handle Bayer directly (at least according to https://community.freescale.com/message/406081#406081)

If I can get this component integrated with the soc-camera framework then any of the drivers in https://github.com/raspberrypi/linux/tr ... soc_camera come into play. You still need someone to make up the PCB though, and when you're dealing with CSI-2 you are dealing with matched-impedance, matched-length transmission lines. That's also not a trivial matter.
The market for supporting extra sensors is going to be small, so costs are going to be high, and that's assuming you can manage to purchase sensors in small numbers.

Great that you're wanting to learn, just be aware it may be quite a steep learning curve to get things working properly.
by tchiwam » Tue May 12, 2015 5:43 am
This is my hope for getting a bigger sensor on that bus!! I only use and need straight raw data from the sensor - the less post-processing the better!

Thank you for doing this. I'm rebuilding a completely new kit on a Pi2 right now just to give this a go.
by sachsm » Tue May 12, 2015 4:44 pm
Greetings all - I see that it is no easy feat to interface a new MIPI camera to the Pi, but I was wondering if some Omnivision chips may be easier than others. For example, how about the OV2685 - how would I estimate the effort involved? One engineer where I work already has a crude configuration file that seems to work with our TI-TEVA dev boards, so I might be able to use those same register settings for the Pi. I only need still image capture, so maybe that would be easier?
by jbeale » Tue May 12, 2015 7:38 pm
If you only need still images, and you already have the register settings to make it work, I believe it could be done. You would get just raw data, so you would still need to do the debayering, processing and tuning, but you'd at least have something to work with.