tchiwam
Posts: 43
Joined: Mon Nov 24, 2014 4:01 pm

Re: RAW output information

Thu Dec 11, 2014 3:00 am

For those who want to see their raw data in R, you can read your image like this:
fid <- file("raw-01099.dng","rb")
junk <- readBin(fid, integer(), n=5080 , size=1, endian='little')
raw <- readBin(fid, integer(), n=2592*1944 , size=2, signed=FALSE,endian='little')
close(fid)
raw <- array(raw, dim=c(2592,1944))/64

#Look at the whole image
image(raw, col=grey(1:128 / 128))
#Single color images
image(raw[seq(1,2592,by=4),], col=grey(1:128 / 128))

# histogram for each channel
hist(raw[seq(1,2592, by = 4),], breaks=256, col='green', xlim=c(0,64))
hist(raw[seq(2,2592, by = 4),], breaks=256, col='red', xlim=c(0,64))
hist(raw[seq(3,2592, by = 4),], breaks=256, col='blue', xlim=c(0,64))
hist(raw[seq(4,2592, by = 4),], breaks=256, col='dark green', xlim=c(0,64))

Now my question: is this OK?
pixel[col+0] = buffer[j++] << 8;
pixel[col+1] = buffer[j++] << 8;
pixel[col+2] = buffer[j++] << 8;
pixel[col+3] = buffer[j++] << 8;
split = buffer[j++]; // low-order packed bits from previous 4 pixels
pixel[col+0] += (split & 0b11000000); // unpack them bits, add to 16-bit values, left-justified
pixel[col+1] += (split & 0b00110000)<<2;
pixel[col+2] += (split & 0b00001100)<<4;
pixel[col+3] += (split & 0b00000011)<<6;

Isn't this overwriting the top bits, with the addition sometimes giving values close to the 1023 limit?

#include <stdio.h>

int main(void)
{
    unsigned char a = 0xff, b = 0xff;
    unsigned short x = 0x00;

    x = a;
    x += ((short)b & 0b11000000) << 2;
    printf("%04x\n", x);
    x = a;
    x += ((short)b & 0b00110000) << 4;
    printf("%04x\n", x);
    x = a;
    x += ((short)b & 0b00001100) << 6;
    printf("%04x\n", x);
    x = a;
    x += ((short)b & 0b00000011) << 8;
    printf("%04x\n", x);

    return 0;
}

Maybe I'm just too tired and need to sleep...

tchiwam
Posts: 43
Joined: Mon Nov 24, 2014 4:01 pm

Re: RAW output information

Thu Dec 11, 2014 3:34 am

I was too tired, but the histogram is still showing strange distributions...

bablokb
Posts: 25
Joined: Fri Nov 07, 2014 7:45 am

Re: RAW output information

Fri Dec 19, 2014 4:41 pm

I have just updated raspi_dng on https://github.com/bablokb/raspiraw.

The new version supports "old" and "new" raw files. It also uses the embedded color matrix automatically (although you can still pass your own matrix to the program).

I detected a bug when running the program on the RPi: the color matrix is not stored correctly in the DNG file; negative values are stored as zero. This seems to be a bug in the TIFF library, which is quite old, and the configure script might not get everything right on the RPi. Help on this issue is appreciated.

Bernhard

aphextwin
Posts: 10
Joined: Tue Dec 02, 2014 5:46 pm

Re: RAW output information

Tue Jan 20, 2015 3:05 pm

Bernhard,

Did you find a way to get your raspiraw version running on the RPi?
It could be really useful...

thanks!

aphextwin
Posts: 10
Joined: Tue Dec 02, 2014 5:46 pm

Re: RAW output information

Thu Jan 22, 2015 2:13 pm

Imatest result from bealecorner.org:

I took the last raw image you shot with your M12 lens and ran it through Imatest. For the MTF calculation I used one border on the right side of your chart.
Enjoy!
BealeImatest.jpg

blindbloom
Posts: 20
Joined: Wed Jun 10, 2015 8:23 pm

Re: RAW output information

Wed Jun 10, 2015 9:54 pm

I have looked into the format of the RPi RAW files, trying to better understand the details. I know this topic has been kicked around on this thread for a long time, and perhaps all the bits and pieces are scattered among the various posts. Here is what I have found.

The ov5647 has a sensor array of 2624 by 1956. The raw encoding of this omits eight columns on the left and the right, leaving the raw image dimensions at 2608 by 1956. This image has a masked pixel frame that is six rows at the top and bottom, and eight columns at the left and right. The active region (excluding the masked sensors) is the familiar 2592 by 1944.

Each row is encoded as groups of four 10-bit values packed into five bytes. The first four bytes contain bits 9 through 2 of the corresponding values; the fifth byte contains bits 1 and 0 of each of the four, packed high to low. The end of each row is padded with four bytes, so the vertical stride (in bytes) from one row to the next is 5 * 2608 / 4 + 4 = 3264 bytes.

There is a trailing block of data (contents unknown to me) of 6538 bytes. So the offset back from the end of the JPEG file is 1956 * 3264 + 6538 bytes; seeking to that location places you at the beginning of the first pixel (which is a masked pixel).

I have tried this on some of the raw files on jbeale's website and found that the "old" firmware encodes the Bayer filters as
Even Rows: BGBGBG...
Odd Rows: GRGRGR...

The "new" firmware encodes the Bayer filters as
Even Rows: GBGBGB...
Odd Rows: RGRGRG...

Aside from that, both versions are identical as far as I can see.

If you only want to extract the active part of the image, it is easy to adjust the offsets to skip the masked sensors. I have included some sample C++ code showing a working example that extracts the full raw image as well as just the active sub-image (substitute your own in-memory storage in place of my private GenImage class). This is for "unflipped" images.

Code: Select all

RawDecodeExample()
{
	static const char rawFilename[] = "D:/RaspberryPi/RAW-GMB-Nov2014.jpg";	// from jbeale's site
	static const int knSensorSizeX = 2608;	// actual ov5647 is 2624 sensors wide, but the raw encoding omits 8 columns on left and right
	static const int knSensorSizeY = 1956;	// full ov5647 sensor height
	static const int knMarginLR = 8;		// number of masked sensors on left and right
	static const int knMarginTB = 6;		// number of masked sensors at top and bottom
	static const int knActiveSizeX = knSensorSizeX - 2 * knMarginLR;	// image without masked sensors
	static const int knActiveSizeY = knSensorSizeY - 2 * knMarginTB;	// image without masked sensors
	static const int knEndOfRowPadding = 4;	// padding at end of each row
	static const int knBytesPerSet = 5;
	static const int knPixelsPerSet = 4;
	static const int knStrideY = knBytesPerSet * knSensorSizeX / knPixelsPerSet + knEndOfRowPadding;	// in bytes
	static const int knRawTrailerSize = 6538;	// unknown data at end of raw image data
	static const int knRawDataSize = knSensorSizeY * knStrideY + knRawTrailerSize;

	// open the raw file
	FILE* fd = NULL;
	fopen_s(&fd, rawFilename, "rb");

	// sensor image with masked pixel border: knMarginTB rows above and below, knMarginLR columns to left and right
	{
		// seek to where the raw image data starts
		fseek(fd, -knRawDataSize, SEEK_END);

		// create an image for the Bayer data (includes masked pixels)
		GenImage sensorImage(knSensorSizeX, knSensorSizeY, keGray);

		for (int nY = 0; nY < knSensorSizeY; nY++)
		{
			for (int nX = 0; nX < knSensorSizeX;)
			{
				BYTE auData[knBytesPerSet];	// every set of five bytes encodes four, 10-bit pixels (5 x 8 bits = 4 * 10 bits = 40 bits)
				fread(auData, 1, knBytesPerSet, fd);

				// compute the sensor values, in the range [0.0, 1.0]
				float flV0 = static_cast<float>((static_cast<UINT>(auData[0]) << 2) + (static_cast<UINT>(auData[4] & 0xC0) >> 6)) / 1023.0f;
				float flV1 = static_cast<float>((static_cast<UINT>(auData[1]) << 2) + (static_cast<UINT>(auData[4] & 0x30) >> 4)) / 1023.0f;
				float flV2 = static_cast<float>((static_cast<UINT>(auData[2]) << 2) + (static_cast<UINT>(auData[4] & 0x0C) >> 2)) / 1023.0f;
				float flV3 = static_cast<float>((static_cast<UINT>(auData[3]) << 2) + static_cast<UINT>(auData[4] & 0x03)) / 1023.0f;

				// write the sensor values
				sensorImage.Write(nX++, nY, flV0);
				sensorImage.Write(nX++, nY, flV1);
				sensorImage.Write(nX++, nY, flV2);
				sensorImage.Write(nX++, nY, flV3);
			}
			fseek(fd, knEndOfRowPadding, SEEK_CUR);	// skip past the end-of-row padding
		}

		// save the complete sensor image
		sensorImage.SaveToFile(L"D:/RaspberryPi/Sensor.png", ke16BitsPerChannel);
	}

	// active area image without masked pixel border
	{
		// seek to where the first active pixel is located
		fseek(fd, -knRawDataSize + knMarginTB * knStrideY + knBytesPerSet * knMarginLR / knPixelsPerSet, SEEK_END);

		// create an image for the Bayer data (excludes masked pixels)
		GenImage activeImage(knActiveSizeX, knActiveSizeY, keGray);

		for (int nY = 0; nY < knActiveSizeY; nY++)
		{
			for (int nX = 0; nX < knActiveSizeX;)
			{
				BYTE auData[knBytesPerSet];	// every set of five bytes encodes four, 10-bit pixels (5 x 8 bits = 4 * 10 bits = 40 bits)
				fread(auData, 1, knBytesPerSet, fd);

				// compute the sensor values, in the range [0.0, 1.0]
				float flV0 = static_cast<float>((static_cast<UINT>(auData[0]) << 2) + (static_cast<UINT>(auData[4] & 0xC0) >> 6)) / 1023.0f;
				float flV1 = static_cast<float>((static_cast<UINT>(auData[1]) << 2) + (static_cast<UINT>(auData[4] & 0x30) >> 4)) / 1023.0f;
				float flV2 = static_cast<float>((static_cast<UINT>(auData[2]) << 2) + (static_cast<UINT>(auData[4] & 0x0C) >> 2)) / 1023.0f;
				float flV3 = static_cast<float>((static_cast<UINT>(auData[3]) << 2) + static_cast<UINT>(auData[4] & 0x03)) / 1023.0f;

				// write the sensor values
				activeImage.Write(nX++, nY, flV0);
				activeImage.Write(nX++, nY, flV1);
				activeImage.Write(nX++, nY, flV2);
				activeImage.Write(nX++, nY, flV3);
			}

			// seek to the first active pixel of the next row
			fseek(fd, knBytesPerSet * knMarginLR / knPixelsPerSet, SEEK_CUR);	// skip masked sensors at end of this row
			fseek(fd, knEndOfRowPadding, SEEK_CUR);	// skip past the end-of-row padding
			fseek(fd, knBytesPerSet * knMarginLR / knPixelsPerSet, SEEK_CUR);	// skip masked sensors at start of next row
		}
		// save the active sensor image
		activeImage.SaveToFile(L"D:/RaspberryPi/Active.png", ke16BitsPerChannel);
	}

	fclose(fd);
	return;
}
Last edited by blindbloom on Thu Jun 11, 2015 12:09 pm, edited 1 time in total.

jbeale
Posts: 3491
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: RAW output information

Thu Jun 11, 2015 4:47 am

Thanks for collecting those details. The Bayer filter order swap is a curious thing. The sensor cannot change physically with the firmware change, so one of those versions (either the old or new firmware) is presenting a RAW file that does not match the physical layout of the sensor. I wonder which one it is? Also, presuming a naive debayer function, one of those two versions is going to have a final image slightly more blurry than it should be after debayer, because adjacent RAW file pixels are not physically adjacent. I'm not sure if the per-pixel sharpness of the sensor makes the difference very significant, or not.

Or was the switch a result of fixing the presentation of the image reversed left-to-right? I haven't compared them for that, but it could be the case.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Thu Jun 11, 2015 7:39 am

Do we really still care about the "old" firmware? It changed on 22nd May 2013!

Code: Select all

Author: James Hughes
Date:   Wed May 22 15:29:39 2013 +0000

    Fix up HFLIP to default correctly on OV5647/RaspberryPi.
    Added missing flip code that deals with default in driver.
Looking at the commit, yes the default transform changed from H&V flip to just H flip. The reason is not stated, but I suspect that the default was wrong when the flip support got added in April 2013. Actually the internet says the camera module was only released on 14th May 2013, so the sample size of old images must be tiny.

I have stated many times that the Bayer order is present in the raw header, but nobody seems to have bothered to investigate. Try about byte 68 into the raw header. The enum values are even in the userland repo https://github.com/raspberrypi/userland ... pes.h#L144 Process that correctly and you can then even deal with deliberate flips.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

blindbloom
Posts: 20
Joined: Wed Jun 10, 2015 8:23 pm

Re: RAW output information

Thu Jun 11, 2015 12:28 pm

I should not have said anything in my original post about the flip state. I have edited it with a strike-through.

6by9 is right about who cares about old firmware? But curiosity isn't a bad thing. Well, unless you're a cat.

And I also agree that the use of metadata (like Bayer filter order) is much preferred over "this seems to work--I'll cross my fingers that it keeps working." I haven't investigated the content of the raw header, but I'd like to know if anyone has figured out any bits (with some certainty).

Another thing I've found is that the masked sensor area seems to be mostly "black," but it contains random chunks that are clearly not the values of masked sensors. I was hoping to use the masked pixels for pedestal correction in RAW processing, but that looks unlikely now. Anyone else tried this?

jbeale
Posts: 3491
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: RAW output information

Thu Jun 11, 2015 1:53 pm

FWIW, Dave Coffin, author of "dcraw" https://www.cybercom.net/~dcoffin/dcraw/ declined my request to update his converter to handle both old and new firmware RPi raw files, without knowing how to tell them apart. I believe up until now it handles only the "old" style. I was too lazy earlier to look into it, but it seems the information is there to make some progress on that front...

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23636
Joined: Sat Jul 30, 2011 7:41 pm

Re: RAW output information

Thu Jun 11, 2015 1:58 pm

blindbloom wrote:I should not have said anything in my original post about the flip state. I have edited it with a strike-through.

6by9 is right about who cares about old firmware? But curiosity isn't a bad thing. Well, unless you're a cat.

And I also agree that the use of metadata (like Bayer filter order) is much preferred over "this seems to work--I'll cross my fingers that it keeps working." I haven't investigated the content of the raw header, but I'd like to know if anyone has figured out any bits (with some certainty).

Another thing I've found is that the masked sensor area seems to be mostly "black," but it contains random chunks that are clearly not the values of masked sensors. I was hoping to use the masked pixels for pedestal correction in RAW processing, but that looks unlikely now. Anyone else tried this?
Are the random chunks simply noise? The pixel sites still produce noise, even if behind the mask.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Thu Jun 11, 2015 3:03 pm

jamesh wrote:
blindbloom wrote:I should not have said anything in my original post about the flip state. I have edited it with a strike-through.

6by9 is right about who cares about old firmware? But curiosity isn't a bad thing. Well, unless you're a cat.

And I also agree that the use of metadata (like Bayer filter order) is much preferred over "this seems to work--I'll cross my fingers that it keeps working." I haven't investigated the content of the raw header, but I'd like to know if anyone has figured out any bits (with some certainty).

Another thing I've found is that the masked sensor area seems to be mostly "black," but it contains random chunks that are clearly not the values of masked sensors. I was hoping to use the masked pixels for pedestal correction in RAW processing, but that looks unlikely now. Anyone else tried this?
Are the random chunks simply noise? The pixel sites still produce noise, even if behind the mask.
Those pixels are not read out from the sensor, at least with the settings in use. The frame read from the sensor is 2592x1944 pixels.

There are restrictions within the hardware blocks to pad images to particular strides (number of bytes between start of two consecutive lines) - this is for memory bandwidth performance reasons.
For the 10-bit Bayer raw format, the number of pixels is rounded up to a multiple of 32, multiplied by 10/8, and that result is then rounded up to a multiple of 32. So ALIGN_UP(2592, 32) = 2592; 2592 * 10/8 = 3240; ALIGN_UP(3240, 32) = 3264. Those extra 24 bytes per line will not get written to, so may well be just random stuff left in memory.

For related reasons, images allocations have the height rounded up to a multiple of 16, mainly as VideoCore has a 16 way SIMD engine, and it saves mucking around with disabling elements at the edge of images. Again, those bytes will not be written to, so may be random stuff left in memory. ALIGN_UP(1944, 16) = 1952 lines.

Without looking at the code, my theory would say seek back 3264*1952 = 6371328 bytes, and that should be the start of the actual raw image data.

The "official" way to find the raw data at the end of a JPEG is to find the "@BRCM" string after the JPEG EOI (end of image) marker. If you treat the byte after that as offset 0, the 32bit word at offset 4 is the offset from there to the start of the raw image. Expect it to be (32768-4) = 32764 (ie the whole header including BRCM is 32768 bytes).

dcraw is a hybrid of those two (probably because they hadn't noticed the offset and didn't want to scan the entire file for the EOI marker):
if (!(strncmp(model,"ov",2) && strncmp(model,"RP_OV",5)) &&
!fseek (ifp, -6404096, SEEK_END) &&
fread (head, 1, 32, ifp) && !strcmp(head,"BRCMn")) {
strcpy (make, "OmniVision");
data_offset = ftell(ifp) + 0x8000-32;

I like it when practice and theory match - 6371328+32768 = 6404096, so dcraw is seeking by the amount I expect.

Sorry, I gave a duff offset before for the Bayer order - the structure I was looking at was embedded within another. It's more likely to be around byte 244 into the header.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

blindbloom
Posts: 20
Joined: Wed Jun 10, 2015 8:23 pm

Re: RAW output information

Thu Jun 11, 2015 3:41 pm

Thanks, 6by9. That is very useful information.

I was going by info in the ov5647 specification, which reads:
Of the 5,132,544 pixels, 5,038,848 (2592 x1944) are active pixels and can be output. The other pixels are used for black level calibration and interpolation. The center 2592x1944 is suggested to be output from the whole active pixel array. The back end processor can use the boundary pixels for additional processing.
So by the time the RAW image is encoded (and appended to the JPEG file), masked sensors are gone. How then is the black level calibration to be done? Has it already been done before encoding (in which case the RAW values are not truly the sensor values)? Or is the black level data encoded in the metadata in the header somewhere for those who care to use it?

I suppose if push came to shove, a particular camera's black level could be measured by shooting with the lens cap on, but it is likely to be dependent on gain (ISO) and exposure time.

Why is all this information undocumented? Since RAW is made available to RPi camera users, how are we expected to know how to use it? This heuristic method is tedious, time-consuming and prone to error.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23636
Joined: Sat Jul 30, 2011 7:41 pm

Re: RAW output information

Thu Jun 11, 2015 3:46 pm

blindbloom wrote:Thanks, 6by9. That is very useful information.

I was going by info in the ov5647 specification, which reads:
Of the 5,132,544 pixels, 5,038,848 (2592 x1944) are active pixels and can be output. The other pixels are used for black level calibration and interpolation. The center 2592x1944 is suggested to be output from the whole active pixel array. The back end processor can use the boundary pixels for additional processing.
So by the time the RAW image is encoded (and appended to the JPEG file), masked sensors are gone. How then is the black level calibration to be done? Has it already been done before encoding (in which case the RAW values are not truly the sensor values)? Or is the black level data encoded in the metadata in the header somewhere for those who care to use it?

I suppose if push came to shove, a particular camera's black level could be measured by shooting with the lens cap on, but it is likely to be dependent on gain (ISO) and exposure time.

Why is all this information undocumented? Since RAW is made available to RPi camera users, how are we expected to know how to use it? This heuristic method is tedious, time-consuming and prone to error.
The black level is hard coded in to the tuning. It will have been worked out in a way similar to what you have just described.

As for documentation, feel free to write some on this. RAW was simply provided as an added bonus, for people to play with if they wanted. I've never had the time or inclination to document it myself.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Thu Jun 11, 2015 3:51 pm

blindbloom wrote:So by the time the RAW image is encoded (and appended to the JPEG file), masked sensors are gone. How then is the black level calibration to be done? Has it already been done before encoding (in which case the RAW values are not truly the sensor values)? Or is the black level data encoded in the metadata in the header somewhere for those who care to use it?
I'm not an expert on the tuning, but memory says that black level varies relatively little between samples of the same sensor, so it ends up hard coded in the tuning. Red and blue gain variation and lens shading values are more significant, and are often calibrated on the production line (not for the Pi OV5647 though).
blindbloom wrote:Why is all this information undocumented? Since RAW is made available to RPi camera users, how are we expected to know how to use it? This heuristic method is tedious, time-consuming and prone to error.
Mainly lawyers. What counts as Intellectual Property? All this code belongs to Broadcom. They may not care about it too much now having abandoned the mobile space, but I can't make that call, and nor can the Pi Foundation. I probably shouldn't be telling you the offset, but never mind.

The raw image stuff was written for in house captures for image tuning purposes, and was frequently being tweaked in structure. Being in-house, as long as the parser was updated too then there was no issue. I was a little amazed when I saw it had been released on the Pi branch as it was more test code than anything, but you can't put the genie back in the bottle.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

jbeale
Posts: 3491
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: RAW output information

Thu Jun 11, 2015 4:00 pm

aphextwin wrote:Imatest result from bealecorner.org:
I took your last Raw-Image you did with your M12 lens and ran it through Imatest. For MTF-calculation I took one border on the right side of your chart.
Thanks for posting that. I gather you're using http://bealecorner.org/best/RPi/RAW-GMB-Nov2014.jpg from my RPi raw page http://bealecorner.org/best/RPi/

Please note, that color chart is not in focus. It is too close and the background is more in-focus. Also I might not have been holding it perfectly still. I have seen it recommended to put color charts slightly out of focus to reduce the effect of any dust, etc.

(My impression was, a few years back, RAW data was made available on an "as-is" basis due to some demand, but I gather it was outside the RPi mission statement, so time was not budgeted for writing up docs etc. And in fact, I think the number of people actually making use of RAW has been rather limited.)

blindbloom
Posts: 20
Joined: Wed Jun 10, 2015 8:23 pm

Re: RAW output information

Thu Jun 11, 2015 4:35 pm

I am a documentation-oriented guy and would be very willing to produce a more formal document describing the RAW format. But I lack reliable information; most of it is speculation.

Based on an earlier comment, it appears as though the black-level compensation has already been applied to the values in the RAW file. This means that the supplied values are not truly RAW. I have color-calibrated literally hundreds of raw-capable DSLRs over the past few years, and they almost always provide actual sensor data. Black-level compensation is performed by the consumer of the file using either masked sensors (which are usually available) or metadata. The black-level compensation value is usually dependent on the gain and exposure time. Often, different black levels are seen for each of the colors in the Bayer 2 x 2 array: R, G1, G2 and B. I hope these variables have been properly accounted for in the RPi RAW.

I agree with jbeale that RAW is used by a small percentage of users. But since it represents scene-referred data, it is much more valuable to the computational photography folks (e.g. HDR, panorama stitching, lens-distortion correction, vignette correction, auto-exposure, AWB, etc.). Output-referred images (like JPEG) are much more difficult to work with for these applications.

(And I must thank jbeale for the nice resources on his site. Very useful stuff.)

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Thu Jun 11, 2015 4:57 pm

BLACK LEVEL COMPENSATION IS NOT APPLIED. THIS IS THE RAW DATA DIRECT OFF THE SENSOR WITH NO PROCESSING WHAT SO EVER.
How many times do we have to say it?

And none of the image processing stages in the sensor are used. The only parameters changed in the sensor are exposure time and analogue gain. All other settings are done in the Broadcom ISP hardware on BCM2835/6.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

jbeale
Posts: 3491
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: RAW output information

Thu Jun 11, 2015 5:04 pm

Black level is a function of sensor temperature, and I thought measuring it is the purpose of the masked pixels. For now, the best we can do is control illumination and/or add an external shutter and measure it manually using the regular unmasked pixels. Unless it appears somewhere in the header bytes- but given the "really truly RAW" nature of the data, perhaps not.

(...on second thought, given that there is now a direct access mode possible viewtopic.php?f=43&t=109137 , with the right register settings it might be possible to read out the masked pixels, then you could imagine per-row and per-column offset correction. It would require documentation I think we don't have, and more work than I'd have time or interest in, though)

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Thu Jun 11, 2015 5:19 pm

It's only useful if you have an algorithm to do something useful with those lines, and can put up with your framerate dropping slightly as you are now reading out extra pixels on each and every frame. If it is viewed that black level calibration isn't going to vary significantly, then it's just not worth it.

Feel free to take the datasheet and access to the full raw data via viewtopic.php?f=43&t=109137 to read out those extra lines to do your own stuff with.

(Yes, I'm a little grumpy today, and this is not something that is going to be changing in the GPU firmware)
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23636
Joined: Sat Jul 30, 2011 7:41 pm

Re: RAW output information

Thu Jun 11, 2015 8:22 pm

Just a quick comment on the quality of the ISP. This is the SAME ISP that is used in the Nokia 808, which for some time was regarded as the BEST camera phone you could buy. Even after some years, it still takes shots that put modern devices to shame.

Yes, the same ISP. Just a different sensor and therefore a different tuning. It will take an awful lot of work to take the raw and get a decent image out of it. Work that the ISP does in real time, after tuning by experts in the field (admittedly it could use more tuning, but life is too short).
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

jbeale
Posts: 3491
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: RAW output information

Thu Jun 11, 2015 8:43 pm

I do agree with JamesH here. Anyone who has compared the raw and JPEG output from raspistill can see that there is a LOT of work and maybe some magic involved in getting to that JPEG output. Given enough light, the still JPEG images from the RPi camera are simply pleasing to look at, to a degree that my RAW conversion efforts never matched (and, unfortunately, still frames captured from .h264 video streams never reach that level either). I do not know the exact technical reasons why ("extra processing", I am told), but at any rate the difference is clearly there.

I have also used or attempted to use the RPi camera for machine vision applications and there you need the unfiltered, un-magicked image so I also appreciate that point of view.

blindbloom
Posts: 20
Joined: Wed Jun 10, 2015 8:23 pm

Re: RAW output information

Thu Jun 11, 2015 10:16 pm

It is true that for most people, RAW is not useful. But for others, it is very nice to have. I was surprised and very pleased to discover that the option is available on the RPi.

Based on information from 6by9, jbeale and others, I have made another pass at an example C++ snippet that makes use of the metadata in the RAW header. It also serves as a "lazy man's" form of documentation. If anyone discovers the meaning of any more things in the header, I would like to know.

Code: Select all

RawDecodeExample()
{
	static const char rawFilename[] = "D:/RaspberryPi/RAW-GMB-Nov2014.jpg";	// from jbeale's site
	static const int knSizeX = 2592;
	static const int knSizeY = 1944;
	static const int knEndOfRowPadding = 24;	// padding at end of each row
	static const int knBytesPerSet = 5;
	static const int knPixelsPerSet = 4;
	static const int knStrideY = knBytesPerSet * knSizeX / knPixelsPerSet + knEndOfRowPadding;	// in bytes
	static const int knHeaderSize = 32768;
	static const int knEndOfImagePadding = 8 * knStrideY;
	static const int knRawDataSize = knHeaderSize + knSizeY * knStrideY + knEndOfImagePadding;

	// open the raw file
	FILE* fd = NULL;
	fopen_s(&fd, rawFilename, "rb");
	assert(fd != NULL);

	// seek to the header
	fseek(fd, -knRawDataSize, SEEK_END);

	// read the beginning of the header
	static const int knPartialHeaderSize = 512;
	BYTE auHeader[knPartialHeaderSize];
	fread(auHeader, sizeof(BYTE), sizeof(auHeader), fd);

	// get the offset to the sensor data
	UINT uOffset =
		(static_cast<UINT>(auHeader[11]) << 24) +
		(static_cast<UINT>(auHeader[10]) << 16) +
		(static_cast<UINT>(auHeader[ 9]) <<  8) +
		(static_cast<UINT>(auHeader[ 8]));

	// make sure the header looks ok, as far as we're able to check it
	assert(uOffset == knHeaderSize - 4);	// the stored value is 4 less than the offset of the sensor data from the start of the header
	assert(strncmp(reinterpret_cast<const char*>(&auHeader[  0]), "BRCM", 4) == 0);
	assert(strncmp(reinterpret_cast<const char*>(&auHeader[ 16]), "ov5647 version 0.1", 18) == 0);
	assert(strncmp(reinterpret_cast<const char*>(&auHeader[176]), "2592x1944", 9) == 0);
	assert(strncmp(reinterpret_cast<const char*>(&auHeader[420]), "xxxx", 4) == 0);

	// print the Bayer pattern colors
	fprintf(stdout, "Bayer Pattern is %s\n",
		(auHeader[244] == 2) ? "BG/GR" :	// old firmware
		(auHeader[244] == 1) ? "GB/RG" :	// new firmware
		(auHeader[244] == 0) ? "RG/GB (unexpected)" : "unrecognized");
	fflush(stdout);

	// seek to the sensor data
	fseek(fd, uOffset + 4 - knPartialHeaderSize, SEEK_CUR);

	// create an image for the Bayer data
	GenImage image(knSizeX, knSizeY, keGray);

	for (int nY = 0; nY < knSizeY; nY++)
	{
		for (int nX = 0; nX < knSizeX;)
		{
			BYTE auData[knBytesPerSet];	// every set of five bytes encodes four 10-bit pixels (5 x 8 bits = 4 x 10 bits = 40 bits)
			fread(auData, 1, knBytesPerSet, fd);

			// compute the 10-bit sensor values, in the range [0, 1023]
			UINT uV0 = (static_cast<UINT>(auData[0]) << 2) + (static_cast<UINT>(auData[4] & 0xC0) >> 6);
			UINT uV1 = (static_cast<UINT>(auData[1]) << 2) + (static_cast<UINT>(auData[4] & 0x30) >> 4);
			UINT uV2 = (static_cast<UINT>(auData[2]) << 2) + (static_cast<UINT>(auData[4] & 0x0C) >> 2);
			UINT uV3 = (static_cast<UINT>(auData[3]) << 2) + (static_cast<UINT>(auData[4] & 0x03) >> 0);

			// write the sensor values, scaling from the range [0, 1023] to [0.0, 1.0]
			image.Write(nX++, nY, uV0 / 1023.0f);
			image.Write(nX++, nY, uV1 / 1023.0f);
			image.Write(nX++, nY, uV2 / 1023.0f);
			image.Write(nX++, nY, uV3 / 1023.0f);
		}

		// seek to the first active pixel of the next row
		fseek(fd, knEndOfRowPadding, SEEK_CUR);
	}

	// save the active sensor image
	image.SaveToFile(L"D:/RaspberryPi/Bayer.png", ke16BitsPerChannel);

	fclose(fd);
	return;
}
My interest in RAW is due to my interest in computational photography. I have modified my camera by adding an aftermarket CS-mount zoom lens. I hope to control aperture, focus and zoom (focal length) with small stepper motors, which should allow me to characterize and correct for lens distortion and vignetting under all combinations of lens settings. Hence my detailed questions about raw sensor data extraction: how to get it, how to understand its meaning, what useful metadata accompanies it, and so on. I apologize if these questions annoy anyone. I am a color/imaging scientist and I know I get a bit carried away at times.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23636
Joined: Sat Jul 30, 2011 7:41 pm

Re: RAW output information

Fri Jun 12, 2015 8:32 am

Sounds like an interesting project.

An ISP is a complicated beast. The one in the VC4 has over twenty stages of processing, all done in real time using very little power (it's a very clever bit of silicon). I'll try to remember some of the stages, each of which of course has a load of parameters to tune the stage to the particular sensor.

Bayer denoise
Debayer
Black level
Lens shading (vignette correction)
Colour correction
Gamma correction
Stills denoise
Video denoise
Sharpness
White balance
Gain control
Lens control (for devices with moveable lens)
Flash control
Scaling
Contrast enhance (e.g. HDR)
jpeg encode



These are quite good intros:

http://www.cs.cmu.edu/afs/cs/academic/c ... eline1.pdf
https://www.pathpartnertech.com/wp-cont ... uning1.pdf
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7274
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: RAW output information

Fri Jun 12, 2015 10:37 am

jamesh wrote:Bayer denoise
Debayer
Black level
Lens shading (vignette correction)
Colour correction
Gamma correction
Stills denoise
Video denoise
Sharpness
White balance
Gain control
Lens control (for devices with moveable lens)
Flash control
Scaling
Contrast enhance (e.g. HDR)
jpeg encode
JPEG encode is a separate block.
And you missed out at least:
- the input formatters (Bayer or YUV/RGB. The Bayer one can also do transpose)
- various colour conversion blocks between (IIRC) YCbCr, YUV (I think there were 3 overall, and an extra in the YUV input pipe)
- chrominance stretch
- distortion
- crosstalk
- statistics (and lies, damned lies)
- defective pixel correction
- output formatters
When needed, you could also pull data out of the hardware pipe at almost any stage, process it in software, and then shove it back in again for the rest of the pipe to process.
And yet we still managed to cram full 1080P30 camera capture and video encode into 800mW - those hardware guys were clever!
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

Return to “Camera board”