(but probably mostly hello to those who know the insides of the camera pipe!)
I have been trying to work out the exact relationship between the AWB gains recorded with captured images and the processing needed to simulate an image taken with a different set of white balance gains.
Specifically, I have two .jpg images taken under slightly different conditions, which therefore received different white balance gains, and I want to bring them both to a common white balance (possibly the midpoint between them, or one converted into the other, or whatever works). The gains were recorded alongside the images at capture time, so I wish to use this recorded information to inter-convert the images.
I _think_ the white balance is applied after de-bayering, before application of the CCM and before colour space conversions from RGB to others - but that is only implied here: viewtopic.php?f=43&t=164033&p=1059684&h ... g#p1059684
What this seems to mean is that I would need to undo the various colour space conversions and the CCM, adjust the R/B white balance gains in that earlier stage, and then re-apply the same processing to get back to sRGB.
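For what it's worth, here is a minimal sketch of the simplest version of that idea: decode the sRGB transfer curve back to linear RGB, rescale the red and blue channels by the ratio of new to old gains, and re-encode. This deliberately ignores the CCM (which on the Pi pipeline is itself colour-temperature dependent) and works in float to limit the 8-bit precision loss, so it is only an approximation, not a faithful inversion of the pipeline. The function name and the (red_gain, blue_gain) tuple convention are my own assumptions, not anything from the camera stack:

```python
import numpy as np

def rebalance_wb(img_srgb_u8, old_gains, new_gains):
    """Approximately re-white-balance an 8-bit sRGB image.

    old_gains / new_gains are assumed (red_gain, blue_gain) pairs as
    recorded by AWB, with the green gain fixed at 1.0 by convention.
    NOTE: the CCM and any other colour processing are ignored here.
    """
    x = img_srgb_u8.astype(np.float32) / 255.0
    # sRGB -> linear (standard IEC 61966-2-1 transfer function)
    lin = np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)
    # divide out the old gains and apply the new ones in linear space
    lin[..., 0] *= new_gains[0] / old_gains[0]  # red channel
    lin[..., 2] *= new_gains[1] / old_gains[1]  # blue channel
    lin = np.clip(lin, 0.0, 1.0)
    # linear -> sRGB
    y = np.where(lin <= 0.0031308,
                 lin * 12.92,
                 1.055 * lin ** (1.0 / 2.4) - 0.055)
    return (y * 255.0 + 0.5).astype(np.uint8)
```

If the two images are close in colour temperature this may already get you most of the way; the residual error is exactly the CCM difference you would need the full undo/redo chain to remove.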
Do I have this approximately correct? Is there an easier way to do this?
(NB: It does not appear to be a simple case of scaling the images in sRGB space at output. Yes, I am aware that in reprocessing like this I will likely lose a couple of bits of colour precision, as these corrections should really be done on 10+ bit data rather than 8-bit data. And no, I don't want to force a fixed set of gains for all images, as the lighting in my scenes does change, so AWB gets approximately the right correction over time.)
Any hints/tips/suggestions or pointers to relevant code most appreciated.