BillyBag2 wrote:Things like gamma correction, saturation, HDR, merging two images, colour replacement. If there were multiple input images they would be the same size in pixels and be aligned over each other. Simple things would be on a per-pixel basis. It would be nice to do stuff that was influenced by neighbouring pixels, but being able to do the simple sums on a per-pixel basis would be useful too.
A case study could be merging N aligned images into one to reduce noise, just a simple vector mean.
You can do all of these things with OpenGL ES 2.0 shaders.
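All of the per-pixel snippets below slot into a fragment shader along these lines - a minimal sketch, where the uniform and varying names (Texture, uv) are assumptions:

```glsl
// Skeleton GLES2 fragment shader; each per-pixel snippet
// replaces the marked line.
precision mediump float;

uniform sampler2D Texture;
varying vec2 uv;   // texture coordinate interpolated from the vertex shader

void main()
{
    vec4 t = texture2D(Texture, uv);
    // ... per-pixel maths on t goes here ...
    gl_FragColor = t;
}
```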
Gamma correction:
vec4 t = texture2D(Texture, uv);
// pow() needs matching argument types in GLSL ES, hence vec3(g_Gamma);
// apply it to RGB only so alpha is left untouched
t.rgb = pow(t.rgb, vec3(g_Gamma));
Saturation:
vec4 t = texture2D(Texture, uv);
// Rec. 709 luma weights
float luminance = dot(t.rgb, vec3(0.2125, 0.7154, 0.0721));
// g_Saturation: 0.0 leaves the image unchanged, -1.0 gives greyscale, >0.0 boosts
t.rgb += (t.rgb - luminance) * g_Saturation;
HDR is a very vague term - you can't actually display a high dynamic range image without a high dynamic range monitor, and the Pi doesn't support those. You probably mean tone mapping? Look that up; it's more complex than a few lines of code.
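To give a flavour of what tone mapping involves, here is a sketch of the simple Reinhard operator, one of many - the g_Exposure uniform is an assumed per-scene scale factor, and the input would need to be a floating-point (or pre-scaled) texture:

```glsl
// Sketch: simple Reinhard tone mapping, compressing [0, inf) into [0, 1)
precision mediump float;

uniform sampler2D Texture;
uniform float g_Exposure;   // assumed scene-dependent scale factor
varying vec2 uv;

void main()
{
    vec3 hdr = texture2D(Texture, uv).rgb * g_Exposure;
    vec3 ldr = hdr / (hdr + vec3(1.0));
    gl_FragColor = vec4(ldr, 1.0);
}
```

Real tone mappers usually also need the scene's average luminance, which means an extra reduction pass.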
Merging two images:
vec4 t1 = texture2D(Texture1, uv);
vec4 t2 = texture2D(Texture2, uv);
// linear blend; equivalent to mix(t2, t1, g_Blend)
vec4 t = (t1 * g_Blend) + (t2 * (1.0 - g_Blend));
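The same idea covers the noise-reduction case study in the quote: averaging N aligned frames is just a per-pixel vector mean. A sketch for N=4 (GLES2 guarantees at least 8 texture units, so up to 8 frames fit in one pass; sampler names are assumptions):

```glsl
// Per-pixel mean of four aligned frames to reduce noise
precision mediump float;

uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform sampler2D Texture2;
uniform sampler2D Texture3;
varying vec2 uv;

void main()
{
    vec4 sum = texture2D(Texture0, uv)
             + texture2D(Texture1, uv)
             + texture2D(Texture2, uv)
             + texture2D(Texture3, uv);
    gl_FragColor = sum * 0.25;
}
```

For larger N you can ping-pong between two framebuffers, accumulating a running average a few frames at a time.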
Colour replacement is also a very vague term - there are lots of ways to do it, from simple chroma keying (compare each pixel against a key colour in the shader and use the result as a mask) right up to full colour correction using a 3D colour cube (lookup texture).
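The chroma-key end of that range is only a few lines - a sketch, with assumed uniforms g_KeyColour and g_Tolerance:

```glsl
// Sketch: simple chroma keying - pixels near g_KeyColour become transparent
precision mediump float;

uniform sampler2D Texture;
uniform vec3 g_KeyColour;    // colour to knock out, e.g. green-screen green
uniform float g_Tolerance;   // distance threshold in RGB space
varying vec2 uv;

void main()
{
    vec4 t = texture2D(Texture, uv);
    float d = distance(t.rgb, g_KeyColour);
    // smoothstep gives a soft edge around the threshold instead of a hard cut
    float mask = smoothstep(g_Tolerance, g_Tolerance * 1.5, d);
    gl_FragColor = vec4(t.rgb, t.a * mask);
}
```

Keying in RGB is crude; doing the distance test in YUV or HSV space is usually more robust against shading variation.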
Anyway - the GPU is perfectly capable of doing these tasks with just standard GLES.