
camera -> egl texture path question

Posted: Sun Nov 24, 2013 11:19 am
by wibble82
Hi

I'm starting to pull apart the demos of your latest work on the direct-to-EGL texture path (which is awesome, by the way!). Just one question so far: reading through the comments, I spotted these lines:

* 2) The driver implementation creates a new RGB_565 buffer and does the color
* space conversion from YUV. This happens in GPU memory using the vector
* processor.

When you say 'vector processor', are you referring to something GPU-side there, or to the vector extensions of the CPU? I ask because the key advantage of this system is avoiding load on the CPU. If we're chomping up a chunk of it for the conversion, that'd be a shame!

It'd be really handy if we could just get the YUV channels as separate images; then colour space conversion can be done as and when it's needed in a shader. For example, I may run face detection on the Y channel only, then, having found candidates, extract and convert only the areas of the high-res image containing a face.

[edit: added advantages of access to the pure YUV - we get to choose the image format as well, so aren't limited to RGB565, and can manage our own buffers a little more effectively]
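
To sketch the kind of shader I mean (purely illustrative - the sampler and varying names are made up), given the Y, U and V planes as three separate 8-bit greyscale textures, the conversion might look something like:

// Minimal sketch: BT.601 YUV -> RGB in the fragment shader, assuming
// three separate 8-bit greyscale textures and full-range YUV (the
// camera may well produce video-range, in which case the offsets and
// scales differ).
precision mediump float;
uniform sampler2D tex_y;
uniform sampler2D tex_u;
uniform sampler2D tex_v;
varying vec2 texcoord;
void main(void)
{
    float y = texture2D(tex_y, texcoord).r;
    float u = texture2D(tex_u, texcoord).r - 0.5;
    float v = texture2D(tex_v, texcoord).r - 0.5;
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}

That way the full conversion only ever runs over the regions I actually need in RGB.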

Thanks

-Chris

Re: camera -> egl texture path question

Posted: Tue Nov 26, 2013 6:01 am
by drhastings
I'm fairly certain that refers to one of the two vector processors on the GPU.

I don't think it makes much sense to leave anything in YUV on the GPU side of things, if only because it's going to be used as RGB for rendering 99.9% (the preceding statistic is 100% accurate) of the time. Presumably you would read back some downsampled image to run your face recognition on before deciding which regions you want to render at full resolution. In that case, just do the conversion to Y in the shader you use for that step.

Pack into RGBA for maximum efficiency; the Pi doesn't seem to allow reading back just the alpha channel.
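
For example, rendering to a quarter-width target with something like this minimal sketch ('tex' and 'texel_w' are illustrative names) packs four adjacent luma values into each RGBA texel, so one glReadPixels call fetches four greyscale pixels at once:

// Minimal sketch: RGB -> Y conversion with four horizontally adjacent
// luma samples packed into one RGBA texel. Render to a target one
// quarter of the source width, then read the result back as RGBA.
precision mediump float;
uniform sampler2D tex;   // RGB camera texture
uniform float texel_w;   // 1.0 / source width
varying vec2 texcoord;
const vec3 LUMA = vec3(0.299, 0.587, 0.114); // BT.601 luma weights
void main(void)
{
    float y0 = dot(texture2D(tex, texcoord).rgb, LUMA);
    float y1 = dot(texture2D(tex, texcoord + vec2(texel_w, 0.0)).rgb, LUMA);
    float y2 = dot(texture2D(tex, texcoord + vec2(2.0 * texel_w, 0.0)).rgb, LUMA);
    float y3 = dot(texture2D(tex, texcoord + vec2(3.0 * texel_w, 0.0)).rgb, LUMA);
    gl_FragColor = vec4(y0, y1, y2, y3);
}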

Re: camera -> egl texture path question

Posted: Tue Nov 26, 2013 10:38 am
by wibble82
Hmm - not sure I agree. In any scenario where I don't need the RGB data either every frame or at full res, I'm wasting valuable cycles doing the full-res conversion to RGB every frame AND having to do an additional conversion to Y for greyscale analysis. Maybe if all you're doing is rendering the camera feed it's fine, but as soon as we start chewing up GPU time for image processing we need every cycle we can get!

Re: camera -> egl texture path question

Posted: Tue Nov 26, 2013 11:26 am
by jamesh
Internally the ISP works mostly on YUV420 images. The path is Bayer for things like black-level and defective-pixel correction (AGC too, I think), then conversion to YUV for the rest of the pipeline and transfer into the encoder (H264 takes YUV input). Only late in the day is it converted to RGB for display (and even then the composition HW can use YUV if that's what it's given).

The GPU contains two vector units, which are 16-way SIMD devices running at the GPU clock rate, so multiply the clock rate by 16 to get the approximate throughput (at a 250MHz GPU clock, for example, that's roughly 4 billion operations per second per unit).

Re: camera -> egl texture path question

Posted: Tue Nov 26, 2013 4:09 pm
by drhastings
wibble82 wrote:Hmm - not sure I agree. In any scenario where I don't need the RGB data either every frame or at full res, I'm wasting valuable cycles doing the full-res conversion to RGB every frame AND having to do an additional conversion to Y for greyscale analysis. Maybe if all you're doing is rendering the camera feed it's fine, but as soon as we start chewing up GPU time for image processing we need every cycle we can get!
Oh, I agree that it would be more useful for what you are doing to have things in YUV; I just think that rendering will be done far more often than image processing, so if you are only going to do it one way or the other, RGB makes more sense. It would put an even bigger barrier to entry on this if people had to write shaders to convert colour in addition to writing all the other stuff required to get something on the screen.

Re: camera -> egl texture path question

Posted: Tue Nov 26, 2013 6:35 pm
by dom
wibble82 wrote: It'd be really handy if we could just get the YUV channels as separate images; then colour space conversion can be done as and when it's needed in a shader. For example, I may run face detection on the Y channel only, then, having found candidates, extract and convert only the areas of the high-res image containing a face.

[edit: added advantages of access to the pure YUV - we get to choose the image format as well, so aren't limited to RGB565, and can manage our own buffers a little more effectively]
Tim does have a demo where the camera data is split into separate 8-bit Y/U/V textures and edge detection is done on the Y plane.
He did get a better framerate compared to processing the RGB texture.
He's still investigating a hang, but hopefully it will be released within the next week.
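
(I can't speak for Tim's exact shader, but the idea is along these lines - a minimal Sobel sketch over a single 8-bit Y texture, where 'texelsize' is an assumed uniform holding 1/width and 1/height:)

// Minimal sketch: 3x3 Sobel edge detect on a greyscale Y texture.
precision mediump float;
uniform sampler2D tex_y;
uniform vec2 texelsize; // (1.0 / width, 1.0 / height)
varying vec2 texcoord;

float luma(vec2 offset)
{
    return texture2D(tex_y, texcoord + offset * texelsize).r;
}

void main(void)
{
    float tl = luma(vec2(-1.0, -1.0));
    float  l = luma(vec2(-1.0,  0.0));
    float bl = luma(vec2(-1.0,  1.0));
    float  t = luma(vec2( 0.0, -1.0));
    float  b = luma(vec2( 0.0,  1.0));
    float tr = luma(vec2( 1.0, -1.0));
    float  r = luma(vec2( 1.0,  0.0));
    float br = luma(vec2( 1.0,  1.0));
    // Horizontal and vertical Sobel gradients
    float gx = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
    float gy = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);
    float g = length(vec2(gx, gy));
    gl_FragColor = vec4(vec3(g), 1.0);
}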

Re: camera -> egl texture path question

Posted: Thu Nov 28, 2013 1:30 pm
by wibble82
Excellent - that'll be really handy. I agree that RGB output should be the 'default', but when I'm going to be working on YUV data myself anyway, it'll be much nicer to just get it straight from the camera. Saves lots of memory and GPU cycles :)

Re: camera -> egl texture path question

Posted: Thu Nov 28, 2013 10:33 pm
by dom
Check out:
https://github.com/raspberrypi/userland ... 540f64fc59

If you run rpi-update you'll get Tim's changes:
* VC firmware updates to fastpath MMAL image conversion for YUV420 / YUV_UV -> L8_TF (8-bit greyscale tile format) for a specified plane.
* Added EGL_BRCM_MULTIMEDIA_Y, EGL_BRCM_MULTIMEDIA_U, EGL_BRCM_MULTIMEDIA_V targets to userland headers
* Added 'yuv' GL scene which demonstrates how to use RGB plus the three greyscale textures concurrently.
* Added 'sobel' GL scene which is a very simple GLSL Sobel edge detect filter. Useful for performance comparisons!
* Added the '-gc' flag to RaspiStill
* Tidied up the code tree to move the GL scene / models to a sub-directory to avoid things getting too cluttered.

To test:
/opt/vc/bin/raspistill --preview '0,0,1280,720' --gl -gw '0,0,1920,1080' -v -t 100000 -k -o capture.tga -gc -gs sobel
or
/opt/vc/bin/raspistill --preview '0,0,1280,720' --gl -gw '0,0,1920,1080' -v -t 100000 -k -o capture.tga -gc -gs yuv

Re: camera -> egl texture path question

Posted: Thu Nov 28, 2013 11:00 pm
by timgover
Thanks, Dom!

There's a bit more information and some examples here:

http://timgover.blogspot.co.uk/2013/11/ ... tures.html

Re: camera -> egl texture path question

Posted: Thu Nov 28, 2013 11:40 pm
by wibble82
You're amazing, Tim! Can't wait to try this out. Almost fed up that I have to go on holiday and leave my laptop at home for 3 weeks now :)

Re: camera -> egl texture path question

Posted: Fri Nov 29, 2013 11:24 am
by g7ruh
wibble82 wrote: leave my laptop at home for 3 weeks now
wibble82

Does that mean you ARE taking the Pi and camera away, then? ;) If not, I hope you do not suffer withdrawal symptoms from a lack of Pi for three weeks :cry:

Roger

Re: camera -> egl texture path question

Posted: Wed Jan 01, 2014 5:52 pm
by peepo
This all works very nicely, congrats!!!

Will it be available for raspivid any time soon?
Not clear what the issue might be ~:"

Jonathan

Re: camera -> egl texture path question

Posted: Tue Jan 07, 2014 9:56 am
by lagurus
peepo wrote: Will it be available for raspivid any time soon?
Some time ago I made a raspivid modification which is able to save H264 video and apply a shader effect to the live preview.

But if you want to save H264 video WITH the shader effect applied, then it won't be possible (at least not for Full HD at 30fps).

Re: camera -> egl texture path question

Posted: Sun Mar 23, 2014 5:23 pm
by peepo
Just chasing up the raspivid port?

Any time soon?

cheers

Jonathan