wibble82
Posts: 66
Joined: Sun Jan 27, 2013 5:06 pm
Contact: Website

camera -> egl texture path question

Sun Nov 24, 2013 11:19 am

Hi

I'm starting to pull apart the demos of your latest work for the direct-to-EGL-texture path (which is awesome, by the way!). Just one question so far: reading the comments, I spotted this line:

* 2) The driver implementation creates a new RGB_565 buffer and does the color
* space conversion from YUV. This happens in GPU memory using the vector
* processor.

When you say 'vector processor', are you referring to something GPU-side there, or the vector extensions of the CPU? I ask because the key advantage of this system is avoiding load on the CPU. If we're chomping up a chunk of it for the conversion, that'd be a shame!

It'd be really handy if we could just get the YUV channels as separate images; then colour space conversion can be done as and when it's needed in a shader. For example, I may run face detection on the Y channel only, then, having found candidates, extract and convert only the areas of the high-res image containing a face.

[edit: added advantages of access to the pure YUV - we get to choose the image format as well, so aren't limited to RGB565, and can manage our own buffers a little more effectively]
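To be concrete about the conversion I'm talking about, here's a rough Python sketch of a BT.601 video-range YUV to RGB565 conversion for a single pixel (my own approximation of the maths, not the actual driver code, and the firmware may well use different coefficients):

```python
def yuv_to_rgb565(y, u, v):
    """Convert one video-range (BT.601) YUV pixel to a packed RGB565 value.

    A plain-Python sketch of the kind of conversion the firmware does on
    the vector processor; not the actual driver implementation.
    """
    def clamp(x):
        return max(0, min(255, int(round(x))))

    c, d, e = y - 16, u - 128, v - 128
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    # Pack the 8-bit channels into 5-6-5 bits: RRRRRGGG GGGBBBBB
    return (r >> 3) << 11 | (g >> 2) << 5 | (b >> 3)
```

Doing this per pixel at full res every frame is exactly the work I'd like to avoid when I only need the Y plane.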

Thanks

-Chris

drhastings
Posts: 113
Joined: Wed Feb 06, 2013 11:38 pm

Re: camera -> egl texture path question

Tue Nov 26, 2013 6:01 am

I'm fairly certain that refers to one of the two vector processors on the gpu.

I don't think it makes much sense to leave anything in YUV on the GPU side of things, if only because it's going to be used 99.9% (the preceding statistic is 100% accurate) of the time as RGB to render. Presumably you would read back some down-sampled image to run your face recognition on before deciding what regions you want to render in full resolution. In that case just do the conversion to Y in the shader you use for that step.

Pack into RGBA for maximum efficiency; the Pi doesn't seem to allow reading back just the alpha channel.
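To illustrate what I mean (a rough Python sketch of the per-pixel maths, not actual shader code): the conversion to Y is just the standard BT.601 dot product, and packing four greyscale samples into one RGBA texel gets you four useful values per readback.

```python
def luma(r, g, b):
    """BT.601 luma from RGB, the same dot product a fragment shader
    would compute as dot(rgb, vec3(0.299, 0.587, 0.114))."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def pack_rgba(y0, y1, y2, y3):
    """Pack four adjacent 8-bit greyscale samples into one 32-bit RGBA
    texel, working around not being able to read back alpha alone."""
    return y0 | (y1 << 8) | (y2 << 16) | (y3 << 24)
```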
http://www.dansrobotprojects.com/

wibble82
Posts: 66
Joined: Sun Jan 27, 2013 5:06 pm
Contact: Website

Re: camera -> egl texture path question

Tue Nov 26, 2013 10:38 am

Hmm - not sure I agree. In any scenario where I don't need the RGB data every frame, or don't need it at full res, I'm wasting valuable cycles doing the full-res conversion to RGB every frame AND having to do an additional conversion to Y for greyscale analysis. Maybe if all you're doing is rendering the camera feed it's fine, but as soon as we start chewing up GPU time for image processing we need every cycle we can get!

jamesh
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 25401
Joined: Sat Jul 30, 2011 7:41 pm

Re: camera -> egl texture path question

Tue Nov 26, 2013 11:26 am

Internally the ISP is mostly YUV420 images. The path is Bayer for things like black level and defective pixel correction, AGC I think, then convert to YUV for the rest of the pipeline and transfer into the encoder (H264 uses YUV input). Only late in the day is it converted to RGB for display (and even then the composition HW can use YUV if that's what it's given).

The GPU contains two vector units which are 16-way SIMD devices running at the GPU clock rate, so multiply the clock rate by 16 to get the approximate throughput.
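As a worked example of that multiplication (assuming the commonly quoted 250 MHz stock GPU clock, which is my assumption, not a figure jamesh gives here):

```python
GPU_CLOCK_HZ = 250_000_000  # assumed stock clock; yours may differ
SIMD_LANES = 16             # per vector unit, per the post above
VECTOR_UNITS = 2

# Approximate peak throughput: clock rate times lane count.
per_unit_ops = GPU_CLOCK_HZ * SIMD_LANES   # ops/s for one vector unit
total_ops = per_unit_ops * VECTOR_UNITS    # ops/s across both units
```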
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I own the world’s worst thesaurus. Not only is it awful, it’s awful."

drhastings
Posts: 113
Joined: Wed Feb 06, 2013 11:38 pm

Re: camera -> egl texture path question

Tue Nov 26, 2013 4:09 pm

wibble82 wrote:Hmm - not sure I agree. In any scenario where I don't need the RGB data every frame, or don't need it at full res, I'm wasting valuable cycles doing the full-res conversion to RGB every frame AND having to do an additional conversion to Y for greyscale analysis. Maybe if all you're doing is rendering the camera feed it's fine, but as soon as we start chewing up GPU time for image processing we need every cycle we can get!
Oh I agree that it would be more useful for what you are doing to have things in YUV, I just think that rendering will be done far more often than image processing, so if you are only going to do it one way or the other, RGB makes more sense. It would put an even bigger barrier to entry on this if people had to write shaders to convert colour in addition to all the other stuff required to get something on the screen.
http://www.dansrobotprojects.com/

dom
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 5408
Joined: Wed Aug 17, 2011 7:41 pm
Location: Cambridge

Re: camera -> egl texture path question

Tue Nov 26, 2013 6:35 pm

wibble82 wrote: It'd be really handy if we could just get the YUV channels as separate images, then colour space conversion can be done as and when its needed in a shader. For example, I may run face detection on the Y channel only, then having found candidates, extract and convert only the areas of the high res image containing a face.

[edit: added advantages of access to the pure YUV - we get to choose the image format as well, so aren't limited to RGB565, and can manage our own buffers a little more effectively]
Tim does have a demo where the camera data is split into separate 8-bit Y/U/V textures and edge detection is done on the Y plane.
He did get a better framerate compared to processing the RGB texture.
He's still investigating a hang, but hopefully it will be released within the next week.
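For anyone curious what the edge detection on the Y plane amounts to, here's a plain-Python sketch of a 3x3 Sobel operator on a greyscale image (my own illustration of the general technique, not Tim's demo code, which does this in a fragment shader; |gx| + |gy| is a common approximation of the gradient magnitude):

```python
def sobel(img, x, y):
    """Approximate Sobel gradient magnitude (|gx| + |gy|) at (x, y)
    on a 2-D greyscale image, the kind of work the edge-detect demo
    does per fragment on the Y plane."""
    gx_k = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))
    gy_k = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            gx += gx_k[j][i] * p
            gy += gy_k[j][i] * p
    return abs(gx) + abs(gy)
```

Running it on only one 8-bit plane instead of an RGB texture is a third of the texture bandwidth, which presumably is where the framerate win comes from.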

wibble82
Posts: 66
Joined: Sun Jan 27, 2013 5:06 pm
Contact: Website

Re: camera -> egl texture path question

Thu Nov 28, 2013 1:30 pm

Excellent - that'll be really handy. Agree that RGB output should be the 'default', but when I'm going to be working on the YUV data myself anyway, it'll be much nicer to get it straight from the camera. Saves lots of memory and GPU cycles :)

dom
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 5408
Joined: Wed Aug 17, 2011 7:41 pm
Location: Cambridge

Re: camera -> egl texture path question

Thu Nov 28, 2013 10:33 pm

Check out:
https://github.com/raspberrypi/userland ... 540f64fc59

If you run rpi-update you'll get Tim's changes:
* VC firmware updates to fastpath MMAL image conversion for YUV420 / YUV_UV -> L8_TF (8-bit greyscale tile format) for a specified plane.
* Added EGL_BRCM_MULTIMEDIA_Y, EGL_BRCM_MULTIMEDIA_U, EGL_BRCM_MULTIMEDIA_V targets to userland headers
* Added 'yuv' GL scene which demonstrates how to use RGB plus the three greyscale textures concurrently.
* Added 'sobel' GL scene which is a very simple GLSL Sobel edge-detect filter. Useful for performance comparisons!
* Added the '-gc' flag to RaspiStill
* Tidied up the code tree to move the GL scene / models to a sub-directory to avoid things getting too cluttered.

To test:
/opt/vc/bin/raspistill --preview '0,0,1280,720' --gl -gw '0,0,1920,1080' -v -t 100000 -k -o capture.tga -gc -gs sobel
or
/opt/vc/bin/raspistill --preview '0,0,1280,720' --gl -gw '0,0,1920,1080' -v -t 100000 -k -o capture.tga -gc -gs yuv

timgover
Posts: 8
Joined: Thu Nov 07, 2013 7:40 pm

Re: camera -> egl texture path question

Thu Nov 28, 2013 11:00 pm

Thanks Dom !

There's a bit more information and some examples here

http://timgover.blogspot.co.uk/2013/11/ ... tures.html

wibble82
Posts: 66
Joined: Sun Jan 27, 2013 5:06 pm
Contact: Website

Re: camera -> egl texture path question

Thu Nov 28, 2013 11:40 pm

You're amazing, Tim! Can't wait to try this out. Almost fed up that I have to go on holiday and leave my laptop at home for three weeks now :)

User avatar
g7ruh
Posts: 68
Joined: Mon Apr 23, 2012 9:49 am
Location: Blackfield UK

Re: camera -> egl texture path question

Fri Nov 29, 2013 11:24 am

wibble82 wrote: leave my laptop at home for 3 weeks now
wibble82

does that mean you ARE taking the pi and camera away then ;). If not, I hope you do not suffer withdrawal symptoms from a lack of pi for three weeks :cry:

Roger

User avatar
peepo
Posts: 306
Joined: Sun Oct 21, 2012 9:36 am

Re: camera -> egl texture path question

Wed Jan 01, 2014 5:52 pm

this all works very nicely, congrats!!!

will it be available for raspivid any time soon?
not clear what the issue might be ~:"

Jonathan

lagurus
Posts: 46
Joined: Wed Aug 07, 2013 8:02 am

Re: camera -> egl texture path question

Tue Jan 07, 2014 9:56 am

peepo wrote: will it be available for raspivid any time soon?
Some time ago I made a raspivid modification which is able to save H264 video and apply a shader effect to the live preview.

But if you want to save the H264 video WITH the shader effect applied, then it won't be possible (at least not at full HD and 30fps).

User avatar
peepo
Posts: 306
Joined: Sun Oct 21, 2012 9:36 am

Re: camera -> egl texture path question

Sun Mar 23, 2014 5:23 pm

just chasing the raspivid port?

anytime soon?

cheers

Jonathan

Return to “Camera board”