ybs273
Posts: 7
Joined: Mon May 05, 2014 11:49 pm

Another question about the RasPi Camera

Tue Sep 08, 2015 2:43 am

As we know, the resolution of the RasPi camera is 2592×1944.
Is it possible to use only part of the sensor pixels to obtain the image (in automatic exposure mode)?
For example: the sensor area from the upper-left corner (100,100) to the lower-right corner (1100,1100).
The output image would be based only on the pixels in this selected rectangular area.

Could someone give me some guidance?
Thanks a lot!

betruk
Posts: 36
Joined: Fri Apr 24, 2015 8:17 am

Re: Another question about the RasPi Camera

Tue Sep 08, 2015 6:20 am

I used the stills capture mode, which offers the full resolution of the sensor (2592×1944), with a 16:9 widescreen crop to capture videos. I have the same need but have no idea how to do it. Can anyone help?

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27021
Joined: Sat Jul 30, 2011 7:41 pm

Re: Another question about the RasPi Camera

Tue Sep 08, 2015 8:20 am

You can specify which part of the sensor you want to use with the region-of-interest (-roi) command-line parameter in raspistill. It's in the docs.
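As a rough sketch: -roi takes normalised coordinates (x,y,w,h, each in the range 0.0-1.0), so for the (100,100) to (1100,1100) region on the 2592×1944 sensor the invocation would look something like this (the rounding and the output filename here are just illustrative):

Code: Select all
# 100/2592 ≈ 0.039, 100/1944 ≈ 0.051, 1000/2592 ≈ 0.386, 1000/1944 ≈ 0.514
raspistill -roi 0.039,0.051,0.386,0.514 -o cropped.jpg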
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed.
I've been saying "Mucho" to my Spanish friend a lot more lately. It means a lot to him.

experix
Posts: 204
Joined: Mon Nov 10, 2014 7:39 pm
Location: Coquille OR

Re: Another question about the RasPi Camera

Tue Sep 08, 2015 3:30 pm

If the purpose is to get an image size (pixel dimensions) smaller than the camera sensor's dimensions, you can reduce the original image either by cropping (if you only want part of it) or by downscaling the whole thing to smaller pixel dimensions. ImageMagick can do both of those. You might also get a big speed advantage from the gpu_fft package, though I can't really say, since I haven't tried it. Downscaling to a smaller image size will theoretically give better results, since each pixel in your final image effectively comes from averaging a group of pixels from the camera, so sensor defects and optical defects get averaged away.
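To make that concrete, here is a minimal ImageMagick sketch for the region the original poster described (filenames are just placeholders):

Code: Select all
# crop the 1000x1000 window starting at offset (100,100) out of the full still
convert full.jpg -crop 1000x1000+100+100 +repage cropped.jpg
# or downscale the whole frame; -resize keeps aspect ratio, so 2592x1944 -> 1000x750
convert full.jpg -resize 1000x1000 small.jpg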
If the purpose is to get faster imaging by using only part of the sensor, I don't know how much advantage that gives.
