waveform80 wrote:
dren wrote:
Ah, I don't think I was quite clear about my meaning. My aim is to circumvent the 1s exposure time limit by taking multiple captures and combining the data. My question was really about whether sequential calls to capture will leave gaps in the combined image, and if so, how big they are and whether they can be minimized somehow.
Hmm, not sure I'm understanding what the issue is here. Consecutive calls to capture will capture separate frames from the camera, so there may well be differences between those frames if movement is involved. The only way to minimize this is to capture as rapidly as possible. Is that what you mean by "gaps" in the combined image?
As I understand it, the Pi's camera is a "continuous" camera (which is apparently typical of mobile cameras). Once initialized, it continually captures a sequence of frames, albeit to a null-sink by default (or to a preview renderer if you've called start_preview).
When you call capture (for example), a JPEG encoder is constructed, the next frame to be captured is fed to that encoder, and the output is written to whatever destination you've provided. It's important to note that the call to capture didn't cause the camera to capture a frame - it was already capturing frames anyway - all it did was provide an encoder and a destination for the next frame that happened to be captured.
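For what it's worth, here's a rough (and untested) sketch of how you'd grab frames back-to-back; capture_sequence feeds consecutive frames to a single encoder, which is about the best you can do to minimize the gap between them. The filenames and frame count here are just placeholders:

```python
import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    time.sleep(2)  # give the camera's auto-exposure a moment to settle
    # Capture ten consecutive frames as fast as the still port allows;
    # use_video_port=True would be faster still, at some cost in quality.
    camera.capture_sequence(
        ['frame%02d.jpg' % i for i in range(10)],
        use_video_port=False)
```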
Does that answer the question? Sorry - I'm probably not understanding something here!
Dave.
Thanks Dave, you got my meaning about gaps. I'm trying to take exposures longer than 1 second as a stepping stone for creating a timelapse program for the Pi that can do holy grail timelapses. I think the Pi could be perfect for the task because it is programmable and allows such fine control over the shutter speed (which would allow a very linear day/night transition without flickering). Solutions for DSLRs like the Promote Controller tend to be very expensive (as is all camera stuff) and sort of clunky.
Anyway, the camera API won't allow long exposures - the limit is 1 second, as detailed by jamesh in this thread - but I believe it's possible by taking consecutive pictures and then combining the data in post. The people in the RAW Output Information thread have come up with code for converting the Pi camera's raw data to an Adobe DNG. Using their work I should be able to sum the raw data from consecutive captures, convert to DNG, then process the DNGs in Lightroom, Adobe Camera Raw, or UFRaw to get JPEGs.
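Roughly what I have in mind for the stacking step (untested, and assuming picamera's bayer=True capture option together with the picamera.array.PiBayerArray helper from newer releases; the actual raw-to-DNG conversion would still come from the code in that thread):

```python
import time
import numpy as np
import picamera
import picamera.array

FRAMES = 8  # 8 x 1s captures ~= one 8s exposure

with picamera.PiCamera() as camera:
    camera.framerate = 1            # a low framerate permits the full 1s shutter
    camera.shutter_speed = 1000000  # microseconds, i.e. the 1s ceiling
    camera.iso = 100
    time.sleep(2)                   # let the gains settle...
    camera.exposure_mode = 'off'    # ...then lock them so frames match

    total = None
    for _ in range(FRAMES):
        with picamera.array.PiBayerArray(camera) as stream:
            camera.capture(stream, format='jpeg', bayer=True)
            # 10-bit sensor data, so sum in a wider type to avoid overflow
            raw = stream.array.astype(np.uint32)
            total = raw if total is None else total + raw

# 'total' now holds the summed sensor data, ready to be scaled/clipped
# and fed to the raw-to-DNG conversion code from the RAW Output thread.
```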
Another thing related to the Pi camera that I am interested in is this paper:
http://www.cs.ubc.ca/labs/imager/tr/201 ... nsImaging/
The paper is essentially about how dramatic improvements in sharpness can be achieved for fixed-focal-length, fixed-focus lenses by measuring the lens's defects and applying corrective transforms to the photos taken with it. I'm curious whether this technique could be applied to the Pi camera to dramatically improve its image quality. This idea takes a backseat to the timelapse idea for me, but maybe someone else will find it interesting.
edit: then again, that idea might not work, because to compute the correction they have to stop down and get a sharp reference image. The Pi camera has a fixed aperture.
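In case anyone wants to play with the correction idea anyway, here's a minimal sketch of the general approach using plain per-channel Richardson-Lucy deconvolution from scikit-image - note this is not the paper's cross-channel method, and it assumes you've already measured a PSF for the lens somehow (the file names are just placeholders):

```python
import numpy as np
from skimage import img_as_float, io, restoration

# Hypothetical inputs: a photo from the Pi camera and a PSF measured for
# its lens (the hard part, given the fixed aperture).
image = img_as_float(io.imread('pi_photo.png'))
psf = np.load('measured_psf.npy')  # small 2D kernel, e.g. 15x15

# Deconvolve each colour channel separately against the measured PSF.
channels = [restoration.richardson_lucy(image[..., c], psf, 30)
            for c in range(3)]
result = np.clip(np.dstack(channels), 0, 1)
io.imsave('pi_photo_sharpened.png', (result * 255).astype(np.uint8))
```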