
Re: CME -x postponed?

Posted: Fri May 23, 2014 4:59 pm
by ethanol100
The encoder just takes the frames one by one, and it does not know when the images are from. So in principle there should be no problem with the calculation of the motion vectors. But it would be wise to fix the exposure and white balance, otherwise there could be big differences between the pictures caused just by these settings.

But yes, it is only a workaround; you can edit the source. You will find help here in the forum if you have any problems with it. If I find some time at the weekend I will give it a try and make a clean solution. I would be interested in monitoring sand dunes or similar slow processes, too.

To capture once a minute you can change the 1.86 to 59.86; that gives you approximately one frame per minute.
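
For reference, inside the userland camera apps "fixing" the exposure and white balance boils down to a couple of RaspiCamControl/MMAL calls once the preview has had time to settle. A minimal sketch, assuming the RaspiCamControl helpers from the userland tree are linked in (settle time and gain handling may need adjusting for your firmware):

/* Sketch only: let auto-exposure/AWB settle, then lock them.
   Assumes "camera" is an initialised MMAL camera component. */
#include "interface/mmal/mmal.h"
#include "interface/vcos/vcos.h"
#include "RaspiCamControl.h"

static void settle_and_lock(MMAL_COMPONENT_T *camera, int settle_ms)
{
   vcos_sleep(settle_ms);   /* run the preview for e.g. 5000 ms while AE/AWB converge */

   /* Freeze whatever values the algorithms converged on. Depending on the
      firmware, locking AWB may also need the current gains read back and
      re-applied via MMAL_PARAMETER_CUSTOM_AWB_GAINS. */
   raspicamcontrol_set_exposure_mode(camera, MMAL_PARAM_EXPOSUREMODE_OFF);
   raspicamcontrol_set_awb_mode(camera, MMAL_PARAM_AWBMODE_OFF);
}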

Re: CME -x postponed?

Posted: Fri May 23, 2014 7:50 pm
by peepo
please do not fork threads??

this thread is already quite hard to follow.

~:"

Re: CME -x postponed?

Posted: Sun May 25, 2014 7:09 pm
by Yggdrasil
Hello Ethanol100,
ethanol100 wrote: Edit: I have pushed my changes to fix the exp mode to: https://github.com/ethanol100/userland/tree/expOff
A new parameter "--waitAndFix 5000" will run the preview for 5 s and then fix the exposure and continue as usual.
Thanks for this patch. :) It works for me, and I will patch it into my raspivid version (only RaspiVid with OpenGL output, no RaspiStill) too.

Re: CME -x postponed?

Posted: Sat Jun 07, 2014 11:03 am
by Hove
Sorry to drag this thread up to a simpleton, high-level question:

Is it possible to reduce the number of macro blocks to process for motion by reducing the frame size to say 320 x 240 (20 x 15 macro blocks) at a frame rate as low as say 10Hz?

I ask because I'm considering using this method for motion detection of the whole (low-res) frame compared to the previous one, i.e. it's the camera that is moving over a static landscape rather than objects moving within that landscape. Hence (I hope) all the macro blocks should produce very similar vectors, which could be averaged to give a reasonable approximation of the motion of the camera itself.

I realize this is an oversimplification, but confirmation that the above is viable will provide sufficient motivation for me to try.
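
For what it is worth, the averaging step itself is cheap. A rough sketch, assuming the inline vector layout that raspivid -x writes (one signed-byte x, signed-byte y and 16-bit SAD per macroblock, with what appears to be one extra padding column per row; worth verifying against your firmware version):

#include <stdint.h>

/* One inline motion vector record as assumed from the raspivid -x output. */
typedef struct {
   int8_t   x;
   int8_t   y;
   uint16_t sad;
} mv_t;

/* Average all macroblock vectors of one frame to approximate the global
   (camera) motion. For 320 x 240 the grid is assumed to be
   (320/16 + 1) x (240/16) = 21 x 15 records, i.e. one spare column. */
static void average_motion(const mv_t *mv, int cols, int rows,
                           float *dx, float *dy)
{
   long sx = 0, sy = 0;
   int n = 0;

   for (int r = 0; r < rows; r++)
      for (int c = 0; c < cols - 1; c++) {   /* skip the padding column */
         const mv_t *m = &mv[r * cols + c];
         sx += m->x;
         sy += m->y;
         n++;
      }

   *dx = n ? (float)sx / n : 0.0f;
   *dy = n ? (float)sy / n : 0.0f;
}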

Thanks

Re: CME -x postponed?

Posted: Sat Jun 07, 2014 2:31 pm
by jamesh
That sounds feasible.

Re: CME -x postponed?

Posted: Sat Jun 07, 2014 4:48 pm
by Hove
Thanks, perfect! All I needed was someone in the know to not say my idea will sink like a brick!

Re: CME -x postponed?

Posted: Tue Jun 10, 2014 12:31 am
by jonesymalone
All,

I am interested in using the motion vector output in a way similar to what Hove described. However, the intent is to provide a sensory interface for waypoint following on a moving platform. The goal is to set a "waypoint", effectively a screen capture at T=0, and measure the accumulation of error. Specifically, my plan is to obtain a field of motion vectors and project them down to a single axis (i.e. left or right magnitude), accumulate all motion vectors, and observe the accumulated "error" relative to an initial snapshot (which is periodically updated). This accumulated error is fed back into the motive control loop of the robot to produce a course adjustment, such that the platform keeps moving towards the initial (or periodically updated) snapshot.

I would like to write a custom program which polls the camera system for motion vectors, performs the projection and accumulation, and produces a course correction. My question, thus, is:

Is it possible for me to import/access a library or utility which I can use to pull motion vectors directly into my program's working memory? In essence this:

1) allocate a motion vector struct array
2) initialize the array and accumulators
3) loop: getMotionVectors(structArray *mvArray)
   // do the x-axis projection
   // accumulate error over frames
   // if the accumulated error exceeds a threshold, report a course correction
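
There is no ready-made getMotionVectors() call that I know of, but one workable route is to let raspivid do the capture and read its -x output from a pipe, one frame's worth of records per iteration. A hedged sketch under those assumptions (the record layout, grid size and the "-x -" to-stdout usage all need checking against your build; a FIFO works if "-x -" is not accepted):

/* Hedged sketch: read raspivid's inline motion vectors from a pipe,
   project them onto the x axis and accumulate drift against a reference
   taken at T=0. Grid size and record layout are assumptions. */
#include <stdio.h>
#include <stdint.h>

typedef struct { int8_t x; int8_t y; uint16_t sad; } mv_t;

#define WIDTH   320
#define HEIGHT  240
#define COLS    (WIDTH / 16 + 1)     /* 21: assumed padding column */
#define ROWS    (HEIGHT / 16)        /* 15 */
#define NMV     (COLS * ROWS)

int main(void)
{
   /* Video itself is discarded; the vectors go to the child's stdout. */
   FILE *p = popen("raspivid -t 0 -w 320 -h 240 -fps 10 -o /dev/null -x -", "r");
   if (!p) { perror("popen"); return 1; }

   mv_t frame[NMV];
   double accumulated_x = 0.0;       /* running "error" since the last waypoint */

   while (fread(frame, sizeof(mv_t), NMV, p) == NMV)
   {
      /* Project this frame's vector field onto the x axis (mean x component). */
      long sx = 0;
      int n = 0;
      for (int r = 0; r < ROWS; r++)
         for (int c = 0; c < COLS - 1; c++) { sx += frame[r * COLS + c].x; n++; }

      accumulated_x += (double)sx / n;

      if (accumulated_x > 50.0 || accumulated_x < -50.0)  /* arbitrary threshold */
      {
         printf("course correction needed, drift = %f\n", accumulated_x);
         accumulated_x = 0.0;        /* treat this as the new waypoint/snapshot */
      }
   }

   pclose(p);
   return 0;
}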

I am still learning the camera interface, and would appreciate pointers to any documentation that would best allow me to integrate continual capture and evaluation of motion vectors into a homebrewed application.

Thanks in advance,
Jonesy