I am interested in using the motion vector output along the lines Hove described. However, my intent is to provide a sensory interface for waypoint following on a moving platform. The goal is to set a "waypoint" (effectively a screen capture at T=0) and measure the accumulation of error relative to it. Specifically, I plan to obtain a field of motion vectors and project it onto a single axis (i.e., left/right magnitude), accumulate those vectors over frames, and observe the accumulated "error" with respect to the initial snapshot (which is periodically updated). This accumulated error is fed back into the robot's motion control loop to produce a course correction, so the platform keeps moving towards the initial (or most recently updated) snapshot.
I would like to write a custom program that polls the camera system for motion vectors, performs the projection and accumulation, and produces a course correction. My question, then, is:
Is it possible to import or access a library or utility that lets me pull motion vectors directly into my program's working memory? In essence:
1) allocate a motion vector struct array
2) initialize the array and the accumulators
3) loop: getMotionVectors(structArray *mvArray)
   // do the x-axis projection
   // accumulate error over frames
   // if the accumulated error exceeds a threshold, issue a position correction
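For what it's worth, here is a rough Python sketch of the projection/accumulation side of that loop. The motion-vector source is left abstract: I'm assuming each frame arrives as an (H, W, 2) array of per-macroblock (dx, dy) vectors, which is roughly the shape of thing the camera's encoder hands back (on a Raspberry Pi, for instance, picamera's motion_output delivers a per-macroblock array with x/y/SAD fields that could be adapted into this). The names x_axis_error and DriftAccumulator are my own, not from any library:

```python
import numpy as np

def x_axis_error(mv_field):
    """Project a 2-D field of motion vectors onto the x axis and
    return the summed horizontal displacement for this frame."""
    # mv_field: (H, W, 2) array of (dx, dy) per macroblock
    return float(mv_field[..., 0].sum())

class DriftAccumulator:
    """Accumulates per-frame x-axis drift relative to the T=0 snapshot."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.error = 0.0

    def update(self, mv_field):
        """Fold one frame's vectors into the running error.
        Returns True when a course correction should be issued."""
        self.error += x_axis_error(mv_field)
        return abs(self.error) >= self.threshold

    def reset(self):
        # Call whenever the waypoint snapshot is refreshed
        self.error = 0.0
```

The controller would call update() once per frame inside the capture loop, issue a correction whenever it returns True, and call reset() each time the reference snapshot is re-taken.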
I am still learning the camera interface, and would appreciate pointers to any documentation that would help me integrate continuous capture and evaluation of motion vectors into a homebrewed application.
Thanks in advance,