BerryPicker wrote: 1st demo was run with the command ./motion.start. The picture occupies the top left of the screen, it is inverted...
Oh, I messed up the rotation and flipping arguments of raspivid. The --hflip argument in the script turned the picture upside down; --vflip should be the right one. (If somebody wants to know how such an obvious issue can occur:
I live in Australia, and my camera is mounted with a 90 degree rotation.
)
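For reference, a minimal sketch of how the corrected capture call could look, written as a Python wrapper; only --vflip and --rotation reflect the fix discussed above, the resolution, timeout and output settings are placeholders, not the actual values from motion.start:

```python
# Minimal sketch: start raspivid with the corrected flip/rotation flags.
# All parameters besides --vflip and --rotation are assumed placeholders.
import subprocess

subprocess.run([
    "raspivid",
    "--vflip",              # vertical flip instead of the mistaken --hflip
    "--rotation", "90",     # camera is mounted with a 90 degree rotation
    "--width", "640",
    "--height", "480",
    "--timeout", "0",       # run until killed
    "--output", "/dev/null",
])
```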
BerryPicker wrote: it has an overall blue wash, and is stretched vertically (compressed horizontally). Left alone (~42 fps) with nothing moving, the picture 'blinks' at a rate as if affected by background nuclear radiation! When movement is present, shadows are blobbed as readily as moving objects. My hands seem to form blobs more readily than rigid moving objects. I'm surprised to see such a range of colours in the little pixels; I would expect the little pixels to be of similar colour when associated with different parts of the same moving object. Perhaps this is why the big green blobs form less readily than I expected. Occasionally the program terminates by itself reporting...
A good hint. I should give you a short description of the usage and graphical output.

I've set up the parameters of this binary to display the two motions with the longest duration.
If you move your hands and both are marked with green rectangles, you can move the rest of your body and the tracking will still follow the hands. (This stabilization uses a tiny trick which I cannot describe in one sentence.)
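To illustrate just the selection rule, a tiny sketch; representing traces as (duration, bounding box) pairs is an assumption, the real data structures differ:

```python
# Sketch: display only the two motions with the longest duration.
# The (duration_in_frames, bounding_box) representation is assumed.
def longest_traces(traces, count=2):
    return sorted(traces, key=lambda t: t[0], reverse=True)[:count]

# Example: of three traces, keep the two longest-lived ones.
traces = [(12, (40, 40, 64, 64)), (80, (10, 10, 32, 48)), (55, (90, 20, 40, 40))]
print(longest_traces(traces))   # -> the 80- and 55-frame traces
```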
The video is overlaid with three kinds of information:
1. The H264 video encoder delivers inline motion estimation vectors (IMV). The norm of these vectors is used to
form areas with the same movement speed. The set of these areas is filtered and drawn as flickering
colours on the screen. For a better distinction of areas the values (area ids) are mapped into RGB space, and this data changes with each video frame. (Well, you could also display the raw IMV data, but this currently requires recompilation.) See the first sketch after this list.
2. Each frame contains many errors. To eliminate these errors a tracker object collects the areas of several
frames. If the tracker decides that the trace of a movement has been stable over N frames, it marks the trace as active. The big green rectangles mark the last known position of a movement (the bounding box of an area from (1)). See the second sketch after this list.
3. To prove that the green areas are not simply marking objects with a very restricted lifetime, I've added the
drawing of a movement history. Currently it connects the midpoints of the bounding boxes for each frame.
As you can see, this polygonal line isn't smooth, so it is not optimal for detecting movement directions. I'm planning to switch from bounding box midpoints to the barycenters of the detected areas; I assume this would stabilize the data. The third sketch after this list contrasts the two.
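First sketch, for step (1): turning the IMV norm into areas and flicker colours. The record layout (x, y displacement plus SAD per macroblock) is what the Pi's H264 encoder emits; the threshold, the use of scipy for connected-area labelling, and all names are assumptions, and grouping "moving vs. not" is a simplification of forming areas with the same speed:

```python
# Sketch of step (1): norm of the motion vectors -> areas -> RGB colours.
import numpy as np
from scipy import ndimage

# One record per 16x16 macroblock: x/y displacement plus SAD.
MOTION_DTYPE = np.dtype([("x", np.int8), ("y", np.int8), ("sad", np.uint16)])

def areas_from_imv(raw, rows, cols, min_norm=2.0):
    # rows/cols must match the encoder's macroblock grid.
    mv = np.frombuffer(raw, dtype=MOTION_DTYPE).reshape(rows, cols)
    norm = np.sqrt(mv["x"].astype(np.float32) ** 2 +
                   mv["y"].astype(np.float32) ** 2)
    moving = norm >= min_norm              # drop near-zero vectors
    labels, count = ndimage.label(moving)  # connected areas
    return mv, labels, count

def labels_to_rgb(labels):
    # Map each area id into RGB space. The palette is rebuilt per frame,
    # so the ids (and thus the colours) flicker, as seen in the demo.
    rng = np.random.default_rng()
    palette = rng.integers(0, 256, size=(labels.max() + 1, 3), dtype=np.uint8)
    palette[0] = 0                         # background stays black
    return palette[labels]
```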
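Second sketch, for step (2): the N-frame stability filter of the tracker, reduced to its core idea; all names and the value of N are hypothetical:

```python
# Sketch of step (2): a trace is marked active only after it has been
# stable over N frames; only then is its big green rectangle drawn.
N_STABLE_FRAMES = 8   # assumed value of N

class Trace:
    def __init__(self, bbox):
        self.bbox = bbox          # last known bounding box (x, y, w, h)
        self.stable_frames = 0
        self.active = False

    def update(self, bbox):
        self.bbox = bbox          # follow the movement
        self.stable_frames += 1
        if self.stable_frames >= N_STABLE_FRAMES:
            self.active = True    # frame-level errors are filtered out
```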
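Third sketch, for step (3): the two candidate history points per frame, computed from the label mask of the first sketch. The barycenter averages every block of the area, so single outlier blocks distort it less than they distort the bounding box:

```python
# Sketch of step (3): bounding box midpoint vs. barycenter of an area.
import numpy as np

def bbox_midpoint(labels, area_id):
    ys, xs = np.nonzero(labels == area_id)
    return ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)

def barycenter(labels, area_id):
    ys, xs = np.nonzero(labels == area_id)
    return (xs.mean(), ys.mean())
```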
If you are wondering why hand shaking isn't recognized immediately: this is just the detection filter of the tracker (the N-frame check from (2)). The history line shows where the tracking begins.
Yes, such a graphical representation (and detection) would be much nicer. A high quality estimation of the detected motions is still missing in my application. Probably this detection could be a direct result of an analysis of the IMV data of 1-3 frames. That differs a little bit from my approach.
It's not on my road map at the moment, but it could be a side-effect output.
Currently I'm targeting simple motion detection. I hope this can be integrated into a motion control for XBMC
or will be used for some games.
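As a guess at what such a direct analysis might look like (this is not code from the application): averaging the vectors of one detected area already yields a direction estimate per frame, using the mv array and labels from the first sketch above:

```python
# Sketch: estimate a movement direction directly from one frame of IMV
# data by averaging the vectors inside a detected area. A real version
# would combine 1-3 frames and reject outliers.
import numpy as np

def area_direction(mv, labels, area_id):
    mask = labels == area_id
    dx = mv["x"][mask].astype(np.float32).mean()
    dy = mv["y"][mask].astype(np.float32).mean()
    angle = np.degrees(np.arctan2(dy, dx))   # direction in degrees
    speed = np.hypot(dx, dy)                 # mean displacement per frame
    return angle, speed
```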
BerryPicker wrote: Unhandled fault: alignment exception.
I fear the hunt for this bug.

Hopefully I can reproduce this sometime.