Edit: The LEDs, buffers and related wiring are no longer going to be used (instead I'll just use barcode pads of a specific color not occurring in the recording area other than on the pads themselves), and the project may use regular Pi cameras (NoIR is no longer necessary). This raises the limit on the number of pads to an arbitrary number (i.e. >24). The change also eliminates a complicated synchronization step (matching LED switching to camera frame start and end).
Originally posted in http://www.raspberrypi.org/forums/viewt ... 63&t=52900 as a query on the status of fixes to fps issues when trying to use >30 fps (assuming appropriate-resolution modes).
The project would involve 2 Pis with NoIR cameras at a known separation and known angles (each aimed to center a fixed location in its view), so that binocular geometry can be used to calculate detected points in 3D. The Pis would be networked to a desktop machine running a communication process that talks to them to collect the detected points (two 2D points per detected LED, one per Pi) and then performs the calculation (based on camera angle, separation distance and distance from the focus target) to produce a 3D point. This process would then talk to a Blender addon to selectively apply the detected 3D points to the bones of an armature in order to record animations.
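To make the binocular-geometry step concrete, here is a minimal sketch of the ray-intersection math the desktop process would run. All names and conventions here are my own assumptions (cameras placed on the x-axis, bearings measured counter-clockwise from the baseline, a simple pinhole model for pixel-to-angle conversion), not a finished implementation:

```python
import math

def triangulate_2d(bearing_left, bearing_right, baseline):
    """Intersect two rays in the horizontal plane of the cameras.

    Cameras sit at (0, 0) and (baseline, 0); each bearing is the angle
    of the ray toward the detected point, measured counter-clockwise
    from the baseline (the x-axis), in radians. Returns the (x, y)
    intersection point.
    """
    # Law-of-sines: distance along the left ray to the intersection.
    denom = math.sin(bearing_right - bearing_left)
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel - no intersection")
    t = baseline * math.sin(bearing_right) / denom
    return t * math.cos(bearing_left), t * math.sin(bearing_left)

def pixel_to_bearing(px, width, hfov_deg, aim_deg):
    """Convert a horizontal pixel coordinate to a bearing angle.

    aim_deg is the angle of the camera's optical axis from the baseline;
    hfov_deg is the lens's horizontal field of view. Assumes a pinhole
    model (focal length in pixels derived from the FOV) and that pixel
    x increases to the right of the optical axis.
    """
    focal_px = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    return math.radians(aim_deg) + math.atan((width / 2 - px) / focal_px)

# Example: cameras 1 m apart, point at (0.5, 1.0) in the camera plane.
x, y = triangulate_2d(math.atan2(1.0, 0.5), math.atan2(1.0, -0.5), 1.0)
```

The vertical coordinate would come the same way from the vertical pixel offset once the range to the point is known.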
I didn't see any existing topics regarding such a project, so I created this one.
Any thoughts on the feasibility of such a project before I commit to the roughly $200-$400 investment? The probable BOM: 2 Pi 2Bs, 2 NoIR camera boards, 2 power supplies, 2 SD cards, 3 10/100 Ethernet cables, a package of IR LEDs, wire (LEDs to GPIOs), buffer chips (so the GPIOs only supply a signal and don't have to power the LEDs directly), and a 4-port 10/100 switch.
My plan on the Pi side is to run the camera in a 90 fps mode (i.e. VGA), with the motion capture done over four shifts, cycling a different set of LEDs per shift (one shift per camera frame). This allows a maximum of 24 points (four sets of six IR LEDs, each LED centered on an R, G, B, C, M or Y colored pad) to be tracked at an overall capture rate of 90/4 = 22.5 fps. The detection logic on the Pi would scan the image for the bright spots from the IR LEDs, then look at the surrounding color (the pad each LED is centered on) to differentiate the six IR points in that frame's image.
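The per-frame detection pass described above could be sketched as below. This is a naive NumPy-only version for illustration: the nominal pad colors, thresholds and sampling radius are placeholders that would need calibrating against real frames, and a production version would use something faster than a Python BFS for the blob labelling:

```python
import numpy as np

# Nominal pad colors (RGB). These are assumptions; real values would be
# calibrated from the actual pads under the recording lighting.
PAD_COLORS = {
    "R": (255, 0, 0), "G": (0, 255, 0), "B": (0, 0, 255),
    "C": (0, 255, 255), "M": (255, 0, 255), "Y": (255, 255, 0),
}

def _components(mask):
    """4-connected components of a boolean HxW mask (simple BFS)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    comps = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        stack, comp = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        comps.append(comp)
    return comps

def detect_points(frame, bright_thresh=240, ring=8):
    """Find bright LED spots and label each by its surrounding pad color.

    frame is an HxWx3 uint8 RGB array (one decoded camera frame).
    Returns {pad_label: (x, y)} with one centroid per detected spot.
    """
    # Saturated near-white pixels are taken as the IR spot.
    mask = frame.min(axis=2) >= bright_thresh
    points = {}
    for comp in _components(mask):
        ys, xs = zip(*comp)
        y, x = round(sum(ys) / len(ys)), round(sum(xs) / len(xs))
        # Sample a box around the spot, excluding the saturated pixels,
        # to estimate the pad color behind the LED.
        y0, y1 = max(0, y - ring), min(frame.shape[0], y + ring + 1)
        x0, x1 = max(0, x - ring), min(frame.shape[1], x + ring + 1)
        patch = frame[y0:y1, x0:x1].reshape(-1, 3).astype(float)
        patch = patch[patch.min(axis=1) < bright_thresh]
        if len(patch) == 0:
            continue
        mean = patch.mean(axis=0)
        # Nearest nominal pad color wins.
        label = min(PAD_COLORS,
                    key=lambda k: float(np.sum((mean - PAD_COLORS[k]) ** 2)))
        points[label] = (x, y)
    return points
```

Only the (x, y) centroids plus their labels would need to go over the network per frame, which keeps the Pi-to-desktop traffic tiny compared to streaming video.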
The overall schematic would look like this:
https://dl.dropboxusercontent.com/u/552 ... xample.png