Since I can't get one for now, I can at least speculate about the actual performance. I've read "60x faster than running on the Pi by itself", but I'm not clear on what that means in absolute terms.
In my experiments using the tiny-YOLO weights, the RPi 3 can do basic object recognition in about 2 seconds per frame, i.e. roughly 0.5 fps. This post mentions speeds as high as 0.9 fps: https://www.pyimagesearch.com/2017/10/1 ... th-opencv/
Let's say the RPi3 can do detection at 0.5 fps. If this add-on board is 60x faster, that means 30 fps, giving us a full real-time detector, presumably at far lower power than the big Nvidia GPU you'd otherwise need, since I don't see a heatsink on that chip. Is this true?
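For what it's worth, here is the back-of-the-envelope math I'm doing, assuming the advertised 60x figure applies directly to per-frame inference time (which may not be how the vendor measured it):

```python
# Back-of-the-envelope throughput estimate. These are my assumptions,
# not measured numbers from the board itself.
rpi3_seconds_per_frame = 2.0   # tiny-YOLO on the RPi 3 alone, from my tests
claimed_speedup = 60.0         # the advertised "60x faster" figure

rpi3_fps = 1.0 / rpi3_seconds_per_frame
accelerated_fps = rpi3_fps * claimed_speedup

print(f"RPi 3 alone:      {rpi3_fps:.1f} fps")
print(f"With 60x speedup: {accelerated_fps:.0f} fps")
# 0.5 fps * 60 = 30 fps, i.e. real-time -- IF the speedup is per-frame.
```

If the 60x figure instead came from a different network or batch size, the real-time conclusion wouldn't follow, which is why I'm asking about absolute numbers.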
Apart from the speed, how does it work in practice? Ideally I'd want the chip to pass me the bounding boxes of detected objects from the real-time video stream, tied to each frame where an object is detected. Does the camera connect to the board, to the Pi, or both? It seems like you'd need video-rate communication from the board to the Pi, which can't happen over the GPIO alone, unless the board acts as a video splitter so that both the board and the RPi get the video feed (?). Even in that case, how do you match up frames, given the significant and perhaps variable latency between the MIPI video data going into the Pi's GPU and "real-time" communication over any other interface?
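To make the frame-matching problem concrete, here is one way it could be handled in software, purely a sketch under my own assumptions (timestamped frames and timestamped detection results are hypothetical; I have no idea what this board actually exposes): tag each frame with its capture time, tag each detection result with the time of the frame it was computed from, and pair them up by nearest timestamp within a tolerance.

```python
from bisect import bisect_left

def match_detection_to_frame(frame_timestamps, detection_ts, tolerance=0.05):
    """Return the index of the captured frame whose timestamp is closest
    to the detection's timestamp, or None if nothing falls within
    `tolerance` seconds. frame_timestamps must be sorted ascending."""
    i = bisect_left(frame_timestamps, detection_ts)
    # The closest frame is either just before or just after the insert point.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_timestamps)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(frame_timestamps[j] - detection_ts))
    return best if abs(frame_timestamps[best] - detection_ts) <= tolerance else None

# Hypothetical capture timestamps (seconds) for frames arriving at ~30 fps,
# and a detection the accelerator reports a few milliseconds after frame 2.
frames = [0.000, 0.033, 0.067, 0.100, 0.133]
print(match_detection_to_frame(frames, 0.070))  # → 2
```

But this only works if the accelerator's results carry some timestamp or frame counter in the first place, and if that clock can be related to the Pi's capture clock despite the variable latency I mentioned. That's exactly the part I'd like to understand.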