Let me start by stating that here in the UK, we call Waldo Wally. And as I’m writing this post at my desk at Pi Towers, Cambridge, I have taken the decision to refer to the red and white-clad fellow as Wally moving forward.
Just so you know.
There’s Waldo is a robot built to find Waldo and point at him. The robot arm is controlled by a Raspberry Pi using the pyuarm Python library for the uArm Metal. Once initialised, the arm is instructed to extend and take a photo of the canvas below.
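The pointing step boils down to translating a position in the webcam image into a position in the arm’s workspace. Here’s a minimal, hypothetical sketch of that mapping — the workspace ranges and the `pixel_to_arm` helper are illustrative assumptions, not Matt’s actual calibration:

```python
# Hypothetical sketch: map a pixel position in the webcam image to an (x, y)
# target in the arm's workspace. The workspace ranges below are assumed
# values for illustration, not real calibration data.

def pixel_to_arm(px, py, img_w, img_h,
                 arm_x_range=(5.0, 25.0), arm_y_range=(-15.0, 15.0)):
    """Linearly map image pixel coordinates to arm workspace coordinates (cm)."""
    x = arm_x_range[0] + (py / img_h) * (arm_x_range[1] - arm_x_range[0])
    y = arm_y_range[0] + (px / img_w) * (arm_y_range[1] - arm_y_range[0])
    return round(x, 1), round(y, 1)

if __name__ == "__main__":
    # The centre of a 640x480 frame lands mid-workspace.
    print(pixel_to_arm(320, 240, 640, 480))  # -> (15.0, 0.0)
    # On the real robot you would then drive the arm to that target,
    # e.g. (untested here, hardware required):
    # import pyuarm
    # arm = pyuarm.UArm()
    # arm.set_position(x, y, 5, speed=100)
```

The hardware call is left commented out since it needs the arm attached; the mapping itself is just linear interpolation, which is plenty for pointing a comically small hand in roughly the right direction.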
The magical mind of Matt Reed
Both in his work and his personal time, Matt Reed is a maker. In a nutshell, he has the job we all want, Creative Technologist, and gets to spend his working hours building interesting marketing projects for companies such as Red Bull and Pi Towers favourite, Oreo. And lucky for us, he uses a Raspberry Pi in many of his projects — hurray!
With There’s Waldo, Matt has trained a model using AutoML Vision, Google’s new image content analysis AI service, to recognise Wally in a series of images. With the model trained on the features of the elusive traveller, a webcam attached to a Raspberry Pi 3B snaps a photo, and the AI scans every face in the image, looking for a match.
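The overall flow is easy to sketch: detect faces in the frame, then score each one against the trained model. The version below is a runnable outline only — `detect_faces` and `classify_face` are stand-in stubs for what, in the real build, would be a face detector and a call to the AutoML Vision model:

```python
# Illustrative pipeline sketch. detect_faces() and classify_face() are
# stand-in stubs (returning hard-coded values) so the control flow runs
# without a camera or a cloud account; they are NOT the project's real API.

def detect_faces(image):
    """Stand-in face detector: returns (x, y, w, h) bounding boxes."""
    return [(40, 30, 24, 24), (200, 80, 24, 24)]

def classify_face(image, box):
    """Stand-in for the trained model's prediction: (label, confidence)."""
    x, _, _, _ = box
    return ("waldo", 0.97) if x > 100 else ("not_waldo", 0.6)

def scan(image):
    """Snap -> detect faces -> score each face against the trained model."""
    return [(box, *classify_face(image, box)) for box in detect_faces(image)]

results = scan(image=None)  # the frame itself is irrelevant to the stubs
for box, label, confidence in results:
    print(box, label, confidence)
```

Swapping the stubs for a real detector and the cloud prediction call would leave `scan` unchanged, which is the appeal of keeping the loop this simple.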
As Matt tweeted: “model is predicting WAAAYYY better than expected as this webcam image here wasn’t even part of the training set. You can run from #ai but apparently can’t hide from #GoogleCloud”
Once a match for Wally’s face is found with 95% or higher confidence, the robotic arm, controlled by the pyuarm Python library, points a comically small plastic hand at where it believes Wally to be.
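That decision rule — only act on matches at or above 95% confidence, and go for the strongest one — is a one-liner worth spelling out. The detection tuples here are an assumed format for illustration, not the project’s actual data structures:

```python
# Sketch of the decision rule described above: accept only detections
# labelled as Wally with confidence >= 0.95, then point at the best one.
# The (label, confidence, (x, y)) tuple format is an assumption.

CONFIDENCE_THRESHOLD = 0.95

def pick_wally(detections):
    """Return the highest-confidence Wally match at/above threshold, or None."""
    matches = [d for d in detections
               if d[0] == "waldo" and d[1] >= CONFIDENCE_THRESHOLD]
    return max(matches, key=lambda d: d[1]) if matches else None

print(pick_wally([("waldo", 0.91, (10, 10)),
                  ("waldo", 0.98, (120, 40)),
                  ("crowd", 0.99, (200, 5))]))
# -> ('waldo', 0.98, (120, 40))
```

Returning `None` when nothing clears the bar means the arm simply stays put on an uncertain frame rather than pointing at a random bobble-hatted bystander.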
Deep learning and model training
We’ve started to discover more and more deep learning projects using Raspberry Pi — and with the recent release of TensorFlow 1.9 for the Pi, we’re sure this will soon become an even more common occurrence.