After a lot of swearing and arguing, we have managed to boil down nearly 700 entries to just ten winners. It was a very hard decision, and if you didn’t win, please don’t feel too disheartened; the standard of entries for this competition was exceptionally high, and coming to a final verdict took us a long time.
As you’ll remember if you entered, we were looking for camera projects which would involve the winners in writing some software and doing something interesting with the cameras. The winners, who will receive a rare-as-hen’s-teeth pre-production Raspberry Pi camera board, are (in no particular order):
Brian’s was one of several cat-thwarting project ideas we were sent, but we liked his best. Brian’s cat has a habit of bringing half-dead things home for his master through the catflap, and he’s been trying to deal with the problem by using a webcam/Arduino/PC solution with some custom Python code to detect feline mouthfuls of rat. He wants to replace the setup with a Pi and a camera module so he can get better framerate and better resolution at lower cost and lower power. (You’re right, Brian. Using an Arduino just to control a relay is overkill.)
We liked your Nerf gun idea, Brian, but we also felt sorry for your cat, so it’s the rat-detecting catflap we’d like to see developed, please!
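The first stage of any catflap guard like Brian’s is a cheap motion trigger that decides when it’s worth running the heavier prey-detection step. As a toy illustration of that trigger, in pure Python with frames as nested lists of grayscale values (a real build would pull frames from the camera with OpenCV, and the thresholds here are invented):

```python
def changed_fraction(prev, curr, thresh=25):
    """Fraction of pixels whose brightness changed by more than `thresh`
    between two grayscale frames (nested lists of 0-255 values)."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > thresh
    )
    total = len(prev) * len(prev[0])
    return changed / total

def flap_should_lock(prev, curr, trigger=0.2):
    """Lock the flap if enough of the frame changed - something is in the tunnel,
    and the (separate, harder) rat classifier should take a look."""
    return changed_fraction(prev, curr) > trigger
```

The interesting part, telling a rat-laden cat from an unencumbered one, is of course the classifier that runs after this trigger fires.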
We had several outstanding entries from younger contestants. Matthew’s was one of them. He’s 12, and has an idea for using the camera as a communication tool – we got very excited about this, because we can think of all kinds of useful applications for what Matthew’s planning to build. He says:
Some of my recent builds are a cleaning robot and an arcade machine. However, my most recent build was a Morse code key. The key is hooked up to three blue LEDs. I would like to get the Raspberry Pi camera to read those flashes of light so I can communicate with my Raspberry Pi via Morse code.
Matthew says he also has some ideas about using the camera in a one-player chess board. We’re really looking forward to seeing what he develops.
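For the curious, reading flashing-LED Morse from a camera boils down to thresholding each frame’s brightness and then run-length-decoding the resulting on/off pattern. A toy sketch of the decoding stage, assuming the brightness samples have already been thresholded to 0/1 and the dot length (in samples) is known; the lookup table below covers only a handful of letters:

```python
# Partial Morse table - just enough letters for a demo.
MORSE = {".-": "A", "-...": "B", ".": "E", "....": "H", "..": "I",
         ".-..": "L", "--": "M", "-.": "N", "---": "O", ".-.": "R",
         "...": "S", "-": "T"}

def runs(samples):
    """Run-length encode a list: [1,1,0] -> [(1, 2), (0, 1)]."""
    out = []
    cur, n = samples[0], 1
    for s in samples[1:]:
        if s == cur:
            n += 1
        else:
            out.append((cur, n))
            cur, n = s, 1
    out.append((cur, n))
    return out

def decode_samples(samples, dot_len=1):
    """Turn thresholded brightness samples (1 = LED lit) into text.
    An 'on' run up to 2*dot_len is a dot, longer is a dash;
    an 'off' run of at least 3*dot_len ends the current letter."""
    letters, symbol = [], ""
    for value, length in runs(samples):
        if value:
            symbol += "." if length <= 2 * dot_len else "-"
        elif length >= 3 * dot_len and symbol:
            letters.append(MORSE.get(symbol, "?"))
            symbol = ""
    if symbol:  # flush the final letter
        letters.append(MORSE.get(symbol, "?"))
    return "".join(letters)
```

On real camera data the fiddly bit is choosing the brightness threshold and estimating the dot length, which this sketch takes as given.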
Ross is a postdoctoral fellow in a microbiology lab, who studies novel ways of combating antibiotic-resistant bacteria. He tests compounds to investigate their effect on bacterial growth, and their ability to enhance the body’s own defences against pathogens. One of the tests he uses on a daily basis involves exposing bacteria to white blood cells isolated from healthy volunteers. White blood cells represent the main line of defence against pathogens and will kill most bacteria they are exposed to, though by adding drugs Ross can either enhance or inhibit their bactericidal capacity.
The tedious part of these experiments involves counting the tiny colonies bacteria form on petri dishes after overnight incubation. Counting visually is time-consuming and prone to error. Ross’s winning project idea is to use a Raspberry Pi with a camera to count the colonies using ImageJ image analysis software with a custom colony-counting module. The small size of the colonies will make this type of analysis a challenge, but he says that if he can get the system to work it will save his lab massive amounts of time, freeing up attention for more experimental work.
Sebastian’s working on an augmented reality project. He’s developing a cross-platform SDK that can recover the location and orientation of a camera relative to a known piece of the environment. The camera can be “taught” about local 3D objects, or “taught” to recognise 2D patterns. He wants to integrate the video stream from the camera into the SDK, then track a 2D planar pattern and display a virtual character on it, using the HDMI output to display the scene on an external monitor. He got our attention when he proposed doing this with a virtual model of a Raspberry Pi on the Raspberry logo.
Stephen works for a charity designing and implementing medical engineering training courses for developing countries. He specialises in medical devices, with a particular interest in equipment related to premature babies – such as incubators, monitors, and jaundice detection. He says:
One of the potential issues with pre-term babies in incubators (used to keep them warm) is jaundice. A very cheap and effective method to cure this is by using a series of blue LEDs [blue light phototherapy]. However, within Uganda (where our main project is running), there aren’t enough LED devices (mainly because the current medical grade ones are expensive), and the staff aren’t trained well enough to know when to use the device and when to switch it off (and what light levels to use).
If I was to win the camera, I would use it, connected to the Pi and the LEDs, to image the baby, looking for jaundice. If it finds evidence of jaundice, the LEDs will come on, and as the jaundice reduces, so does the LED brightness, until the jaundice is gone, at which point the user is made aware and the device switches off the lights.
Matt had a fantastic idea when researching building a theremin. A regular theremin gives you control of pitch and volume only, with two hands and two aerials. His competition entry was an idea for a camera theremin, where, using a camera and some machine vision instead of an aerial, both pitch and volume can be controlled with one hand, leaving the other one free to fiddle with synth parameters.
I’m thinking a camera, a bit of blob detection and separation, and control of pitch, volume in xy, and filter cutoff by blob area…and for a first cut, probably using some distastefully coloured gloves to make the image processing easy on the Pi.
I did a little dance around the room when I read this one. Thanks Matt!
Amy is 11, and she has a really excellent blog that we’ve been enjoying at http://appler.net/amy/. She’s already a seasoned Pi hacker. She says:
I have a house on a lake, and my mom loves it when birds come in the cove near our house. I want to connect a camera to my Raspberry Pi and have it detect when there are birds in our cove. When it detects that there are birds in our cove, I want the Raspberry Pi to notify my mom by sending her a text message or email so she can come outside and look at the birds.
This would definitely be a challenging project to do, but I’m excited to give it a try!
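Once Amy’s bird detector fires, the notification half of the project is straightforward with Python’s standard library. A sketch, with placeholder addresses and an assumed local SMTP server:

```python
import smtplib
from email.message import EmailMessage

def bird_alert(n_birds, to_addr="mom@example.com"):
    """Build the notification email for a sighting.
    Both addresses here are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = f"Birds in the cove! ({n_birds} spotted)"
    msg["From"] = "pi@example.com"
    msg["To"] = to_addr
    msg.set_content(f"The Raspberry Pi camera just spotted {n_birds} bird(s).")
    return msg

def send_alert(msg, host="localhost"):
    """Hand the message to an SMTP server; assumes one is reachable at `host`."""
    with smtplib.SMTP(host) as server:
        server.send_message(msg)
```

The detection half, deciding that the moving blobs on the water are birds rather than waves, is the genuinely challenging part Amy mentions.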
We have a feeling that Amy will complete her project with flying colours. (And we’re hoping for some great pictures too!)
Andy’s project involves a life-sized robotic Dalek which he’s already built. Gordon was open-mouthed with awe and fear at the photo which Andy sent in:
Andy sent us the code for the Arduino-based Dalek voice modulator he’s been working on to demonstrate his chops; you can download it at GitHub. His camera application will use OpenCV to do facial recognition on the Pi, which will then drive two motors in the head to rotate the dome so that the Dalek can keep eye contact with a person while talking to them. (I can only think of one thing the Dalek is likely to say, but Andy promises us a much larger vocabulary than the usual “Exterminate!”) He plans to open source the results.
Adam’s entry made us laugh, but it’s got serious potential as a useful addition to everybody’s living room (and promises to build healthy relationships everywhere).
Me and my girlfriend argue. She is the calmest person I know and such a sweetheart, but when I get home from work you can guarantee she’s pinched the best spot in the living room for watching TV so I have to sit at the other end of the room, and it just ain’t as good!
So what I’ve been putting together is a little unit that sits between the TV and the stand, and moves the TV to the best position for everyone in the room – ahhh peace!
But how do I do this? Well so far I’ve been reliant on a webcam that sits on top of the telly to take an image of whoever is sat there, be it 1 or however many people. It then uses OpenCV to detect all the faces, work out the midpoint for all the people sat there and, using wiringPi, move the TV n degrees to the left or right and settle on a great angle for all to be happy with, where jubilation will forth ensue.
At the minute I’m activating the code using a web front end so I can hit ‘Track’ from any device. I’ve also got SiriProxy running on the Pi so I can nicely ask my iPhone and iPad to track us and move the TV.
The problem with the webcam is it takes a long time to get the image, take the image, check the image and work out the details.
What I’d love is to be able to utilise realtime-ish video, plug that into OpenCV and display this on the TV through picture in picture over HDMI so I can make the final decision on who should get the best angle, and also through a streamed link to my iDevice.
We’d love it too, Adam. And if you can now come up with something that outsources the argument about whether you watch Grand Designs or allow your partner to flick wildly between music TV channels for the next hour, we’d be even happier.
David Crawley (and the HackerDojo robotics team)
David is heading up a robot project at Hacker Dojo in Silicon Valley, which he describes as “a makerspace filled with free thinkers”. (I’ve never met a constrained thinker in a hackspace/makerspace. There’s something curiously liberating about milling machines.)
Hercules, Hacker Dojo’s Pi-powered robot, is newly under development, and is working through 12 challenges that David has set the team of volunteers. These range from seeing and following a person, to delivering a cooler of beer (the prize for the team that accomplishes this is that they get to keep the cooler of beer), to driving around the Hacker Dojo for a week saying hello to known people, and finding and identifying a persona non grata (this requires algorithms to efficiently move around the Dojo and find people who might be trying to hide, as well as doing all the person detection/face detection/face recognition work). He says:
We already have developed the capability for our robot to do simple face tracking and face following using the Raspberry Pi with a USB camera. However, the performance is very, very slow and the latency is extraordinarily high. There are several reasons for this:
1) We are running our face detection and face recognition algorithms on the processor only (not using the GPU)
2) We haven’t optimized the code yet
3) The USB interface with our camera seems to be raising a lot of interrupts for the processor that take time away from face detection
So we plan to:
1) Write a shader for the face detection and face recognition so that we can get face detection functioning on the GPU
2) Optimize our code and the camera interface for smooth performance, introduce PID control etc.
3) Test a camera that has a better interface to the Pi
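The PID control mentioned in point 2 is the piece that can be sketched without any of the Pi hardware. A minimal PID controller, fed the pixel error between the detected face and the frame centre on each frame; the gains here are placeholders that would need tuning on the real robot:

```python
class PID:
    """Minimal PID controller for smoothing face tracking: feed in the
    error (e.g. pixels between face centre and frame centre) each frame,
    and use the output to drive the motors."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt=1.0):
        """Advance the controller by one timestep of length `dt`."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The derivative term is what damps the overshoot-and-oscillate behaviour a naive proportional-only tracker shows at high latency, which is exactly the problem the team describes.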
We’ll be watching to see how Hercules gets on with the tasks he’s been set once he’s equipped with his new eyeball!
Thank you so much to everyone who entered. We were pretty overwhelmed by the quality and number of entries; this is an amazing community packed with some amazingly smart people. We’re looking forward to hearing from all the winners when they’re further along with their projects, and learning how the camera coped in these very different applications.