When Jeroen Danckers sent us a blog post with that title he was on to a winner. And let's face it, who hasn't dreamed of putting a computer in the toilet, connected to a camera that reports back to a web page whether the toilet is occupied or not? I have.
There are all kinds of people running around at our digital agency (Intracto, Belgium): lots of them, and in all sorts of weird shapes and forms. A simple lunch-break discussion about busy toilets quickly ends up in high-tech cut 'n' paste with webcams, a Raspberry Pi and PHP trickery. So now we can check online whether our toilet is taken or not.
Should I stay or should I go?
The trick with the webcam
The system behind our little project has already been the subject of lively discussion amongst our competitor/colleagues in the Belgian web sector. Far-fetched solutions like a pressure sensor under the toilet seat, or little gnomes in a cage with a switch, are sadly not the answer.
In all its simplicity: we put a webcam in our toilet. Luckily it doesn't shoot any compromising images, but measures the light intensity instead. The cam takes a boring shot of the wall in the front hall, and PHP-GD then assigns a brightness value to each pixel.
From those per-pixel brightness values we calculate the average, which determines whether the toilet is taken or not. Almost dark means no toilet action, and a lot of light probably means 'danger zone in action'. Or the cleaning lady is making her rounds.
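The original check is done with PHP-GD; purely as an illustrative sketch of the same idea, here is a Python version (the luma weighting and the threshold of 128 are my assumptions, not Intracto's actual values):

```python
# Average the per-pixel brightness of a webcam still and compare it to a
# threshold: lights on (bright wall) means occupied, lights off means free.

def average_brightness(pixels):
    """pixels: flat list of (r, g, b) tuples from the webcam still."""
    if not pixels:
        return 0.0
    # Standard luma weighting for perceived brightness (an assumption here;
    # PHP-GD lets you read raw channel values with imagecolorat).
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def toilet_occupied(pixels, threshold=128):
    """Bright frame -> occupied; dark frame -> free."""
    return average_brightness(pixels) >= threshold

# A dark frame reads as 'free', a bright one as 'occupied'.
dark = [(10, 10, 10)] * 100
bright = [(220, 220, 220)] * 100
```

The nice property of averaging the whole frame is that nothing recognisable ever needs to leave the camera: one number per snapshot is all the system stores.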
And it also lets you play Tetris while you are otherwise engaged.
Technical set-up Toiletcam
We connected a USB-hub to the Raspberry Pi, which functions as a power source. Additionally we connected 2 USB webcams to take the pictures (2 toilets) and a USB-stick on which we installed the OS. Lastly we plugged in a SD-card to boot the Raspberry Pi.
The calculation of the state of the toilet happens like this:
Linux-application Streamer takes a still with the webcam and saves it.
Through PHP-GD the light intensity (brightness) of the pixels gets calculated. The average of these values determines the state of the toilet.
This result gets compared to the local cache. Only if there's a difference does something happen!
This event gets filed away in a database log.
The script pauses for one second, to give the poor Raspberry Pi some well-earned rest.
Switch toilets, rinse and repeat.
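The steps above can be sketched as a small polling loop. This is not the actual Intracto script (that one is PHP); the `streamer` flags, device paths and database hook here are assumptions for illustration:

```python
# Sketch of the polling loop described above: grab a still, derive a state,
# log only when the state changed, then rest for a second.
import subprocess

def grab_still(device, outfile):
    """Take one webcam still with the Linux tool `streamer`.
    The exact flags are an assumption, not the original script."""
    subprocess.run(["streamer", "-c", device, "-o", outfile], check=True)

def poll_once(toilets, read_state, cache, log_event):
    """One pass over all toilets; fires log_event only on a state change."""
    for name, device in toilets.items():
        state = read_state(device)        # 'occupied' or 'free'
        if state != cache.get(name):      # compare against the local cache
            log_event(name, state)        # e.g. write a row to the log table
            cache[name] = state

# The real loop runs forever, pausing between toilets:
#   while True:
#       poll_once(toilets, read_state, cache, log_event)
#       time.sleep(1)
```

Caching the last known state is what keeps the database log small: an occupied toilet generates one row, not one row per second.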
At the same time, the Raspberry Pi serves a local website that refreshes every 5 seconds via an Ajax call, pulls the current state of the toilets and shows it in a nice layout.
Surprisingly handy innovations in the workplace
Working at Intracto offers quite a few surprises. In the morning you can moan about the busy toilets on your daily sanitary strolls, and by the afternoon your colleagues can already be busy working out a solution to ease the pain.
Next up on our list of innovations: the real-time level of the coffee pot!
Liz: if you haven’t entered our contest to win a pre-production camera board, have a look at the post explaining what you’ll need to do. And if you’re looking for inspiration, here’s a guest post from Gordon, our Head of Software, about a mini-HD camera project he worked on at home using the prototype boards we showed the BBC back in 2011.
I may have mentioned that Gordon does a lot of cycling. He bodged up a 3D helmet cam a couple of years ago: here’s how he did it. (He has also made me include some 2D video because he likes showing off.)
Careful with the last video, which is in 3D – if you’re using bi-coloured 3D glasses to view it, as I did, you are liable to feel VERY motion sick if you’re susceptible to that sort of thing. Over to Gordon!
A few years ago I really wanted to play around with a helmet-mounted camera for my mountain biking. There were quite a few on the market, but they were expensive, and it's always difficult getting toys past my wife! Because I was working at Broadcom, I was able to get my hands on what we called the MicroDB (the thing David and Eben first showed to the BBC as the Raspberry Pi), and since I had all the software and a bit of competence, I decided to try doing a bit of HD helmet recording.
The hardware I used was based on the same BCM2835 chip that we all know and love. The hardware also had a PMU chip (power supply), which meant you could power it directly from a lithium ion battery and record 720p HD video for about an hour.
So I rigged up some properly engineered mounting. I used a rubber from my daughter's pencil case (Americans, breathe easy – this is the UK word for what you call an eraser), a couple of cable ties, and a USB socket! I set out on a voyage of discovery… apologies in advance for the lycra-clad arses, but it's something you'll just have to put up with!
Liz interjects: that’s not the half of it. Eben and Gordon have a regular date on Wednesdays where they take an hour and a half over lunch to go cycling and have a software meeting at the same time. This means a certain amount of strutting sweatily around the office dressed in lycra at the end of the ride. This week, Jack turned up, tutted and said: “You two do realise there are showers downstairs, don’t you.” The rest of us cheered.
This is an example of the helmet cam being used in a chain gang, which is a fast-moving (we’re doing around 26mph average for the whole of the clip) club ride, where you continuously rotate who’s cycling at the front, making it a very efficient way of travelling at speed!
This is another clip from the helmet cam, at the start of a mountain bike race held by a good friend of mine who’s an elite rider.
When I took these videos, I expected to experience the same feeling of speed as when you’re riding for real, but it doesn’t quite make it. The main issue is that the feeling of speed you get is a product of the full 3D stereoscopic experience that the 2D camera throws away. It’s there and it’s fun, but it doesn’t actually feel real; you don’t quite get the full-force feeling of what it’s like to tear down that trail!
I was missing a dimension, so I had to go find it again! OK, now you'll say: surely it's going to cost a lot of money to buy a proper 3D camera. And you'd be right, if you didn't have a whole bunch of little camera boards kicking around in the office. I realised that all I needed was two of them, and a spot of work to synchronise the pictures: then Bob's your uncle!
I took two MicroDBs and connected them together (actually I used a USB -> USB connector, which I then cable-tied to my bike helmet with a rubber/eraser to give it something soft to sink into). So what you get out is two videos (each 720p30). To get the images working together you need to do some processing, which presents a number of problems:
1) The two cameras are not aligned and therefore you have to rotate and translate the images.
2) You also need to invert one of the images.
3) You need to hand-synchronise the two videos (and keep them synchronised during the video).
So I wrote a bit of software based on FFMPEG and SDL, and lots of handcrafted fun code, to take the two videos and output them as one in a number of formats, including interleaved line (odd lines are left image, even lines right), horizontal half-resolution and vertical half-resolution (because we had a number of different 3D televisions to play with!). Application of Bresenham's algorithm is so much fun!
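Gordon's actual tool is C built on FFMPEG and SDL; as a rough illustration only, here is what two of those output formats look like in Python, treating a frame as a list of pixel rows:

```python
# Sketch of two stereo packing formats mentioned above. Rows are counted
# 1-based in the text ("odd lines are left image"), so 0-based index 0 here
# is 'line 1' and takes the left image's row.

def interleave_lines(left, right):
    """Interleaved-line format: alternate rows from the two eyes."""
    assert len(left) == len(right), "frames must be the same height"
    return [left[y] if y % 2 == 0 else right[y] for y in range(len(left))]

def vertical_half_resolution(left, right):
    """Top/bottom packing: keep every second row of each frame, then stack
    left on top of right, halving the vertical resolution."""
    return left[::2] + right[::2]
```

Each format trades away half the resolution in one direction so that a single video stream can carry both eyes' views, which is why different 3D televisions want different packings.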
I then went and did a 24-hour mountain bike race in a team of five (we came third that year) and recorded the first half of one of the laps in glorious 3D. You are going to either need a proper 3D television to watch this or use some red/green (actually cyan is closer) glasses (the kind you get in breakfast cereals!) – otherwise you can just hold two bits of suitably coloured filters against your face.
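For the red/cyan glasses case, the idea is to build an anaglyph: take the red channel from the left eye's image and the green/blue (cyan) channels from the right eye's, so the coloured filters separate the two views again. A minimal sketch of that combination, with pixels as (r, g, b) tuples (not Gordon's actual code):

```python
# Red/cyan anaglyph: red from the left image, green and blue from the right.

def anaglyph(left_px, right_px):
    """Combine one left-eye and one right-eye pixel into an anaglyph pixel."""
    lr, _, _ = left_px     # keep only red from the left eye
    _, rg, rb = right_px   # keep green and blue (cyan) from the right eye
    return (lr, rg, rb)

def anaglyph_frame(left, right):
    """Apply the per-pixel combination across two same-sized frames."""
    return [[anaglyph(l, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]
```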
Liz again: editing this post, I have realised that the next video gives me motion sickness even without Gordon’s 3D glasses. Proceed with caution. Gordon, I can’t believe you kept this stuff up without sleeping for 24 hours.
Why am I showing you this? Well mostly because I had so much fun doing it, and it really shows how the real 3D helmet cameras can make the experience of home video just so much better if you’re doing something fast and aggressive. I hope you agree!
Finally, of course, the Raspberry Pi camera (now in production and being released next month) is very closely related to this one – although it’s actually higher quality; the images we’ve been seeing in test are looking fantastic. This project gives you an impression of the kind of thing you’ll be able to do with it with a bit of extra coding – and of the sort of extra legwork we’re looking for from people entering the competition to win a pre-production camera board.
I’ve been looking for where I put the video manipulation code; if I can find it, I’ll put it into GitHub somewhere so you can have a play yourself (if anyone is remotely interested)!
Finally – really finally – you have to think about the fact that the Raspberry Pi has two CSI interfaces, meaning there's the potential to add two camera boards. Does that mean it would be possible to do all this completely on a single Raspberry Pi? We haven't experimented with the idea yet – only time will tell…
Dave Hunt is on a bit of a roll at the moment. Not content with having engineered the water droplet photography setup behind the prettiest post we've featured here, he's also been working with the Pi and a home-made macro rail for sharper macro photographs without all that woolly depth of field. Bokeh – the fuzzy blur from the out-of-focus parts of a picture – is an effect that can be really beautiful, but sometimes you want a sharper picture, which can be nigh-on impossible in macro photography without special equipment.
Dougal, this cow is small. Those ones are far away.
There's a way professional photographers deal with this, but, of course, it's expensive. You can buy a rig which allows you to take many images, each a little closer to the object, so different parts of it are in focus in each picture. You can then combine or stack all those images in software, as in the cow picture on the right. There's an open-source software solution to the matching and stacking problem called CombineZ (somebody port this thing to the Pi; that GPU is built for just this sort of application), but if you want to buy a rail that automates the moving of your camera, things suddenly start to look expensive. Dave says commercial solutions come in at around $600.
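The stacking step itself boils down to: for each pixel position, keep the value from whichever exposure is locally sharpest there. A deliberately naive Python sketch of that principle, on grayscale frames (real tools like CombineZ also align the images and use much better sharpness measures):

```python
# Naive focus stacking: per pixel, pick the value from the image with the
# highest local contrast against its horizontal neighbours.

def sharpness(img, x, y):
    """Crude local contrast: absolute differences with left/right neighbours."""
    row = img[y]
    left = row[x - 1] if x > 0 else row[x]
    right = row[x + 1] if x < len(row) - 1 else row[x]
    return abs(row[x] - left) + abs(row[x] - right)

def focus_stack(images):
    """images: same-sized 2D grayscale frames (lists of lists of ints)."""
    h, w = len(images[0]), len(images[0][0])
    return [[max(images, key=lambda im: sharpness(im, x, y))[y][x]
             for x in range(w)]
            for y in range(h)]
```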
Enter the $35 Raspberry Pi and an old flat-bed scanner from the loft.