Vectors from coarse motion estimation

Liz: Gordon Hollingworth, our Director of Software, has been pointing the camera board at things, looking at dots on a screen, and cackling a lot over the last couple of weeks. We asked him what he was doing, so he wrote this for me. Thanks Gordon!

The Raspberry Pi is based on a BCM2835 System on a Chip (SoC), which was originally developed to do lots of media acceleration for mobile phones. Mobile phone media systems tend to lag behind desktop systems, but are far more energy-efficient. You can see this efficiency at work in your Raspberry Pi: decoding H264 video on a standard Intel desktop processor requires gigahertz of processing capability and many (30-40) watts of power, whereas the BCM2835 on your Raspberry Pi can decode full 1080p30 video at a clock rate of just 250MHz, burning only around 200mW.

Gordon

This amazing hardware enables us to do things like video encode and decode in real time without doing much work at all on the processor (all the work is done on the GPU, leaving the ARM free to shuffle bits around!). It also means we have access to very interesting parts of the encode pipeline that you’d otherwise not be able to look at.

One of the most interesting of these parts is the motion estimation block in the H264 encoder. To encode video, one of the things the hardware does is to compare the current frame with the previous (or a fixed) reference frame, and work out where the current macroblock (16×16 pixels) best matches the reference frame. It then outputs a set of vectors which tell you where the block came from – i.e. a measure of the motion in the image.

This is broadly the mechanism used by the motion application: it compares the current image with the previous one (or a long-term reference), and uses that information to trigger events, like recording the video, writing an image to disk, or triggering an alarm. Unfortunately, at this resolution it takes a huge amount of processing to achieve this in the pixel domain; which is silly if the hardware has already done all the hard work for you!

So over the last few weeks I’ve been trying to get the vectors out of the video encoder for you, and the attached animated gif shows you the results of that work. What you are seeing is the magnitude of the vector for each 16×16 macroblock – equivalent to the speed at which it is moving! The information comes out of the encoder as side information (it can be enabled in raspivid with the -x flag). It is one integer per macroblock, at ((mb_width+1) × mb_height) × 4 bytes per frame; for 1080p30 that is 121 × 68 × 4 == 32,912 bytes (about 32KB) per frame. And here are the results. (If you think you can guess what the movement you’re looking at here represents, let us know in the comments.)

blamenuttall

Since this represents such a small amount of data, it can be processed very easily, which should make 30fps motion identification and object tracking possible with very little actual work!
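As a rough illustration, here’s a minimal Python sketch of how you might read that side information back – not the raspivid source, just an example assuming a 1080p recording and the packing described above (two signed bytes for the x and y vector components plus an unsigned 16-bit SAD score per macroblock); the filename and the motion threshold are made up:

```python
# Parse the raw vector file produced by e.g.
# "raspivid -w 1920 -h 1080 -o video.h264 -x motion.vec".
import numpy as np

MB_W = 1920 // 16            # 120 macroblocks across
MB_H = (1080 + 15) // 16     # 68 rows (1080/16 rounds up)
COLS, ROWS = MB_W + 1, MB_H  # the encoder emits one extra column per row

# Each macroblock record is 4 bytes: signed x/y vector components plus a SAD score.
record = np.dtype([('x', np.int8), ('y', np.int8), ('sad', np.uint16)])

frames = np.fromfile('motion.vec', dtype=record).reshape(-1, ROWS, COLS)

for i, frame in enumerate(frames):
    # Vector magnitude per macroblock: a rough per-block speed map of the scene.
    speed = np.hypot(frame['x'].astype(np.float32), frame['y'].astype(np.float32))
    if speed.mean() > 1.0:   # arbitrary threshold, just for illustration
        print('motion in frame', i)
```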

Go forth and track your motion!

Is the toilet free?

Here at Pi Towers, we are lucky enough to have more toilets than we have people. Some offices don’t. And it’s embarrassing to hear your colleagues micturating (at least for some people – the rest of us chatter through it all and make fun of each other’s shy bladders), so the guys at Made by Many have come up with a Pi-based solution.

It started quite simply. Reed switches on a toilet door would send information to a Pi, which would publish the data to a website, so the folks at Made by Many could check online before going to the loo. They made a LEGO prototype to make sure everything worked.
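In Python terms, the idea really is about that simple. Here’s a sketch only – the GPIO pin and the endpoint URL are made up, and this isn’t Made by Many’s actual code:

```python
# Poll a reed switch on the toilet door and publish its state to a web service.
import time
import requests
from RPi import GPIO

DOOR_PIN = 17  # hypothetical GPIO pin wired to the reed switch

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
    occupied = GPIO.input(DOOR_PIN) == GPIO.LOW  # switch closed when the door is shut
    requests.post('https://example.com/api/toilet', json={'occupied': occupied})
    time.sleep(5)
```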

LEGO

And after applying the switches to the real toilet doors, they ended up with the real thing serving up a result like this when the website was polled.

isthetoiletfreewebpage

Of course, it’s axiomatic that if you can overcomplicate something, you should.

So the Made by Many team started looking at what data they could collect without invading people’s lavatorial privacy (a privacy document has been uploaded to GitHub); no identifying information, or information about exactly what was going on in the cubicle, was collected at any time. Over three weeks they gathered sufficient data points to work some SQL magic (there’s a sketch of the sort of aggregation involved after the lists below) and be able to detect:

  • if the toilets are free
  • the total number of visits
  • minimum visit duration
  • maximum visit duration
  • average visit duration
  • total visits by hour
  • total visits by day

From which they could infer:

  • the office’s favourite toilet
  • peak times
  • off-peak times
  • an estimated wait time.
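For a flavour of that “SQL magic”, here’s a hedged sketch of the kind of aggregation involved, assuming a simple table of visit start/end timestamps – the schema and filename are hypothetical, not Made by Many’s actual code:

```python
# Summarise door-switch events stored in a hypothetical SQLite "visits" table,
# where "started" and "ended" are Unix timestamps for each visit.
import sqlite3

conn = sqlite3.connect('toilet.db')
shortest, longest, average, total = conn.execute("""
    SELECT MIN(ended - started),
           MAX(ended - started),
           AVG(ended - started),
           COUNT(*)
    FROM visits
""").fetchone()
print('visits:', total, 'average duration (s):', average)
```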

And then they made a command-line-style stats page.

statspage

And because a job half-done is no job at all, they also made a little toilet notifier that lives in the OS X menu bar.

toiletosx

They’ve made LED signs. They’ve irritated their colleagues so much that one of them dismantled and abducted one of the reed switches. They’ve demonstrated elegantly that the Internet of Things is always informative, and not always as useful as we think it is. We think this is one of the most entertaining projects we’ve seen in a while. We salute you, Made by Many. And if you’ll excuse me, I drank rather too much coffee after lunch. I’ll just be a minute.

Free goodies for good causes from Pimoroni

Our friends at Pimoroni have some good news for you. To celebrate making their 100,000th Pibow case, they’re giving away 512 Pibow Rainbow cases (and some accessories) to good causes. Updated to add: Cyntech have just thrown their hat into the ring too: they’ll be supplementing the prize pool with some more PiHubs and Pibrellas, alongside some seven-segment displays – all of which are very useful in the classroom. Thanks folks!


The Pibow Rainbow – Raspberry Pi case and thing of beauty

Are you a charity, educational establishment or other worthy cause with a bunch of naked Model B Raspberry Pis? Maybe you’re such a place and you want to buy a bunch of Pis with a free case, or upgrade to something a bit more shiny?

All you need to do is comment below with a valid email address, or email support@pimoroni.com with the subject “WE NEED FREE PIBOWS”.

Say briefly who you are (School, Charity, Good Cause), what you do, and why a classroom kit would be really useful to you. Each kit contains 10 lovely Pibow Rainbow cases (or more!) plus a PiHub, Pibrella and PiGlow to play with. Here’s a video of a PiGlow doing its thing to whet your appetite – you’ll find a tutorial in our Resources section to get you programming yours using Python in easy steps.
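If you want a taste before you get to the tutorial, here’s a minimal sketch using Pimoroni’s piglow Python library (exact function names may vary between library versions):

```python
# Fade all 18 PiGlow LEDs up, then switch them off again.
import time
import piglow

piglow.clear()
for brightness in range(0, 65, 8):
    piglow.all(brightness)   # set every LED to the same brightness (0-255)
    piglow.show()            # push the values out to the LEDs
    time.sleep(0.1)
piglow.clear()
piglow.show()
```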

Paul, who is half of Pimoroni and who also designed the very fruity Raspberry Pi logo, says:

“We love the things people do with the Pi and Pibow already, and now seems like a perfect time for us to spread a bit of colour and joy to the places where the Pi makes the most difference. Learning about computers, electronics and other geekery should be fun and friendly and for everyone.”

Preview the upcoming Maynard desktop

Some of you will be aware that we’ve been working on a new, more responsive and more modern desktop experience for the Raspberry Pi. We thought you might like an update on where we are with the project.

The chip at the heart of the Raspberry Pi, BCM2835, contains an extremely powerful and flexible hardware video scaler (HVS), which can be used to assemble a stack of windows on the fly for output to the screen. In many ways the HVS resembles the sprite engines you may remember from 8- and 16-bit computers and games consoles from the Commodore 64 onward, with each window treated as a separate translated and scaled “sprite” on top of a fixed background.

The Wayland compositor API gives us a way to present the HVS to applications in a standards-based way. Over the last year we’ve been working with Collabora to implement a custom backend for the Weston reference compositor which uses the HVS to assemble the display. Last year we shipped a technology demonstration of this, and we’ve been working hard since then to improve its stability and performance.

The “missing piece” required before we can consider shipping a Wayland desktop as standard on the Pi is a graphical shell. This is the component that adds task launching and task switching on top of the raw compositor service provided by Wayland/Weston. The LXDE shell we ship with X on the Pi doesn’t support Wayland, while those shells that do (such as GNOME) are too heavyweight to run well on the Pi. We’ve therefore been working with Collabora since the start of the year to develop a lightweight Wayland shell, which we’ve christened Maynard (maintaining the tradition of New England placenames). While it’s some distance from being ready for prime time, we thought we’d share a preview so you can see where we’re going.

Packages for Raspbian are available (this is a work in progress, so you won’t be able to replace your regular Raspbian desktop with this for general use just yet, and you’ll find that some features are slow, and others are missing). Collabora have made a wiki page with compilation instructions available, and there’s a Git repository you can have a poke around in too.

Mudra: a Braille dicta-teacher

Sanskriti Dawle and Aman Srivastav are second-year students at the Birla Institute of Technology and Science in Goa. After a Raspberry Pi workshop they decided they wanted to do something more meaningful than just flash LEDs on and off, and set this month’s PyCon in Montreal as their deadline.

team-mudra1

Aman Srivastav and Sanskriti Dawle

They ended up producing something really special. Mudra means “sign” in Sanskrit: the Raspberry Pi-based device is a learning tool for visually impaired people, which teaches Braille by translating speech to Braille symbols. Braille literacy among blind people is poor even in the developed world; in India it is extremely low, and Braille teachers are very, very few. So automating the teaching process – especially in an open and inexpensive way like this – is invaluable.

In its learning mode, Mudra uses Google’s speech API to translate single letters and numbers into Braille, so learners can go at their own speed. Exam modes and auto modes are also available. This whole video is well worth your time, but if you’re anxious to see the device in action, fast-forward to 1:30.
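The core of the learning mode – turning a recognised letter into a raised dot pattern – can be sketched in a few lines of Python. To be clear, the dot-to-pin wiring below is hypothetical and this isn’t Mudra’s actual code (that lives in their GitHub repository):

```python
# Show one letter on a 6-dot Braille cell driven from the Pi's GPIO pins.
from RPi import GPIO

# Standard 6-dot Braille patterns (dots numbered 1-6, left column top-to-bottom first).
BRAILLE = {'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5)}
DOT_PINS = {1: 4, 2: 17, 3: 27, 4: 22, 5: 5, 6: 6}  # hypothetical pin per dot

GPIO.setmode(GPIO.BCM)
for pin in DOT_PINS.values():
    GPIO.setup(pin, GPIO.OUT)

def show_letter(letter):
    """Raise the dots for one recognised letter; lower the rest."""
    raised = BRAILLE.get(letter.lower(), ())
    for dot, pin in DOT_PINS.items():
        GPIO.output(pin, dot in raised)

show_letter('b')  # e.g. after the speech API returns "b"
```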

Sanskriti and Aman say:

Mudra is an excellent example of what even programming newbies can achieve using Python. It is built on a Raspi to make it as out-of-the-box as possible. We have close to zero coding experience, yet Python has empowered us enough to make a social impact with Mudra, the braille dicta-teacher, which just might be the future of Braille instruction and learning.

We think Mudra’s a real achievement, and a great example of a clean and simple idea that can have exceptional impact. You can have a nose around the Mudra repository on GitHub if you’d like to see how things work; we’re hoping that Sanskriti and Aman are able to productise their idea and make it widely available to people all over the world.

Books, the digitising and text-to-speechifying thereof

A couple of book projects for you today. One is simple, practical and of great use to the visually impaired. The other is overcomplicated and a little bit nuts; nonetheless, we think it’s rather wonderful, and actually kind of useful if you’ve got a lot of patience.

We’ll start with the simple and practical one first: Kolibre is a Finnish non-profit making open-source audiobook software so you can build a reader with very simple controls. This is Vadelma, an internet-enabled audio e-reader. It’s very easy to put together at home with a Raspberry Pi: you can find full instructions and discussion of the project at Kolibre’s website.

The overriding problem with automated audio e-readers is always the quality of the text-to-speech voice, and it’s the reason that books recorded by real, live actors are currently so much more popular. But those are expensive, and it’s likely we’ll see innovations in text-to-speech as natural language processing research progresses (it’s challenging: people have been hammering away at this problem for half a century), and as this stuff becomes easier to automate and more widespread.

How easy is automation? Well, the good people at Dexter Industries decided that what the Pi community (which, you’ll have noticed, has a distinct crossover with the LEGO community) really needed was a robot that could use optical character recognition (OCR) to digitise the text of a book, Google Books style. They got that up and running pretty quickly with a Pi and a camera module, using the text on a Kindle as a proof of concept.
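The photograph-then-OCR step is surprisingly little code on a Pi. Here’s a hedged sketch – not Dexter Industries’ actual implementation – assuming the picamera and pytesseract Python packages (and the tesseract OCR engine) are installed; the filename is made up:

```python
# Photograph the current page with the camera module, OCR it, and read it aloud.
import subprocess
import pytesseract
from picamera import PiCamera
from PIL import Image

camera = PiCamera(resolution=(2592, 1944))  # the camera module's full still resolution
camera.capture('page.jpg')

text = pytesseract.image_to_string(Image.open('page.jpg'))
print(text)

# And the "text-to-speechifying" half, via the espeak command-line synthesiser:
subprocess.run(['espeak', text])
```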

But if you’re that far along, why stop there? The Dexter team went on to add LEGO features until they ended up with a robot capable of wrangling real paper books, right down to turning the page with a rubber wheel when the device has finished scanning the current text.

So there you have it: a Google Books project you can make at home, and a machine you can make to read the books to you when you’re done. If you want to read more about what Dexter Industries did, they’ve made a comprehensive writeup available at Makezine. Let us know how you get on if you decide to reduce your own library to bits.

MagPi issue 22

I’m about two weeks late to the party on this one – massive apologies to all at The MagPi. It’s been a bit busy around here so far this month. Right now, Picademy’s underway in the office space we’ve got set up as a classroom, and 24 teachers are busy making blooping noises with Sonic Pi while Clive booms at them in Teachervoice. It’s distracting but curiously enjoyable.

Alongside the preparation for Picademy, this month we’ve seen the launch of this new website, and the announcement about the new Compute Module. While all this was going on, the April edition of The MagPi came out, and I didn’t notice because I was too busy gluing Raspberry Pi logos on sticks and sending boxes of jam to Johnny Ball (true story).

MagPi April 14

As usual, The MagPi is full of wonderful things like internet-enabled garage doors, night lights that repel under-bed goblins, reviews, competitions, tutorials and much more. My favourite article this month discusses a solar cell (this month’s cover star) that tracks the sun to provide 140% more energy than a static cell. Go and read it online for free: you can also order a printed copy for your personal library or for your school. Thanks MagPi folks – I promise to be more timely about letting people know about next month’s issue!

BitScope Micro

We met the folks from BitScope some months ago to talk about their plans for a miniature scope especially for the Raspberry Pi. Today they’ve launched the product we discussed, and our socks have been comprehensively knocked off by it; the engineers who sit behind me think this is one of the most exciting Raspberry Pi add-ons they’ve seen so far.

bitscope

This is the BitScope Micro, built especially for the Raspberry Pi. It turns your Pi into a dual-channel digital oscilloscope, a multi-channel logic analyser, a waveform and clock generator and a spectrum analyser; it comes bundled with BitScope’s full suite of software (well worth a look if you’re even slightly interested – this thing has features coming out of the wazoo), and it’s probably the cheapest digital scope we’ve ever seen, coming in at US$95 if you buy in volume, and US$145 at one-off retail. You can read much more at BitScope’s website.

sampleoutput

Sample output

moresampleoutput

More sample output

We think that’s an incredibly good deal – for the same analogue bandwidth in a bench scope you’re looking at an instrument that’s many, many times the price, even at retail.

The BitScope Micro comes bundled with ten signal clips and a USB cable.

It’s exciting for us to see proper scientific instrumentation coming to the Pi; we’re looking forward to seeing what the community does with tools like the BitScope Micro. Give us your ideas in the comments.


New how-to animation – special guest star!

We’ve just taken delivery of another video from the lovely guys at Saladhouse Animation. This one’s for the new Quick Start page, to help out all those beginners who have asked which cables go where, and what they do.

UK readers over 30 or so might recognise the voice of our animated presenter.

Massive thanks, as always, to Sam Alder and Scott Lockhart at Saladhouse, who we love working with; they’re some of our favourite people. But the biggest thanks of all has to go to my childhood hero (he’s also the childhood hero of all the actual grown-ups in the office) – without this guy’s influence when I was a kid, I wouldn’t have ended up loving science, and I wouldn’t have ended up working on Raspberry Pi. Here he is at the recording session with Sam (mouth) and Scott (tall):

samscottjohnny

That’s the LEGENDARY Johnny Ball. Johnny donated his time and his voice talent to the Raspberry Pi Foundation, and we couldn’t be more grateful: Johnny, please watch out for the postman next week, ‘cos we’re sending you a present to say thanks.

Meet Jasper: open-source voice computing

Meet Jasper. He’s like Siri, but much better: he’s open-source and completely customisable. All you need to set up your own is a microphone, a speaker, and a Raspberry Pi.

Jasper already comes with modules to deal with things like time, weather, Gmail, playing your Spotify music, news (and what’s on Hacker News)… and knock knock jokes. You can build your own modules to add more functionality – something like the sketch below. We’re really impressed by how well-documented Jasper is; new developers should be able to get to grips with building on the platform very easily, and we’re looking forward to watching what you guys get up to with it.
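For a flavour of what a custom module looks like, here’s a hedged sketch following the pattern in Jasper’s documentation (a module exposes WORDS, isValid() and handle()); the greeting logic itself is made up for the example:

```python
# A toy Jasper module that replies to a spoken "hello".
import re

WORDS = ["HELLO"]  # keywords the speech recogniser should listen for

def isValid(text):
    """Return True if this module should handle the recognised text."""
    return bool(re.search(r'\bhello\b', text, re.IGNORECASE))

def handle(text, mic, profile):
    """Respond through the speaker abstraction Jasper passes in."""
    name = profile.get('first_name', 'there')
    mic.say("Hello, %s!" % name)
```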