The code is now on GitHub!
I would like to show the robot I built this year for the RoboCup Junior competition Flanders 2014, category ‘advanced rescue’. I won third prize with it.
A video of the robot following the line: link
The robot is – of course – based on a Raspberry Pi. I used a model B running Raspbian.
This is not my first robot for the RCJ competition. In 2013 I won the competition with a robot that used the Dwengo board as its control unit, together with reflectance and home-made colour sensors. The Dwengo board is built around the popular PIC18F4550 microcontroller and has, among other features, a motor driver, a 2x16 character display and a big 40-pin extension connector. The Dwengo board is, like the RPi, designed for educational purposes, with projects in Argentina
As the Dwengo board is a good companion for the Raspberry Pi, I decided to combine both boards in my new robot. While the Pi does high-level image processing, the microcontroller controls the robot.
The Raspberry Pi was programmed in C++ using the OpenCV libraries, the wiringPi library (from Gordon Henderson) and the RaspiCam OpenCV interface library (from Pierre Raufast, improved by Emil Valkov). I overclocked the Pi to 1GHz to get a frame rate of 12 to 14 fps.
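For anyone who wants to try the same: on Raspbian the overclock is set in /boot/config.txt. A 1GHz configuration looks roughly like the lines below (these are the standard ‘Turbo’ preset values, not necessarily the exact ones I used):

    # /boot/config.txt - overclock settings (illustrative 'Turbo' preset)
    arm_freq=1000     # ARM core at 1GHz
    core_freq=500     # GPU core frequency
    sdram_freq=600    # memory frequency
    over_voltage=6    # extra core voltage for stability at these clocks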
Using a camera has some big advantages. First of all, you don’t have a bunch of sensors mounted close to the ground that catch on obstacles and are upset by surface irregularities. The second benefit is that you can see what is in front of the robot without having to build a swinging sensor arm. So you have information not only about the actual position of the robot above the line, but also about the position of the line ahead, which allows you to calculate the curvature of the line.
In short, following the line is much more controllable.
By using edge detection rather than greyscale thresholding, the program is virtually immune to shadows and grey zones in the image.
If the line had had fewer hairpin bends and I had had a bit more time, I would have implemented a speed-regulating algorithm based on the curvature of the line, as sketched below. This would surely have improved the performance of the robot.
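As an illustration only – this never made it into the robot – such a regulator could simply map the measured curvature to a target speed, slowing down for tight bends. The constants and names below are made up:

    // Illustrative sketch: lower the target speed as the line curvature grows.
    // kGain, kMinSpeed and kMaxSpeed are hypothetical tuning constants.
    #include <algorithm>
    #include <cmath>

    float speedFromCurvature(float curvature) {
        const float kGain = 400.0f;      // speed lost per unit of curvature
        const float kMinSpeed = 60.0f;   // never stall in a hairpin
        const float kMaxSpeed = 255.0f;  // full PWM on a straight
        return std::clamp(kMaxSpeed - kGain * std::fabs(curvature),
                          kMinSpeed, kMaxSpeed);
    }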
I also used the camera to detect and track the green direction fields at a T-junction where the robot has to take the right direction. I used a simple colour blob tracking algorithm for this.
A short video of what the robot ‘thinks’. Please note that in reality the robot goes a little bit slower when following the line.
Different steps of the image processing:
Image acquired by the camera (with some lines and points already added):
The RPi converts the colour image to a greyscale image.
Then the pixel values on a horizontal line in the image are extracted and put into an array. This array is visualized by plotting the values in a graph (also with OpenCV):
From the first array, a second one is calculated by taking the difference of each two successive values. In other words, we calculate the derivative.
A loop then searches for the highest and lowest value in the derivative array; these correspond to the two edges of the line. To get the horizontal position of the line, the two edge positions (on the horizontal x axis in the graphed image) are averaged. This position is stored for the next horizontal scan on a new image, so the scan line does not have to span the whole image but only about a third of it, and it moves horizontally so that its centre stays roughly above the line.
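A minimal sketch of this scan step, assuming a greyscale cv::Mat and made-up names (a reconstruction, not my competition code):

    // Find the line on one horizontal scan line by edge detection.
    // 'gray' is the greyscale frame, 'lastX' the position found in the
    // previous frame; the scan spans about a third of the image width.
    #include <algorithm>
    #include <opencv2/opencv.hpp>

    int scanLinePosition(const cv::Mat& gray, int row, int lastX) {
        int half = gray.cols / 6;
        int x0 = std::max(lastX - half, 0);
        int x1 = std::min(lastX + half, gray.cols - 2);

        int minD = 0, maxD = 0, fallX = lastX, riseX = lastX;
        for (int x = x0; x <= x1; ++x) {
            // derivative: difference of two successive pixel values
            int d = gray.at<uchar>(row, x + 1) - gray.at<uchar>(row, x);
            if (d < minD) { minD = d; fallX = x; }  // bright-to-dark edge
            if (d > maxD) { maxD = d; riseX = x; }  // dark-to-bright edge
        }
        return (fallX + riseX) / 2;  // centre of the dark line
    }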
But this is not enough for accurate tracking. From the calculated line position, circles following the line are constructed, each using the same method (but with considerably more trigonometry, as these scan lines are curved). For the second circle, not only the line position but also the line angle is used. Thanks to the use of functions, adding a circle is a matter of two short lines of code.
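A hedged sketch of such a curved scan line: sample the grey values along a circular arc around the last known line point and reuse the same derivative search. The geometry and names here are my assumptions:

    // One curved scan line: sample grey values along a circular arc of
    // radius r around the last line point c, then apply the same edge
    // search as on the straight scan line to get the line's angle.
    #include <algorithm>
    #include <cmath>
    #include <vector>
    #include <opencv2/opencv.hpp>

    float arcScan(const cv::Mat& gray, cv::Point c, float r,
                  float a0, float a1, int steps) {
        std::vector<int> v(steps + 1);
        for (int i = 0; i <= steps; ++i) {
            float a = a0 + (a1 - a0) * i / steps;
            int x = std::clamp(cvRound(c.x + r * std::cos(a)), 0, gray.cols - 1);
            int y = std::clamp(cvRound(c.y - r * std::sin(a)), 0, gray.rows - 1);
            v[i] = gray.at<uchar>(y, x);   // image y axis points down
        }
        int fall = 0, rise = 0, minD = 0, maxD = 0;
        for (int i = 0; i < steps; ++i) {
            int d = v[i + 1] - v[i];
            if (d < minD) { minD = d; fall = i; }  // bright-to-dark edge
            if (d > maxD) { maxD = d; rise = i; }  // dark-to-bright edge
        }
        float mid = 0.5f * (fall + rise);
        return a0 + (a1 - a0) * mid / steps;  // angle of the line on the arc
    }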
The colour tracking is done by converting the image to HSV, thresholding it and then tracking the blob, as explained in this excellent video. The colour tracking slows the line following down by a few fps, but this is acceptable.
HSV image and thresholded image:
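A minimal sketch of this colour step with OpenCV's inRange and moments; the HSV bounds for ‘green’ are illustrative and would need tuning:

    // Track the green direction field as a colour blob.
    #include <opencv2/opencv.hpp>

    bool findGreenBlob(const cv::Mat& bgr, cv::Point2d& centre) {
        cv::Mat hsv, mask;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        // keep only pixels within the (assumed) green HSV range
        cv::inRange(hsv, cv::Scalar(40, 80, 50), cv::Scalar(85, 255, 255), mask);
        cv::Moments m = cv::moments(mask, true);  // binary image moments
        if (m.m00 < 100) return false;            // too few pixels: no field seen
        centre = cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);
        return true;
    }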
As seen in the video, all the scan lines and some info points are finally plotted on the input image, so we can see what the robot ‘thinks’.
Has anyone made or seen projects using a similar line finding method?
After the Raspberry Pi has found the line, it sends the position data and commands at 115.2 kbps over the hardware serial port to the Dwengo microcontroller board. The Dwengo board does some additional calculations, like taking the square root of the proportional error and squaring the ‘integral error’ (the curvature of the line). I used a serial interrupt and made the serial communication as robust as possible by receiving each character separately, so the program does not wait for the next character while inside the serial interrupt.
The Dwengo board sends an answer character to control the data stream. The microcontroller also reads the analogue output of the Sharp IR long range sensor to detect obstacles and to scan for the container.
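On the Pi side, such a handshake could look roughly like this with wiringPi's serial functions (the frame layout and the answer character 'A' are my assumptions, not the robot's actual protocol). The error shaping on the Dwengo side would then be something like correction = Kp·sign(e)·sqrt(|e|) + Ki·c·|c|.

    // Sketch of the Pi-side serial link using wiringPi's serial interface.
    // The one-byte 'A' acknowledgement and the frame layout are assumptions.
    #include <wiringSerial.h>

    int main(void) {
        int fd = serialOpen("/dev/ttyAMA0", 115200);  // hardware UART of the Pi
        if (fd < 0) return 1;

        for (;;) {
            int position = 0, curvature = 0;
            // ... run the vision pipeline here to fill position/curvature ...

            serialPutchar(fd, 'P');                   // hypothetical frame header
            serialPutchar(fd, (unsigned char)position);
            serialPutchar(fd, (unsigned char)curvature);

            // flow control: wait for the Dwengo board's answer character
            while (serialGetchar(fd) != 'A') {}
        }
        serialClose(fd);  // not reached in this sketch
    }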
In short, the microcontroller controls the robot while the Raspberry Pi does an excellent job running the CPU-intensive line-following program.
But let's get a bit more technical.
As a common power supply may give problems with ground loops, electrical noise, ... the robot has two 2S LiPo batteries of 1300mAh each, powering the RPi and the Dwengo board separately.
Both devices are interconnected by two small boards – one attaches to the RPi and the other to the Dwengo board – that are joined by a right-angle header connection. The first board does the logic level conversion with some resistors (the Dwengo board runs on 5V); the latter board also has two DC jacks with diodes in parallel for power input to the RPi. To regulate the power to the Pi, I used a Turnigy UBEC that delivers a stable 5.25V and feeds it into the Pi through the micro USB connector. This gives a bit more protection to the sensitive Pi. As the camera image was a bit distorted, I added a 470uF capacitor to smooth things out; this helped. Even though the whole Pi got hot, the UBEC stayed cold. The power input was between 600 and 700mA at around 8.2 volts.
In most line followers, the driving wheels are mounted behind the centre of gravity (some info about line followers). This is so that the sensors in front of the robot can be positioned above the line by making one wheel run slower than the other (or simply blocking it, as most other robots did). However, there were two reasons I wanted the driving wheels in front. Firstly, the robot has to take some bumps, and with an undriven front wheel it is very likely the robot will be pushed off the line. Secondly, as I am using a camera, I am not at all concerned about the front of the robot being exactly above the line. One of the spectators at the RCJ competition remarked: “Your robot turns before the turn – it looks intelligent!”
I used a swivelling wheel to make the robot as free as possible in its movements. Last year I used a ball caster, and I found it didn’t work well on rougher terrain.
However, the drawback this year was that the robot had a tendency to keep turning for a short time after a turn. If I had had some more time, I would have mounted a reflectance sensor on each wheel to control the motor speeds. That would also have been useful to detect whether the robot was in front of a bump or hill.
The Sharp IR long range sensor at the front of the robot is used for detecting the objects. At first, I mounted it horizontally. However, this turned out not to be a good arrangement for this model (I will look up the exact type). By mounting it vertically, with the IR source at the top, I had no problems anymore.
The standard horizontal FOV of the Pi camera is roughly 53 degrees. This is not enough to follow the line. With a wide-angle lens I widened it to around 100 degrees.
With the extra battery, camera lens, RPi, ... the weight of the robot increased. Last year's robot weighed around 450g; this one weighs about 800g! However, as the motors were generously over-dimensioned, this caused few problems. The robot handles bumps of 10mm without effort.
Last year, I almost missed first place as the robot only just pushed the can out of the field. Not a second time! With this in mind, I constructed two 14cm long arms that could be swung open by two 9g servos. With the two grippers opened, the robot spans almost 40 centimetres. Despite this, the robot managed – to everyone's annoyance – ‘to take its time before doing its job’, as can be seen in the video.
Building the robot platform
To build the robot platform I used the same technique as the year before (link, in Dutch): I made a design in SketchUp, converted it to a 2D vector drawing and finally laser-cut it at FabLab Leuven. However, the new robot platform is completely different in design. Last year, I made a ‘riding box’ by taking almost the maximum dimensions and mounting the electronics somewhere on or in it.
This time, I took a different approach. Instead of using an outer shell (like insects have), I made a design that supports and covers the parts only where necessary.
The result is that the robot not only looks much better, but also that the different components are much easier to mount and that there is more space for extensions and extra sensors.
The design files can be found here: Robot RoboCup Junior – FabLab Leuven
3D renders in SketchUp:
On the day of the RCJ competition I had some bad luck, as there wasn’t enough light in the competition room. The shutter time of the camera became much longer and, as a consequence, the robot had much more difficulty following sharp bends in the line. However, this problem did not affect the final outcome of the competition.
Maybe I should have mounted some LEDs to illuminate the line...
Arne Baeyens, 17 - Robotanicus