JonnyAlpha
Raspberry Pi Certified Educator
Posts: 560
Joined: Sat Nov 02, 2013 2:06 pm

OpenCV Computer Vision - Direction Decision

Mon Nov 12, 2018 7:29 pm

Need a little help here if I may.

I am trying to develop robotic vision using OpenCV and my Raspberry Pi based robot.
I am currently working on navigation: identifying the floor and walls so that the robot can move around without hitting the walls. Alongside this I am detecting the walls of a maze in the same way and navigating around the maze. The floor is black and the walls are white (about 30cm high).

I can successfully pick out the edges and, better still, with the correct camera angle I can also draw a line along the edges where they meet the floor.

What I am struggling with now is what to do with this information and how to use it to tell the robot where to go :-)

I have looked at line following using OpenCV and have successfully modified and used the PiBorg self-driving scripts (similar to those used for Formula Pi) https://www.piborg.org/blog/monsterborg ... self-drive to follow a white line, but this relies on having a nice coloured line to follow around the maze, which is not an option for my project.

What I think I need to do is locate the base of the walls, pick two mid points between them (one in the middle distance and one in the near distance), draw a line through those points and then drive along that line, re-sensing, re-calculating and re-drawing as I go. I am not sure how to implement this, though.
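The sort of steering calculation I imagine is something like the sketch below (a sketch only, assuming I already had the column positions, in pixels, of the two wall bases at a near row and a far row of the image; the gains are made up and would need tuning):

Code: Select all

    # Sketch only: the *_x arguments are assumed column positions (pixels) of
    # the left/right wall bases at a "near" image row and a "far" image row.
    def midline_steering(near_left_x, near_right_x, far_left_x, far_right_x,
                         image_width):
        """Return a steering value in -1..1 (negative = steer left)."""
        near_mid = (near_left_x + near_right_x) / 2.0
        far_mid = (far_left_x + far_right_x) / 2.0
        centre = image_width / 2.0

        offset = (near_mid - centre) / centre    # how far off the mid-line we are
        heading = (far_mid - near_mid) / centre  # how much the mid-line slants away

        steering = 0.6 * offset + 0.4 * heading  # placeholder gains, tune on the robot
        return max(-1.0, min(1.0, steering))

Each frame the wheel speeds would then be biased left/right by that value, much as the PiBorg script does with its line offset, and everything re-sensed and re-calculated on the next frame.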

The issue with the above is: what if only one wall is found? I guess I could re-mount my camera on a servo and pan left and right to get a better overall view of the forward field of view (the camera is currently front mounted and fixed in position).

I could also work on an algorithm that follows only one wall, similar to a plan I had when using distance sensors.

Any pointers would help greatly.
Raspberry Pi Certified Educator. Main Hardware - Raspberry Pi 1 model B revision 2, Raspberry Pi 2 model B, Pi Camera

Joel_Mckay
Posts: 177
Joined: Mon Nov 12, 2012 10:22 pm
Contact: Website

Re: OpenCV Computer Vision - Direction Decision

Wed Nov 14, 2018 4:04 pm

In robotics, a fast robust algorithm is typically better than the theoretical ideal.

1. Optical flow is usually quite good, but not in the sparse-feature environment you describe.
I would go with simple edge detection and RANSAC to figure out the environment, mostly because doing transforms on lines is computationally very fast; see the first sketch after this list. (https://www.mrpt.org/ has several SLAM implementations to learn from.)

2. A simple "wavefront path planner" is easier for students to finish in a semester; there is a toy sketch of one after this list.
Something like a POMDP is over-ambitious for undergrads: https://en.wikipedia.org/wiki/Partially ... on_process
For competitions, I would suggest students use D*-based reachability planners: https://en.wikipedia.org/wiki/D*

3. Visual servoing is not as reliable as LIDAR in most cases, and most people end up combining the two (https://visp.inria.fr/).
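
On point 1, here is a minimal sketch of the edge-detection-plus-RANSAC idea: Canny for the edge pixels, then a hand-rolled RANSAC loop to pull out the dominant floor/wall line. The Canny thresholds, inlier tolerance and iteration count are placeholders you would tune for your camera and lighting.

Code: Select all

    import cv2
    import numpy as np

    def find_dominant_line(gray, iterations=200, inlier_tol=2.0):
        """Fit one dominant line to the Canny edge pixels with a basic RANSAC loop.

        Returns (vx, vy, x0, y0) as from cv2.fitLine, or None if too few edges.
        """
        edges = cv2.Canny(gray, 50, 150)          # thresholds need tuning
        ys, xs = np.nonzero(edges)
        points = np.column_stack((xs, ys)).astype(np.float32)
        if len(points) < 2:
            return None

        rng = np.random.default_rng()
        best_inliers = points[:2]
        for _ in range(iterations):
            p1, p2 = points[rng.choice(len(points), 2, replace=False)]
            direction = p2 - p1
            norm = np.hypot(*direction)
            if norm < 1e-6:
                continue
            # Perpendicular distance of every edge pixel to the candidate line.
            dist = np.abs(np.cross(direction, points - p1)) / norm
            inliers = points[dist < inlier_tol]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers

        # Least-squares refit over the inliers only.
        vx, vy, x0, y0 = cv2.fitLine(best_inliers, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        return float(vx), float(vy), float(x0), float(y0)

Mask the search to the lower half of the frame and run it once per side and you have your two wall-base lines; anything fancier (multiple lines, full SLAM) is where the MRPT examples come in.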
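
And on point 2, the wavefront planner is just a breadth-first flood fill of costs outwards from the goal over an occupancy grid; the robot then repeatedly steps into the neighbouring cell with the lowest cost. A toy version (grid layout and coordinates are made up for illustration):

Code: Select all

    from collections import deque

    NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def wavefront(grid, goal):
        """Flood-fill the cost back to the goal over a grid (0 = free, 1 = wall)."""
        rows, cols = len(grid), len(grid[0])
        cost = [[None] * cols for _ in range(rows)]
        cost[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in NEIGHBOURS:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and cost[nr][nc] is None):
                    cost[nr][nc] = cost[r][c] + 1
                    queue.append((nr, nc))
        return cost

    def path_from(cost, start):
        """Walk downhill on the cost map from start until the goal (cost 0)."""
        path = [start]
        r, c = start
        while cost[r][c] not in (None, 0):
            candidates = [(r + dr, c + dc) for dr, dc in NEIGHBOURS
                          if 0 <= r + dr < len(cost) and 0 <= c + dc < len(cost[0])
                          and cost[r + dr][c + dc] is not None]
            r, c = min(candidates, key=lambda rc: cost[rc[0]][rc[1]])
            path.append((r, c))
        return path

Rasterise the detected maze walls into that grid, put the goal cell at the far end, and path_from() gives a cell-by-cell route the drive code can follow; D* is essentially the same idea with efficient re-planning when new walls appear.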

There are also several heuristics-based approaches that will prove more reliable, depending on the environment.
;-)
Cheers,
~J~
