Need a little help here if I may.
I am trying to develop robotic vision using OpenCV on my Raspberry Pi-based robot.
I am currently working on navigating around by identifying the floor and walls, with the idea being to move without hitting the walls. Alongside this I am working on detecting the walls of a maze in the same way and navigating around the maze. The floor is black and the walls are white (about 30 cm high).
I can successfully pick out the edges and better still with the correct camera angle also draw a line along the edges where they meet the floor.
What I am struggling with now is what to do with this information and how to use it to tell the robot where to go.
I have looked at line following using OpenCV and have successfully modified and used the PiBorg self-driving scripts (similar to those used for Formula Pi) https://www.piborg.org/blog/monsterborg ... self-drive to follow a white line, but this relies on having a nicely coloured line to follow around the maze, which is not an option for my project.
What I think I need to do is locate the base of the walls and pick two mid points between the walls, one in the middle distance and one in the near distance, draw a line corresponding to that data and then drive along that line, re-sensing, re-calculating and re-drawing as I go. I am unsure how to do this.
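The two-midpoint plan above can be turned into a steering command fairly directly: the near midpoint tells you how far off the corridor centre you are, and the line from the near to the far midpoint tells you which way the corridor is heading. A minimal sketch, where the function names, gains and the simple proportional steering law are all my own assumptions to be tuned on the real robot:

```python
import math

def midpoint(left_x, right_x):
    """Midpoint between the left and right wall-base x positions on one row."""
    return (left_x + right_x) / 2.0

def steering_from_midpoints(near_mid, far_mid, image_centre_x, row_gap_px):
    """Combine lateral offset and corridor heading into one command.
    Convention: positive = steer right, negative = steer left."""
    offset = near_mid - image_centre_x                    # off-centre error, px
    heading = math.atan2(far_mid - near_mid, row_gap_px)  # corridor direction, rad
    k_offset, k_heading = 0.01, 1.0                       # hypothetical gains
    return k_offset * offset + k_heading * heading

# Example: wall bases at x=40/160 on the near row and x=70/130 further up,
# in a 320 px wide image with 80 px between the two sampled rows.
near = midpoint(40, 160)   # corridor centre, near row
far = midpoint(70, 130)    # corridor centre, middle distance
cmd = steering_from_midpoints(near, far, image_centre_x=160, row_gap_px=80)
```

Run this every frame on the freshly detected wall bases and you get the "re-sense, re-calculate, re-draw" loop without ever needing a painted line to follow.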
The issue with the above is what happens if only one wall is found. I guess I could re-mount my camera on a servo and pan left and right to get a better overall view of the forward field of view (the camera is currently front mounted and fixed in position).
I could also work on an algorithm that follows only one wall, similar to a plan I had when using distance sensors.
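The one-wall fallback can be as simple as the distance-sensor version: hold the detected wall base at a fixed horizontal offset in the image with a proportional controller. A hypothetical illustration, with my own names and gain, using the convention that a positive command steers right:

```python
def follow_one_wall(wall_base_x, target_x, k_p=0.02, side="left"):
    """Steer to keep the wall's base at target_x in the image.
    side='left': wall expected on the left of the frame. If the wall base
    drifts right of target_x the robot is getting too close, so steer
    right (positive) to move away; mirror the sign for a right-hand wall."""
    error = wall_base_x - target_x
    if side == "left":
        return k_p * error
    return -k_p * error

# Wall base seen at x=80 but we want it at x=60: too close, steer right.
cmd = follow_one_wall(wall_base_x=80, target_x=60, side="left")
```

When both walls are visible you can use the midpoint approach, and drop back to this single-wall controller (or a servo pan to re-acquire the second wall) whenever only one base line is detected.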
Any pointers would help greatly.