Whilst prepping for PiWars 2018 I have started to look at OpenCV for robot vision. I am currently working on the Maze challenge and am looking at a couple of approaches.
One option is to navigate the maze by locating the boundaries of the maze (the walls) and driving along a centre line in between the walls. This would involve using Canny edge detection followed by a Hough Line Transform.
The other approach, which I would like help with, involves mapping the maze: either building up a map of the maze as obstacles are detected, or navigating using a pre-loaded 2D image of the maze with symbols to confirm the robot's location.
Or, better still, a combination of maze mapping and OpenCV vision.
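For the build-a-map-as-you-go idea, the simplest structure I can think of is an occupancy grid, with each cell marked free or wall as the vision code reports obstacles. This is just a minimal sketch of that idea; the grid size and cell states are assumptions, not anything fixed by the challenge:

```python
import numpy as np

class MazeMap:
    """Occupancy grid built up as the robot drives, in maze cell coordinates."""
    FREE, WALL, UNKNOWN = 0, 1, -1

    def __init__(self, width_cells, height_cells):
        # Everything starts unknown until a sensor or camera reports on it.
        self.grid = np.full((height_cells, width_cells), self.UNKNOWN, dtype=np.int8)

    def mark(self, x, y, state):
        self.grid[y, x] = state

    def is_blocked(self, x, y):
        # Treat unknown cells as passable until proven otherwise.
        return self.grid[y, x] == self.WALL

# Usage: when an obstacle is detected ahead, mark the cell in front as a wall.
m = MazeMap(8, 8)
m.mark(3, 0, MazeMap.WALL)
m.mark(2, 0, MazeMap.FREE)
```

Once the grid is populated, a standard search (e.g. breadth-first) over the FREE cells would give a route through the maze.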
Obviously, navigating in this manner, moving in a given direction for a specified amount of time (based on knowing how far the robot travels per second), is fraught with issues such as wheel spin. But the added advantage of using OpenCV line/shape detection to identify symbols and obstacles will help to build a relative picture of the robot's world.
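The timed-movement part is basic dead reckoning; a sketch of the position update, assuming a calibrated ground speed (which wheel spin will make optimistic, hence the need for symbol sightings to correct it):

```python
import math

def dead_reckon(x, y, heading_deg, speed_mps, duration_s):
    """Estimate the new (x, y) position in metres after driving straight
    at speed_mps for duration_s seconds on the given heading.
    Wheel spin makes this an over-estimate, so the result should be
    corrected whenever the robot recognises a symbol at a known spot."""
    dist = speed_mps * duration_s
    rad = math.radians(heading_deg)
    return x + dist * math.cos(rad), y + dist * math.sin(rad)

# Drive on heading 0 degrees at 0.2 m/s for 2 s: about 0.4 m travelled.
nx, ny = dead_reckon(0.0, 0.0, 0.0, 0.2, 2.0)
```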
What I would like help with initially is a steer on how to get the robot to read a pre-loaded image and/or build a map based on what it sees?
Thanks in advance.