Different lighting conditions should be manageable in a simpler way than TensorFlow.
e.g. see this tutorial, which uses only colour matching: https://www.pyimagesearch.com/2015/09/1 ... th-opencv/
No need for circle detection or neural networks - anything matching colour, height/width aspect ratio and size (see below) should be sufficient.
Since both the robot and the balls rest on the same flat surface, the distance to a potential ball (matched by colour/light values) can be calculated from its vertical position in the frame.
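That flat-floor trick is just pinhole geometry. A sketch, with made-up numbers for camera height, downward tilt and focal length (all of which you'd measure or calibrate on the real robot):

```python
import math

def ground_distance(pixel_row, cam_height_m, tilt_rad, focal_px, cy_px):
    """Distance along the floor to the point a given pixel row looks at.

    Assumes a pinhole camera mounted cam_height_m above a flat floor,
    tilted down by tilt_rad, with vertical focal length focal_px and the
    optical centre at row cy_px. Rows lower in the image are closer.
    """
    angle_below_horizon = tilt_rad + math.atan((pixel_row - cy_px) / focal_px)
    if angle_below_horizon <= 0:
        return float("inf")  # at or above the horizon: not a point on the floor
    return cam_height_m / math.tan(angle_below_horizon)

# Hypothetical calibration: camera 10 cm up, tilted 0.3 rad down,
# focal length 500 px, optical centre at row 240 (a 640x480 image).
d_centre = ground_distance(240, 0.10, 0.3, 500.0, 240)
d_lower = ground_distance(300, 0.10, 0.3, 500.0, 240)
```

As expected, the lower row (`d_lower`) gives a shorter distance than the image centre (`d_centre`).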
Knowing the distance, computing the real-world size of a potential ball from its blob's pixel size - to check whether it reasonably matches the size of a real ball - should also be easy.
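That size check is one line of pinhole projection. A sketch, again with a made-up focal length, ball diameter and tolerance:

```python
def expected_pixel_diameter(real_diameter_m, distance_m, focal_px):
    """Pinhole projection: apparent size shrinks linearly with distance."""
    return focal_px * real_diameter_m / distance_m

def plausible_ball(blob_px, real_diameter_m, distance_m, focal_px, tol=0.35):
    """Accept a blob if its pixel size is within tol of the expected size."""
    expected = expected_pixel_diameter(real_diameter_m, distance_m, focal_px)
    return abs(blob_px - expected) <= tol * expected
```

For example, a 4 cm ball at 0.5 m with a 500 px focal length should appear about 40 px across, so a 45 px blob passes and a 120 px blob (probably two balls merged, or something else yellow) is rejected.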
Expect problems with several balls grouped together; bumping into any "object" that matches the colour but not the size and/or aspect ratio should spread the grouped balls apart and make them visible individually.
If that doesn't work, I would even go as far as having a "training mode": you point a laser pointer at a ball, the robot follows the laser dot, and when the laser turns off the robot stores the remaining coloured blob as its target.
PS I have a pair of the same yellow motor-gear-wheels and I want to build a three-wheel robot.
I also want to have it play using a Pi Zero, a camera and a laser pointer.
Can you please share details on how they attach to your frame? Did you 3D print it?