mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Fri Jan 01, 2016 6:01 pm

How about this: fixed this morning (for the most part, anyway).

Code: Select all

import setproctitle #Set process name to something easily killable
import cv2 #Computer Vision Libraries for webcam use
import subprocess #so I can run subprocesses in the background if I want
import ConfigParser #To read the config file modified by menu.py
from subprocess import call #to call a process in the foreground
import csv #To make an array of the certainty and identity results so we can find the top matches
from operator import itemgetter

setproctitle.setproctitle("jetpac")
Config = ConfigParser.RawConfigParser()
Config.read('/home/pi/aftersight.cfg')

ConfigJetpacNetwork = Config.get('AfterSightConfigSettings','configjetpacnetwork') #Get the path to the selected network from the config
ConfigJetpacThreshold = Config.get('AfterSightConfigSettings','configjetpacthreshold') #Set the identification threshold
ConfigJetpacCamera = Config.get('AfterSightConfigSettings','configjetpaccamera') #Get Selected Camera - Not implemented yet

print ConfigJetpacNetwork
print ConfigJetpacThreshold
print ConfigJetpacCamera

camera_port = 0  #Open Camera 0
camera = cv2.VideoCapture(camera_port) #define where I dump camera input


classify = "/home/pi/classify.txt"
X = 0 # counter used to flush frames from videocapture

while 1:
        while X < 30: #Number of frames to flush
                return_value, image = camera.read()
                X=X+1
        X = 0
        cv2.imwrite("/home/pi/opencv.png", image) #write with an absolute path so the jpcnn call below finds it regardless of working directory

        #./jpcnn -i data/dog.jpg -n ../networks/jetpac.ntwk -t -m s -d
        #Command Line for jetpac I dropped -t and -d because when it works I dont need to know how long it takes or the debug info
        with open (classify, "w+") as f: #write results to classify.txt
                call (["sudo","/home/pi/projects/DeepBeliefSDK/source/./jpcnn","-i","/home/pi/opencv.png","-n",ConfigJetpacNetwork,"-m","s"],stdout = f,stderr = f)
        call (["sudo","rm","-rf","opencv.png"])
        data = csv.reader(open('classify.txt', 'rb'), delimiter="       ", quotechar='|')
        certainty, identity = [], []

        for row in data:
                certainty.append(row[0])
                identity.append(row[1])

        #print certainty
        #print identity

        certainty = [float(i) for i in certainty] # Convert Certainty from string to float to allow sorting
        matrix = zip(certainty, identity) #combine them into a two-dimensional list
        matrix.sort(key=itemgetter(0), reverse=True) #Sort Highest to Lowest Based on Certainty
        #Now Espeak the top three terms if they are > threshold
        print matrix[0]
        print matrix[1]
        print matrix[2]
        topthree = [x[1] for x in matrix[0:3]]
        espeakstring = str(topthree[0]) + " " + str(topthree[1]) + " " + str(topthree[2])
        print espeakstring
        espeak_process = subprocess.Popen(["espeak",espeakstring, "--stdout"], stdout=subprocess.PIPE)
        subprocess.Popen(["aplay", "-D", "sysdefault"], stdin=espeak_process.stdout, stdout=subprocess.PIPE)
        #call (["sudo","espeak",topthree[0]])
        #call (["sudo","espeak",topthree[1]])
        #call (["sudo","espeak",topthree[2]])
        call (["sudo","rm","-rf","/home/pi/classify.txt"]) #remove last run of classification info
camera.release()  # so that others can use the camera as soon as possible
I put in a loop that flushes camera frames. I've made it flush 30 frames and get OK performance. I still often hear the same thing twice, but that has more to do with jetpac's long processing time. Values lower than 15 seemed to be the same as not flushing at all.

I've increased the speed at which the objects are read back using espeak and suppressed the stdout error messages to do troubleshooting more easily.

Next up will be to set a threshold for recognition and only read back objects that exceed it. I find that oftentimes I am looking at something with a good recognition value greater than 15%, and it also reads back two items that are less than 2%.

I'm still having much better success with the libccv2012 network, but I haven't gone on a nature walk yet.

seeingwithsound
Posts: 165
Joined: Sun Aug 28, 2011 6:07 am
Contact: Website

Re: Sight for the Blind for <100$

Fri Jan 01, 2016 7:21 pm

Hi Mike,

Great to hear that you put your finger on the sore spot. Yet I think it needs to be taken one step further, because a magic number like dropping 30 frames sounds like it is going to break for any of a number of reasons: a different capture frame rate, a future Raspberry Pi device, a neural network with a different CPU load, and so on. With multiple heuristic workarounds adding up, debugging can become a huge pain. When jetpac has consumed (analyzed) an image, anything captured since that image was captured should be discarded, or something like that. I have no knowledge of the Raspberry Pi software stack involving image capture and jetpac, so I can only ask: is this doable?
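
One pattern that might make this doable (a minimal, untested sketch, not specific to the Pi stack; the class and names are mine, purely for illustration) is to let a background thread keep overwriting a single "latest frame" slot, so whatever jetpac grabs next is always the newest capture and everything older is implicitly discarded:

Code: Select all

import threading
import cv2

class LatestFrame:
    #A reader thread continuously overwrites one slot with the newest frame,
    #so anything captured before the last read is implicitly discarded
    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        self.frame = None
        self.lock = threading.Lock()
        t = threading.Thread(target=self._update)
        t.daemon = True #do not keep the process alive for this thread
        t.start()

    def _update(self):
        while True:
            grabbed, frame = self.stream.read()
            if grabbed:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return self.frame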

Secondly, I do not know if the confidence scores are comparable to those used by BlindTool for Android, but that app appears to only speak names of objects that have a score exceeding a threshold of 0.25. Doing something similar, as you propose, sounds good to me. However, sometimes it is the second or third candidate that is closer to reality, and it would be a waste not to use the user's intelligence and knowledge of context by discarding those alternative candidates if they still have fair confidence scores. If you are in a room and jetpac detects something with 4 legs and its best bet is that it is an elephant, the blind user knows that that must be nonsense, and also hearing "table" as a second-best bet may then be helpful. Maybe speak all candidates with a confidence score above, say, 0.1, up to a maximum of 3 candidates?
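
In terms of the jetpac.py code above, that could look something like this (an untested sketch; 'matrix' is the sorted (certainty, identity) list from the loop, and the 0.1 cutoff is just my suggestion):

Code: Select all

SPEAK_THRESHOLD = 0.1 #assumed cutoff, tune to taste
#speak at most three candidates, and only those above the cutoff
candidates = [identity for certainty, identity in matrix[:3] if certainty > SPEAK_THRESHOLD]
if candidates:
    espeakstring = " ".join(candidates)
else:
    espeakstring = "Nothing recognized"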

Thanks!

Peter


Seeing with Sound - The vOICe
http://www.seeingwithsound.com

seeingwithsound
Posts: 165
Joined: Sun Aug 28, 2011 6:07 am
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 7:44 am

FYI: Wired yesterday discussed Baidu's DuLight, a device based on deep-learning neural networks that appears functionally very similar to what is being developed here for Raspberry Pi using jetpac/deep belief/teradeep, and also similar to BlindTool for Android: http://www.wired.com/2016/01/2015-was-t ... day-world/

Peter


The vOICe Sensory Substitution for Android
http://www.seeingwithsound.com/android.htm
https://play.google.com/store/apps/deta ... OICe.vOICe

mr_indoj
Posts: 42
Joined: Wed Jul 01, 2015 9:28 am

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 4:34 pm

This is a modification that uses a background thread to read the frames.
You will need to use pip to get the imutils package first:

Code: Select all

$ sudo pip install imutils
Also, I moved the temp files to /dev/shm for ramdisk access.

Code: Select all

import setproctitle #Set process name to something easily killable
import cv2 #Computer Vision Libraries for webcam use
import subprocess #so I can run subprocesses in the background if I want
import ConfigParser #To read the config file modified by menu.py
from subprocess import call #to call a process in the foreground
import csv #To make an array of the certainty and identity results so we can find the top matches
from operator import itemgetter
from imutils.video import WebcamVideoStream
setproctitle.setproctitle("jetpac")
Config = ConfigParser.RawConfigParser()
Config.read('/home/pi/aftersight.cfg')

ConfigJetpacNetwork = Config.get('AfterSightConfigSettings','configjetpacnetwork') #Get the path to the selected network from the config
ConfigJetpacThreshold = Config.get('AfterSightConfigSettings','configjetpacthreshold') #Set the identification threshold
ConfigJetpacCamera = Config.get('AfterSightConfigSettings','configjetpaccamera') #Get Selected Camera - Not implemented yet

print ConfigJetpacNetwork
print ConfigJetpacThreshold
print ConfigJetpacCamera

camera_port = 0 #Open Camera 0
camera = WebcamVideoStream(src=camera_port).start() #define where I dump camera input


classify = "/dev/shm/classify.txt"


while 1:
	image = camera.read()
	cv2.imwrite("/dev/shm/opencv.png", image)

	#./jpcnn -i data/dog.jpg -n ../networks/jetpac.ntwk -t -m s -d
	#Command Line for jetpac I dropped -t and -d because when it works I dont need to know how long it takes or the debug info
	with open (classify, "w+") as f: #write results to classify.txt
		call (["sudo","/home/pi/projects/DeepBeliefSDK/source/./jpcnn","-i","/dev/shm/opencv.png","-n",ConfigJetpacNetwork,"-m","s"],stdout = f,stderr = f)
	call (["sudo","rm","-rf","/dev/shm/opencv.png"])
	data = csv.reader(open('/dev/shm/classify.txt', 'rb'), delimiter="	", quotechar='|')
	certainty, identity = [], []

	for row in data:
		certainty.append(row[0])
		identity.append(row[1])

	#print certainty
	#print identity

	certainty = [float(i) for i in certainty] # Convert Certainty from string to float to allow sorting
	matrix = zip(certainty, identity) #combine them into a two-dimensional list
	matrix.sort(key=itemgetter(0), reverse=True) #Sort Highest to Lowest Based on Certainty
	#Now Espeak the top three terms if they are > threshold
	print matrix[0]
	print matrix[1]
	print matrix[2]
	topthree = [x[1] for x in matrix[0:3]]
	espeakstring = str(topthree[0]) + " " + str(topthree[1]) + " " + str(topthree[2])
	print espeakstring
	espeak_process = subprocess.Popen(["espeak",espeakstring, "--stdout"], stdout=subprocess.PIPE)
	subprocess.Popen(["aplay", "-D", "sysdefault"], stdin=espeak_process.stdout, stdout=subprocess.PIPE)
	#call (["sudo","espeak",topthree[0]])
	#call (["sudo","espeak",topthree[1]])
	#call (["sudo","espeak",topthree[2]])
	call (["sudo","rm","-rf","/dev/shm/classify.txt"]) #remove last run of classification info
camera.stop()
del(camera) # so that others can use the camera as soon as possible

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 4:40 pm

Excellent!
I shall try this out after work today.

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 5:36 pm

Regarding the confidence scores, they are handled almost exactly as you described, and I have already created a config variable for it in the jetpac settings menu called 'threshold'.

I have not implemented a method of changing the setting yet because I didn't know what to expect in operation. I think having it start at 5% and go up in steps of 5 to a max of 30% is probably the way to go.

It will be no big deal to import the setting, and only a little work to apply it to the sorted values with an if statement or two.
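
Roughly this for the cycler (an untested sketch; the real version will live in menu.py and be saved back through ConfigParser):

Code: Select all

step = int(round(float(ConfigJetpacThreshold) * 100)) + 5 #advance one 5% step, working in whole percent to dodge float drift
if step > 30: #wrap from the 30% maximum back to 5%
    step = 5
ConfigJetpacThreshold = "%.2f" % (step / 100.0) #store back as a string for the config file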

PranavLal
Posts: 124
Joined: Fri Jun 28, 2013 4:49 pm

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 5:46 pm

Hi all,

I have tried the image that mikey11 posted. It does not contain the modifications that have been made to jetpac.py. I will be trying those in a bit.

A few points.
1. I really like the ability to set the speed of raspivoice.
2. The problem I have is that the menu does not speak as I turn the knob while jetpac or the vOICe is running. I would like to run both programs together. I appreciate the problems with this, so let's not make it an immediate requirement.
3. As for the object recognition, it first thought my hand was a cloak, then a thimble, and then something else. It seems to have a fascination for recognizing items as toilet tissue. <grin>
4. Mike, is it possible to change the on/off switch on the unit? It is giving me too many problems. I am never sure if I have turned on the device correctly.
5. Can we have a speech message saying that the unit is ready for use? It was a tad disconcerting after the disclaimer when nothing happened.

Pranav

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 6:38 pm

Hi Pranav,

Thanks for the feedback; some of it can be implemented immediately, some of it is not so easy.

1. With the persistent configuration files, we can now add a number of additional persistent settings. Basically anything that can be passed to raspivoice on the command line can be made into a persistent setting. If you check out options.cpp in the raspivoice github repository, I count about fifteen items. I just added the ones I figured would be the most important first. Some don't even make sense; for instance, I would not want to autostart with a zoomed setting. Others are more likely, such as starting with a brightness, threshold, or edge detection setting. If you want more settings in the persistent config, I will be happy to add them.

2. I will continue to try to figure out how to use v4l2loopback or the avconv methods of creating a video tee from the main camera. Until this happens I can't run raspivoice and jetpac together (unless there are two cameras...). I would probably still keep the main menu quiet when either of those is running. The method of operation would be this: start the device, change raspivoice to autostart, change jetpac to autostart, reset the device. Both programs will start on boot (they don't yet, but they will when the video tee problem is solved). I have to keep the main menu quiet in order for the raspivoice menu options to be used. The way to get back to the main menu is to hold in the pushbutton for ~4 seconds. This is how I see simultaneous running working eventually.


3. At first I didn't love the recognition, but now with some of the updates it is working much better. Yesterday I walked outside into the parking lot of an industrial facility. It correctly identified trucks. It also made a misidentification that was understandable: I looked at a set of six industrial steam towers, and it identified them as a church, mosque, or palace. Not quite right, but I found it impressive.
4. I have made major updates to the hardware since I sent your unit out. I would like to send both you and Peter the newest revision. I am planning on doing that when I get the time. I expect to ship them within a week or two. It should help with your concerns.
5. This is easy to implement, and will be done at my first opportunity.

seeingwithsound
Posts: 165
Joined: Sun Aug 28, 2011 6:07 am
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 6:48 pm

PranavLal wrote:3. As for the object recognition, it first thought my hand was a cloak, then a thimble, and then something else. It seems to have a fascination for recognizing items as toilet tissue. <grin>
Hi Pranav,

Regarding your point 3: You can find a list of about 1,000 classes at

http://image-net.org/challenges/LSVRC/2 ... se-synsets

which I think is similar to or even identical to the set of classes that the freely available neural networks currently map into. That list is meant for use in image recognition contests for computer vision researchers, and not intended for practical use in daily living situations. I too found that my hand was never recognized as a hand, but that is apparently just because the word "hand" is not included as a separate entry in this list of 1,000 classes. Your "thimble" and "cloak" are included in the list.

Peter


Seeing with Sound - The vOICe
http://www.seeingwithsound.com

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 10:49 pm

mr_indoj,

I had to replace the csv delimiter after copying from here. It copies and pastes from the forum as three spaces (i.e. " " x 3), when in actual fact it has to be a tab character.
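
For anyone else copying the code: writing the delimiter as an escape sequence avoids the problem entirely, since \t survives copy and paste:

Code: Select all

data = csv.reader(open('/dev/shm/classify.txt', 'rb'), delimiter="\t", quotechar='|')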

With that done, your code works fine.

Thanks.

I am going to see about getting the threshold going now.

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sat Jan 02, 2016 11:34 pm

Threshold has been added to jetpac.py

I will add a threshold cycler to the jetpac settings section of menu.py next. This way you can predetermine your threshold value. I shipped the unit with 0.05 (I will espeak that as 5%), but I think 10-15% is a more useful value, so I will probably ship with 15%. I will allow users to set it in increments of 5% between 5% and 30%.

Code: Select all

import setproctitle #Set process name to something easily killable
import cv2 #Computer Vision Libraries for webcam use
import subprocess #so I can run subprocesses in the background if I want
import ConfigParser #To read the config file modified by menu.py
from subprocess import call #to call a process in the foreground
import csv #To make an array of the certainty and identity results so we can find the top matches
from operator import itemgetter
from imutils.video import WebcamVideoStream
setproctitle.setproctitle("jetpac")
Config = ConfigParser.RawConfigParser()
Config.read('/home/pi/aftersight.cfg')

ConfigJetpacNetwork = Config.get('AfterSightConfigSettings','configjetpacnetwork') #Get the path to the selected network from the config
ConfigJetpacThreshold = Config.get('AfterSightConfigSettings','configjetpacthreshold') #Set the identification threshold
ConfigJetpacCamera = Config.get('AfterSightConfigSettings','configjetpaccamera') #Get Selected Camera - Not implemented yet
ConfigJetpacThreshold = float(ConfigJetpacThreshold)#Convert to float so I can compare to other numbers

print ConfigJetpacNetwork
print ConfigJetpacThreshold
print ConfigJetpacCamera

camera_port = 0 #Open Camera 0
camera = WebcamVideoStream(src=camera_port).start() #define where I dump camera input


classify = "/dev/shm/classify.txt"


while 1:
   image = camera.read()
   cv2.imwrite("/dev/shm/opencv.png", image)

   #./jpcnn -i data/dog.jpg -n ../networks/jetpac.ntwk -t -m s -d
   #Command Line for jetpac I dropped -t and -d because when it works I dont need to know how long it takes or the debug info
   with open (classify, "w+") as f: #write results to classify.txt
      call (["sudo","/home/pi/projects/DeepBeliefSDK/source/./jpcnn","-i","/dev/shm/opencv.png","-n",ConfigJetpacNetwork,"-m","s"],stdout = f,stderr = f)
   call (["sudo","rm","-rf","/dev/shm/opencv.png"])
   data = csv.reader(open(classify, 'rb'), delimiter="\t", quotechar='|') #the delimiter is a single tab; writing it as \t survives copy and paste
   certainty, identity = [], []

   for row in data:
      certainty.append(row[0])
      identity.append(row[1])

   #print certainty
   #print identity

   certainty = [float(i) for i in certainty] # Convert Certainty from string to float to allow sorting
   matrix = zip(certainty, identity) #combine them into a two-dimensional list
   matrix.sort(key=itemgetter(0), reverse=True) #Sort Highest to Lowest Based on Certainty
   #Now Espeak the top three terms if they are > threshold
   print matrix[0]
   print matrix[1]
   print matrix[2]
   topthreeidentity = [x[1] for x in matrix[0:3]]
   topthreecertainty = [x[0] for x in matrix[0:3]]

   if topthreecertainty[0] > ConfigJetpacThreshold:
        FirstItem = str(topthreeidentity[0])
        print topthreecertainty[0], topthreeidentity[0]," 1st item Greater Than Threshold"
   else:
        FirstItem = "Nothing Recognized"
        print "Top item underthreshold"

   if topthreecertainty[1] > ConfigJetpacThreshold:
        SecondItem = str(topthreeidentity[1])
        print topthreecertainty[1], topthreeidentity[1], " 2nd item Greater Than Threshold"
   else:
        SecondItem = " "
        print "Second Item Under Threshold"

   if topthreecertainty[2] > ConfigJetpacThreshold:
        ThirdItem = str(topthreeidentity[2])
        print topthreecertainty[2], topthreeidentity[2], " 3rd item Greater Than Threshold"
   else:
        ThirdItem = " "

   espeakstring = FirstItem + " " + SecondItem + " " + ThirdItem # read top three
   print espeakstring
   espeak_process = subprocess.Popen(["espeak",espeakstring, "--stdout"], stdout=subprocess.PIPE) #read the list out
   subprocess.Popen(["aplay", "-D", "sysdefault"], stdin=espeak_process.stdout, stdout=subprocess.PIPE) #Make it so no stuttering
   call (["sudo","rm","-rf","/dev/shm/classify.txt"]) #remove last run of classification info
camera.stop()
del(camera) # so that others can use the camera as soon as possible

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 1:24 am

Threshold cycler from 5-30% added to menu.py

Code: Select all

import time #required for sleep pauses
import RPi.GPIO as GPIO #We use lots of GPIOs in this program
import datetime #To allow for keeping track of button press length
import subprocess #To launch external processes
from subprocess import call #Same reasons
import gaugette.rotary_encoder # Lets the rotation be handled with threaded watching
import ConfigParser # Lets us user aftersight.cfg easily


call (["sudo","cp","/home/pi/altgreet.txt","/home/pi/introtext.txt"])

GPIO.setmode(GPIO.BCM)  #setup for pinouts of the chip for GPIO calls. This will be different for the rotary encoder library definitions which have to use wiringpi
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP) #GPIO for detecting low battery
GPIO.setup(25, GPIO.IN, pull_up_down=GPIO.PUD_UP) #Rotary Pushbutton Input
GPIO.setup(9, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) # GPIO for detecting Power Switch Position, used to shutdown system
GPIO.setup(10, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) # GPIO for Detecting External Power State

t1=0 #t1-t4 used for timing pushbutton events
t2=0 # t2-t4 used as adders amongst a few intervals to allow for assignments of different functions based on time the button is depressed
t3=0 #
t4=0 #The final interval of 7 seconds shuts the device down (software, not electricity). It protects the filesystem and ought to remain

timesinceflip = 0
GPIO.setup(20, GPIO.OUT)   #Define pin 20 as output, for PWM modulation of vibration motor
p = GPIO.PWM(20, 25) #Set some initial value to give it a wiggle
p.start(5)
p.ChangeDutyCycle(0)#then shut it off


#Use ConfigParser to Read Persistent Variables from AfterSight.CFG

Config = ConfigParser.RawConfigParser() #set variable to read config into
Config.read('aftersight.cfg') #read from aftersight.cfg

ConfigVolume = Config.get('AfterSightConfigSettings','configvolume') #volume setting percentage
ConfigRaspivoiceStartup = Config.get('AfterSightConfigSettings','configraspivoicestartup') #Does raspivoice execute on startup? Boolean True/False
ConfigJetpacStartup = Config.get('AfterSightConfigSettings','configjetpacstartup') #Does Jetpac execute on startup? Boolean True/False
ConfigRaspivoicePlaybackSpeed = Config.get('AfterSightConfigSettings','configraspivoiceplaybackspeed') #Playback rate of soundscapes
ConfigRaspivoiceCamera = Config.get('AfterSightConfigSettings','configraspivoicecamera') #Which image source to use? -s0 = image -s1 = rpi cam module -s2 = first usb cam -s3 = second usb cam etc.
ConfigJetpacNetwork = Config.get('AfterSightConfigSettings','configjetpacnetwork') #Which network to use
ConfigJetpacCamera = Config.get('AfterSightConfigSettings','configjetpaccamera') #which camera device to use Generate this from a list of /dev/videoX devices?
ConfigJetpacThreshold = Config.get('AfterSightConfigSettings','configjetpacthreshold') #Certainty threshold expressed as a two digit ratio (ie. 0.15)
ConfigVibrationStartup = Config.get('AfterSightConfigSettings','configvibrationstartup') #Is vibration enabled on startup
#ConfigParser.get returns strings, so convert the boolean-like settings to real booleans before the comparisons below
ConfigRaspivoiceStartup = (ConfigRaspivoiceStartup == 'True')
ConfigJetpacStartup = (ConfigJetpacStartup == 'True')
ConfigVibrationStartup = (ConfigVibrationStartup == 'True')

#If you want to add a variable here, you must create an entry in aftersight.cfg as well. otherwise no bueno

bequiet = False #Key decision making variable. Shuts the main menuing system off while other processes that use the audio channel are in operation
                #Initially we want it to be loud and making decisions unless an application that uses audio launches on startup
if (ConfigRaspivoiceStartup == True and ConfigJetpacStartup == True): #Right now they both can't be true. When we fork the webcam to two virtual cams this will be possible
        ConfigRaspivoiceStartup = True
        ConfigJetpacStartup = False  #The temporary solution to this problem is to only startup raspivoice
if ConfigRaspivoiceStartup == True:
        bequiet = True #Prevents the menu items from being read or executed while external processes that use audio are running
        subprocess.Popen(["sudo","/home/pi/raspivoice/Release/./raspivoice","-A","--speak",ConfigRaspivoiceCamera,ConfigRaspivoicePlaybackSpeed]) #Launch if startup execution is the menu setting
if ConfigJetpacStartup == True:
        subprocess.Popen(["sudo","python","/home/pi/jetpac.py"])

vibration = False #By default vibration is turned off

if ConfigVibrationStartup == True: #If the config file sets rangefinder/vibration for startup, toggle the variable for the vibration motor
        vibration = True

print "Volume",ConfigVolume
print "Raspivoice on Startup?",ConfigRaspivoiceStartup
print "Jetpac on Startup?", ConfigJetpacStartup
print "Raspivoice Soundscape Playback Speed", ConfigRaspivoicePlaybackSpeed
print "Which Camera will raspivOICe use?" ,ConfigRaspivoiceCamera
print "Which Network will Jetpac Use?", ConfigJetpacNetwork
print "Which Camera will Jetpac Use?", ConfigJetpacCamera
print "What is the threshold value for Jetpac?", ConfigJetpacThreshold
print "Is Vibration Turned on at Startup?", ConfigVibrationStartup

#The print statements are just to confirm a good config read

A_Pin=4 #Encoder CC direction
B_Pin=5 #Encoder C direction
encoder = gaugette.rotary_encoder.RotaryEncoder.Worker(A_Pin, B_Pin)#Use worker class to try to catch transitions better.
encoder.start()#start the worker class encoder watcher
encoder.steps_per_cycle = 4 #the encoder always gives 4 steps for 1 detent

oldexternalpowerstate = 0 # this variable enables an espeak event when the power plug is inserted or removed

Main=["Launch Raspivoice","Launch Jetpac","Toggle Rangefinder Vibration","Settings","acknowledgements","Disclaimer"]
Settings=["Advance Volume","Raspivoice Settings", "Jetpac Settings","Return to main menu"]
RaspivoiceSettings = ["Toggle Playback Speed", "Next Camera", "Toggle Raspivoice Autostart", "Return to Main Menu"]
JetpacSettings = ["Next Network", "Next Threshold", "Next Camera", "Toggle Jetpac Autostart","Return to Main Menu"]
VolumeMenu = ["Volume Up", "Volume Down", "Return to Main Menu"]

#You can change and add menu items above, but you MUST go to the section where the MenuLevel and menupos are evaluated for a button press/release in under three seconds
#You have to change the actions for the items being evaluated there.
#If you don't, no bueno

MenuLevel = Main #Select the Main Menu first
menupos = 0 #position in menu list


seconddelta = 0

call (["sudo","espeak","MainMenu,Rotate,Knob,For,Options"])
while 1:  #Main Loop
    battstate = GPIO.input(27)
    switchstate = GPIO.input(9)
    externalpowerstate = GPIO.input(10)
    CurrentMenuMaxSize = len(MenuLevel)-1 #Subtract one because list indices start at zero while len returns the item count

    delta = encoder.get_delta()

    if delta!=0:
        #print "rotate %d" % delta

        #The Rotary Encoder has the annoying feature of giving back four delta steps per single detent ~usually~
        #For example, 1,1,1,1 is normal. Quite often it is 1,3 other times 1,2,1 or 1,1,2
        #Using the corrections below, rotations clockwise are normalized to a value of 1
        #Rotations counterclockwise are normalized to -1
        #So the normal output of 1,1,1,1 remains 1,3 becomes 1,1 and 1,2,1 or 1,1,2 become 1,1,1
        #The end result is that most often the values will be 2 or 3, and occasionally 4 after each rotation of one detent
        #The seconddelta variable causes the menu item to advance only after the delta accumulates to 3
        #By changing the top value for seconddelta, the responsiveness of single increments changes
        #With a value of three, reliable operation happens

        if delta>0:
                delta=1
        if delta<0:
                delta=-1
        #print "corrected delta",delta
        if seconddelta == 3: #This was the most important value to change to get reliable single increments of the menu items
                seconddelta = 0
                menupos=menupos+delta
                print "MenuPosition" ,menupos
                print MenuLevel
                print "Current Menu Max Size",CurrentMenuMaxSize
                if menupos > CurrentMenuMaxSize:
                        menupos=0
                if menupos<0:
                        menupos=CurrentMenuMaxSize
                print (MenuLevel[menupos])
                if bequiet == False:
                        call(["sudo","killall","espeak"])
                        call(["sudo","espeak",MenuLevel[menupos]])
        elif seconddelta < 3:
                seconddelta = seconddelta + 1
    if (externalpowerstate != oldexternalpowerstate):
        print ('External Power State Changed')
        if(externalpowerstate == 1):
                #print ('External Power Connected')
                call (["sudo","espeak","ExternalPowerConnected"])
        elif (externalpowerstate ==0):
                #print ('External Power Disconnected')
                call (["sudo","espeak","ExternalPowerDisconnected"])
    if (externalpowerstate == 0):
        print ('External Power Disconnected, Running from Internal Battery')
    #if vibration == True:
        #print "Start Vibration Routines, or make sure they are already running"
    #elif (vibration == False):
        #print "End vibration routines, or make sure they are not running"
    if (switchstate == 1):
        #print ('Power Switch Turned Off, System Shutdown Initiated')
        call (["sudo", "espeak", "shutdown"])
        call (["sudo", "shutdown", "-h", "now"])
    #if (switchstate == 0):
        #print ('Power Switch Is On, Keep Working')
    #if (battstate == 1):
        #print ('Battery OK, keep system up')
    #if (battstate == 0):
        #print ('Battery Low, System Shutdown')
    if GPIO.input(25):
        #print('Button Released')
        if (t3 < 3 and t3 > 0): #If the button is released in under 3 seconds, execute the command for the currently selected menu and function
                print "Detected Button Release in less than 3 seconds"
                if bequiet == False:
                #Main=["Launch Raspivoice","Launch Jetpac","Toggle Rangefinder Vibration","Settings","acknowledgements","Disclaimer"]
                        if (MenuLevel == Main and menupos == 0): #1st option in main menu list is launch raspivoice
                                bequiet = True
                                subprocess.Popen(["sudo","/home/pi/raspivoice/Release/./raspivoice","-A","--speak",ConfigRaspivoiceCamera,ConfigRaspivoicePlaybackSpeed]) #Launch using config settings plus a few obligate command line flags for spoken menu and rotary encoder input
                        if (MenuLevel == Main and menupos == 1): #Second Option in main menu list is launch Jetpac
                                bequiet = True
                                subprocess.Popen(["sudo","python","/home/pi/jetpac.py"])
                        if (MenuLevel == Main and menupos == 2):
                                if vibration == True:
                                        call (["sudo","espeak","VibrationToggledOff"])
                                        call (["sudo","killall","rangefinder"])
                                        p.ChangeDutyCycle(0) #If it gets closed while active, this should quiet it down
                                        vibration = False
                                else:
                                        call (["sudo","espeak","VibrationToggledOn"])
                                        subprocess.Popen(["sudo","python","/home/pi/rangefinder.py"])
                                        vibration = True
                        if (MenuLevel == Main and menupos == 3): #Enter The Settings Menu
                                MenuLevel = Settings
                                call (["sudo","espeak","ChangeSettings"])
                                menupos = 10 #park the selection out of range so the next knob turn wraps to the first item of this menu
                        if (MenuLevel == Main and menupos == 4):
                                espeak_process = subprocess.Popen(["espeak", "-f","/home/pi/acknowledgements.txt", "--stdout"], stdout=subprocess.PIPE)
                                subprocess.Popen(["aplay", "-D", "sysdefault"], stdin=espeak_process.stdout, stdout=subprocess.PIPE)
                        if (MenuLevel == Main and menupos == 5):
                                espeak_process = subprocess.Popen(["espeak", "-f","/home/pi/disclaimer.txt", "--stdout"], stdout=subprocess.PIPE)
                                subprocess.Popen(["aplay", "-D", "sysdefault"], stdin=espeak_process.stdout, stdout=subprocess.PIPE)
                #Settings=["Advance Volume","Raspivoice Settings", "Jetpac Settings","Return to main menu"]
                        if (MenuLevel == Settings and menupos == 0):
                                commandlinevolume = int(ConfigVolume)
                                commandlinevolume = commandlinevolume + 10
                                if commandlinevolume > 100: #Wrap volume back to lowest setting
                                        ConfigVolume = "70"
                                        commandlinevolume = 70 #lowest setting
                                if commandlinevolume == 70:
                                        fakevolume = 10 #lowest setting said as 10%
                                if commandlinevolume == 80:
                                        fakevolume = 50 #next setting said as 50%
                                if commandlinevolume == 90:
                                        fakevolume = 75 #next setting said as 75%
                                if commandlinevolume == 100:
                                        fakevolume = 100 #next setting said as 100%
                                fakevolume = str(fakevolume)
                                call (["sudo","espeak","ChangingVolumeTo"])
                                call (["sudo","espeak",fakevolume])
                                call (["sudo","espeak","Percent"])
                                ConfigVolume = str(commandlinevolume) #store the new level first so amixer applies the new volume, not the old one
                                volumearg = ConfigVolume + "%"
                                call (["sudo","amixer","sset","PCM,0",volumearg])
                                menupos = 0 #keep menu position on advance volume to allow for repeated presses
                        if (MenuLevel == Settings and menupos == 1):
                                MenuLevel = RaspivoiceSettings
                                call (["sudo","espeak","RaspivoiceSettings"])
                                menupos = 10
                        if (MenuLevel == Settings and menupos == 2):
                                MenuLevel = JetpacSettings
                                call (["sudo","espeak","JetpacSettings"])
                                menupos = 10
                        if (MenuLevel == Settings and menupos == 3):
                                MenuLevel = Main
                                call (["sudo","espeak","MainMenu"])
                                menupos = 10
                 #RaspivoiceSettings = ["Toggle Playback Speed", "Next Camera", "Toggle Raspivoice Autostart", "Return to Main Menu"]
                        if (MenuLevel == RaspivoiceSettings and menupos == 0):
                                if ConfigRaspivoicePlaybackSpeed  == "--total_time_s=1.05":
                                        call (["sudo","espeak","ChangedToFast"])
                                        ConfigRaspivoicePlaybackSpeed = "--total_time_s=0.5"
                                elif ConfigRaspivoicePlaybackSpeed == "--total_time_s=0.5":
                                        call (["sudo","espeak","ChangedToSlow"])
                                        ConfigRaspivoicePlaybackSpeed = "--total_time_s=2.0"
                                else:
                                        call (["sudo","espeak","ChangedToNormal"])
                                        ConfigRaspivoicePlaybackSpeed = "--total_time_s=1.05"
                                menupos = 0 #Keep at playback speed setting to allow repeated toggle
                        if (MenuLevel == RaspivoiceSettings and menupos == 1):
                                if ConfigRaspivoiceCamera == "-s2":
                                        call (["sudo","espeak","ChangedToCameraModule"])
                                        ConfigRaspivoiceCamera = "-s1"
                                else:
                                        call (["sudo","espeak","ChangedToUSBCamera1"])
                                        ConfigRaspivoiceCamera = "-s2"
                                menupos = 1 #Keep at camera advance point to allow repeated toggle
                        if (MenuLevel == RaspivoiceSettings and menupos == 2):
                                if ConfigRaspivoiceStartup == True:
                                        call (["sudo","espeak","NoLaunchOnStartup"])
                                        ConfigRaspivoiceStartup = False
                                else:
                                        call (["sudo","espeak","RaspivoiceWillAutostart"])
                                        ConfigRaspivoiceStartup = True
                        if (MenuLevel == RaspivoiceSettings and menupos == 3):
                                MenuLevel = Main
                                call (["sudo","espeak","Main Menu"])
                #JetpacSettings = ["Next Network", "Next Threshold", "Next Camera", "Toggle Jetpac Autostart","Return to Main Menu"]
                        if (MenuLevel == JetpacSettings and menupos == 0):
                                if ConfigJetpacNetwork == "/home/pi/projects/DeepBeliefSDK/networks/ccv2010.ntwk":
                                        call (["sudo","espeak","ChangingToLibCCV2012"])
                                        ConfigJetpacNetwork = "/home/pi/projects/DeepBeliefSDK/networks/ccv2012.ntwk" #Slow, Very Accurate = 2nd Best
                                elif ConfigJetpacNetwork == "/home/pi/projects/DeepBeliefSDK/networks/ccv2012.ntwk":
                                        call (["sudo","espeak","ChangingtoJetpacNetwork"])
                                        ConfigJetpacNetwork = "/home/pi/projects/DeepBeliefSDK/networks/jetpac.ntwk" #Fast, Fairly Accurate = Best
                                else:
                                        call (["sudo","espeak","ChangingToLibCCV2010"])
                                        ConfigJetpacNetwork = "/home/pi/projects/DeepBeliefSDK/networks/ccv2010.ntwk" #Slowest, Accurate =  3rd Best
                        if (MenuLevel == JetpacSettings and menupos == 1):
                                if ConfigJetpacThreshold == "0.05":
                                        call (["sudo","espeak","ChangingTo10%"]) #somewhat stringent
                                        ConfigJetpacThreshold = "0.10"
                                elif ConfigJetpacThreshold == "0.10":
                                        call (["sudo","espeak","ChangingTo15%"])
                                        ConfigJetpacThreshold = "0.15"            #More Stringent
                                elif ConfigJetpacThreshold == "0.15":
                                        call (["sudo","espeak","ChangingTo20%"])
                                        ConfigJetpacThreshold = "0.20"           #More Stringent
                                elif ConfigJetpacThreshold == "0.20":
                                        call (["sudo","espeak","ChangingTo25%"])
                                        ConfigJetpacThreshold = "0.25"
                                elif ConfigJetpacThreshold == "0.25":
                                        call (["sudo","espeak","Changingto30%"])
                                        ConfigJetpacThreshold = "0.30"            #Most stringent
                                else:
                                        call (["sudo","espeak","ChangingTo5%"])
                                        ConfigJetpacThreshold = "0.05"            #Mid Stringency
                        if (MenuLevel == JetpacSettings and menupos == 2):
                                if ConfigJetpacCamera == "/dev/video0":
                                        call (["sudo","espeak","ChangingToUSBCamera1"])
                                        ConfigJetpacCamera = "/dev/video1"
                                else:
                                        call (["sudo","espeak","ChangingToUSBCamera0"])
                                        ConfigJetpacCamera = "/dev/video0"
                        if (MenuLevel == JetpacSettings and menupos == 3):
                                if ConfigJetpacStartup == True:
                                        call (["sudo","espeak","NoLaunchOnStartup"])
                                        ConfigJetpacStartup = False
                                else:
                                        call (["sudo","espeak","JetpacWillAutostart"])
                                        ConfigJetpacStartup = True
                        if (MenuLevel == JetpacSettings and menupos == 4):
                                MenuLevel = Main
                                call (["sudo","espeak","Main Menu"])
                                menupos = 10
                        #Are the changes being reflected in the variables we want to write?
                        print "Volume",ConfigVolume
                        print "Raspivoice on Startup?",ConfigRaspivoiceStartup
                        print "Jetpac on Startup?", ConfigJetpacStartup
                        print "Raspivoice Soundscape Playback Speed", ConfigRaspivoicePlaybackSpeed
                        print "Which Camera will raspivOICe use?" ,ConfigRaspivoiceCamera
                        print "Which Network will Jetpac Use?", ConfigJetpacNetwork
                        print "Which Camera will Jetpac Use?", ConfigJetpacCamera
                        print "What is the threshold value for Jetpac?", ConfigJetpacThreshold
                        print "Is Vibration Turned on at Startup?", ConfigVibrationStartup
                        #Now lets set configparser up to write to file
                        Config.set('AfterSightConfigSettings','configvolume',ConfigVolume)
                        Config.set('AfterSightConfigSettings','configraspivoicestartup',ConfigRaspivoiceStartup)
                        Config.set('AfterSightConfigSettings','configjetpacstartup',ConfigJetpacStartup)
                        Config.set('AfterSightConfigSettings','configraspivoiceplaybackspeed',ConfigRaspivoicePlaybackSpeed)
                        Config.set('AfterSightConfigSettings','configraspivoicecamera',ConfigRaspivoiceCamera)
                        Config.set('AfterSightConfigSettings','configjetpacnetwork',ConfigJetpacNetwork)
                        Config.set('AfterSightConfigSettings','configjetpaccamera',ConfigJetpacCamera)
                        Config.set('AfterSightConfigSettings','configjetpacthreshold',ConfigJetpacThreshold)
                        Config.set('AfterSightConfigSettings','configvibrationstartup',ConfigVibrationStartup)
                        with open('aftersight.cfg', 'w') as configfile:    # save
                                Config.write(configfile)

        t1=0 #Reset the timers
        t2=0
        t3=0
        t4=0
    elif t1 == 0:
        t1 = time.time() #clock value when button pressed in
    elif (t1 > 1 and t3 < 3):
        t2 = time.time() #clock value at current moment While the button has been pressed in
        t3 = t2 - t1 #difference from initial press time to current moment
        print ">1<3",t3
    elif (t3 > 3 and t3 <4):
        print ">3<4",t3
        call (["sudo","killall","espeak"])
        call (["sudo","espeak","Terminating Programs"])
        call (["sudo","killall","raspivoice"]) # Kills raspivoice if its running
        call (["sudo","killall","jpcnn"]) #Kills jetpac neural net process
        call (["sudo","killall","jetpac"]) #Kills Jetpac Python Looper
        call (["sudo","killall","rangefinder"]) #Kills rangefinder vibration motor python looper
        p.ChangeDutyCycle(0) #If the vibration motor was interrupted in an energetic config this should quiet it
        bequiet = False
        t3 = 5.1
        t4=5.1
    elif (t4 > 4 and t4 < 7):
        t2=time.time()
        t4 = t2+1 - t1
    elif t4 > 7:
        print "shutdown",t4
        call (["sudo", "espeak", "Shutdown"]) #speak first; once shutdown runs the announcement would never play
        call (["sudo", "shutdown", "-h", "now"])
        exit()
    oldexternalpowerstate = externalpowerstate #This captures the current external power state to compare when the loop runs next. critical for knowing when power is plugged in or unplugged

PranavLal
Posts: 124
Joined: Fri Jun 28, 2013 4:49 pm

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 3:41 am

Hi Mike,

I was trying to ssh into the pi, but it is not getting an IP address. This appears to be due to an unstable LAN connection. I am going to check my LAN cable, but has anyone else experienced this? I can see my router logs, and the link keeps going down and coming back up.

Changing the cable did help and I now have a terminal window open to the pi. Time to modify files.

mr_indoj
Posts: 42
Joined: Wed Jul 01, 2015 9:28 am

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 2:29 pm

In order to be able to set the resolution of the video capture, I copied the imutils threading code into a file in the /home/pi folder that we can import instead.
Here's that code, with the possibility to set the resolution when we init the class:

Code: Select all

# import the necessary packages
from threading import Thread
import cv2

class WebcamVideoStream:
	def __init__(self, src=0, width=1280, height=1024):
		# initialize the video camera stream and read the first frame
		# from the stream
		self.stream = cv2.VideoCapture(src)
		self.stream.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, width)
		self.stream.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, height)
		(self.grabbed, self.frame) = self.stream.read()

		# initialize the variable used to indicate if the thread should
		# be stopped
		self.stopped = False

	def start(self):
		# start the thread to read frames from the video stream
		Thread(target=self.update, args=()).start()
		return self

	def update(self):
		# keep looping infinitely until the thread is stopped
		while True:
			# if the thread indicator variable is set, stop the thread
			if self.stopped:
				return

			# otherwise, read the next frame from the stream
			(self.grabbed, self.frame) = self.stream.read()

	def read(self):
		# return the frame most recently read
		return self.frame

	def stop(self):
		# indicate that the thread should be stopped
		self.stopped = True
I'm also in the process of putting together a modified version of Teradeep that, with a working Torch install, can run in the same manner as jetpac.
Currently teradeep outputs directly to espeak, so I want to change it so that it can be used like the current implementation of jetpac, putting the indexes and text into a file that we parse and read.
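
The glue for that could be as small as having the Torch side (or a wrapper around it) write the same tab-separated lines that jetpac.py already parses. In Python terms, something like this sketch, where 'results' stands in for whatever (score, label) pairs teradeep produces:

Code: Select all

#Hypothetical bridge: dump (score, label) pairs in the format jetpac.py reads
with open("/dev/shm/classify.txt", "w") as f:
    for score, label in results: #'results' assumed to come from the Torch side
        f.write("%f\t%s\n" % (score, label))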

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 3:02 pm

If you are successful with all of that, and you have all the rest of the new code working, perhaps you could create a filesystem image for me?

I have tried and failed to install torch so many times I've lost count.

I initially modified the installdeps script to install OpenBLAS, and then install the dependencies listed for Ubuntu. Most of that works.

Then, when running the torch install script, I get to a point where a dependency, luaffi, won't install. When I dig, I find that it is missing call_arm.h; digging deeper, I find I have to use dynasm.lua to process call_arm.dasc to create call_arm.h in the first place.

I create call_arm.h and try again, but then it is clear that either however I've run dynasm is incorrect, or the input file call_arm.dasc is not built correctly for the RPi v2. I don't have a solution for that step.

If you have torch installed, I'd be willing to rebuild everything around a filesystem image of that (even though that would be a bunch of finicky work).

It certainly looked to me from the demos that the teradeep object library is more focused on the day-to-day, and also that we should be able to get more frequent results than we are seeing with jetpac.

I couldn't be more excited with the progress lately, and I hope we can get teradeep functioning within a few weeks!

mr_indoj
Posts: 42
Joined: Wed Jul 01, 2015 9:28 am

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 3:36 pm

I have it working on the current image that you uploaded a few days ago.
However, I cheated a bit: I took the torch binaries from another, older image, and then only had to make sure that all the dependencies were installed (OpenBLAS had to be compiled, etc.). I don't remember how I solved that back then, but the binaries seem to work.
Anyway, I'll focus on creating, as cleanly as possible, a new image based on the current one but with working Torch and a starting point for Teradeep, which I can upload.

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 3:44 pm

That would be fantastic!

Another thing to consider here is that teradeep should interfere less with raspivOICe, as it executes mostly on the GPU, whereas jetpac runs parallelized across the CPUs.

We can expect smoother performance and less battery consumption. All good things!

PranavLal
Posts: 124
Joined: Fri Jun 28, 2013 4:49 pm

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 4:43 pm

Hi all,

Exciting progress indeed. I got bitten by the tab character issue too. I'll wait for the new image. Alternatively, while posting code revisions, could we attach the modified files to the post along with pasting the code? That should help get around such issues.

Pranav

seeingwithsound
Posts: 165
Joined: Sun Aug 28, 2011 6:07 am
Contact: Website

Re: Sight for the Blind for <100$

Sun Jan 03, 2016 8:11 pm

mikey11 wrote:It certainly looked to me from the demos that the teradeep object library is more focused on the day-to-day
Indeed, comparing the roughly 1,000 ILSVRC image classes at http://image-net.org/challenges/LSVRC/2 ... se-synsets with the roughly 1,000 Teradeep image classes at https://www.dropbox.com/sh/qw2o1nwin5f1 ... gories.txt quickly indicates how the latter appears far more relevant to daily living. This matches the Teradeep description at https://github.com/teradeep/demo-apps reading "This is our May 2015 top neural network for large-scale object recognition. It has been trained to recognize most typical home indoor/outdoor objects in our daily life."

Peter


Seeing with Sound - The vOICe
http://www.seeingwithsound.com

mr_indoj
Posts: 42
Joined: Wed Jul 01, 2015 9:28 am

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 12:59 pm

So, I have created a new image.
It's based on the last image + the latest code that we have posted here + a very first version of working Teradeep.
I added Teradeep to the menu, beside RaspiVoice and Jetpac. There are, however, no settings yet.
The URL to the image (about 3.5 GB):
https://onedrive.live.com/redir?resid=D ... file%2czip

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 2:45 pm

Thanks,

Will download today and take a look!

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 4:55 pm

While the image was downloading, I also got a message from the creators of teradeep:

Eugenio Culurciello:
You can also try to use this lib:
https://github.com/mvitez/thnets
it is a lot easier to use for embedded systems, and works with models like this one:
https://github.com/teradeep/demo-apps
So once the image finishes (and I have time...) I will explore this as well. Hopefully there will be a performance advantage to using either possibility.

mr_indoj
Posts: 42
Joined: Wed Jul 01, 2015 9:28 am

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 6:12 pm

mikey11 wrote:While the image was downloading, I also got a message from the creators of teradeep:

Eugenio Culurciello:
You can also try to use this lib:
The interesting part, as I see it, is that using such a lib could open up the possibility of creating Python bindings with the required functions, so that data can be passed back and forth as objects rather than files.
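
For instance, with ctypes (a purely hypothetical sketch: 'libthnets.so' and every function name below are placeholders rather than the real thnets API, which would need checking against its headers):

Code: Select all

import ctypes

#Hypothetical binding: all names below are placeholders, not the real thnets API
lib = ctypes.CDLL("libthnets.so")
lib.th_load_network.restype = ctypes.c_void_p

net = lib.th_load_network("/home/pi/model") #load the network once at startup
image = (ctypes.c_float * (3 * 224 * 224))() #image as an in-memory float buffer, no temp file
scores = (ctypes.c_float * 1000)() #one score per class, filled in by the library
lib.th_process_image(net, image, 224, 224, scores)
print max(scores) #results come back as objects in memory, not /dev/shm files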

seeingwithsound
Posts: 165
Joined: Sun Aug 28, 2011 6:07 am
Contact: Website

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 7:54 pm

mr_indoj wrote:So, I have created a new image.
It's based on the last image + the latest code that we have posted here + a very first version of working Teradeep.
I added Teradeep to the menu, beside RaspiVoice and Jetpac. There are, however, no settings yet.
WOW, great work mr_indoj! I love what Teradeep enables! I flashed your image to my (Mike's) Raspberry Pi prototype, and am impressed. I did a bit of walking around in my house at night, and Teradeep even at one point recognized my curled-up sleeping cat as a cat, or otherwise spoke relevant terms such as "fur". It recognized closed curtains from the folds, recognized a coffee cup as a coffee mug, my television as an LCD flatscreen television, the tiled living room floor as tiles, a wall clock as a clock, and so on. Of course it is not perfect, with regular weird misses as well, but it is very impressive nonetheless, and usually vastly more relevant to daily living than what the Jetpac network returned. It typically takes some 5 to 10 seconds per recognition.

Peter


Seeing with Sound - The vOICe
http://www.seeingwithsound.com

mikey11
Posts: 355
Joined: Tue Jun 25, 2013 6:18 am
Location: canada
Contact: Website

Re: Sight for the Blind for <100$

Mon Jan 04, 2016 7:58 pm

Brilliant! Can't wait to get the execution time reduced.

Sounds like we have a winner on our hands.
