Open a new terminal window (assuming you're running from the X11 desktop). At the prompt, type 'top' (without the quotes); to stop top running, type q. To find out all about it, type 'man top'.
I found I had to use cProfile pretty much exactly as I put in my link, but that might have been because of specific issues with the location of the modules. I piped the output into a file, which I then imported into a spreadsheet for analysis. In the example in my link, just change the path to where your Python file is and the file name to whatever yours is called!
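If the link ever dies, here's roughly the idea, sketched with a made-up function name (`crunch` is just a stand-in for whatever your script does):

```python
# A minimal sketch of using cProfile from inside a script.
import cProfile
import io
import pstats

def crunch():
    # stand-in for the real work being profiled
    total = 0
    for i in range(200_000):
        total += i * i
    return total

pr = cProfile.Profile()
pr.enable()
crunch()
pr.disable()

s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats('cumulative').print_stats(5)
print(s.getvalue())   # top 5 entries by cumulative time
```

From the command line you can get the same sort of report with `python3 -m cProfile -s cumtime yourscript.py > profile.txt`, and that file is what I pulled into the spreadsheet.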
Below is a copy of what I get running a random Python program on this machine and breaking out (Ctrl-C) three times. It's doing some very heavy number crunching for a neural network using numpy. The first two times I stopped it while it was loading from file; the third time it was doing an array multiplication and broadcast.
In the first two tracebacks, line 19 is calling the function load_data(), which on line 10 is running numpy.loadtxt().
In the third, the chain is line 27 -> 186 -> 147; the last line is the one actually doing the work.
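If you want to see that same bottom-up ordering without interrupting a real run, here's a toy stack (the function names are made up, not from the program above):

```python
import traceback

def inner():
    # the deepest frame is where the work actually happens
    return traceback.extract_stack()

def load_stage():
    return inner()

stack = load_stage()
# the last entries mirror the bottom of a traceback: caller, then callee
print([frame.name for frame in stack][-2:])  # ['load_stage', 'inner']
```

Same as reading a Ctrl-C traceback: the bottom frame is where the time is being spent.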
self is just the name conventionally given to the actual object instance, which, in Python, has to be explicitly declared in the argument list of class methods. It's a slightly confusing aspect of Python.
In GUI programs the event loop is normally designed so it goes round 'in the background', and you define events to happen 'on click' or 'on something changing'; the display only needs to be refreshed when needed. I haven't found multiprocessing to be faster than threads (but I haven't tried using 11 threads, so..)
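Just to illustrate the 'on click' idea without dragging in a real toolkit, here's a toy dispatcher (all the names are invented):

```python
# A toy event dispatcher: callbacks only run when an event is dispatched,
# which is roughly what a GUI main loop does behind the scenes.
handlers = {}

def on(event, callback):
    handlers.setdefault(event, []).append(callback)

def dispatch(event, *args):
    for cb in handlers.get(event, []):
        cb(*args)

clicks = []
on('click', lambda x, y: clicks.append((x, y)))

# a real toolkit's main loop would call dispatch() as events arrive
dispatch('click', 10, 20)
print(clicks)   # [(10, 20)]
```

Your code just registers the callbacks; the loop itself stays out of your way until something happens.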
Is your code available to look at, on GitHub, Bitbucket, Dropbox, Google Drive, etc.?
Paddy
Code: Select all
patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 19, in <module>
    X = load_data()
  File "learn_whippet_cap.py", line 10, in load_data
    data = np.loadtxt('Data/learndb.csv', delimiter = ',')
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in loadtxt
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in <listcomp>
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 657, in floatconv
    if b'0x' in x:
KeyboardInterrupt
patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 19, in <module>
    X = load_data()
  File "learn_whippet_cap.py", line 10, in load_data
    data = np.loadtxt('Data/learndb.csv', delimiter = ',')
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in loadtxt
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in <listcomp>
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 659, in floatconv
    return float(x)
KeyboardInterrupt
patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
Using logistic sigmoid activation in output layer
-------------------- randomized ----------------- 1457134175.0241566
0 29.9 Training error 0.356397, trend 0.0319
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 27, in <module>
    NN.train(X)
  File "/home/patrick/Machine-Learning/perceptron.py", line 186, in train
    error += self.back_propagate(targets)
  File "/home/patrick/Machine-Learning/perceptron.py", line 147, in back_propagate
    self.wi -= self.learning_rate * hidden_deltas * self.ai.reshape(self.input, 1)
KeyboardInterrupt