
slow processing

Posted: Fri Mar 04, 2016 2:36 pm
by Davies
hi all, I have some code that is a compilation of various small scripts checking inputs, changing outputs and presenting the results on a screen using tkinter. When I run the code from PyCharm on my PC (8 cores, 32GB RAM) it runs beautifully, but when sent to the Pi, updating tkinter and navigating between screens (I've put a settings menu in there) was taking 3-5 seconds. This is probably due to the 11 threads it's trying to run, and I do intend to revamp the code to use fewer threads; most threads are set to sleep for 30 seconds and then change a global variable, while other threads are set to update the relevant tkinter label every second for the clock and every 15 seconds otherwise (but on the Pi the clock is missing multiple seconds between each update).
The threads which do not have a sleep are gated by multiple variables, such as:

Code: Select all

while 1:
    if int_0 == 1 and int_1 == 0 and run_cycle == 0:
        # (code to run)
        run_cycle = 1
    if int_0 == 1 and int_1 == 1 and run_cycle == 1:
        # (code to run)
        run_cycle = 0
With this example, in the instance where int_0 = 0 and int_1 = 0 there is no code to run, so I would presume none of the CPU/RAM is used for that task? Or am I wrong? Also, does the sleep command use any resources?
Is there any way to benchmark each part of my code to see how much resource that particular part is using, or a way to see live CPU/RAM usage, so I can just watch as each particular part is run?
Also, when I change the page in the tkinter window I'm running

Code: Select all

.grid_forget()
for each and every button and label from the previous page that the new page is accessible from, which is an extensive list. Is there any way to clear the form that would be quicker, or that wouldn't need Python to read 30+ lines of code?
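For reference, one pattern I've come across (the names below are made up, not from my actual code) is to build each page as its own Frame stacked in the same grid cell and switch pages with tkraise(), so a page change is a single call:

```python
import tkinter as tk

def build_ui():
    """Sketch only: two hypothetical pages stacked in one grid cell."""
    root = tk.Tk()
    pages = {}
    for name in ("main", "settings"):
        frame = tk.Frame(root)
        # every page occupies the same cell; only the raised one is visible
        frame.grid(row=0, column=0, sticky="nsew")
        tk.Label(frame, text="this is the %s page" % name).pack()
        pages[name] = frame

    def show(name):
        pages[name].tkraise()  # one call instead of many grid_forget()s

    show("main")
    return root, show

# build_ui() needs a display, so it is not called here; in a real app:
#   root, show = build_ui(); show("settings"); root.mainloop()
```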

Re: slow processing

Posted: Fri Mar 04, 2016 3:35 pm
by Davies
I've just read elParaguayo's reply to clive_l's post on "best way to create UI". Would Kivy be less resource-taxing on the Raspberry Pi than tkinter for multiple-page UIs?

Re: slow processing

Posted: Fri Mar 04, 2016 6:41 pm
by paddyg
People have complained in the past when using tk on the Pi and it turned out to be something to do with creating new instances of tk widgets on each refresh (or something - I can't remember the exact details). You can tell if it's that kind of thing by running top in another terminal and seeing whether CPU or memory usage keeps increasing.

The simplest, but quite effective, way to profile code is to start your program again and again and stop it with Ctrl-C (assuming you haven't put everything inside a try: except KeyboardInterrupt:). Make a note of the line it stops on each time and you will get a poll of the time-takers. A more scientific way of profiling is to use cProfile; I put a post on what I found I had to do here.
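As a rough sketch (slow_part and fast_part are invented stand-ins for your own functions), cProfile can wrap a single call and pstats will sort out the time-takers:

```python
import cProfile
import io
import pstats

def fast_part():
    return sum(range(100))

def slow_part():
    return sum(i * i for i in range(200000))

def work():
    for _ in range(10):
        fast_part()
        slow_part()

# profile one call explicitly rather than the whole program
profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# sort by cumulative time so the biggest time-takers come first
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```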

Using kivy is a good way to plan out your components and structure your project, but you would probably have to start again. Unless you are trying to do lots of graphical stuff, tkinter probably isn't the cause of the slowness; running lots of threads can slow things down, but the marginal difference probably decreases - see here.

PS can't you update variables only when they change?
PPS put a time.sleep() in your while loops, even if only for a small time; otherwise the loop busy-waits, burning CPU spinning and deciding on thread ordering.
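Something like this (with made-up names) shows both ideas at once - a short sleep so the loop yields the CPU, and only "redrawing" when the value actually changes:

```python
import time

readings = iter([20, 20, 20, 21, 21, 22])  # fake sensor values

updates = []        # stands in for tkinter label updates
last_value = None

for _ in range(6):
    value = next(readings)
    if value != last_value:     # update only on change
        updates.append(value)
        last_value = value
    time.sleep(0.01)            # yield the CPU between polls

print(updates)  # [20, 21, 22] - three updates instead of six redraws
```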

Re: slow processing

Posted: Fri Mar 04, 2016 8:37 pm
by Davies
thanks for the reply paddy. What is top? You say "by running top in another terminal" but I'm struggling to find anything relevant with Google.
I had tried to use cProfile within my code as seen in another example, but wasn't sure if I had the layout correct, whether I should have changed "foo|bar" to something else, or whether the placement of cProfile.run within my code was relevant to the output:

Code: Select all

import cProfile
import re
cProfile.run('re.compile("foo|bar")')
I also struggled to make head or tail of the output.
How would I see what line the script stops on when stopping it with Ctrl+C?
What do you mean by "can't you update variables only when they change?" Would you be able to provide an example of such code?
I'm reasonably new to Python; my knowledge to date has been self-taught over the past 12 months (with an understanding of if/else from a weekend of play with Turbo Pascal 10+ years back). I haven't used classes within my code, simply because I haven't learned them yet and have struggled to find a reason for them, but could the lack of classes (wrappers?) be the reason for slow code processing? I've also used self within code a few times, but this script does not; I don't understand the difference in using it - could that be slowing things down?
Could changing from threads to multiprocessing be beneficial to me?
Thanks again

Re: slow processing

Posted: Fri Mar 04, 2016 11:42 pm
by paddyg
Open a new terminal window (assuming you're running from the X11 desktop) and at the prompt type 'top' (without the quotes); to stop top running, type q. To find out all about it, type 'man top'.

I found I had to use cprofile pretty much exactly as I put in my link, but that might have been because of specific issues with the location of the modules. I piped the output into a file which I then imported into a spreadsheet for analysis. In the example in my link just change the path to where your python file is and the name of the file to whatever its name is!

Below is a copy of what I get running a random python program on this machine and breaking out three times. It's doing some very heavy number crunching for a neural network using numpy; the first two times I stopped it while loading from file, and the third time while doing an array multiplication and broadcast.
The first two tracebacks stop at line 19, which is calling load_data(), in which line 10 is running numpy.loadtxt().
The third goes line 27 -> 186 -> 147;
the last line is the one actually doing the work.

self is just a name conventionally given to the actual object instance which, in Python, has to be explicitly included in the argument list of class methods. It's a slightly confusing aspect of Python.
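A tiny (made-up) example: each instance carries its own data, and self is just whichever instance the method was called on:

```python
class Thermometer:
    def __init__(self, offset):
        self.offset = offset       # stored on this particular instance

    def read(self, raw):
        # self is whichever Thermometer the method was called on
        return raw + self.offset

a = Thermometer(0.5)
b = Thermometer(-1.0)
print(a.read(34.0), b.read(34.0))  # 34.5 33.0 - separate state per instance
```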

In GUI programs the looping system is normally designed so it goes round 'in the background', and you define events to happen 'on click' or 'on something changed', so the display only needs to be refreshed when needed. I haven't found multiprocessing to be faster than threads (but I've not tried using 11 threads, so...).
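You can sketch the idea behind tk's after() without a display: the callback does its work and then reschedules itself, so no while loop or extra thread is needed. This MiniLoop is just an illustration of the pattern, not tk's actual implementation:

```python
import heapq

class MiniLoop:
    """Toy event loop: callbacks scheduled by time, like tk's after()."""

    def __init__(self):
        self.now = 0.0
        self.queue = []     # (due_time, sequence, callback)
        self.seq = 0

    def after(self, delay, callback):
        heapq.heappush(self.queue, (self.now + delay, self.seq, callback))
        self.seq += 1

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            due, _, callback = heapq.heappop(self.queue)
            self.now = due
            callback()

loop = MiniLoop()
ticks = []

def tick():
    ticks.append(loop.now)   # the periodic work (e.g. update a clock label)
    loop.after(1.0, tick)    # reschedule itself, like root.after(1000, tick)

loop.after(1.0, tick)
loop.run(until=5.0)
print(ticks)  # [1.0, 2.0, 3.0, 4.0, 5.0]
```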

Is your code available to look at, on github, bitbucket, dropbox, google drive etc?

Paddy

Code: Select all

patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 19, in <module>
    X = load_data()
  File "learn_whippet_cap.py", line 10, in load_data
    data = np.loadtxt('Data/learndb.csv', delimiter = ',')
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in loadtxt
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in <listcomp>
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 657, in floatconv
    if b'0x' in x:
KeyboardInterrupt


patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 19, in <module>
    X = load_data()
  File "learn_whippet_cap.py", line 10, in load_data
    data = np.loadtxt('Data/learndb.csv', delimiter = ',')
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in loadtxt
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 928, in <listcomp>
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/usr/local/lib/python3.4/dist-packages/numpy/lib/npyio.py", line 659, in floatconv
    return float(x)
KeyboardInterrupt


patrick@patrick-ThinkPad-T420:~/Machine-Learning$ python3 learn_whippet_cap.py
Using logistic sigmoid activation in output layer
-------------------- randomized ----------------- 1457134175.0241566
0  29.9 Training error 0.356397, trend 0.0319
^CTraceback (most recent call last):
  File "learn_whippet_cap.py", line 27, in <module>
    NN.train(X)
  File "/home/patrick/Machine-Learning/perceptron.py", line 186, in train
    error += self.back_propagate(targets)
  File "/home/patrick/Machine-Learning/perceptron.py", line 147, in back_propagate
    self.wi -= self.learning_rate * hidden_deltas * self.ai.reshape(self.input, 1)
KeyboardInterrupt

Re: slow processing

Posted: Sat Mar 05, 2016 1:20 pm
by Davies
thanks for the help with this paddy. I've uploaded the code to Google Drive (I tried to copy and paste it here but it was messing up the spacing and layout); here is the link:
https://drive.google.com/file/d/0Bwb0Da ... sp=sharing

I haven't had a chance to adjust the code yet; it's still untidy, missing the GPIO outputs, and the sensor data is preset variables. It's at the stage where it was just sent to the Pi, but can still be tested on the PC.
The cProfile attempt is still within my code; perhaps you can see if its results correspond to what you expect to see. I'll have a chance to try your example once I'm home later today.

Thank you for your explanation of self and class. I did start looking into class tutorials at the beginning of my Python adventures, but found them very confusing and failed to find a good explanation of why to use them or of their importance.

The code controls the environment within a tortoise house; they are the teenage mutant ninja tortoises, each named after the classic cartoon ninja turtles. Their home is equipped with 4 heat lamps, 1 floor heater pad, 3 fans, and an edible planted vegetation patch which I hope to automate the feeding of, in the hope that will also create higher humidity (or I may get a water spray jet for that).
The ideal temps are 34-38C, but the code is set with temps I could bench test with until the GPIOs are set up.
All would-be GPIOs are currently set to a label on the main page.

Re: slow processing

Posted: Sat Mar 05, 2016 4:02 pm
by paddyg
Quick look and I can't see anything obvious (but it's a long program - that's one of the selling points of using classes: you build your code up from smaller, testable chunks which you can put in multiple files)...

Apart from that, I think it's a bad idea to put the functions that "call themselves" inside threads. They can just be called once in the main thread and tk will look after re-running them subsequently. I ran the code on my RPi 2 and it started very quickly and ran very quickly.

PS I did go through the threaded functions and make sure that ALL of them had a sleep at some stage; one (pumper) didn't, so I added a one-second sleep after the last if clause.

Code: Select all

def start():
    t = threading.Thread(target=temp)
    t.daemon = True
    t.start()
    #t2 = threading.Thread(target=update_c_time)
    #t2.daemon = True
    #t2.start()
    update_c_time()
    t3 = threading.Thread(target=time_lamp)
    t3.daemon = True
    t3.start()
    t4 = threading.Thread(target=pumper)
    t4.daemon = True
    t4.start()
    #t5 = threading.Thread(target=heater)
    #t5.daemon = True
    #t5.start()
    heater()
    t6 = threading.Thread(target=lamp1_of)
    t6.daemon = True
    t6.start()
    t7 = threading.Thread(target=lamp2_of)
    t7.daemon = True
    t7.start()
    t8 = threading.Thread(target=lamp3_of)
    t8.daemon = True
    t8.start()
    t9 = threading.Thread(target=lamp4_of)
    t9.daemon = True
    t9.start()
    #t10 = threading.Thread(target=input_fan)
    #t10.daemon = True
    #t10.start()
    input_fan()
    #t11 = threading.Thread(target=output_fan)
    #t11.daemon = True
    #t11.start()
    output_fan()

Re: slow processing

Posted: Sun Mar 06, 2016 2:50 pm
by Davies
oh my, thank you so much for that; the difference is astonishing. Previously, as I said, the data was taking a while to process, especially presenting the variables to screen, but now it runs so smoothly. Thank you.
I must make a note of my schoolboy errors so they don't happen again. (I'm kicking myself a little, as I did answer someone on here before about using a while without a sleep, saying the contents will be run/checked as quickly as the Pi can manage, so I should take note of my own advice lol. The example below was also created as an answer to someone's question on here about passing variables between files.)

I knew not to run a root.after from within a while, but did not know about not running it within a thread; thank you for pointing that out. It seems I'd be able to change more of the whiles-in-threads to root.after - do you think this would be beneficial?

When I first started putting this together it was just to control the heat mat, then it became "well, if it can do that, it can do this also", hence why it's grown into such a huge thing, with more to add, including graphs. Having it run smoothly now was paramount, as it's only perhaps half way there. Since starting to write the environment script I was able to get two separate .py files working together without using classes.
File 1 was named test_file:

Code: Select all

import bg_run
import time
import threading

t = threading.Thread(target=bg_run.display)
t.daemon = True
t.start()

while 1:
    if bg_run.run == 0:
        print("hello from inside")
        time.sleep(1)
        bg_run.run = 1
File 2 was named bg_run:

Code: Select all

import time

run = 0


def display():
    global run
    while 1:
        if run == 1:
            print("hello from outside")
            time.sleep(1)
            run = 0
I was thinking I should revamp the code to use a method like this, but on the other hand, would it not make the program slower by having to access multiple files? Although on an import I suppose it's only read the first time and then kept in RAM for future use?
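Python does cache modules in sys.modules, so a module is only executed on the first import; a quick check:

```python
import sys
import time
import time as time_again   # second import just hits the cache

print(time is time_again)            # True - the very same module object
print(sys.modules["time"] is time)   # True - cached under its name
```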

Re: slow processing

Posted: Sun Mar 06, 2016 6:19 pm
by paddyg
Good, glad it helped. Really you should invest some time getting used to classes: wherever you find, in your code, that you have nearly the same thing but with a few differences here and there, you are probably looking at something that should be an instance of a class. Crucially, using global variables to communicate between different parts of your application is fraught with danger (sooner or later you will spend hours or days trying to figure out why something doesn't work), and object oriented programming (using classes) is an elegant way of handling this.
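For example, the four lamp threads (lamp1_of .. lamp4_of) look like candidates. A sketch of the idea - the Lamp class and its fields here are my invention, based only on the names in start():

```python
class Lamp:
    """One class instead of four near-identical lamp functions."""

    def __init__(self, number, off_temp):
        self.number = number
        self.off_temp = off_temp   # temperature at which this lamp cuts out
        self.is_on = True

    def check(self, current_temp):
        # the logic lives in one place, shared by every lamp
        if current_temp >= self.off_temp:
            self.is_on = False
        return self.is_on

lamps = [Lamp(n, off_temp=36 + n) for n in range(1, 5)]   # cutoffs 37..40
states = [lamp.check(current_temp=38) for lamp in lamps]
print(states)  # [False, False, True, True] - the two lowest cutoffs switch off
```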

PS I noticed you had a rather complicated and sophisticated structure using ':'.join(map(str, [("%02d" % HH2)...; it would be nicer (in my opinion) to switch to the newer {} format() system:
'{:02d}:{:02d}:{:02d}'.format(HH1, MM1, SS1)

Re: slow processing

Posted: Sun Mar 06, 2016 7:30 pm
by Davies
excellent, really appreciate your input paddy.
Is there a better way of adding minutes and hours to a time? At the moment I'm having to say if the minute is less than 0 then the minute is 59, or suchlike, for each time which can be adjusted.
For example, in pumper():

Code: Select all

timer = (':'.join(map(str, [("%02d" % hour1), ("%02d" % (minute1 + pump_cutout)), ("%02d" % second1)])))
would be changed to

Code: Select all

timer = '{:02d}:{:02d}:{:02d}'.format(hour1,(minute1 + pump_cutout), second1)
however, when writing this def I didn't take into account when minute1 + pump_cutout = 60+,
so now I change it to

Code: Select all

dte1 = list(time.localtime())
hour1 = dte1[3]
minute1 = dte1[4]
second1 = dte1[5]
min_cut = minute1 + pump_cutout
if min_cut >= 60:
    min_cut = min_cut - 60
    hour1 = hour1 + 1
    if hour1 >= 24:
        hour1 = 0
timer = '{:02d}:{:02d}:{:02d}'.format(hour1, min_cut, second1)
I almost feel like Python should know that if I add 1 hour to a time it could take me into the following day. Am I just not using the correct module import, or referencing it incorrectly? Or would this be the correct way and I'm just expecting too much?

Re: slow processing

Posted: Sun Mar 06, 2016 8:03 pm
by paddyg
Yes, you are probably best to 'keep' all your times as seconds from the epoch (1970, for time.time()) and add or subtract differences in seconds, then convert to local time format only when you want to display it. NB you can access the struct_time attributes using dte1.tm_hour, which is much nicer than dte1[3].

Code: Select all

now = time.time() # 1457293533.5612555 as it happens
then = now + 60.0 * pump_cutout # pump_cutout minutes in future
t = time.localtime(then)
timer = '{:02d}:{:02d}:{:02d}'.format(t.tm_hour, t.tm_min, t.tm_sec)
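If you'd rather have Python do the rollover for you, the datetime module's timedelta arithmetic handles minutes, hours and day changes automatically (a sketch):

```python
from datetime import datetime, timedelta

start = datetime(2016, 3, 6, 23, 50, 0)   # 23:50:00
later = start + timedelta(minutes=15)     # crosses midnight automatically

print(later.strftime('%H:%M:%S'))  # 00:05:00 - rolled into the next day
print(later.date() > start.date())  # True
```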