(You may find this a most immature question, but...) would an SSD enhance performance drastically, or have just a slight effect?
Please share your experience. I'm really interested in this option.
That's easy then. Get a Pi4.
How can I find out whether my SD card is slow or not?
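One rough way to check without extra tools is to time a sustained write from Python and work out the throughput. This is an illustrative sketch (`measure_write_speed` is not a standard function); dedicated tools like `dd` or `hdparm` will give more reliable numbers.

```python
import os
import time

def measure_write_speed(path="sd_speedtest.bin", size_mb=16):
    """Write size_mb of data to path and return throughput in MB/s."""
    block = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data onto the card, not just the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed
```

Run it from a directory on the SD card itself. As a rough yardstick, a Class 10 card should manage at least about 10 MB/s sequential writes.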
Hassibayub wrote: ↑Sat Jul 11, 2020 12:04 pm
C++ is the option I'm aware of, but for a project like an iris recognition system it is better to use powerful Python rather than C++, which is hard to write and not easy for intensive projects.

Python is an interpreted language and as such is always going to be slower than C++ (which is compiled). Python itself is actually written in C. It may certainly be easier for you to write in Python, but C or C++ is (a) more powerful and (b) faster.
Given what you're running, that "25%" is probably 100% of one core (of the 4 CPU cores your Pi has). If there is anything you can do to run parts of your program in parallel (so those parts are scheduled on different CPU cores), it might run faster.
If my use of the word "all" is what you're commenting on, then I'll stipulate.
Hassibayub wrote: ↑Sat Jul 11, 2020 11:41 am
I've created an iris recognition program that takes a picture, applies some mathematics to extract features, and converts them to a binary code. When I verify a user (checking whether they exist in the system), the locker unlocks if they do. verify.py takes 10 seconds, while on my laptop it barely takes 1.5 seconds, and 10 seconds is huge. When I look at the task manager on the Raspberry Pi, I see my program only uses about 25% of the CPU. I want it to somehow use the full resources (90-100%) so it runs faster.
PS: I tried to describe my problem in few words. If you want more elaboration, please let me know.

If it's a quad-core Pi, then it's highly likely that your program is single-threaded and maxing out one CPU core.
W. H. Heydt wrote: ↑Sat Jul 11, 2020 8:59 pm
This leaves several areas to look at for optimization. First, choice of language processing method. If I am not mistaken, there are ways to actually compile Python programs instead of running everything through an interpreter. Second, rewrite the program in a more efficient language than Python, such as C or C++. Third, rewrite the code to multi-thread it as much as possible to take advantage of having multiple CPU cores. Fourth, get a newer generation of Pi, substituting a Pi 4B for the present Pi 3B. Fifth, overclock the Pi.

Methods Four and Five require the least effort and no code changes, I would suggest.
jahboater wrote: ↑Sat Jul 11, 2020 11:42 pm
Methods Four and Five require the least effort and no code changes, I would suggest.

Also known as "throw hardware at the problem." That works right up until you're using the best available hardware, and if it isn't enough, you're back to trying to improve the software.
Which can often give the best results.
Rascas wrote: ↑Sat Jul 11, 2020 1:04 pm
If your Python program is using only 25% CPU, it is because it is single-threaded. Make it multi-threaded to use all 4 cores and it will be about 4 times faster.

Great! You pointed me in the right direction! All I need to do is use multithreading in Python... Thank you!
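One caveat worth noting here: for CPU-bound work like image maths, CPython's global interpreter lock (GIL) means the `threading` module alone usually won't spread the load across cores; the standard-library `multiprocessing` module sidesteps that by using separate processes. A minimal sketch, where `extract_features` and `process_parallel` are hypothetical stand-ins for the OP's feature-extraction code:

```python
from multiprocessing import Pool

def extract_features(chunk):
    # placeholder for the CPU-heavy maths applied to one slice of the image
    return sum(x * x for x in chunk)

def process_parallel(data, workers=4):
    # split the data into one chunk per worker
    step = len(data) // workers
    chunks = [data[i * step:(i + 1) * step] for i in range(workers)]
    chunks[-1] = data[(workers - 1) * step:]  # last chunk takes any remainder
    with Pool(workers) as pool:  # one process per core, no GIL contention
        return pool.map(extract_features, chunks)

if __name__ == "__main__":
    results = process_parallel(list(range(1_000_000)))
    print(sum(results))
```

With 4 workers on a quad-core Pi, `top` should show close to 400% CPU while the pool is busy, rather than the 100% (one core) a single-threaded run shows.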
I don't know your program code, but getting a faster microSD card or an SSD will give you only a fraction of the speedup, if any at all.
malchore wrote: ↑Sat Jul 11, 2020 5:22 pm
The OP (original poster) said it was a run time difference of 10 secs vs 1.5 secs on a "laptop". And no one thought to ask him what specs the laptop was using?

Exactly. My laptop has a newer Intel processor, and I'm running a 64-bit processor.
If his laptop was using a much newer Intel 4GHz chip, that might explain the single-core runtime difference. Intel is still the king (circa July 2020) for single-core speed. (Sidenote: That's the one and only spec AMD has to resolve before Intel becomes the AMD of 10 years ago.)
Anyways, to the OP: your Python code sounds like it's almost certainly single-threaded. (By the way, it's the same on your laptop as well.) Rewrite the Python code to take advantage of multi-threading. You can read here: https://www.tutorialspoint.com/python/p ... eading.htm

Now someone got the problem and is pointing me in the right direction. I will surely check the link and learn how to use multithreading.
In super basic terms, you'll need to split your image into 4 pieces of data and then "process" the pieces against the known database. OR compare the image against 4 known images simultaneously.
As for all the previous replies to this thread: they were talking about the "speed" of your storage devices (disks). If you have a lot of disk I/O while comparing images, then having "fast" disk read speeds could speed up the performance of your iris scan considerably on the Raspberry Pi.
And lastly, if the OS on your Raspberry Pi is 32-bit vs. a 64-bit OS on your laptop, which Python libraries are in use? I'm very new to Python, but there might be speed trade-offs between 32-bit and 64-bit libraries. (I could very well be wrong on this front.)

Thank you so very much. Your post really helped me find the problem. I liked the way you explained multithreading: easy and understandable.
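The "compare against 4 known images simultaneously" idea can be sketched with one process per comparison. Iris codes are conventionally compared by fractional Hamming distance; `hamming_distance` and `best_match` below are illustrative names, not code from the original program.

```python
from concurrent.futures import ProcessPoolExecutor

def hamming_distance(code_a, code_b):
    # fraction of differing bits between two binary iris codes
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

def best_match(probe, enrolled, workers=4):
    # compare the probe code against every enrolled code in parallel,
    # one comparison per worker process
    with ProcessPoolExecutor(max_workers=workers) as pool:
        distances = list(pool.map(hamming_distance,
                                  [probe] * len(enrolled), enrolled))
    best = min(range(len(enrolled)), key=distances.__getitem__)
    return best, distances[best]
```

In practice the real bit strings are thousands of bits long, so each worker has enough arithmetic to do to justify the process start-up cost.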
rpdom wrote: ↑Sun Jul 12, 2020 5:58 am
To my surprise, all of the graph-plotting queries were fast. The one that took ages was an almost insignificant one that just reported the current outdoor temperature in the summary area of the plot. I tweaked the SQL on that query a little and it now takes less than 0.5 seconds to generate that page. I think that when I originally wrote the code there was hardly any data in the database, so it ran fast enough, but now that there are some years' worth of data in there, it takes a lot longer with the old query method.

I'll check each section manually to see which one is taking most of the time. I'll rewrite that section in C, as described in https://docs.python.org/3.8/extending/extending.html, and make my code run multi-threaded.
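Rather than checking each section manually, the standard-library profiler can find the hot spot directly, and it needs no code rewrite. A minimal sketch using `cProfile`; `slow_section`, `fast_section`, and `verify` are hypothetical stand-ins for the stages of the real program.

```python
import cProfile
import io
import pstats

def slow_section():
    # stand-in for the expensive feature-extraction maths
    return sum(i * i for i in range(200_000))

def fast_section():
    # stand-in for a cheap step such as loading the stored code
    return sum(range(1_000))

def verify():
    slow_section()
    fast_section()

def profile_report(func):
    """Run func under cProfile and return stats sorted by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    func()
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    return buf.getvalue()

if __name__ == "__main__":
    print(profile_report(verify))
```

The function at the top of the cumulative-time column is the one worth rewriting in C (or parallelising); the rest can usually be left alone.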
True. But I repeat: it's risk-free and near-zero effort, with no code change, for quite a large gain.