IllustriousFrosting2 wrote: ↑Tue May 05, 2020 2:59 pm
Hello!
I am currently using Python to transfer a fairly large (~400MB-1GB) file directory over a network. Using just one port right now I predict it would take 45 minutes to complete the entire transaction. Would it be unreasonable to open a hundred or so ports to get the job done in about half a minute?
Thank you!!
Gut feeling says you would get no overall improvement, and if you include the time spent coding and debugging it, you would almost certainly come out behind.
Presumably you are using TCP, which is already very good at filling a link with a single connection.
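For what it's worth, a single-connection sender in Python can already be quite efficient by handing the whole file to the OS with socket.sendfile(), which uses the zero-copy os.sendfile() path where available. A minimal sketch (host and port are placeholders for the receiver's address):

```python
import socket

def send_file_over_tcp(path, host, port):
    """Stream one file over a single TCP connection.

    socket.sendfile() uses the kernel's zero-copy sendfile() where the
    OS supports it, and falls back to an ordinary send() loop otherwise.
    """
    with open(path, "rb") as f, socket.create_connection((host, port)) as sock:
        sock.sendfile(f)
```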
The Linux disk cache will hide most of the disk I/O delays, especially if you have a 4GB Pi (you are using a Pi 4 with GigE, aren't you?).
Read the files with a decent block size (a power of two larger than 4KB - the "cp" command uses 128KB).
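In Python that just means reading in large chunks so each read() syscall moves 128KB instead of a small default buffer. A minimal sketch:

```python
BLOCK_SIZE = 128 * 1024  # same block size "cp" uses

def read_in_blocks(path, block_size=BLOCK_SIZE):
    """Yield a file's contents in block_size chunks."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:  # empty bytes means end of file
                break
            yield chunk
```

Each chunk can then be handed straight to the socket.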
Consider posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL) to double the kernel's readahead.
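That call is exposed in Python's os module on Linux, so no C is needed for this part. A sketch that opens a file with the sequential hint applied:

```python
import os

def open_sequential(path):
    """Open a file read-only and hint the kernel that it will be read
    sequentially, which on Linux doubles the readahead window."""
    fd = os.open(path, os.O_RDONLY)
    if hasattr(os, "posix_fadvise"):  # POSIX/Linux only; absent on Windows
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    return os.fdopen(fd, "rb")  # the returned file object owns the fd
```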
Disk writes should appear instant, since they go to the page cache and are flushed to the SD card in the background.
You might get a small improvement by using something faster than Python (C is more common for this sort of program).
Just guesses, never tried it!!!