ftp max file size


by scape » Thu Jun 28, 2012 4:37 pm
So I tried two different FTP programs and noticed that the Pi seems to fail on transfers (uploads in particular) of files larger than about 170MB. I'm curious whether this happens because the FTP server is buffering the entire upload in memory before writing it to disk. Has anyone else come across this issue?
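
For anyone who wants to reproduce this, a minimal client-side test along these lines should exercise the same path (host, credentials, and filename below are placeholders, not from my setup):
Code:
# Minimal upload test using Python's standard ftplib.
# Host, login, and filename are placeholders -- substitute your own.
from ftplib import FTP

ftp = FTP('192.168.1.50')              # the Pi's address (placeholder)
ftp.login('pi', 'raspberry')
with open('bigfile.bin', 'rb') as f:
    # storbinary() reads and sends the local file in 8 KiB chunks, so
    # the client side never holds the whole file in memory.
    ftp.storbinary('STOR bigfile.bin', f, blocksize=8192)
ftp.quit()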
by whiteshepherd » Sat Jun 30, 2012 4:04 pm
I noticed the same thing last night. I decided to "stress" test my Pi server MySQL config and used "ftp" to log into my production server to download my 2GB MySQL database as a SQL file. The problem is that this Debian ftp client appeared to use 1MB of RAM for every 1MB of file downloaded! It's almost as if it is trying to store the file in RAM and on disk at the same time as it downloads. So my FTP download of course crashed my Pi. I had this problem with many programs on older Linux for many years: saving any kind of file took as much RAM as the file size, regardless of how much of the file had already been written to disk. Windows never showed this problem (ftp just took the program's own size in memory), and newer Ubuntu releases seem to have fixed it too. What caused it in older Linux I don't know, as I have limited coding experience.

I'm downloading my 2GB database onto Windows now, and I'm going to use scp over SSH to copy it to my Pi to see if that gets around the memory issue. I'll know in about an hour.
by scape » Sun Jul 01, 2012 12:56 pm
I tried various FTP daemons on Debian Squeeze on the Pi, and all seemed to do this, so I ended up abandoning FTP and writing my own Twisted Python solution, which instead streams the file to disk during the upload. Once I get it in a bit better shape and stress test it to see how much memory a transfer consumes, I'll post the code here to use freely. A shame, because I didn't want to reinvent the wheel; I thought FTP was good enough. Let us know if you find a solution for FTP and large files :D
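
The gist of it is just a Twisted Protocol that writes each chunk to disk as it arrives instead of buffering; a rough sketch of the idea (port and filename are placeholders, not the actual code I'll be posting):
Code:
# Rough sketch: stream each received chunk straight to disk in
# dataReceived() rather than accumulating the upload in memory.
from twisted.internet import protocol, reactor

class StreamingUpload(protocol.Protocol):
    def connectionMade(self):
        # One file per connection; the name is a placeholder.
        self.f = open('upload.bin', 'wb')

    def dataReceived(self, data):
        self.f.write(data)             # straight to disk, no buffering

    def connectionLost(self, reason):
        self.f.close()

factory = protocol.Factory()
factory.protocol = StreamingUpload
reactor.listenTCP(2121, factory)       # arbitrary port
reactor.run()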
by Joe Schmoe » Sun Jul 01, 2012 1:21 pm
I never knew that (i.e., never experienced it) about FTP, but I guess I'm not entirely surprised.

But the thing is, nowadays almost everyone uses SSH and its associated tools (e.g., SCP), so I guess the question is: why aren't you using SCP?
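
Even from Python the equivalent is only a few lines over SFTP. A sketch, assuming the paramiko library (the thread doesn't name one; host, credentials, and paths are placeholders):
Code:
# Hypothetical SFTP upload with paramiko; host, credentials, and
# paths are placeholders. sftp.put() streams the file in chunks.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.50', username='pi', password='raspberry')
sftp = ssh.open_sftp()
sftp.put('bigfile.bin', '/home/pi/bigfile.bin')
sftp.close()
ssh.close()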
Never answer the question you are asked. Rather, answer the question you wish you had been asked.

- Robert S. McNamara - quoted in "Fog of War" -
by jojopi » Sun Jul 01, 2012 5:14 pm
Joe Schmoe wrote:I never knew that (i.e., never experienced it) about FTP, but I guess I'm not entirely surprised.
I think you should be surprised. Or rather, deeply sceptical. I would certainly be very surprised to learn that an FTP implementation cannot handle files bigger than available memory. I have not learned any such thing in this thread, of course, because the OP has named neither the distro nor any of the "various" FTP daemons that allegedly have the issue. The description of the problem, "pi seems to fail", is also so vague as to neither support nor contradict the diagnosis.

The comments about memory usage sound very like the usual misconceptions over what it means for memory to be "used" or free in Linux. When you have recently written a file to disk (or read it), it is normal for many of the pages to still be in RAM. They will stay there until the memory is needed for something else. (Or the file is deleted.)
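
You can see this directly in /proc/meminfo: pages held in the cache are reclaimable on demand, so a plain "free" figure understates what is really available. A rough illustration (standard Linux field names; values are in kB):
Code:
# Rough illustration: count reclaimable page cache as available RAM.
# Parses the standard /proc/meminfo fields (values in kB).
info = {}
with open('/proc/meminfo') as f:
    for line in f:
        key, value = line.split(':')
        info[key] = int(value.split()[0])

free = info['MemFree']
reclaimable = info['Buffers'] + info['Cached']
print('truly free:       %d kB' % free)
print('reclaimable:      %d kB' % reclaimable)
print('effectively free: %d kB' % (free + reclaimable))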

Good point about SSH though.
by Joe Schmoe » Sun Jul 01, 2012 5:19 pm
I didn't say I believed it. I just said I never knew it (and still don't) and had never experienced it (and still haven't).

But the real point is that even if it is true (but it probably isn't), one tends to assume that the new tools (i.e., SSH & friends) will have fixed all the old bugs (*).

(*) Alas, this assumption is not quite true - one example is the shell problem, recently written up in news:comp.unix.shell.
by CCitizenTO » Mon Jul 02, 2012 5:11 pm
The question I have is why the FTP daemon isn't buffering only a certain amount of data before writing it to disk.

I would imagine it has to do with flash memory being used, because on conventional disks most FTP programs seem to reserve a section of disk space (like a block for the whole thing) and then write to a .part file or something like that, at least while downloading.
by jojopi » Mon Jul 02, 2012 11:17 pm
CCitizenTO wrote:The question I have is why the FTP daemon isn't buffering only a certain amount of data before writing it to disk.
The question I have is what makes you think it is not.
CCitizenTO wrote:I would imagine it has to do with flash memory being used, because on conventional disks most FTP programs seem to reserve a section of disk space (like a block for the whole thing) and then write to a .part file or something like that, at least while downloading.
You seem to know a lot about how FTP programs behave. (I would not have expected them to fallocate, nor to write to a temporary file and rename.) But your theory that they behave differently on flash media is preposterous. Anyway, I have tested uploading a 3.6GiB file to (an SD card in) a Pi, and experienced no problems:
Code:
230 User pi logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put distro.img
local: distro.img remote: distro.img
227 Entering Passive Mode (192,168,30,30,181,27)
150 Opening BINARY mode data connection for 'distro.img'.
226 Transfer complete.
3892314112 bytes sent in 634 secs (6141.16 Kbytes/sec)
ftp>
This is on the Debian wheezy "spindle" beta image. I am reluctant to name the FTP daemon that works until someone can prove that there exists at least one that does not.
by scape » Sun Jul 08, 2012 1:03 pm
I only ever explained that this issue is what I experienced attempting to use FTP on my Pi; I have yet to determine the reason for it, and I decided to abandon the FTP solution altogether since the issue doesn't make sense to begin with. The Pi is running Debian Squeeze, and I tried both vsftpd and proftpd on it; on the client side I tried both a custom Python approach using the ftplib module and the FileZilla FTP client, on two different computers running Windows 7 64-bit. I think if you reread my posts, you'll see that I never pinpointed the reason, nor did I forget to mention the OS the Pi was running on. Talk about some harsh forum users! Beyond that, I'm glad someone else is not having this issue; but it does seem notable that at least one other user has had a similar problem transferring large files over FTP.
by jojopi » Sun Jul 08, 2012 3:18 pm
scape wrote:The Pi is running Debian Squeeze, and I tried both vsftpd and proftpd on it; on the client side I tried both a custom Python approach using the ftplib module and the FileZilla FTP client, on two different computers running Windows 7 64-bit. I think if you reread my posts, you'll see that I never pinpointed the reason, nor did I forget to mention the OS the Pi was running on. Talk about some harsh forum users!
I appreciate that you only questioned whether the issue might be due to storing the entire upload in RAM. But a couple of other participants have assumed that to be the case and then gone on to question the sense of it, effectively insulting the intelligence of the program authors. I do not think it is harsh to point out that the diagnosis was unproven, and indeed wrong.

There have been three different Pi images based on Debian Squeeze. I have now tested the most recent one (2012-04-19) with both vsftpd and proftpd-basic. (And the implementation that I previously tested on wheezy was plain "ftpd", which I believe is derived from OpenBSD.) All of these were able to receive files well in excess of the available RAM, and I was able to confirm that they were all writing the data to disk as it was arriving.
Code:
(vsftpd)
227 Entering Passive Mode (192,168,30,30,87,206).
150 Ok to send data.
226 Transfer complete.
495892225 bytes sent in 171 secs (2897.76 Kbytes/sec)

(proftpd)
227 Entering Passive Mode (192,168,30,30,212,26).
150 Opening BINARY mode data connection for usd.img
226 Transfer complete
1083999823 bytes sent in 359 secs (3022.69 Kbytes/sec)
Before running these tests I downgraded the firmware and kernel image back to the original April 19 release files. I did not enable any swap. I am using vm.min_free_kbytes=8192, which I believe has been the default in all squeeze-based releases.
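
For anyone repeating the test, one way to confirm the streaming behaviour is to poll the size of the target file during the transfer; if the daemon writes as it receives, the size grows steadily rather than jumping at the end. A sketch (the path is a placeholder):
Code:
# Poll the upload target once a second during the transfer. Steady
# growth means the daemon is writing to disk as data arrives.
# The path is a placeholder; stop with Ctrl-C.
import os, time

path = '/home/pi/distro.img'
while True:
    try:
        print('%d bytes on disk' % os.path.getsize(path))
    except OSError:
        print('file not created yet')
    time.sleep(1)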

Is it possible that you are using a different firmware or kernel? Whatever was causing your problem, it was not the FTP daemons' fault, anyway, and you were merely lucky to avoid it by changing to a different protocol or implementation. The problem is presumably still there.
by scape » Mon Jul 09, 2012 12:11 am
It wasn't my intention to mislead anyone; I was simply confused myself as to why the transfers were failing and decided to put out a feeler on the forum to see if it was not just me. I've since started from a fresh image and no longer have these issues, go figure. Thanks for your interest; help is always appreciated :)