Hey sorry for the late reply, busy weekend.
Okay so let me describe what I'm trying to achieve and why I used individual files.
My program sends packets of information to a server throughout its operation. The RPi is in a remote location and is connected to the internet via a 3G dongle so its connection may bounce up and down. The power to the RPi may be lost every now and again.
Now I don't want to lose any of these packets, so when the server is unavailable I cache them to individual files as Python pickles.
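For what it's worth, here's roughly the shape of that per-file caching (the directory name, file naming, and the fsync-then-rename step are my assumptions, not necessarily what my real code does — but the rename trick means a power cut can't leave a half-written pickle behind):

```python
import os
import pickle
import tempfile

CACHE_DIR = "cache"  # hypothetical cache directory


def cache_packet(packet, cache_dir=CACHE_DIR):
    """Pickle one packet to its own file, atomically.

    Writing to a temp file, fsync'ing it, then renaming means a power
    cut leaves either no new file or a complete one -- never a
    truncated pickle that would fail to unpickle later.
    """
    os.makedirs(cache_dir, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=cache_dir, suffix=".tmp")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(packet, f)
        f.flush()
        os.fsync(f.fileno())  # force the bytes onto the SD card
    # Only once the file is complete does it get the name the
    # sender loop looks for.
    final_path = tmp_path[:-4] + ".pkl"
    os.rename(tmp_path, final_path)
    return final_path
```

The rename is atomic on the same filesystem, which is the whole point: the sender loop never sees a partially written packet.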
"instead you could just have a single 1GB file with a million fixed length records. Use lseek(), read() or write() to access individual records. Very simple."
The reason I used individual files rather than one large file is that the packets need to be read out of the file one by one, sent to the server, and then deleted. As I understand it, doing that with a single file requires you to read the whole file into memory, remove the first record, and then write the whole file back out again, which would be pretty slow.
I had thought about using a database, and I even did a few tests with an SQLite database. However, doesn't a database store operations in memory before writing them to disk? If the RPi happens to lose power in that window, won't the last few operations that have been stored in memory but not yet written to disk be lost?
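From what I've read since, SQLite can be told to fsync on every commit, in which case only a transaction that was still in flight at the moment of the power cut is lost — anything committed survives. A sketch of how I'd set that up (table layout and function names are just made up for illustration):

```python
import sqlite3


def open_cache(path="packets.db"):
    """Open (or create) the packet cache with durability settings.

    With journal_mode=WAL and synchronous=FULL, SQLite syncs the
    write-ahead log on every commit, so a committed packet survives
    a power cut.
    """
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=FULL")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS packets ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, payload BLOB)"
    )
    return conn


def store_packet(conn, payload):
    with conn:  # the context manager commits (and syncs) on exit
        conn.execute("INSERT INTO packets (payload) VALUES (?)", (payload,))


def pop_oldest(conn):
    """Fetch and delete the oldest packet in one transaction."""
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM packets ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("DELETE FROM packets WHERE id = ?", (row[0],))
        return row[1]
```

That would also replace the read-send-delete dance on individual files with a single SELECT/DELETE transaction per packet.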
"What if the number of i-nodes has been restricted, or the user tries something daft like FAT16?"
The "user" doesn't get to change anything on the RPi; it is configured and installed by me, so this isn't an issue. I have total control of the hardware and the software and how they operate together.
I'm fully aware that there are gaps in my knowledge and that I have made assumptions when designing this program. Please correct me if anything I have said is incorrect so I can keep learning. Thanks!