Hi,
I have a Raspberry Pi 3 with a large number of files (>100,000), each about 80 KB, that need to be deleted from the SD card. They are image files that I'm capturing over the course of a day, so several GB of data in total. I'm finding that the two methods I've tried take a very long time, on the order of 10 minutes to remove them all.
The card is formatted ext4. It is a 32GB Samsung EVO+.
During the delete process the card seems to be written to in 80 KB chunks over and over again, so it doesn't appear that only the filesystem metadata (ext4's equivalent of a file allocation table, i.e. the inodes and directory entries) is being updated. It looks as though the file data on the card is being overwritten.
Top shows that one CPU is at or near 100% in the I/O wait state; the others are 95%+ idle.
I've tried:
rm pathtofile/* - This doesn't work as written, since the shell expands * into an argument list that causes an "argument list too long" error, so I need to do it block by block.
find pathtodir -type f -delete - This is slower than rm for 10,000 files, and for larger numbers it fails outright: it goes nowhere even after hours.
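For reference, the "block by block" workaround can be sketched with find piped to xargs, which batches the file names so that no single rm invocation exceeds the kernel's argument-list limit. This is only a minimal illustration: the temp directory and file names below are hypothetical stand-ins for the real image path, and I'm not claiming it's any faster than the methods above.

```shell
#!/bin/sh
# Minimal sketch of batched deletion. A bare glob (rm dir/*) fails
# because the shell expands it into one huge argument list; xargs
# splits the stream of names into chunks that fit within the limit.
DIR=$(mktemp -d)                 # hypothetical stand-in for pathtodir

# Create some dummy files to delete (stand-ins for the captured images).
for i in $(seq 1 1000); do : > "$DIR/img_$i.jpg"; done

# find emits one NUL-terminated name at a time; xargs groups them
# into rm invocations of a safe size. -print0/-0 handles odd names.
find "$DIR" -type f -print0 | xargs -0 rm

ls -A "$DIR" | wc -l             # prints 0: the directory is now empty
```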
I know that SD cards operate differently than hard disks and that writes take longer, but I thought deleting would only touch the filesystem metadata, so even with a large number of files there shouldn't be that much writing to the card to delete them.
Any pointers that could move this from a minutes to a seconds time frame would be appreciated.
Gord_W