This is a bit of a tough one to explain but I will try my best:
I am working on a live music gig project using the RPi3 as a "synth module" for my MIDI keyboard. I'm hoping, in future, to use this setup as a synth exclusively, because synths are quite expensive these days...
None of the software I'm using is CPU-bound at all (20-40% on one core is the most I've seen thus far), but memory is a completely different story. I have many sample banks stored on external storage, most quite large. I don't use them all at once, but the one I use the most, the piano bank, is 512 MB. Of course, that's all linear PCM data, which is effortless to spit out of the speakers.
The problem appears when I'm switching banks or layering two banks, and I think we can all see why. When playing live, we can't assume we're only going to use a few samples, so that whole 512 MB bank gets read into RAM. Soon enough I'm hitting the limits of RAM, and swap kicks in. I wouldn't have an issue with this if the swapping didn't take everything down with it as it writes to the SD card, including any other module that doesn't use samples. I don't want to write to the SD card anyway, at least not that much!
So in a nutshell, I'm dealing with massive amounts of memory being paged in and out, which is tolerable as long as I don't hit the "edge". Besides choosing smaller sample banks (which I really don't want to do), are there any solutions for dealing with these "spikes"?
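For what it's worth, here's how I've been confirming that swap writes are what stalls everything (just reading the kernel's swap counters from `/proc/vmstat`; nothing here is specific to my setup):

```shell
# pswpin/pswpout count pages swapped in/out since boot.
# Sample them twice, a second apart, while loading a bank:
# if pswpout is climbing, the kernel is writing swap to the
# SD card at that moment.
grep -E '^pswp(in|out)' /proc/vmstat
sleep 1
grep -E '^pswp(in|out)' /proc/vmstat
```

During a bank switch I can watch `pswpout` jump by thousands, which lines up exactly with the audio dropouts.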
I've googled around and found some mentions of "virtual memory compression" for the Linux kernel, and that just might be the golden ticket. I'm comfortable in the terminal, so no solution is too difficult to try; I just don't know the best path to take given my situation. Anything that reduces or eliminates the lag as much as possible would be great.
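From what I've read, the zram route would look roughly like the sketch below. This is untested on my part and assumes a kernel built with zram support (the module name, compressor choice, and 256M size are just placeholders I've seen suggested, not known-good values for the RPi3):

```shell
# Load the zram module with a single device
# (assumes the kernel was built with CONFIG_ZRAM).
sudo modprobe zram num_devices=1

# Choose a fast compressor and a size for the compressed pool.
# 256M here is a guess; compressed capacity depends on how well
# the data compresses, and PCM sample data may not compress much.
echo lz4  | sudo tee /sys/block/zram0/comp_algorithm
echo 256M | sudo tee /sys/block/zram0/disksize

# Use the device as swap, with a higher priority than the
# SD-card swap so the kernel compresses into RAM first and
# only falls back to the card when zram is full.
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0
```

Is something along these lines the right approach here, or is there a better-suited mechanism (zswap? tuning `vm.swappiness`?) for smoothing out these load spikes?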