
I'm writing a toy logic analyser / oscilloscope in assembler for my toy OS (written in assembler) [ie. no C, no libraries]. At the moment I'm just sampling the system timer (SYST_LO) register (at 44 Msamples/s) but I'll be switching to (8 pins from) GPIO_0 when I'm back home and can rig up a signal generator.
I'm sort of familiar with the Nyquist frequency / low-pass filters, and with oversampling/decimation (+ noise) to increase the resolution of an ADC, but I'm not entirely sure how any of this applies to a logic analyser.
Say I have a time base of 10 µs (ie. one division, 40 pixels, represents 10 µs); that gives me 440 samples from which I want to choose 40 to display. That's effectively sampling at 4 MHz, so the fastest signal I could display is 2 MHz (ie. a pin alternating between 1 and 0 at 2 MHz).
To get my 40 samples (for one division) I could simply take every 11th one from the sample buffer. But to avoid aliasing I'm considering adding a pseudorandom(*) dither of up to +/- 5 samples, ie. instead of taking samples 1, 12, 23, 34, 45, ... I'd take e.g. 1, 10, 27, 29, 48, ...
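
Roughly what I have in mind, sketched in C for readability (the real thing is assembler, so treat the names — pick_division, prng(), the buffer sizes — as placeholders):

```c
#include <stdint.h>

#define SAMPLES_PER_DIV 440            /* 44 Msamples/s x 10 us            */
#define PIXELS_PER_DIV   40            /* pixels per division              */
#define STRIDE           11            /* 440 / 40                         */
#define DITHER            5            /* jitter each pick by +/- 5        */

uint32_t prng(void);                   /* whatever PRNG I end up with,
                                          e.g. the xorshift below          */

/* Pick 40 display samples from one division's worth of raw samples,
   jittering each nominal position (0, 11, 22, ...) by a pseudorandom
   offset in [-5, +5] and clamping to the buffer.                          */
void pick_division(const uint8_t *raw, uint8_t *out)
{
    for (int px = 0; px < PIXELS_PER_DIV; px++) {
        int nominal = px * STRIDE;
        int offset  = (int)(prng() % (2 * DITHER + 1)) - DITHER;
        int idx     = nominal + offset;

        if (idx < 0)                   idx = 0;
        if (idx > SAMPLES_PER_DIV - 1) idx = SAMPLES_PER_DIV - 1;

        out[px] = raw[idx];
    }
}
```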
Is this the right way to go about it?
(*) I'm reading about PRNGs on Wikipedia. An xorshift RNG looks like a fun one to implement.
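
The one I'd probably try first is the 32-bit xorshift (the 13/17/5 shift triple from Marsaglia's paper, as given in the Wikipedia article), since it's only a handful of instructions. Sketched in C, with the state/seed as placeholders:

```c
#include <stdint.h>

static uint32_t xs_state = 0x12345678u;   /* any non-zero seed */

/* Marsaglia's 32-bit xorshift: three shift-and-XOR steps per call. */
uint32_t xorshift32(void)
{
    uint32_t x = xs_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    xs_state = x;
    return x;
}
```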