Below are details of what I have found and one alternative I tried, plus a couple of thoughts on what else I might try. I would appreciate views on these, or any other, ways I might improve the performance without using extra hardware.
I'm running this on a Raspbian Lite install on a Pi Zero W.
In outline, my approach runs the motor controller in a separate Python process, fed by a pipe from the control process.
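To make the process split concrete, here is a minimal sketch of the idea; the command names and the worker's behaviour are my own invention, and the real pigpio motor-driving calls are reduced to comments:

```python
# A minimal sketch of the split, with hypothetical command names: the motor
# controller runs in its own process and is fed commands over a Pipe from
# the control process. The real pigpio motor-driving code is stubbed out.
import multiprocessing as mp

def motor_worker(conn):
    """Child process loop: apply commands received from the control process."""
    speed = 0
    while True:
        cmd, value = conn.recv()          # blocks until the parent sends
        if cmd == "set_speed":
            speed = value                 # real code: set the PWM duty cycle
            conn.send(("ack", speed))
        elif cmd == "stop":
            conn.send(("stopped", speed)) # real code: halt motor, clean up
            return

# Control-process side, roughly:
#   parent_conn, child_conn = mp.Pipe()
#   mp.Process(target=motor_worker, args=(child_conn,), daemon=True).start()
#   parent_conn.send(("set_speed", 6000))
#   parent_conn.recv()                    # ("ack", 6000)
```

Keeping the protocol to simple (command, value) tuples means the pipe traffic stays tiny compared with the encoder edge traffic.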
To handle the rotary encoder I use a pigpio callback with no callback function and merely read the tally count at appropriate intervals. I know which way the motor (should) be turning, so I don't need to do anything clever with the two quadrature inputs; simply counting all the edges gives me pretty good accuracy. The little class that I use for this part is on GitHub here.
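For anyone trying the same trick, a sketch of the tally-only read is below. The GPIO numbers and helper name are hypothetical; the pigpio behaviour relied on is that `callback()` called without a callback function installs a default tally callback whose count is read back with `.tally()`:

```python
# Sketch of counting edges with pigpio's default tally callback. The pin
# numbers and helper name are hypothetical; .tally() and EITHER_EDGE are
# standard pigpio Python API.

def attach_tallies(pi, gpio_a, gpio_b, edge):
    """Attach counting callbacks to both quadrature pins and return a
    function that reads the combined edge count. `pi` is a pigpio.pi()
    connection; `edge` would normally be pigpio.EITHER_EDGE."""
    cb_a = pi.callback(gpio_a, edge)   # no func argument -> default tally
    cb_b = pi.callback(gpio_b, edge)
    return lambda: cb_a.tally() + cb_b.tally()

# On the Pi (pigpiod must be running):
#   import pigpio
#   pi = pigpio.pi()
#   read_edges = attach_tallies(pi, 17, 18, pigpio.EITHER_EDGE)
#   read_edges()   # total edges seen on both pins so far
```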
The motor runs at ~13,000 rpm and I use both pins of a 3-pulse-per-rev quadrature encoder. This means the minimum edge period is ~0.75 milliseconds, i.e. up to ~1,300 edges per second.
It seems that pigpio sends callback data from its daemon down its internal connection to my process, where the pigpio Python code picks up the data and updates a local tally count. I can then read the tally count at my convenience (for PID-style feedback, 50 millisecond intervals work pretty well).
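Given the figures above (13,000 rpm producing roughly 1,300 edges per second implies about 6 counted edges per revolution), the 50 ms sampling step reduces to a small conversion; the function name and constant here are my own:

```python
# Turn tally deltas sampled every 50 ms into an rpm estimate for the
# feedback loop. EDGES_PER_REV is inferred from the figures in the post
# (~13,000 rpm giving ~1,300 edges/s); set it to match your own encoder.
EDGES_PER_REV = 6

def rpm_from_tally(prev_count, new_count, interval_s,
                   edges_per_rev=EDGES_PER_REV):
    """Convert the change in the edge tally over one sample interval to rpm."""
    return (new_count - prev_count) / edges_per_rev / interval_s * 60.0

# Control loop, roughly:
#   last = read_edges()
#   while running:
#       time.sleep(0.05)                      # 50 ms sample interval
#       now = read_edges()
#       rpm = rpm_from_tally(last, now, 0.05) # feed this into the PID
#       last = now
```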
The CPU utilisation I see on a Pi Zero is:
* no monitoring of encoder, any motor speed: CPU ~10%
* monitoring on, motor stationary: CPU ~10%
* 1 motor at 1,600 rpm (about the slowest reliable speed): CPU ~28%
* 1 motor at 6,000 rpm: CPU ~60%
* 1 motor flat out (13,000 rpm): CPU ~84%
* 2 motors at 1,600 rpm: CPU ~30%
* 2 motors at 6,000 rpm: CPU ~60%
* 2 motors flat out: CPU ~84%
Clearly there is some smart stuff going on that increases the efficiency of processing the callbacks as the rate increases.
I have also checked with top to see where the CPU time is going. At idle the pigpio daemon uses ~7.5% CPU; with 2 motors flat out it uses ~20% CPU. The rest is the Python process. In these tests I was not running any extra code to read the tally counts or do other processing, so I believe my process's CPU is almost entirely pigpio callback processing.
I tried writing a C process to monitor the pins using WiringPi, running separately and writing to shared memory that I could watch from the Python process. While this had much lower CPU utilisation, it was pretty hopeless: it started dropping edges at 2,000 rpm and quickly got a LOT worse.
The other ideas I thought might improve things are:
- use the C interface to pigpio in my code. I'd write a small C wrapper so most of my code would still be Python, but pigpio's code to pick up the edge callbacks would then be C and hopefully much faster!
- implement an extension to pigpio's daemon specifically geared to this type of use: it would count edges itself and send messages back on a timer (say 10 or 20 times a second), each invoking a callback in my code. This would drastically reduce the message rate between the daemon and my code and hopefully cut overall CPU utilisation substantially. However, this would be MUCH harder than option 1!
- totally roll my own and replace pigpio. Not at all keen on this idea - there is clearly a lot of very smart stuff going on in pigpio that I would rather not have to understand in detail!