Daverj wrote: not requiring the program to stop and wait for an interrupt.

rurwin wrote: That would be either a contradiction in terms or how every computer handles interrupts.

Well of course physically the program does stop while the ISR is executed. What I meant was not adding a call in the program that makes it stop and wait for the interrupt to be serviced, as was described earlier in the thread. That's only useful if the program is doing something that requires waiting for an event, such as processing a stream or waiting for input.
rurwin wrote: The encoders I have experience of only have two outputs, A and B, and they only require one interrupt.

Hardware interrupts on general-purpose pins usually involve a separate interrupt per pin, but in this case those interrupts can all be serviced by the same interrupt service routine.
Many rotary encoders include a third pin, "Index", which lets you reset your position counter at a known physical position of the shaft, allowing an incremental encoder to be used like an absolute encoder.
And yes, a rotary encoder could generate a lot of interrupts if it is high resolution and turned quickly, so clearly there would be practical limits on what the Pi could handle. I suspect that with very small ISRs written in assembler, each could execute in under 100 ns and have very little impact on the rest of the system.
email@example.com wrote: Think of it this way: you split your program into two threads, which then run in parallel. One of them waits for the interrupt (it waits in the kernel; it's not busy-polling using CPU). When the interrupt happens, the other thread is stopped and the thread that was waiting for the interrupt starts. When it's done it goes back to sleep, waiting on the next interrupt, and the main program carries on as if nothing has happened. It shouldn't miss the interrupt, but it may be delayed if there's a higher-priority thread/program active at the time.

I haven't had much time to play with the Pi yet, but when I do I'll give your method a try.
It sounds like your method isn't using a real interrupt service routine written by the user, but is instead getting Linux to switch tasks in response to the interrupt. Still a true interrupt, but with a lot more delay and overhead before it is serviced.
What I had in mind would, I think, involve a lot less overhead for Linux. I was thinking of writing a very small interrupt service routine in assembler, so it could run very quickly, and sticking its address into the CPU interrupt vector table. Then just have a common variable that both it and the main program could access.
I also don't see the need for the mutex in your example. If the variable is something that can be accessed in a single instruction cycle (i.e. a 16-bit unsigned int can be read or written in a single instruction, even using the ARM Thumb instruction set), then reading it from the main program will always give a valid value, even if the interrupt happens one cycle before or after the variable is read. For reading arrays or larger variables, or if the main program had to modify the variable, then I could see the need.