Bogdan
Posts: 3
Joined: Thu Dec 12, 2013 4:54 pm
Location: Bucharest, RO

Real time GPIO monitoring with Pi4J [SOLVED: use pigpio]

Thu Dec 12, 2013 5:06 pm

I have a project that requires me to measure GPIO events with good time precision (<3 ms). To that end, I'm using Pi4J to monitor events using interrupts, from a dedicated, high-priority thread. Unfortunately, it appears that the precision is less than ideal.

Here's how I'm testing. I'm using an Arduino to set a GPIO pin HIGH for 100ms, then LOW for 2000ms, and so on in an endless loop. I'm interested in the length of time the pin stays HIGH (ideally, 100ms every time). In fact, I find that I get 100ms most of the time, 98-102ms a few times (maybe 10% of the events), and occasionally I get aberrant values (<40ms, >200ms, and sometimes even 1000+ms).

I realize the RPi is not a real time environment, but this particular box isn't really doing anything else (although I haven't been particularly aggressive in turning services off).

I have two questions:
1. Is there any specific technique that would improve my results using Pi4J? (e.g. turn off all services, write the code in some very particular fashion, etc)
2. Should I expect any significant improvements if I switched to a low level programming language for this part of the code (i.e. C)?

I'd appreciate any suggestions or pointers to other pages/conversations on this topic.
Last edited by Bogdan on Fri Dec 13, 2013 9:54 pm, edited 3 times in total.

DougieLawson
Posts: 41424
Joined: Sun Jun 16, 2013 11:19 pm
Location: A small cave in deepest darkest Basingstoke, UK

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 8:44 am

Do it in bare metal.

There's too much running in Linux to get that kind of accuracy. The kernel dispatcher isn't made to support real-time, and there's a massive number of tasks that you simply can't run without.

RiscOS may be a better starting point.

Bogdan
Posts: 3
Joined: Thu Dec 12, 2013 4:54 pm
Location: Bucharest, RO

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 10:16 am

Yes, I am using Raspbian (sorry I failed to mention that in the original post). I understand there's overhead in using Linux, but I still feel it should be able to read ~300Hz waves with a dedicated library, given that people report being able to generate consistent waves in the kHz range from the shell, and in the MHz range from C [1]. What am I missing?

piglet
Posts: 934
Joined: Sat Aug 27, 2011 1:16 pm

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 10:21 am

joan here on the forum has written some pretty comprehensive C libraries that might help:

http://abyz.co.uk/rpi/pigpio/

jojopi
Posts: 3490
Joined: Tue Oct 11, 2011 8:38 pm

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 1:59 pm

Bogdan wrote: In fact, I find that I get 100ms most of the time, 98-102ms a few times (maybe 10% of the events), and occasionally I get aberrant values (<40ms, >200ms, and sometimes even 1000+ms).
I do think that your problem is Java, and not Linux.

Once you have set real-time process priority you take precedence over all other tasks except kernel threads and interrupts. You do not need to worry about background services.

Another thing that you should do is mlockall(). Otherwise (even if you have disabled swap), clean pages can be discarded and re-loaded from the SD card on demand. That obviously is slow.
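Both precautions carry over to scripting languages as well. A minimal sketch in Python, assuming Linux: the `MCL_*` constants are the values from `<sys/mman.h>`, and `mlockall()` is reached via ctypes since the Python standard library has no wrapper for it:

```python
import ctypes
import os

# MCL_* values from <sys/mman.h> on Linux (assumption: glibc layout)
MCL_CURRENT = 1
MCL_FUTURE = 2

def lock_down(priority_wanted=True, memory_wanted=True):
    """Best-effort: request SCHED_FIFO priority and lock all pages in RAM.
    Both calls need root (or the right capabilities/rlimits); failures are
    reported rather than fatal, mirroring the C example's perror() style."""
    ok = True
    if priority_wanted:
        try:
            prio = os.sched_get_priority_max(os.SCHED_FIFO)
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio))
        except OSError as e:
            print("warning: sched_setscheduler:", e)
            ok = False
    if memory_wanted:
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
            print("warning: mlockall:", os.strerror(ctypes.get_errno()))
            ok = False
    return ok

if __name__ == "__main__":
    lock_down()
```

As in the C program below, failure of either call is a warning, not an error, so the script degrades gracefully when run unprivileged.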

In fact, you want to avoid reading or writing from the card at all in your critical timing sections. A read or write can take a significant fraction of a second, or longer, especially if it happens just after a write by another process.

I have reproduced your AVR test, but with 100ms on and off, and timed it in C on a perfectly ordinary Raspbian install.

Code: Select all

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sched.h>
#include <poll.h>

int main()
{
  struct sched_param param;
  struct timeval tv[2];
  struct pollfd pfd;
  int fd;
  char buf[8];
  int i, elap, min=0, max=0, count=0;

  // assume gpio already exported and set to interrupt on both edges
  if ((fd = open("/sys/class/gpio/gpio25/value", O_RDONLY)) < 0) {
    perror("open");
    exit(1);
  }
  param.sched_priority = sched_get_priority_max(SCHED_FIFO);
  if (sched_setscheduler(0, SCHED_FIFO, &param) < 0)
    perror("warning: sched_setscheduler");
  if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
    perror("warning: mlockall");
  pfd.fd = fd;
  pfd.events = POLLPRI;
  for (i=1; ; i^=1) {
    read(fd, buf, sizeof buf);
    lseek(fd, 0, SEEK_SET);
    poll(&pfd, 1, -1);
    gettimeofday(tv+i, 0);
    read(fd, buf, sizeof buf);
    lseek(fd, 0, SEEK_SET);
    if (buf[0] != '0'+i) {
      printf("missed interrupt\n");
      i = 0;
      continue;
    }
    if (i == 0) {
      count++;
      elap = (tv[0].tv_sec-tv[1].tv_sec)*1000000 + tv[0].tv_usec-tv[1].tv_usec;
      if (elap > max) max = elap;
      if (elap < min || !min) min = elap;
      printf("min = %dµs, max = %dµs\n", min, max);
    }
  }
}
After running for half an hour, this says:

Code: Select all

min = 100280µs, max = 100540µs
You could probably improve on this by disabling the SD and USB drivers and interrupts completely, but then you may as well not use a Pi. Or by doing the timing in a custom kernel module interrupt handler.

There is really no reason to go to bare metal unless you are sensitive to things like TLB flushing on context switches.

joan
Posts: 15650
Joined: Thu Jul 05, 2012 5:09 pm
Location: UK

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 2:40 pm

As piglet suggests, the pigpio library does this down to the microsecond. Each edge is time-stamped with the system microsecond clock (the tick). The gpio callback returns the gpio, the gpio's (new) level, and the tick.

I've added a Python module to the pigpio download so you can get the same accuracy from Python.

Code: Select all

#!/usr/bin/python

import pigpio
import time

def cbf(g, L, t):
   message = "gpio=" + str(g) + " level=" + str(L) + " at " + str(t)
   print(message)

pigpio.start()

cb = pigpio.callback(22, pigpio.EITHER_EDGE, cbf)

time.sleep(30)

cb.cancel()

pigpio.stop()
will print lines such as

gpio=22 level=1 at 548556842
gpio=22 level=0 at 551316679
gpio=22 level=1 at 553411795
gpio=22 level=0 at 555269219
gpio=22 level=1 at 557689701
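Given edge reports like those, a pulse length is just the tick difference between an edge and the following opposite edge. A small sketch (the one wrinkle is that the tick is an unsigned 32-bit microsecond counter, so it wraps roughly every 72 minutes; the modulo arithmetic below handles that):

```python
def high_durations(events):
    """events: ordered (level, tick) pairs as delivered to the callback.
    Returns the length of each HIGH pulse in microseconds, wrap-safe."""
    durations = []
    rise = None
    for level, tick in events:
        if level == 1:
            rise = tick
        elif level == 0 and rise is not None:
            # unsigned 32-bit subtraction survives a tick wraparound
            durations.append((tick - rise) & 0xFFFFFFFF)
            rise = None
    return durations

# the ticks from the sample output above
events = [(1, 548556842), (0, 551316679),
          (1, 553411795), (0, 555269219),
          (1, 557689701)]
print(high_durations(events))  # → [2759837, 1857424]
```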

Douglas6
Posts: 5020
Joined: Sat Mar 16, 2013 5:34 am
Location: Chicago, IL

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 3:13 pm

Java, with its garbage collection, is probably particularly bad at this sort of thing.

Bogdan
Posts: 3
Joined: Thu Dec 12, 2013 4:54 pm
Location: Bucharest, RO

Re: Best way to achieve time precision on GPIO interrupts

Fri Dec 13, 2013 3:54 pm

Thank you all for the suggestions, I'm becoming convinced that I will need to take advantage of them. Here's my progress so far, for future reference.

I tried to stay within Java -- I need it anyway for later processing, and for maintainability it makes sense to keep the whole application in a single environment if at all possible. So I tried two things: (1) I stopped all nonessential services and ran the same code; that seemed encouraging (but really isn't, as you'll see later); and (2) I made a few simple changes to Pi4J so as to timestamp events as close to the source as possible (i.e. in the GpioPinEvent constructor). I ran three tests: (a) with my previous code, (b) with my altered version of Pi4J, and (c) with the altered Pi4J and some services stopped. I saw a measurable improvement between (a) and (b), and then a considerable degradation between (b) and (c).

The services I stopped after running (b) and before running (c) were dphys-swapfile and rsync, so it increasingly appeared Douglas6 was correct in suspecting Java's garbage collector as one of the main culprits. So I ran another few tests with verbose GC output -- the garbage collector does indeed introduce some disturbance in the Force, but it's so erratic that I don't think even maniacal control over when it's triggered could significantly improve the process (several successive events are disturbed by a single garbage collection).

This convinced me that I do need to break my application into two parts: a "real time" C event collection service, and a separate, "offline" Java processing application. Which brings me to two other questions, consistent timestamping and IPC; but I don't want to cross-post, so I'll open separate threads for those.

Once again, thank you all for the great input and suggestions!

UPDATE: I hadn't realized how amazing pigpio is -- you folks are underselling that gem! I originally thought I would have to write some home brewed code on top of pigpio in C, and then implement some sort of IPC solution to make that talk to my main Java application. It's become quite obvious to me during the past few hours that I won't need to write anything in C, after all -- pigpio does everything I ever wanted, and then some. Thank you for that amazing piece of software, joan!

UPDATE 2: I finally have some concrete statistics to show. So here goes.

Setup (similar to the original post): an Arduino sets a GPIO pin as follows, in an endless loop [ms]: 100H, 100L, 30H, 100L (long story why we chose those specific values; the reason is related to our project's idiosyncrasies). We're only interested in how long the pin is set HIGH, so in an ideal world we would get an infinite series of 100ms, 30ms, 100ms, 30ms, and so on.

Methodology: we took at least 700 measurements for each type of delay (100ms, 30ms) in each scenario (1400+ per scenario). We then calculated the standard deviation (a measure of the typical spread over the sample) and the absolute minimum and maximum measured times compared to the known times (a measure of the absolute worst-case deviation over the sample). For Pi4J, we originally implemented a very sterile, high-priority thread and pushed the (almost) raw data to a different thread for processing, using Java's own ConcurrentLinkedQueue; subsequently we altered the Pi4J library itself to timestamp events upstream of our (possibly not perfectly clean) code. (Incidentally, it's curious that Pi4J doesn't provide timestamping by default.) For pigpio we used the default daemon settings with a very dirty, default-priority PHP process that reads the FIFO pipe directly. pigpio timestamps events itself, so it doesn't really matter how clean your consumer is; we used our own processing instead of pig2vcd simply because we wanted to understand what pigpiod pushes over the FIFO pipe -- you can use whichever solution you prefer, since events arrive with a timestamp regardless.
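For reference, each record pigpiod writes to its notification pipe is (per the pigpio documentation of that era) a fixed 12-byte struct: a 16-bit sequence number, 16-bit flags, the 32-bit tick, and a 32-bit bitmap of GPIO levels. A hedged decoding sketch, assuming that layout and little-endian byte order:

```python
import struct

# assumed pigpio report layout: seqno (u16), flags (u16), tick (u32), levels (u32)
REPORT = struct.Struct("<HHII")

def decode_reports(data):
    """Split a byte string read from the notification pipe into report tuples."""
    return [REPORT.unpack_from(data, off)
            for off in range(0, len(data) - REPORT.size + 1, REPORT.size)]

def gpio_level(level_bitmap, gpio):
    """Extract one GPIO's level (0 or 1) from the 32-bit level bitmap."""
    return (level_bitmap >> gpio) & 1
```

Once decoded, the tick fields can be differenced between consecutive reports to recover pulse lengths, exactly as with the Python callback interface.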

Results: these are average values over several tests. If anything, we've been generous with Pi4J.
  • Standard deviation: Pi4J was ~3 orders of magnitude higher (worse) than pigpio (Pi4J: 3ms; pigpio: 0.003ms)
  • Absolute worst-case deviation: Pi4J was ~3 orders of magnitude higher (worse) than pigpio (Pi4J: 10ms; pigpio: 0.02ms)
  • CPU load: Pi4J was using roughly 1.5 times less (better) CPU than pigpio (Pi4J: 4%; pigpio: 7%)
Conclusion: pigpio FTW! I mean, seriously, there's just no contest. For whoever might be picking on the CPU load, be advised that my original request was for 3ms precision in the worst case; we were utterly unable to reach that precision with Pi4J (as I said above, the 10ms worst case listed for Pi4J is very generous -- in truth we found occasional 1000+ms worst-case deviations). On the other hand, pigpio provided a consistent average of under 0.1ms worst-ever deviation with the default daemon settings. Also keep in mind that the Arduino itself introduces some delays, and there are physical limits to how fast the pins can change state and the state change can be read (all of these are minor, but they add up, and we're already talking about standard deviations of 3 microseconds). Finally, the daemon can easily be tuned down to lower precision, which would certainly result in much less CPU load (which is probably what we're actually going to do).
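For anyone reproducing the methodology, the two dispersion figures above are straightforward to compute from the raw samples. A sketch (the sample values here are hypothetical, just to show the calculation):

```python
import statistics

def dispersion(samples_ms, nominal_ms):
    """Return (population standard deviation, worst absolute deviation
    from the nominal pulse length), both in the units of the samples."""
    sd = statistics.pstdev(samples_ms)
    worst = max(abs(s - nominal_ms) for s in samples_ms)
    return sd, worst

# hypothetical measurements of the 100 ms HIGH pulse
samples = [100.1, 99.9, 100.0, 100.2, 99.8]
sd, worst = dispersion(samples, 100.0)
```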
Last edited by Bogdan on Sat Dec 14, 2013 1:02 pm, edited 2 times in total.

joan
Posts: 15650
Joined: Thu Jul 05, 2012 5:09 pm
Location: UK

Re: Real time GPIO monitoring with Pi4J [SOLVED: use pigpio]

Sat Dec 14, 2013 8:28 am

Interesting results.

It does show that time-stamping as close to the event as possible can be very useful.

Return to “Java”