maldus
Posts: 22
Joined: Fri Dec 15, 2017 8:36 am

Help understanding aarch64 interrupts

Fri Aug 31, 2018 10:12 am

Hello everyone,
I find myself very confused about several topics concerning interrupt management on ARMv8. Specifically, I'm working on a bare metal OS for the RPi3 on AArch64.

I've seen many examples about running interrupts for timers and UART, but I can't understand the difference between the interrupt registers described in the BCM2835/36/37 peripheral datasheet (https://web.stanford.edu/class/cs140e/d ... herals.pdf) and the GIC (?) documented in the Qa7_rev3 document (https://www.raspberrypi.org/documentati ... rev3.4.pdf).

Are those two different peripherals? The first one is at (physical) address 0x3F00B200 while the second is around 0x40000000. Is the second one internal to the ARM processor, while the first is added on the RPi board?
Is there any difference in the type and number of IRQs I can manage on each of them? (E.g. I have successfully run the UART0 rx interrupt on the interrupt controller at 0x3F00B200; would it be possible to do the same with the other, and with what differences?)

Is one preferable to the other considering I would like to run both on hardware and qemu?

bzt
Posts: 160
Joined: Sat Oct 14, 2017 9:57 pm

Re: Help understanding aarch64 interrupts

Fri Aug 31, 2018 1:30 pm

Hi,

First, if something has an MMIO address, then it's not an integrated part of the ARM CPU, rather a peripheral integrated in the SoC.
The CPU itself uses only system control registers (for example VBAR_ELx) and instructions. This means you can trigger a software generated interrupt (SVC instruction) or an exception (abort in ARM terminology, see the BRK instruction) once you have set up which function to execute, but that's all.

What you also need is a circuit which connects the peripherals to the CPU by converting their signals into the aforementioned CPU interrupts. This circuit is often called an IRQ controller (or IRQ routing circuit), and that's what the peripherals at 0x3F00B200 and 0x40000000 are for (meaning the Broadcom SoC has one CPU and two IRQ controllers). The IO peripherals are often hardwired to the IRQ controllers, but you can choose which interrupt should be triggered on a specific IRQ by programming those IRQ controllers. ARM is very limited in this regard: an IRQ signal is either translated to an IRQ interrupt or an FIQ interrupt (which is good, because it makes things simpler). In the interrupt handler code you can read a register to figure out which peripheral signaled the CPU.

As a contrast, the x86 architecture does not have such a register, but has 224 CPU interrupts available for IRQs, therefore you have to tell the IRQ controller to fire a different interrupt for each peripheral. Imho the ARM way is better, and every OS on x86 (that I know of) just sets a variable and calls a common interrupt handler anyway (to mimic ARM's which-IRQ-was-fired register). To complicate things even more, on multicore systems the IRQ controller also has to figure out on which core to trigger the interrupt, because one core is enough to execute the IRQ handler; the rest can keep doing what they were doing, uninterrupted.
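
The which-IRQ-was-fired lookup can be sketched in plain C. This is just a host-testable sketch, assuming a hypothetical 32-bit pending bitmask already read from a controller; the function name is made up for illustration:

```c
#include <stdint.h>

/* Hypothetical dispatch step: given the 32-bit pending bitmask read from
 * an IRQ controller, return the number of the lowest pending IRQ line,
 * or -1 if nothing is pending (spurious interrupt). A real handler would
 * loop until the mask is empty, calling the handler for each set bit. */
static int lowest_pending_irq(uint32_t pending)
{
    if (pending == 0)
        return -1;                  /* spurious: nothing to service */
    return __builtin_ctz(pending);  /* index of the lowest set bit */
}
```

A real handler would then clear or acknowledge that line before returning, but the mask-to-line mapping is the part that differs from x86's one-vector-per-device model.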

Back to your original question: since the SoC has two controllers, I believe some peripherals are wired to one controller, while others are wired to the other. Things are a little bit more complicated here, because the GPU can also receive and generate IRQs. According to Section 3.2 on page 4 of Qa7_rev3, the IRQ controllers are cascaded, meaning the 64 IRQ lines of the first controller are funneled into 2 IRQ lines of the second controller. The second controller has several additional IRQ sources too: some are core related (so it's obvious which core to interrupt), while others (like the 2 IRQ lines originating from the GPU) are not.
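
To make the first (0x3F00...) controller concrete: in the BCM2835 peripherals datasheet its 64 GPU IRQ lines are unmasked through two "Enable IRQs" registers. A small sketch of the line-to-register mapping, computed on the host without touching any MMIO (the helper name is invented):

```c
#include <stdint.h>

/* The 64 GPU IRQ lines are split across two 32-bit enable registers,
 * at offsets 0x210 and 0x214 from the controller base (0x3F00B000
 * physical on the RPi3). Writing the returned bit to base+offset
 * would unmask that line; here we only compute the mapping. */
#define IRQ_ENABLE_1_OFFSET 0x210u  /* GPU IRQs 0-31  */
#define IRQ_ENABLE_2_OFFSET 0x214u  /* GPU IRQs 32-63 */

static void gpu_irq_enable_bit(unsigned irq, uint32_t *offset, uint32_t *bit)
{
    *offset = (irq < 32) ? IRQ_ENABLE_1_OFFSET : IRQ_ENABLE_2_OFFSET;
    *bit = 1u << (irq & 31);   /* bit position within that register */
}
```

For example UART0 is GPU IRQ 57 in that datasheet, so it lands in Enable IRQs 2 (offset 0x214) as bit 25.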

Example: let's say the UART received a byte, and therefore triggers one of the 64 IRQ lines of the GPU's ARM Controller (MMIO 0x3F00...). That will be translated to an FIQ signal, received by the second IRQ controller (MMIO 0x4000...) on its GPU FIQ line, which would trigger a CPU interrupt on, for example, the 2nd core, which in turn executes the FIQ interrupt handler from the vector table pointed to by the VBAR_ELx system register.
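
The per-core end of that chain can be sketched too. Each core has an interrupt source register in the Qa7 block (core 0's IRQ source is at 0x40000060), where one bit indicates the signal came from the GPU's controller, so the handler must then go back and read the 0x3F00... pending registers to find the actual peripheral. A host-testable sketch of that decode step (assuming, per my reading of Qa7_rev3, that the GPU source is bit 8):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit 8 of a core's interrupt source register (e.g. 0x40000060 for
 * core 0 in the Qa7 block) flags an interrupt cascaded from the GPU's
 * controller; lower bits are core-local sources (timers, mailboxes).
 * Pure function: the register value is passed in, no MMIO here. */
#define CORE_IRQ_SRC_GPU (1u << 8)

static bool irq_is_from_gpu(uint32_t core_irq_source)
{
    return (core_irq_source & CORE_IRQ_SRC_GPU) != 0;
}
```

If this returns true, the handler's next step is the pending-register walk on the first controller; otherwise the source is core-local and can be dispatched directly.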

Cheers,
bzt

maldus
Posts: 22
Joined: Fri Dec 15, 2017 8:36 am

Re: Help understanding aarch64 interrupts

Sat Sep 01, 2018 7:14 am

Many thanks, this is exactly what I needed.
If I understand correctly I can use whichever controller I see fit to manage the interrupts that I need, either directly or cascaded from the other (I'm going to need to check the proper cause registers).

bzt
Posts: 160
Joined: Sat Oct 14, 2017 9:57 pm

Re: Help understanding aarch64 interrupts

Sat Sep 01, 2018 8:03 am

maldus wrote:
Sat Sep 01, 2018 7:14 am
Many thanks, this is exactly what I needed.
Welcome!
maldus wrote:If I understand correctly I can use whichever controller I see fit to manage the interrupts that I need, either directly or cascaded from the other (I'm going to need to check the proper cause registers)
Nope. You have to use the controller that the peripheral is wired to. For example, if you want a Mailbox interrupt, then there's no point in configuring the MMIO 0x3F00... controller. Likewise, if you need a UART0 interrupt, then you have to configure *both* controllers, because they are cascaded and the UART's signal goes through both before it reaches the CPU.

You can only configure which interrupt type (IRQ or FIQ) is triggered and (if the IRQ is not core-related) on which core to trigger it, but you can't decide which controller receives the IRQ signal. That depends on how the SoC is wired.

Just for the record, the Linux kernel does not use FIQ: all peripheral signals are translated into IRQ interrupts (and the controllers select which core should be triggered).

Cheers,
bzt

LdB
Posts: 856
Joined: Wed Dec 07, 2016 2:29 pm

Re: Help understanding aarch64 interrupts

Sat Sep 01, 2018 4:23 pm

I suspect that might also be to allow the multiple cores to handle the interrupts; otherwise one core would always have to handle all of them.

When you do it on Xilinx SoC multicores, the table is set up in a shared area they call an injection control block, and all interrupts are "void (*interrupt) (void)". The next available core coming into the scheduler runs along the table, grabs the function address and races off to process the interrupt.

The actual interrupt code itself is really short on Xilinx: it just queues an interrupt number and, unless the core was free, returns. Any spare core, before grabbing another task, checks whether the interrupt count is zero, and if it isn't, dequeues the next interrupt number and races off to process the interrupt handler. The number in the queue is just a system number; it doesn't directly mean anything, other than that there is code that maps the number to an IRQ line, and that code knows which peripherals are on that line and so what it must check.
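
That queue-and-grab scheme can be sketched as a ring buffer of interrupt numbers plus a table of "void (*)(void)" handlers. Names and sizes here are invented, and this is a single-threaded sketch; the real multicore version would guard the queue with a spinlock shared by the cores:

```c
#include <assert.h>
#include <stddef.h>

#define IRQ_QUEUE_LEN 16
#define NUM_IRQS      8

static int irq_queue[IRQ_QUEUE_LEN];
static size_t irq_head, irq_tail, irq_count;
static void (*irq_handlers[NUM_IRQS])(void);  /* number -> handler table */

/* The short interrupt entry path: just record the system number. */
static void irq_enqueue(int irq)
{
    assert(irq_count < IRQ_QUEUE_LEN);
    irq_queue[irq_tail] = irq;
    irq_tail = (irq_tail + 1) % IRQ_QUEUE_LEN;
    irq_count++;
}

/* Run by a spare core entering the scheduler: service one queued
 * interrupt if there is one; returns 1 if a number was dequeued. */
static int irq_service_one(void)
{
    if (irq_count == 0)
        return 0;
    int irq = irq_queue[irq_head];
    irq_head = (irq_head + 1) % IRQ_QUEUE_LEN;
    irq_count--;
    if (irq_handlers[irq])
        irq_handlers[irq]();  /* race off and process the handler */
    return 1;
}
```

The point of the split is that the entry path stays tiny while the expensive handler work is pulled by whichever core is free next.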

From memory, on heavy transfers using IRQ they set an affinity so the first core dealing with the IRQ stays with it, so they don't thrash the caches, or something like that. I guess it comes down to how much of an RTOS you want versus trying to balance the load on the cores.
