thunderbird985
Posts: 9
Joined: Mon Aug 24, 2020 5:00 pm

Understanding Low Level SPI communications [Help]

Sun Jul 18, 2021 7:38 pm

Hi,

I am working on an application that uses SPI to communicate with a 12 bit ADC - the MCP3208 - and I'm having problems translating the SPI data into usable values. I'm using the pigpio library's spiXfer function.

Something just isn't clicking for me. I understand how bit shifting works. I understand how the SGL/DIFF, D2, D1, and D0 bits are set and sent to the ADC. But after that, I get lost when trying to follow how the ADC data out is put into the 3 different one-byte buffers, and how to prepare the captured data to turn it into a usable value in the 12-bit range.

For example, I have my scope hooked up to CLK and MOSI, and I captured the following MOSI transaction:

Image

I printed the values that spiXfer put into the rxBuf as well:

Code:

Buf[0]: 0   Buf[1]: 2   Buf[2]: 247
I don't think my scope is set up to decode properly, because I am providing the ADC with a 1.65 V signal and Vref is 3.3 V, so the ADC output value should be about 2047. The scope is decoding it as 2944, as shown in the screenshot.

I am a bit frustrated right now because I have 3 different measurements of the data and none of them agree with each other. (scope, C code, and multimeter). The only thing I know is that I am providing the ADC input channel with a 1.65V signal. The scope decode is showing me 2944/4095 which suggests a voltage of 2.37V, and the rxBuf in my code holds some arbitrary values.

I am making the assumption that the values in the rxBuf somehow translate into the value ~2047, but I can't figure out how. The only idea I've had is that buf[1] and buf[2] are two 8-bit numbers that are supposed to be combined/concatenated, like:
... 00000001 and 11110111 being concatenated for a final value of 111110111... this doesn't make sense because the decimal value of this number is 503.


Looking at the 10-bit sample code I've found for spiXfer, there are some bitwise operations being done to the values in the rxBuf array:

Code:

spiXfer(h, buf, buf, 3);
int v = ((buf[1] & 3) << 8) | buf[2];
I don't understand why buf[1] is AND'd with 3, and then shifted left 8, and then OR'd with buf[2]. I'm hoping that if I can understand why these specific steps are done for a 10-bit value, then I can understand how to adjust it for a 12-bit value.

I've found examples for 12-bit ADCs where buf[1] gets AND'd with 15 instead of 3. I've tried this, however, and this gives me an output value of 503 instead of 2047.

I would sincerely appreciate someone helping me out and pointing out where I'm going wrong, and even explaining the reasoning behind the bitwise operations!

typematrix
Posts: 32
Joined: Sun Jul 02, 2017 3:55 pm
Location: Europe

Re: Understanding Low Level SPI communications [Help]

Mon Jul 19, 2021 1:22 am

Code:

spiXfer(h, buf, buf, 3);
int v = ((buf[1] & 3) << 8) | buf[2];
AND'ing b1 with 3 masks off the unwanted bits; in this case we only want the two least-significant bits:

b1 = XXXX XXYY
 3 = 0000 0011

This leaves us with 0000 00YY.

We then shift this left 8 places and end up with

YY 0000 0000

We then OR that with b2 = ZZZZ ZZZZ
and end up with YY ZZZZ ZZZZ (10-bit resolution).

So for 12 bits we want the YYYY part of b1 = XXXX YYYY,
so we need to apply a bitmask of 0000 1111 (hex F) to b1:

replace the 3 with 0x0F

jayben
Posts: 306
Joined: Mon Aug 19, 2019 9:56 pm

Re: Understanding Low Level SPI communications [Help]

Mon Jul 19, 2021 8:26 am

Doing a quick decode by eye of the scope trace, it looks like the returned data is 00000001 11101000 00000000

The data sheet is at https://ww1.microchip.com/downloads/en/ ... 21298e.pdf

If you look at the response format defined in figure 5-1, you'll see there are 6 dummy bits (covering the outgoing command) then a 'null bit', then 12 data bits.

So the data bits are 011110100000, which is 7A0 hex, or 1952 decimal.

Not quite the 2047 value you are expecting, but there may well be a noise or offset issue, or the voltage reference is slightly off.

The scope decode is completely wrong because you've set LSB-first, and 12 bits starting with the dummy bits, which should be ignored. I suggest you set it to MSB-first 8 bits, then do the decode as above.

LdB
Posts: 1703
Joined: Wed Dec 07, 2016 2:29 pm

Re: Understanding Low Level SPI communications [Help]

Mon Jul 19, 2021 1:13 pm

What you have not shown is what you put in the buffer before you begin clocking.

You state you understand how SPI works, but without seeing MOSI or what you put in the buffer, we have no idea what is coming out on MISO; we can only hope you set the SGL/DIFF, D2, D1, and D0 bits right... I doubt it, given that output.

For example, if the buffer was just 3 zero bytes, you are going to get the ANI0/ANI1 differential voltage back.

The SGL/DIFF, D2, D1, and D0 bits have to be set in the buffer before you start clocking, because you are using the same buffer for TX and RX on the SPI; this line of code tells us that. No problem with that: it will clock byte 0 out then fill byte 0, then clock byte 1 out and fill byte 1, then clock byte 2 out and fill byte 2.

Code:

spiXfer(h, buf, buf, 3);
So please, what is in the buffer before that call?

thunderbird985
Posts: 9
Joined: Mon Aug 24, 2020 5:00 pm

Re: Understanding Low Level SPI communications [Help]

Tue Jul 20, 2021 1:03 am

Thank you all for your replies. I've replied to each of them below.
typematrix wrote:
Mon Jul 19, 2021 1:22 am

Code:

spiXfer(h, buf, buf, 3);
int v = ((buf[1] & 3) << 8) | buf[2];
AND'ing b1 with 3 masks off the unwanted bits; in this case we only want the two least-significant bits:

b1 = XXXX XXYY
 3 = 0000 0011

This leaves us with 0000 00YY.

We then shift this left 8 places and end up with

YY 0000 0000

We then OR that with b2 = ZZZZ ZZZZ
and end up with YY ZZZZ ZZZZ (10-bit resolution).

So for 12 bits we want the YYYY part of b1 = XXXX YYYY,
so we need to apply a bitmask of 0000 1111 (hex F) to b1:

replace the 3 with 0x0F
This helps - thanks! I see why the &3 is required now. And I understand why we need to &15 (hex 0xF) to work with 12 bits. I still haven't figured it out 100% though...

jayben wrote: Doing a quick decode by eye of the scope trace, it looks like the returned data is 00000001 11101000 00000000

The data sheet is at https://ww1.microchip.com/downloads/en/ ... 21298e.pdf

If you look at the response format defined in figure 5-1, you'll see there are 6 dummy bits (covering the outgoing command) then a 'null bit', then 12 data bits.

So the data bits are 011110100000, which is 7A0 hex, or 1952 decimal.

Not quite the 2047 value you are expecting, but there may well be a noise or offset issue, or the voltage reference is slightly off.

The scope decode is completely wrong because you've set LSB-first, and 12 bits starting with the dummy bits, which should be ignored. I suggest you set it to MSB-first 8 bits, then do the decode as above.
The next post after yours made me realize I had my scope on MOSI, not MISO... so I think the data you decoded was what I sent to the ADC, not what it sent back to me. I'm sorry - but thank you! I have now set my scope to MSB-first with an 8-bit length, and I'm still not sure it's decoding properly. I don't think it knows that it needs to ignore the first 6 bits, so I changed the decoding to binary so I can try to do it by hand. See photo below.
LdB wrote: What you have not shown is what you put in the buffer before you begin clocking.

You state you understand how SPI works, but without seeing MOSI or what you put in the buffer, we have no idea what is coming out on MISO; we can only hope you set the SGL/DIFF, D2, D1, and D0 bits right... I doubt it, given that output.

For example, if the buffer was just 3 zero bytes, you are going to get the ANI0/ANI1 differential voltage back.

The SGL/DIFF, D2, D1, and D0 bits have to be set in the buffer before you start clocking, because you are using the same buffer for TX and RX on the SPI; this line of code tells us that. No problem with that: it will clock byte 0 out then fill byte 0, then clock byte 1 out and fill byte 1, then clock byte 2 out and fill byte 2.

Code:

spiXfer(h, buf, buf, 3);
So please, what is in the buffer before that call?
Aggghh, I had the scope pin set on MISO instead of MOSI, so I wasn't even comparing the correct data to what I was getting in C. I've re-run the capture on MOSI instead so that we can see the raw bit output, including what I'm sending to the ADC in the buffers:

Code:

Sending:    Buf0:  1      Buf1:    208      Buf2:   0
Received:   Buf0:  0      Buf1:    1        Buf2:   246
I build the buffer like so:

Code:

char buf[3];
buf[0] = 1;
buf[1] = 13 << 4;
buf[2] = 0;
I understand that buf[1] holds the SGL/DIFF, D2, D1, and D0 bits. I am assuming that buf[0] contains only the start bit, or in other words 00000001. This code samples channel 5 specifically, in single-ended configuration, which is 13 = 1101. That value then gets right-padded with zeros to fill out the byte. And I understand that it doesn't matter what goes into buf[2], because the datasheet specifies it "doesn't care" what comes in on the 6th clock and beyond. That's my understanding, at least...

It's the response that I'm still getting stuck on. Here is a new transaction with my scope set properly (I think). I am displaying binary instead of trying to decode to decimal as I think that was messing me up even more.

Image

So, from the top... I received (from C's perspective... not the scope screenshot):
  • b[0] = 0 Makes sense to me, this byte wouldn't be touched unless we were working with 16+ bits (half-words I think they're called?)
  • b[1] = 1 Assuming this is one of the MSBs of the 12-bit value. Anyways, this expands to 00000001, and the right most bits correspond with B11, B10, B9, and B8 in the response.
  • b[2] = 246, which is 11110110. This exact byte was picked up by my scope, so I'm happy to see something that finally agrees.

What's odd is that b[1] from my scope's perspective is 11011001, and the four important bits are 1001, which is decimal 9. However, my C code says that b[1] is simply 1.

Now, just walking through the logic process as I understand it using the values that my C code printed, NOT the values my scope showed...
We need all 8 bits of b[2], plus the low four bits of b[1], to build a 12-bit value. (By low four, I mean the four least-significant bits of b[1]: the positions representing values 1, 2, 4, and 8 in a byte.)

So, masking b[1] with 00001111 (&15) returns 00000001. Left-shift this 8 positions to make room for the 8 bits of b[2], drop the leading zeros, and we get 100000000. Then this value is OR'd with b[2], which comes out to 111110110... or 502 in decimal.

502 is not correct, but it's quite close to being 25% of what I'd expect. Does that mean I need to left shift two more places, so left shift 10 instead of 8?

If I follow the same bitwise process using the b[1] that the scope reported, I end up with binary 100111110110 = decimal 2550. I don't think this is anywhere close and my scope is probably just decoding the first two bytes incorrectly still.

thunderbird985
Posts: 9
Joined: Mon Aug 24, 2020 5:00 pm

Re: Understanding Low Level SPI communications [Help]

Tue Jul 20, 2021 5:17 am

I took a break and came back to troubleshooting with fresh eyes. I'm still stuck. I'm convinced something is wrong elsewhere in my setup, because in all the C drivers online for the MCP320x I see the same bitwise conversion ((buf[1] & 15) << 8) for 12-bit output values.

So, I rewrote this little snippet from scratch in case there was something hiding in my existing code. The result is still the same - I'm still getting what seems like 10-bit nonsense out of the MCP3208.

I have checked everything with my hardware. I have two MCP3208s on my breadboard, plus a handful of prototype PCBs I designed with dual MCP3208s, a stable 3.3 V reference IC, and terminal blocks for connecting to the ADC input pins. All setups exhibit the same behavior, so I've pretty much ruled out hardware.

So, on to software... here is the fresh test. The results are the same:

Code:

#include <pigpio.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

// Compile with:
// gcc -o 12bit 12bit.c -lpigpio

// SPI Configuration
#define spi_mode 0
#define spi_interface_1 0
#define spi_interface_2 1
#define clock 500000
#define CHAN0 0
#define CHAN1 1
#define CHAN2 2
#define CHAN3 3
#define CHAN4 4
#define CHAN5 5
#define CHAN6 6
#define CHAN7 7

int main(void)
{
    if (gpioInitialise() < 0)
    {
        printf("Problem initializing GPIOs.\n");
        return 1;
    }

    int h1 = spiOpen(spi_interface_1, clock, 0);
    int h2 = spiOpen(spi_interface_2, clock, 0);

    while (1)
    {
        char buf1[3], buf2[3];
        buf1[0] = 1;
        buf1[1] = (CHAN0 | 0b1000) << 4;
        buf1[2] = 0;  /* "don't care" byte - but don't leave it uninitialized */
        buf2[0] = 1;
        buf2[1] = (CHAN0 | 0b1000) << 4;
        buf2[2] = 0;
        spiXfer(h1, buf1, buf1, 3);
        spiXfer(h2, buf2, buf2, 3);
        int value1 = ((buf1[1] & 15) << 8) | buf1[2];
        int value2 = ((buf2[1] & 15) << 8) | buf2[2];
        printf("ADC1: %d\t\tADC2: %d\n", value1, value2);
        usleep(100000);
    }
}
I hooked up a 10K potentiometer to channel 0 of both ADCs for the above snippet. Sweeping the potentiometer from fully closed to fully open yields values from 0 to 1023.

I am completely baffled as to how I'm only getting 10 bit values.

jayben
Posts: 306
Joined: Mon Aug 19, 2019 9:56 pm

Re: Understanding Low Level SPI communications [Help]

Tue Jul 20, 2021 9:40 am

The scope decode is still completely wrong; those low-amplitude (roughly 1 volt) pulses are caused by the clock and MOSI signals being coupled capacitively onto the MISO line, which is floating (not being driven by the CPU or the ADC).

So this proves that the command you are sending is wrong; referring to the original trace, it starts with several zero bits; according to the datasheet, the ADC will do nothing until it sees a non-zero 'start' bit, hence the large amount of noise on the MISO line.

Once the ADC sees the start bit, it will take the following 4 bits as command bits, then a sample bit, then a null bit, then the data will start appearing on MISO.

So to decode MISO in your current setup, skip 7 bits because you've sent the 'start' bit 7 bits late, then skip another 6 bits to cover the command. You'll then see that instead of pulsing up to 1V, the MISO signal is being driven by the ADC to 0V, indicating it has received and processed your command. The first response bit is fixed at zero (null) then comes the 12 data bits; I think they are 0111 1101 1000 which is 7D8 hex, which is near enough right.

The key thing to remember is that the ADC waits for the MOSI 'start' bit before doing anything, so if that is delayed by an arbitrary number of zeros, then the MISO response will be delayed by the same number of bits.

thunderbird985
Posts: 9
Joined: Mon Aug 24, 2020 5:00 pm

Re: Understanding Low Level SPI communications [Help]

Wed Jul 21, 2021 9:29 pm

Thank you so much jayben. That was it. My code was sending the start bit 2 bits late, so all the rest of the data from the ADC was shifted by 2 bits and the final two LSBs were truncated.

I was using the 10-bit version of the code to send to the chip because I didn't fully understand the charts in the datasheet. Now I do. Figure 6-1 in the datasheet was the key to splitting the channel configuration bits across the LSBs of the first byte and the MSBs of the second byte.

Here is a working example I put together for future readers:

Code:

#include <pigpio.h>

int get_sample(int channel, int handle)
{
    // channel: MCP3208 input channel (0-7); handle: an open pigpio SPI handle.
    char buf[3];
    buf[0] = 0b00000110 | (channel >> 2);   // start bit, single-ended, D2
    buf[1] = 0b11000000 & (channel << 6);   // D1 and D0 in the top two bits
    buf[2] = 0;                             // "don't care" clocks
    spiXfer(handle, buf, buf, 3);
    int v = ((buf[1] & 15) << 8) | buf[2];  // low 4 bits of byte 1 are B11-B8
    return v;
}
