Thank you all for your replies. I've replied to each of them below.
typematrix wrote:
Mon Jul 19, 2021 1:22 am
Code:
spiXfer(h, buf, buf, 3);
int v = ((buf[1] & 3) << 8) | buf[2];
The purpose of ANDing b1 with 3 is to mask off the unwanted bits; in this case we only want the two low bits
b1= XXXX XXYY
3= 0000 0011
This leaves us with 0000 00YY
We then bit-shift this left by 8 places and end up with
YY 0000 0000
We then OR that with b2 = ZZZZ ZZZZ
and end up with YY ZZZZ ZZZZ (10-bit resolution)
So for 12 bit,
b1 = XXXX YYYY and we want to keep the low nibble YYYY,
so we need to apply a bitmask of 0000 1111 (hex F) to b1:
replace the 3 with 0x0F
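As a sketch, the two combine steps above could be written as plain C helpers (the function names here are mine, not from any library):

```c
#include <stdint.h>

/* Combine the two payload bytes clocked back from the ADC.
   10-bit part: keep the low 2 bits of b1 (the YY above).
   12-bit part: keep the low 4 bits of b1 (the YYYY above). */
static int combine10(uint8_t b1, uint8_t b2) {
    return ((b1 & 0x03) << 8) | b2;   /* range 0..1023 */
}

static int combine12(uint8_t b1, uint8_t b2) {
    return ((b1 & 0x0F) << 8) | b2;   /* range 0..4095 */
}
```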
This helps - thanks! I see why the &3 is required now, and I understand why we need &15 (hex F) to work with 12 bit. I still haven't figured it out 100% though...
Doing a quick decode by eye of the scope trace, it looks like the returned data is 00000001 11101000 00000000
The data sheet is at https://ww1.microchip.com/downloads/en/ ... 21298e.pdf
If you look at the response format defined in figure 5-1, you'll see there are 6 dummy bits (covering the outgoing command) then a 'null bit', then 12 data bits.
So the data bits are 011110100000, which is 7A0 hex, or 1952 decimal.
Not quite the 2047 value you are expecting, but there may well be a noise or offset issue, or the voltage reference is slightly off.
The scope decode is completely wrong because you've set it to LSB-first, 12-bit words starting with the dummy bits, which should be ignored. I suggest you set it to MSB-first, 8 bits, then do the decode as above.
The next post after yours made me realize I had my scope on MOSI, not MISO... so I think the data you decoded was what I sent to the ADC, not what it sent back to me. I'm sorry - but thank you! I have now set my scope to MSB-first with an 8-bit length and I'm still not sure if it's decoding properly. I don't think it knows that it needs to ignore the first 6 bits, so I changed the decoding to binary so I can try to do it by hand. See photo below.
What you have not shown is what you put in the buffer before you begin clocking.
You state you understand how SPI works, but without seeing the MOSI or what you put in the buffer, we have no idea what is coming out on MISO, because we can only hope you set the SGL/DIFF, D2, D1, and D0 bits right... I doubt it, given that output.
For example, if the buffer was just 3 zero bytes, you are going to get the ANI0/ANI1 differential voltage back.
The SGL/DIFF, D2, D1, and D0 bits have to be set in the buffer before you start clocking, because you are using the same buffer for TX and RX on the SPI... this line of code tells us that you are using the same buffer. No problem with that: it will clock byte 0 out then fill byte 0, then clock byte 1 out and fill byte 1, then clock byte 2 out and fill byte 2.
So please, what is in the buffer before that call?
Aggghh, I had the scope pin set on MISO instead of MOSI. So I wasn't even comparing the correct data to what I was getting in C. I've re-run the capture on MOSI instead so that we can see the raw bit output, including what I'm sending to the ADC in the buffers:
Code:
Sending: Buf0: 1 Buf1: 208 Buf2: 0
Received: Buf0: 0 Buf1: 1 Buf2: 246
I build the buffer like so:
Code:
buf[0] = 1;
buf[1] = 13 << 4;
buf[2] = 0;
I understand that buf[1] holds the SGL/DIFF, D2, D1, and D0 bits. I am assuming that buf[0] contains only the start bit, or in other words, 00000001. This code samples channel 5 specifically, in single-ended configuration, which is 13 = 1101 (SGL/DIFF=1, D2=1, D1=0, D0=1). This value then gets right-padded with zeros to fill out an entire byte. And I understand that it doesn't matter what goes into buf[2] because the datasheet specifies that it "doesn't care" what comes in on the 6th clock and beyond. That's my understanding at least...
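Just to check my own understanding, here is how I'd generalize that buffer setup to any single-ended channel (a sketch only; build_cmd is my own made-up helper, and I've left out the actual transfer call):

```c
#include <stdint.h>

/* Fill the 3-byte command buffer for a single-ended MCP3208 read.
   Byte 0 carries only the start bit; byte 1 carries SGL/DIFF = 1
   and D2..D0 in its top nibble; byte 2 is don't-care clock filler. */
static void build_cmd(uint8_t channel, uint8_t buf[3]) {
    buf[0] = 1;                        /* 0000 0001: start bit         */
    buf[1] = (8 | (channel & 7)) << 4; /* 1DDD 0000, e.g. ch5 -> 208   */
    buf[2] = 0;                        /* ignored by the ADC           */
}
```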
It's the response that I'm still getting stuck on. Here is a new transaction with my scope set properly (I think). I am displaying binary instead of trying to decode to decimal as I think that was messing me up even more.
So, from the top... I received (from C's perspective... not the scope screenshot):
- buf[0] = 0. Makes sense to me; this byte wouldn't be touched unless we were working with 16+ bits (half-words, I think they're called?).
- buf[1] = 1. Assuming this holds the MSBs of the 12-bit value. Anyway, this expands to 00000001, and the right-most bits correspond to B11, B10, B9, and B8 in the response.
- buf[2] = 246, which is 11110110. This exact byte was picked up by my scope, so I'm happy to see something that finally agrees.
What's odd is that buf[1] from my scope's perspective is 11011001, and the four important bits are 1001, which is decimal 9. However, my C code said that buf[1] is simply 1.
Now, just walking through the logic process as I understand it using the values that my C code printed, NOT the values my scope showed...
We need all 8 bits of buf[2], plus the low four bits of buf[1], to build a 12-bit value. (By low four, I mean the right-most four bits of buf[1]... the positions representing values 1, 2, 4, and 8 in a byte.)
So, masking buf[1] with 00001111 (&15) returns 00000001. Left-shift this 8 positions to make room for the next 8 bits, and we get 1 0000 0000. Then this value is OR'd with buf[2], which comes out to 1 1111 0110... or 502 in decimal.
502 is not correct, but it's quite close to being 25% of what I'd expect. Does that mean I need to left shift two more places, so left shift 10 instead of 8?
If I follow the same bitwise process using the buf[1] that the scope reported, I end up with binary 1001 1111 0110 = decimal 2550. I don't think this is anywhere close, and my scope is probably still decoding the first two bytes incorrectly.