plugwash wrote: ↑Thu Oct 03, 2019 3:06 pm
The problem is that if int is larger than 16 bits then the code snippet isn't unsigned arithmetic; it's conversion to int, followed by signed arithmetic, followed by conversion back to uint16_t.

jahboater wrote: ↑Thu Oct 03, 2019 3:27 pm
OK, if the original operand fits in a signed int then that's what it is promoted to.

Even though it would also fit in an unsigned int, which would be more sensible!

Rust has its own odd behaviours:
Code: Select all
let a: u8 = 255;
let b: u8 = 1;
let c = a + b;

(Here c wraps around to 0 in Rust release mode; in debug mode it will fail with an overflow error.)
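If you actually want wraparound regardless of build mode, Rust's standard library lets you say so explicitly; this is std's wrapping_add, not something from the thread:

```rust
fn main() {
    let a: u8 = 255;
    let b: u8 = 1;
    // Explicit modular arithmetic: defined in both debug and release builds.
    println!("{}", a.wrapping_add(b)); // prints 0
}
```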
It seems to me that this makes Rust lower-level than C, as well as much stricter: every expression is exactly 8, 16, 32 or 64 bits wide, and both operands of a binary op must always be the same width.
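For example, mixing widths in Rust is a compile error; you have to widen explicitly (the variable names are mine):

```rust
fn main() {
    let a: u8 = 10;
    let b: u16 = 300;
    // let c = a + b;       // compile error: mismatched types (u8 vs u16)
    let c = a as u16 + b;   // explicit widening is required
    println!("{}", c);      // prints 310
}
```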
The type of a literal constant like 1 seems to be adjusted to that of the other operand, so it can be u8, u16 or u32; when both operands are constants, the wider type is used. However, expressions such as 2000000000+2000000000 or 1<<62 overflow; you have to write 1u64<<62, as an unsuffixed literal apparently defaults to a 32-bit type when nothing else constrains it.
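A short sketch of that inference behaviour (my own example):

```rust
fn main() {
    let a: u8 = 200;
    let b = a + 55;       // 55 is inferred as u8 from the other operand
    println!("{}", b);    // prints 255

    // let x = 1 << 62;   // rejected: an unconstrained literal is 32-bit
    let x = 1u64 << 62;   // the suffix forces a 64-bit shift
    println!("{}", x);    // prints 4611686018427387904
}
```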
So still a little messy. (My own languages are 64-bit throughout, since I assumed all hardware now is; with those cans kicked that far down the road, every one of these examples gives the expected result without needing to do anything special.)
Getting back to C: if you were to draw up a chart of the 64 combinations of i8/i16/i32/i64 and u8/u16/u32/u64, showing whether each binary operation is done as signed or unsigned, the results would not have the regular pattern you might expect, partly due to the discontinuity between 32-bit and 64-bit types. I think Rust at least solves this by not allowing such mixed arithmetic!
Too many high-end C compilers rely on undefined behaviour for them to be able to do their optimisations.

It won't be undefined behaviour for very long. Two's complement is now deemed to be universal.