Or fixed and improved in the next iterations of the hardware. If they want it to go faster while using less space and less energy, they need to design new physical processes. That's not easy; it takes plenty of people and research to get to that point.

ShiftPlusOne wrote: ↑Mon Oct 09, 2017 2:36 pm
Large software projects need constant bug fixes, any patches which haven't been applied upstream need to be ported to each new release (which sometimes means a complete re-write of the new features), security patches need to be applied, and depending on how close to head of tree you are, new bugs are added all the time.
With hardware, once it's released, it's done. All hardware problems which are discovered after that point become software problems.
Code:
-mfix-cortex-a53-835769
-mno-fix-cortex-a53-835769
    Enable or disable the workaround for the ARM Cortex-A53 erratum number 835769. This involves inserting a NOP instruction between memory instructions and 64-bit integer multiply-accumulate instructions.

-mfix-cortex-a53-843419
-mno-fix-cortex-a53-843419
    Enable or disable the workaround for the ARM Cortex-A53 erratum number 843419. This erratum workaround is made at link time and this will only pass the corresponding flag to the linker.
I think there could be a lot more bugs than that in a large piece of software.
It's kind of hard to compare.

Software is probably at least two orders of magnitude harder than hardware. ... How many bugs does a modern chip have compared to a program of similar size?
Interesting, did not know that. Yet real Software Engineering started with the 360, which had a huge 8MB of memory.

One (in)famous release of OS/360 was said to have 10,000 *significant* software faults in it
You might be in the wrong place.
Mind you, I had trouble getting my head around the Prop1's 8 x 32-bit cores, and I'm only here because the Prop2 is late.