DavidS wrote: ↑Thu Nov 15, 2018 11:32 pm
Remember that there is a fairly big price we are paying for compilers to be capable of doing that level of code analysis, as it can get rather complex. The code analysis is eating RAM, CPU time, and disk space. This is why a simple compiler like TCC is so much faster, smaller, and more efficient in its own use of resources: it does not attempt to do any significant analysis.

jamesh wrote: ↑Fri Nov 16, 2018 10:18 am
Which is why people have build machines and build servers. You are slowing down the compile process in order to improve the final result, which is the important bit. You only need to buy a big build rig once, but the improvements to the final output are seen by ALL the machines running the resulting code.

DavidS wrote: ↑Fri Nov 16, 2018 10:28 am
No, you are speeding up the compile process. If you compile with a less optimizing compiler in situations where the compile time with the better optimizing compiler takes too long, and you are only optimizing one or two loops (which is usually all that is needed), then you are taking a 2+ hour build and turning it into a 1 hour or less build.

Nope, entirely wrong.
Almost anything you can push into the compile process from the development and execution processes is worth doing.
And "I would need a faster system to compile" is well known to be a very poor excuse.
For more see my other replies.
Compilers nowadays are fantastic; you should use them and move as much work to compile time as you can, even if your compile times increase:
1. The slower optimising compile only has to be done at the end of the development process.
2. You are trading a long compile time for a faster final product for everyone.
3. Buying one big compile server is cheaper than improving the specs of every machine your binary runs on (in both actual cost and environmental damage).
4. Compilers are almost always better than hand-coded assembler, so it is also a much faster dev process.
These are all facts.