Heater wrote: ↑
Sun Jan 13, 2019 2:48 am
I sense a change of emphasis now that you guys are including build time in the performance metric.
Not for me.
I suppose it's different for these fibo programs, which are run once and then forgotten, but I normally try to produce useful programs that are run countless times.
As far as possible, I require that the executable contain no instructions that are not directly involved in solving the problem.
To that end, I avoid languages with automatic garbage collection, interpreted languages, and even JIT-compiled languages.
Take a look at the system commands on Linux that are run frequently. They are often strung together in large numbers in shell scripts. Most of them are executed millions of times after compilation.
I mean things like test, sed, ls, cp, cut, awk, and mv. They must be fast, and they must be fast to start up.
They are all compiled C programs, tiny by modern standards (test is 47KB, cp is 148KB, even the powerful awk language is only 630KB).
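As a small illustration (the passwd-style sample lines here are invented for the demo), a typical script pipeline chains several of these small compiled tools, so fast startup matters for every stage:

```shell
# Each stage below is a separate small compiled C program; shell scripts
# launch pipelines like this thousands of times.
# (The sample input lines are made up for the example.)
printf 'alice:x:1000\nbob:x:1001\n' |
  cut -d: -f1 |                      # keep only the user-name field
  awk 'END { print NR " users" }'    # count the lines that arrived
# prints: 2 users
```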
Ridiculous. Utterly ridiculous.
Code: Select all
$ time /bin/cp makefile /tmp/t
Two milliseconds elapsed to copy a 13KB file.
Compilers have no time limit, and they have access to the entire source file and even, to a degree (via LTO), to other source files.
They might even have access to profiling feedback.
Obviously, interpreters have to limit their optimization, because the time it takes eats into the total execution time. For long-running programs it may be worthwhile; for short programs it is a disaster. I doubt many interpreters (if any) examine the entire source to understand how each piece of code is actually used, or to prove it's not used at all.
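A compiler really can prove code unused and delete it outright. A quick check (the toy file dce_demo.c is invented here; -w just silences the unused-function warning):

```shell
# With -O2, GCC removes a static function that is never called: its name
# does not appear anywhere in the generated assembly.
cat > dce_demo.c <<'EOF'
#include <stdio.h>
static int never_called(int x) { return x * 42; }   /* dead code */
int main(void) { puts("hi"); return 0; }
EOF
gcc -w -O2 -S -o - dce_demo.c | grep -c never_called || true
# prints: 0   (no occurrences in the assembly)
```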
Anything other than solving the problem is an overhead.
A = B + C must compile down to a single machine instruction.
In any interpreted language it will be thousands.