SiriusHardware wrote: Hi gents, thanks for the interesting replies. My reasons for posing this question were twofold -
One, that code assembled or compiled to executable form should always run faster than the interpreted alternative - although of course that depends upon how it is done. In the old days the interpreter would work through the source, working out what to do afresh each time a line of code was encountered - a terrible waste of processor time.
That's the definition of an interpreter. A pure interpreter would always interpret each line as it came to it (with a performance hit to boot). BBC BASIC is a little better than that: it "tokenises" recognised keywords as each line is entered, so that when interpreting a line it doesn't have to match a five-character keyword like INPUT character by character - it just checks a single tokenised byte.
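As a rough sketch of what that looks like: on a BBC Micro the tokenised program sits in memory starting at PAGE, each stored line being a &0D marker, a two-byte line number and a length byte, followed by the tokenised text. You can peek it from BASIC itself:

 10 REM Dump this program's own stored form, byte by byte
 20 REM Keywords (REM, FOR, PRINT, NEXT) appear as single
 30 REM token bytes in the &80-&FF range, not as full words
 40 FOR I% = PAGE TO PAGE + 39
 50 PRINT ~?I%;" ";
 60 NEXT

So when the interpreter reaches line 40 it sees one byte for FOR, rather than having to recognise the three letters F, O, R every time round.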
SiriusHardware wrote: I suppose that on modern machines with memory to burn, the interpreter could do a single initial pass through the whole source code effectively assembling or compiling it to a ready-to-run image in RAM, and then run it.
I don't think it's a memory issue, in fairness. It was more an issue of speed. On older machines compiling a program would take an appreciable amount of time (anywhere from a second or two for a simple program up to several minutes for long ones).
When you give the command to run, a compiler has to analyse the entire program before ANY of it can be run. An interpreter, on the other hand, can start the moment you type RUN, since it only needs to concern itself with analysing one line at a time. Even if the program is 100,000 lines long the interpreter still gets to start virtually instantly - whereas the compiler would sit there churning away for minutes.
Modern computers are so much faster that, unless a program is pretty big and complicated, an interpreter and a compiler would in many instances seem to start simultaneously - so the downside of compiling isn't as obvious. (Also, modern compilers are smart: they don't recompile any code that hasn't changed since the last compile.)
The big thing is that a compiler checks ALL of the program and can verify that it is all syntactically correct - an interpreter doesn't; it only "looks" at the bits it executes. For example, if I made a mistake in a bit of BASIC that was rarely called, then most of the time the code would run correctly - the interpreter wouldn't complain, as it was never "tripped up" by the error it didn't reach. But if, on another occasion, the interpreter DID enter that invalid code fragment, it would fail with a runtime error.
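A minimal sketch of that in BBC BASIC - the misspelling in line 100 is deliberate, and nothing complains about it until that line is actually executed:

 10 INPUT "Enter a value ", A
 20 IF A = 0 THEN GOTO 100
 30 PRINT "Normal path - the broken line was never reached"
 40 END
100 PRITN "PRINT is misspelled here"
110 END

Run it with any non-zero value and it finishes happily; run it with 0 and the interpreter reaches line 100 and stops with something like "Mistake at line 100".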
This makes testing interpreted programs a bit of a pain (you need to check ALL parts are reached and that they are all correct). In complex code that can be difficult.
A compiler, on the other hand, would go through the whole program before it's run - checking that each part was valid - and if there were one or more errors it would list them all. If you like, the compiler "helps" you find the errors rather than letting your users find them...