This is gonna have a load of quotes taken pretty much at random from the thread, and some of those quotes will be deliberately taken out of context to make a point. Sorry about all of that in advance.
Heater wrote:… Dijkstra was totally on the mark
Dijkstra was famously rude. Some of his other quotes are fantastic. In this case, it's very easy to take it as a frontal attack on BASIC (which it is), but it's also an attack on the absurdly stupid concept of teaching programming using a deliberately dumbed-down language, which cannot (or, rather, could not) express "modern" concepts. And he was right in both respects.
Heater wrote:So when you are done BASIC is what they think computing is. There is nothing else … they never get inspired to explore further.
Exactly. And without wanting to be rude, DavidS is a shining example of this.
And Dijkstra's famous comment holds.
Heater wrote:Indeed, modern processors make execution time almost unknowable. In the extreme an instruction fetch can require data from cache which is not there, which then causes a fetch from memory that is not there
Which is a fault, and <something happens here, dependent on operating system>. As far as the processor side of things goes, the time for the fetch to execute is bounded by the time to clear and fill one or more cache lines, but it may fail.
Heater wrote:But Lucol was more than that. It was impossible to write loops that you could not know the execution time of. It was possible to know every possible execution path through a module.
That's an exceptionally interesting language / toolset, especially given the emphasis on guaranteed correct parallel processing. A cynic might point out that it was developed by Lucas, who couldn't even make car and motorcycle headlights work correctly, and laugh, but that would be unkind. *cough* Prince of Darkness *cough*.
jahboater wrote:It is at least theoretically possible with some modern languages to prove that a program is correct.
That's always been possible, but only for "a" program, not for "any" program. You must restrict some combination of what the language lets you express, which programs you will accept, and what data you will accept for a given program. Otherwise the Turing tarpit will drag you down every time, and Gödel will kick you in the nuts. The main difficulty is making the subset you're restricted to useful.
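To make that concrete, here's a tiny sketch (Haskell, purely for illustration; the names are mine, not from anyone's real code): by restricting the data a function will accept, a property that is in general undecidable for arbitrary programs becomes a guarantee the compiler hands you for free.

Code: Select all
import Data.List.NonEmpty (NonEmpty(..))

-- By restricting the accepted data to non-empty lists, "this call never
-- fails" stops being something you have to prove about the whole program
-- and becomes a compile-time guarantee. Deciding the same thing for head
-- on plain lists, in an arbitrary program, is impossible in general.
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

main :: IO ()
main = print (safeHead (1 :| [2, 3]))   -- prints 1

The trade-off is exactly the one above: the more you restrict, the easier the proof, and the harder it is to keep the subset useful.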
Heater wrote:like lisp and scheme. No types there.
Paaaarp! No. There are types all over the place in lisp and scheme; it's just that you don't have to state them explicitly. The type of cons, for example, is pair <- *, *. A major part of making a performant lisp interpreter / compiler is static type analysis, and in particular working out where dynamic type checking can be discarded.
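To illustrate (a sketch in Haskell, since it's what I have to hand; the representation and names are hypothetical, not from any real lisp implementation): inside a typical interpreter every value carries a type tag at run time, cons always produces a pair, and the compiler's job is proving where those tags never need checking.

Code: Select all
-- Minimal, made-up representation of dynamically typed lisp values:
-- every value carries its type at run time, even though the programmer
-- never wrote an annotation.
data Value
  = VInt  Integer
  | VSym  String
  | VPair Value Value   -- what cons builds: always a pair
  | VNil
  deriving Show

-- cons has a perfectly definite type: any two values in, a pair out.
cons :: Value -> Value -> Value
cons = VPair

-- car must check the tag at run time -- unless static analysis has
-- already proved the argument is a pair, in which case the check (and
-- its cost) can be discarded.
car :: Value -> Either String Value
car (VPair a _) = Right a
car _           = Left "car: not a pair"

main :: IO ()
main = do
  print (car (cons (VInt 1) VNil))   -- Right (VInt 1)
  print (car (VInt 1))               -- Left "car: not a pair"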
Heater wrote:Java … I started to realize that types are stupid. They get in the way. They obscure what you want to say with a mass of syntactic junk
No, that's a syntactic issue with Java, caused by a desire for provable type-correctness at compile time. And yet, as you point out:
Code: Select all
int amps = 10;
int volts = 3;
int result = amps + volts;
… but the problem there isn't types, it's inadequate types. For starters, as "typed" above, if you'll forgive the pun, it's a problem with languages that chain their type systems to very light abstractions over underlying machine types. But there's more to it than that. Semantics and representability are two separate things.
The solution is to deal with semantic types, so for a given simulation, volt is a type for a continuously variable numeric value, perhaps between two bounds. Likewise amp, and the add operator is then only defined for amp <- amp, amp and volt <- volt, volt. The multiply operator, then, might be defined as watt <- volt, amp.
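A sketch of what that might look like (Haskell here, purely for illustration; the type and function names are mine, not a real units library): adding amps to volts simply refuses to compile, while multiplying them is defined and yields watts.

Code: Select all
newtype Volt = Volt Double deriving Show
newtype Amp  = Amp  Double deriving Show
newtype Watt = Watt Double deriving Show

-- add is only defined within a semantic type...
addVolts :: Volt -> Volt -> Volt
addVolts (Volt a) (Volt b) = Volt (a + b)

addAmps :: Amp -> Amp -> Amp
addAmps (Amp a) (Amp b) = Amp (a + b)

-- ...while multiply is defined across them, with the right result type.
power :: Volt -> Amp -> Watt
power (Volt v) (Amp i) = Watt (v * i)

main :: IO ()
main = do
  let amps  = Amp 10
      volts = Volt 3
  -- print (addVolts amps volts)   -- rejected at compile time: Amp is not Volt
  print (addAmps amps (Amp 2))     -- Amp 12.0
  print (power volts amps)         -- Watt 30.0

The "perhaps between two bounds" part would need smart constructors or a refinement-type extension; I've left that out to keep the sketch short.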
None of this means you necessarily have to annotate the types all over your source, either. Hindley-Milner is pretty powerful, and there's all sorts of extensions to it; most type checking can be done at compile time (we can consider an interpreter as a special case of JIT compilation). That said, a certain amount of annotation generally makes type errors much easier to read and localise.
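For example (again Haskell, again just a sketch): nothing below needs an annotation for the compiler to infer a fully general type, but the one signature I did write pins the intent down and gives any eventual type error somewhere sensible to point at.

Code: Select all
-- Hindley-Milner inference at work: no annotations needed.
double x = x + x            -- inferred: Num a => a -> a
pairUp x y = (x, y)         -- inferred: a -> b -> (a, b)

-- An explicit signature doesn't change the meaning, but it localises
-- any type error and documents the intent.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  print (double 21)           -- 42
  print (pairUp 'a' True)     -- ('a',True)
  print (average [1, 2, 3])   -- 2.0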