Heater wrote (Fri Nov 16, 2018 3:06 am):
And as such none of what is being ported took advantage of features specific to either platform...
That is a fair point.
I imagine that in order to be cross-platform they have to work at some lowest-common-denominator level of GUI features.
Personally I'm very happy that that lowest common denominator is high enough that all those applications behave exactly the same on every platform they run on. That means there is no weirdness, or even small differences, to get used to when moving from place to place. Everything is the same: familiar and comfortable.
The ease of maintenance that comes from Assembly...
Sometimes I start to believe that you are joking with us. Nobody could possibly claim, with a straight face, that writing code in assembler increases the ease of maintenance. It does not. Having worked for too many years in assembler, on various machines, I know this to be true.
A joke of sorts, I guess. I was pointing out how messy a lot of high-level language code is nowadays. I see unintelligible C presented as a good example every day, and worse C code in production-level products. Sometimes it looks as though the authors of sections of many projects are bidding to win the Obfuscated C Code Contest, and are very good at the obfuscation part.
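For illustration, here is a made-up example of the kind of thing I mean. Both functions count the set bits in a word the same way (Kernighan's method), but only one of them is something I would want to maintain:

#include <stdint.h>

/* The "contest entry" style I keep running into. */
int pc(uint32_t v){int c=0;for(;v;c++)v&=v-1;return c;}

/* The same algorithm, written for the next maintainer. */
int popcount(uint32_t value)
{
    int count = 0;
    while (value != 0) {
        value &= value - 1;   /* clear the lowest set bit */
        count++;
    }
    return count;
}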
There are multiple sides to that coin. Yes, well-written code in a high-level language will be more maintainable, though code written by enough different people in C can become much more of a headache to maintain than the same thing written in well-thought-out assembly.
Still, as long as the code is clean and has a minimal number of contributors, yes, high-level languages are much easier to maintain. The same goes if the assembly language author is not very good about keeping their code modular: high-level languages are much easier to maintain.
...sacrifices some portability.
Similarly no one can claim that assembler is more portable. It is not.
Which is why I said that using assembly sacrifices some portability. At best it ties you to a single CPU architecture.
The tradition used to be: let the compiler do the simple optimizations, and hand-optimize the speed-critical sections of the compiler's output.
Perhaps it was.
Three decades ago I had such notions about a project's performance bottleneck. Not even in C; this project was in PL/M 86. I found I could get a huge speed boost by rearranging the algorithm's PL/M source a bit and unrolling a loop. After that I looked into the assembler output to see what further gains could be had. It turned out to be only a few percent. We left it as PL/M source, as we had achieved the performance target and having readable, maintainable source was the better idea.
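The PL/M source is long gone, but a hypothetical C sketch of that kind of rearrangement looks like this; the unrolled version pays the loop overhead once per four elements instead of once per element:

#include <stddef.h>

/* Straightforward version: one loop test per element. */
long sum(const int *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Unrolled by four, with a cleanup loop for the last n % 4 elements. */
long sum_unrolled(const int *a, size_t n)
{
    long total = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        total += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)
        total += a[i];
    return total;
}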
Cool. Though a few percent can make a huge difference in code that is run hundreds of thousands of times per second, and a fraction of a percent can make a huge difference at that rate: shaving even 1% off a routine that saturates a core buys back 10 ms of CPU time every second.
Today it's very hard to outdo the compiler even with those source tweaks. It knows many more tricks for optimizing code than you do. Also, optimizations that work well on one machine may deoptimize things when moved to another machine. That is true even when moving code between different generations of the same architecture: new processor revisions with different caches, pipelines, branch predictors, execution units, and instructions may turn your old optimizations into deoptimizations.
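A small hypothetical example: the shift-and-add trick that used to be worth writing by hand is something any current compiler does on its own, so the hand-rolled version only obscures intent:

/* Old hand "optimization": strength-reduce a multiply into shift and add. */
int scale_by_hand(int x)
{
    return (x << 3) + x;   /* x * 9, spelled out the hard way */
}

/* GCC and Clang at -O2 compile this to the same single LEA instruction
 * on x86-64, so the readable version costs nothing. */
int scale(int x)
{
    return x * 9;
}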
There is no question that modern compilers are capable of extreme optimizations, though at a cost which is often way too high. It makes sense to let the compiler do the optimizations if the project is small enough that it does not take very long to compile with optimizations turned on.
On the other hand, when a project gets to the point where it takes more than half an hour to compile with the needed level of optimization (which, as you know, does not require a very big project), it no longer makes sense to have the compiler do so much optimization, and it is time to fall back to tcc.
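There is a middle ground, sketched below on the assumption of a GCC build (the optimize attribute is GCC-specific, and the function here is just an example): build the bulk of the project quickly at -O0 and ask for heavy optimization only on the known hot spots.

/* Hypothetical hot spot in an otherwise unoptimized build.  The attribute
 * asks GCC to compile just this function at -O3; the guard keeps the code
 * acceptable to compilers that lack the attribute (tcc, for one). */
#if defined(__GNUC__) && !defined(__clang__)
__attribute__((optimize("O3")))
#endif
long dot_product(const int *a, const int *b, long n)
{
    long total = 0;
    for (long i = 0; i < n; i++)
        total += (long)a[i] * b[i];
    return total;
}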
Remember that the time-critical sections are usually only a few small parts of the code, and if you know exactly where they are, they are fairly easy to optimize by hand in a fairly short time (under half an hour in 99% of cases), as it is often less than 1% of the code that needs optimizing. And once you have optimized the compiler's output by hand, it need not be redone until the code changes (unless the hand optimization introduced a bug, which is fairly rare, though it does happen).
So it is still worth the time to hand-optimize for a given target, as it saves many hours of wasted time waiting on a compiler.
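As a hypothetical sketch of keeping that hand work contained: one hot byte-swap routine hand-tuned for x86-64 with GCC-style inline assembly, plus a portable C fallback so every other target still builds from the same source.

#include <stdint.h>

/* Hypothetical hand-tuned hot spot: byte-swap a 32-bit word.  Only this
 * one function is tied to the CPU; the #else branch keeps the same source
 * building everywhere else. */
static inline uint32_t swap32(uint32_t x)
{
#if defined(__GNUC__) && defined(__x86_64__)
    __asm__("bswap %0" : "+r"(x));   /* one instruction on x86-64 */
    return x;
#else
    return (x >> 24) | ((x >> 8) & 0x0000ff00u)
         | ((x << 8) & 0x00ff0000u) | (x << 24);
#endif
}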
At the end of the day, the last thing you want in a large code base is different source code for every architecture you want to run on, and for every variant of processor within each architecture. That would be a huge waste of manpower and a maintenance nightmare.
I agree with that completely, to the limit of it being worth the time. Wasting time is not worth it either.