Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 1:02 am

When you put the output/destination on the left then the code matches more closely the arithmetic expression as we normally think of it and see in the comments:
PROCbigcpy(f%%, b%%) : REM f = b
PROCbiguadd(g%%, b%%, c%%) : REM g = b + c
That is one less transformation the reader needs to make in their head to understand things.

ejolson
Posts: 3072
Joined: Tue Mar 18, 2014 11:47 am

Re: Introduction to BBC BASIC

Tue May 28, 2019 2:21 am

RichardRussell wrote:
Mon May 27, 2019 8:47 pm
RichardRussell wrote:
Mon May 27, 2019 7:08 pm
I'll take a look.
I notice that one of the optimisations involves the use of 'biginc' and 'bigdec' routines, which I haven't provided in my library, so that's not something I can instantly implement.
You can create increment and decrement operations from the ones already coded by initializing the big number one with the value 1 at the start of the program and then calling bigadd and bigsub with one as the final argument. If increment and decrement operators based on n mod 4 seem too fiddly, it may be possible to code a squaring routine that somehow leverages the symmetries in a*a so that such a multiplication only takes half as many operations as a fully general multiplication.
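The squaring idea can be sketched in miniature (toy base-10 digits here, nothing like a real bignum library's limb format): each cross product x[i]*x[j] with i &lt; j appears twice in the square, so computing it once and doubling roughly halves the digit multiplications compared with a general product.

Code: Select all

```c
#include <string.h>

/* Toy base-10 squaring exploiting symmetry: square an n-digit
   little-endian number x into the 2n-digit result r. The cross
   products x[i]*x[j] (i < j) are computed once and doubled, roughly
   halving the multiplications versus a general schoolbook product. */
static void bigsqr(const int *x, int n, int *r) {
    memset(r, 0, sizeof(int) * 2 * n);
    for (int i = 0; i < n; i++) {
        r[2*i] += x[i] * x[i];             /* diagonal terms, once  */
        for (int j = i + 1; j < n; j++)
            r[i+j] += 2 * x[i] * x[j];     /* cross terms, doubled  */
    }
    int carry = 0;                         /* single carry pass     */
    for (int k = 0; k < 2 * n; k++) {
        r[k] += carry;
        carry = r[k] / 10;
        r[k] %= 10;
    }
}
```

A real library would do this over large limbs rather than decimal digits, but the counting argument is the same.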

I put the return value first so the arguments in

bigadd(x,a,b)

appear in the same order as

x = a + b

I suspect this is also the reason GMP chose the same convention, though as already mentioned, it is a fairly arbitrary convention.

As for classic.bas being spaghetti, as far as I know there is only one gratuitous GOTO in the entire program, which occurs in the middle of the Karatsuba algorithm because I ran out of line numbers. All other GOTO and GOSUB statements are direct mappings of the structured flow control from the other programs. Note, however, that to maintain the traditional style of classic Basic no indentation was used.

Although classic.bas does seem a bit difficult to follow, I suspect much the same effect would result by removing the indentation from the fibonacci.c program, renaming each routine something like sub0012 and then flowing the code into paragraphs.

Further discussion of program readability might best be moved here to the continuation of the Why Avoid Basic thread, which is currently entitled A Final Fibonacci Challenge.

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 3:38 am

There is something I don't understand.

That algorithm in the classic BASIC is calculating two fibs at a time with a worker function inside the fibo function, like the "fiboFaster" function I created in Javascript here: https://github.com/ZiCog/fibo_4784969/b ... pt/fibo.js, which is based on one of ejolson's solutions.

It's a lot longer and more complex than the other fast fibo solutions, yet I am seeing that it only speeds things up by about 4%. If I remove the memoization from my other JS fibos then it is faster by about 20%.

Given the complexity of the thing I'd give up the 4%/20% performance gain and implement the simpler algorithm.

Is this just a quirk of Javascript? Or is it only marginally faster in other languages as well?

ejolson
Posts: 3072
Joined: Tue Mar 18, 2014 11:47 am

Re: Introduction to BBC BASIC

Tue May 28, 2019 5:11 am

Heater wrote:
Tue May 28, 2019 3:38 am
There is something I don't understand.

That algorithm in the classic BASIC is calculating two fibs at a time with a worker function inside the fibo function, like the "fiboFaster" function I created in Javascript here: https://github.com/ZiCog/fibo_4784969/b ... pt/fibo.js, which is based on one of ejolson's solutions.

It's a lot longer and more complex than the other fast fibo solutions, yet I am seeing that it only speeds things up by about 4%. If I remove the memoization from my other JS fibos then it is faster by about 20%.

Given the complexity of the thing I'd give up the 4%/20% performance gain and implement the simpler algorithm.

Is this just a quirk of Javascript? Or is it only marginally faster in other languages as well?
Could it be longer but less complex?

The resulting algorithm is tail recursive if you do two at a time. This reflects the fact that the Fibonacci sequence is a two-step scheme. From one point of view, memoisation using suitable hash functions is pretty complicated compared to tail recursion. This is especially true in line-numbered Basic.

Note that for a three-step scheme (an extra term for rabbits dying?) one could work with a three dimensional state space and a three-by-three matrix to achieve a similar effect.
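As a hypothetical illustration of that three-step idea, the recurrence T(n) = T(n-1) + T(n-2) + T(n-3) (a stand-in for the dying-rabbits variant; the name `trib` and the code below are not from any program in this thread) can be driven by powers of its three-by-three companion matrix, just as [1 1; 1 0] generates Fibonacci:

Code: Select all

```c
#include <stdint.h>
#include <string.h>

/* Three-term recurrence T(n) = T(n-1) + T(n-2) + T(n-3), with
   T(0)=0, T(1)=0, T(2)=1, computed via powers of the 3x3 companion
   matrix using square-and-multiply (O(log n) matrix products). */
typedef uint64_t M[3][3];

static void mul(M r, M a, M b) {
    M t;                                   /* temp so r may alias a or b */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            t[i][j] = 0;
            for (int k = 0; k < 3; k++)
                t[i][j] += a[i][k] * b[k][j];
        }
    memcpy(r, t, sizeof t);
}

static uint64_t trib(unsigned n) {
    M p = {{1,0,0},{0,1,0},{0,0,1}};       /* identity */
    M c = {{1,1,1},{1,0,0},{0,1,0}};       /* companion matrix */
    for (; n; n >>= 1) {                   /* square-and-multiply */
        if (n & 1) mul(p, p, c);
        mul(c, c, c);
    }
    return p[2][0];                        /* picks out T(n) */
}
```

The repeated squaring gives the same doubling effect as the two-at-a-time Fibonacci trick, just with a three-dimensional state.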

User avatar
RichardRussell
Posts: 531
Joined: Thu Jun 21, 2012 10:48 am

Re: Introduction to BBC BASIC

Tue May 28, 2019 6:58 am

ejolson wrote:
Tue May 28, 2019 2:21 am
You can create increment and decrement operations from the ones already coded by initializing the big number one with the value 1 at the start of the program and then calling bigadd and bigsub
Yes, but it's particularly inefficient in the case of my library. The use of BBC BASIC's whole-array addition can be a big win compared with a loop, but only if the two values being added have the same or a similar number of limbs. Adding a number with many limbs to a single-digit number like '1' will be significantly slower than a naive increment, so if I'm going to include 'biginc' and 'bigdec' I want to code them independently.
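A minimal sketch of why a dedicated increment is cheap, assuming a little-endian array of 64-bit limbs (this limb layout is an assumption for illustration, not the library's actual format): unlike a full bigadd against a constant 'one', the increment touches limbs only while the carry propagates, which is almost always just limb 0.

Code: Select all

```c
#include <stddef.h>
#include <stdint.h>

/* Increment a big number stored as nlimbs little-endian 64-bit limbs.
   Stops as soon as a limb does not wrap to zero, i.e. as soon as the
   carry clears, so the common case touches a single limb. */
static void biginc(uint64_t *limb, size_t nlimbs) {
    for (size_t i = 0; i < nlimbs; i++)
        if (++limb[i] != 0)     /* no carry out of this limb: finished */
            return;
}
```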
I suspect this is also the reason GMP choose the same convention, though as already mentioned, it is a fairly arbitrary convention.
I've changed my library now.
Although classic.bas does seem a bit difficult to follow, I suspect much the same effect would result by removing the indentation from the fibonacci.c program, renaming each routine something like sub0012 and then flowing the code into paragraphs.
You're right, I was a bit glib about the spaghetti comment, sorry. There are many reasons for the lack of readability, most of which don't stem from your coding style but limitations of that particular BASIC dialect.

User avatar
RichardRussell
Posts: 531
Joined: Thu Jun 21, 2012 10:48 am

Re: Introduction to BBC BASIC

Tue May 28, 2019 9:22 am

RichardRussell wrote:
Tue May 28, 2019 6:58 am
I want to code them independently.
I've now added biginc and bigdec to my library, reducing the overall execution time to about 200 seconds using this code (based on visual.bas):

Code: Select all

      HIMEM = PAGE + 20000000
      INSTALL @lib$ + "bigint"
      PROCbiginit

      fibo%% = FNbignew(1000000)

      TIME = 0
      PROCfibo(4784969, fibo%%)
      *spool fibo1M.txt
      PRINT FNbigstr(fibo%%)
      *spool
      PRINT TIME/100 " seconds"
      END

      REM Fibonacci calculation using the doubling algorithm:
      DEF PROCfibo(N%, f%%)
      LOCAL S%, a%%, b%%, c%%
      S% = N% * LOG((SQR(5) + 1) / 2) + 1
      a%% = FNbignew(S%) : b%% = FNbignew(S%)
      PROCfibo2(N% DIV 2, a%%, b%%)
      c%% = FNbignew(S%)

      IF N% AND 1 THEN
        REM f = b*(2*b-a)-(-1)^k
        PROCbiguadd(f%%, b%%, b%%)
        PROCbigusub(c%%, f%%, a%%)
        PROCbigumul(f%%, b%%, c%%)
        IF N% MOD 4=1 THEN PROCbigudec(f%%) ELSE PROCbiguinc(f%%)
      ELSE
        REM f = b*(2*a+b)
        PROCbiguadd(f%%, a%%, a%%)
        PROCbiguadd(c%%, f%%, b%%)
        PROCbigumul(f%%, b%%, c%%)
      ENDIF
      ENDPROC

      DEF PROCfibo2(N%, f%%, g%%)
      LOCAL S%, a%%, b%%, c%%, d%%
      S% = N% * LOG((SQR(5) + 1) / 2) + 1
      IF N% = 0 THEN
        PROCbigval(f%%, "0") : REM f = 0
        PROCbigval(g%%, "1") : REM g = 1
        ENDPROC
      ENDIF
      a%% = FNbignew(S%) : b%% = FNbignew(S%)
      PROCfibo2(N% DIV 2, a%%, b%%)
      c%% = FNbignew(S%) : d%% = FNbignew(S%)

      IF N% AND 1 THEN
        REM f = a*(2*a+b)+(-1)^k
        REM g = b*(2*a+b)
        PROCbiguadd(c%%, a%%, a%%)
        PROCbiguadd(d%%, c%%, b%%)
        PROCbigumul(g%%, b%%, d%%)
        PROCbigumul(f%%, a%%, d%%)
        IF N% MOD 4 = 1 THEN PROCbiguinc(f%%) ELSE PROCbigudec(f%%)
      ELSE
        REM f = a*(2*b-a)
        REM g = b*(2*b-a)-(-1)^k
        PROCbiguadd(c%%, b%%, b%%)
        PROCbigusub(d%%, c%%, a%%)
        PROCbigumul(f%%, a%%, d%%)
        PROCbigumul(g%%, b%%, d%%)
        IF N% MOD 4 = 0 THEN PROCbigudec(g%%) ELSE PROCbiguinc(g%%)
      ENDIF
      ENDPROC

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 5:06 pm

ejolson,
Could it be longer but less complex?
No, it could not. Trust me, I know.

It's like this: A mathematical dunder head is doing his drunkard's walk through the world wide web and perchance stumbles into this page about fast ways to calculate Fibonacci numbers: https://www.nayuki.io/page/fast-fibonacci-algorithms. "That's interesting" he thinks, and after vaguely grasping the maths of it all decides to code up the formulas shown there:

F(2k) = F(k)(2F(k + 1) - F(k))
F(2k + 1) = F(k + 1)^2 + F(k)^2

Half an hour later he has a working fast fibo in Javascript, now archived here: https://github.com/ZiCog/fibo_4784969/b ... pt/fibo.js

Code: Select all

function fibo (n) {
  if (typeof memo[n] != 'undefined') {
    return memo[n]
  }
  let k = (n / 2) | 0
  let a = fibo(k);
  let b = fibo(k + 1);
  if (isEven(n)) {
    return memo[n] = a * ((b * 2n) - a)
  }
  return memo[n] = a ** 2n + b ** 2n
}
What could be less complex? Even our dunder head could do it?

Later came some smarter solutions, some much longer and more complex, with a performance increase of only a factor of 1.8. Given that dunder head's solution is already over 30 times faster than the schoolboy iterative solution, what the heck? Give me simple.

Without memoization:

Code: Select all

fibo             in 15237ms
fiboFaster       in 7000ms
With original memoization of fibo() by dunder head:

Code: Select all

fibo             in 12366ms
fiboFaster       in 6858ms
The resulting algorithm is tail recursive if you do two at a time.
That is great and all. Perhaps from an algorithmic purist kind of view. However the only language system we have here that supports tail call optimization is Scheme (as far as I know). Besides, given all the other work going on with big integer maths I don't think tail call optimization would even show up in the performance results. It's not required anyway as not much stack depth is used at these number sizes.
From one point of view, memoization using suitable hash functions is pretty complicated compared to tail recursion.
Now here is the thing. The challenge was all about language expressiveness. It happens that Javascript, Python and the like can express memoization trivially. How it's implemented under the hood is of no more concern than how big integers are implemented or even GOTO in BASIC. The desired algorithm is clearly stated.

I have no idea about those multi-dimensional rabbits :)

Note: Timings above do not include actual printing of the fibo(4784969) result. That takes Javascript an inordinate two and a half minutes!

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 5:12 pm

Heater wrote:
Tue May 28, 2019 5:06 pm
However the only language system we have here that supports tail call optimization is Scheme (as far as I know).
And C of course. Well GCC does it routinely.

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 5:50 pm

jahboater,
And C of course. Well GCC does it routinely.
We should not confuse a language, say C, with an implementation, say GCC. They are very different things.

The C language standard does not mandate tail call optimization. The Scheme definition does.

GCC and Clang/LLVM may well do tail call optimization. They only do it when optimizations are enabled and the circumstances under which they do it are poorly documented.

In Scheme one can do this:

Code: Select all

$ cat count.scm
(define countUp (lambda (n)
    (display n)
    (newline)
    (countUp (+ n 1))
))
(countUp 0)
$ chibi-scheme count.scm
0
1
2
3
...
And be sure it ain't never going to stop. By definition.

In C this is not to be relied on.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 6:30 pm

Heater wrote:
Tue May 28, 2019 5:50 pm
GCC and Clang/LLVM may well do tail call optimization. They only do it when optimizations are enabled and the circumstances under which they do it are poorly documented.
It's not documented at all. It is, as you quite rightly say, an implementation detail.
Heater wrote:
Tue May 28, 2019 5:50 pm
In Scheme one can do this:

Code: Select all

$ cat count.scm
(define countUp (lambda (n)
    (display n)
    (newline)
    (countUp (+ n 1))
))
(countUp 0)
$ chibi-scheme count.scm
0
1
2
3
...
And be sure it ain't never going to stop. By definition.

In C this is not to be relied on.
I see what you mean. In Scheme each iteration requires no extra stack, so it loops forever.
I wrote this in C ...

Code: Select all

static void
countup( int n )
{
  printf( "%d\n", n );
  countup(n + 1);
}

int
main( void )
{
  countup(0);
}
Unfortunately, the compiler detects the infinite recursive loop, and just outputs ...

n = 0;
loop:
printf( "%d\n", n );
++n;
goto loop;

There is no tail call optimization; it's removed the entire function!
So it does the same as the Scheme version, loops forever, as it must do because there is no terminating condition.
I guess you are saying a crap compiler might implement it literally, with a function and recursion, which will fail when the stack runs out.

Interesting!

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:12 pm

jahboater ,

Wait a minute there. What is going on? Before I posted I tried an almost identical experiment with C and GCC. This C code:

Code: Select all

$ cat count.c
#include <stdio.h>

void countUp(int n) {
    printf ("%d\n", n);
    countUp(n + 1);
}

int main (int argc, char* argv[]) {
    countUp(0);
    return 0;
}
Compiled without optimization displays a rising count until it segfaults when out of stack:

Code: Select all

$ gcc -Wall -O0 -o count count.c
$ ./count
0
1
2
3
...
261878
261879
261880
261881
Segmentation fault (core dumped)
$
But compiled with -O3 it is still running....

I would have to inspect the compiled assembler to verify it does tail call optimization in this case. As you say, I would expect the function to get inlined and disappear. There should be no function, only a loop. Same thing really.

This is all very dodgy territory. Turning on optimization should not change the behavior of a program, except of course if the source depends on behavior that is undefined by the C standard. Which it is in this case.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:21 pm

I tried various optimization levels

-O0 plain recursive function - fails when stack runs out
-O1 same
-O2 replaces function with a simple loop
-O3 same

As you say, it's probably UB so the compiler can do what it likes.
It's amusing that in this case it has corrected the code ...
Last edited by jahboater on Tue May 28, 2019 7:24 pm, edited 1 time in total.

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:22 pm

jahboater,
I guess you are saying a crap compiler might implement it literally, with a function and recursion, which will fail when the stack runs out.
No. That is not what I am saying.

We have at least two things going on here:

1) Function inlining.

2) Tail call optimization.

I'm sure either one of those could be done on its own with optimizations turned on, depending on how one's source meets the criteria of the optimizers.

I could imagine the function is not inlined but is tail call optimized. Still gets called, but jumps back to its beginning rather than calling itself.

If by "crap compiler" you mean GCC or Clang with -O0 then yes, I hope they implement it literally.

What I'm saying is that one should not rely on behavior that is undefined by the language specification.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:32 pm

Heater wrote:
Tue May 28, 2019 7:22 pm
I could imagine the function is not inlined but is tail call optimized. Still gets called, but jumps back to it's beginning rather than calling itself.
Yes. But I cannot get GCC to do that. Above -O1, it has clearly deduced what the purpose of the function was and replaced it with an equivalent loop (one step better than inlining).
Heater wrote:
Tue May 28, 2019 7:22 pm
What I'm saying is that one should not rely on behavior that is undefined by the language specification.
100% agree!

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:40 pm

jahboater,

Yeah, I have no idea what criteria GCC or Clang use for function inlining. I would imagine if the function was big enough and used from many places it would refuse to inline it.

If we ever managed to make something it won't inline then the next question is, what criteria does it have for putting in tail call optimization?

I bet if we had enough time to tinker with it we could trigger all possibilities!

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 7:49 pm

Heater wrote:
Tue May 28, 2019 7:40 pm
what criteria does it have for putting in tail call optimization?
If a function ends with a call to another function, then it's almost certain to replace the "call" with a "jump".
It may have to tidy up the caller's stack first.
I guess in the example above it found a better optimization, so just didn't bother :)

It will also do it part way down a function, or multiple times:-

Code: Select all

void f( int xxx )
{
  if( xxx )
    foo();
  else
    bar();
}
will likely replace them both with jumps, even if one or other of them needs different stuff in the registers.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 8:32 pm

Heater wrote:
Tue May 28, 2019 7:40 pm
Yeah, I have no idea what the criteria GCC or Clang has for function inlining. I would imagine if the function was big enough and used from many places it would refuse to inline it.
Exactly. But the heuristics it uses are complicated, and they also depend on the optimization level.
For -Os it inlines fewer functions than for -O3, where it doesn't care about the executable size.
Obviously functions called once are always inlined.
There is "partial inlining" where, if the compiler's data-flow analysis can determine the argument values, or it's called with a literal, it inlines only the relevant parts of the function!

You can say ... "__attribute__ ((always_inline))"
Last edited by jahboater on Tue May 28, 2019 8:37 pm, edited 1 time in total.

plugwash
Forum Moderator
Forum Moderator
Posts: 3418
Joined: Wed Dec 28, 2011 11:45 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 8:37 pm

Heater wrote:
Tue May 28, 2019 7:12 pm
This is all very dodgy territory. Turning on optimization should not change the behavior of a program. Except of course if the source is depending on behavior that is undefined by the C standard. Which it is in this case.
IIRC the C standard has a massive caveat that even a well-formed program may fail if it exceeds the resources available on the system. The resource usage of a program can absolutely depend on the optimization level.
jahboater wrote:
Tue May 28, 2019 7:49 pm
If a function ends with a call to another function, then its almost certain to replace the "call" with a "jump".
Sadly not the case. There are at least two big difficulties with tail call optimisation in C.

The first is that in most C ABIs stack cleanup is handled by the caller. This means that the space available on the stack for passing parameters to a tail call is limited. So for example the following code cannot be tail call optimized on arm32 (it can be tail call optimized on arm64, but one could write a similar example that couldn't be by adding more parameters).

Code: Select all

void f2(int a, int b, int c, int d,int e);
void f1(int a, int b, int c, int d) {
    f2(a,b,c,d,1);
}
Even when sufficient stack space is available compilers sometimes fail to optimise tail calls that involve parameters on the stack. For example when I try the following code on the godbolt compiler explorer it seems that older versions of gcc fail to optimise it.

Code: Select all

void f2(int a, int b, int c, int d,int e);
void f1(int a, int b, int c, int d,int e) {
    f2(a,b,c,d,1);
}
A second problem is pointers to local variables, for example the following code cannot be tail call optimized.

Code: Select all

void f2(int a, int b, int c, int d,int e);
void f3(int *a);
void f1(int a, int b, int c, int d,int e) {
    int z;
    f3(&z);
    f2(a,b,c,d,z);
}
Because the compiler doesn't know what f3 (assumed here to be in another compilation unit and not visible to the optimiser) will do with the pointer. It would be perfectly legal for f3 to store the pointer in a global variable and for f2 to read it back from said global variable and dereference it.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Tue May 28, 2019 9:01 pm

plugwash wrote:
Tue May 28, 2019 8:37 pm
jahboater wrote:
Tue May 28, 2019 7:49 pm
If a function ends with a call to another function, then its almost certain to replace the "call" with a "jump".
Sadly not the case. There are at least two big difficulties with tail call optimisation in C.
Ah yes. I try to keep the number of arguments within reasonable bounds and avoid questionable aliasing ...
plugwash wrote:
Tue May 28, 2019 8:37 pm
For example when I try the following code on the godbolt compiler explorer it seems that older versions of gcc fail to optimise it.
Yes, not surprising. One reason why all my Pi's have GCC 9.1 :)

Thanks for the notes, interesting.

User avatar
RichardRussell
Posts: 531
Joined: Thu Jun 21, 2012 10:48 am

Re: Introduction to BBC BASIC

Tue May 28, 2019 10:27 pm

jahboater wrote:
Tue May 28, 2019 9:01 pm
Thanks for the notes, interesting.
Indeed. Dragging this back on topic, the million-digit Fibonacci calculation is, I think, the only BBC BASIC program I've ever encountered which actually runs faster using the 64-bit edition of the interpreter (coded in C, with -O2 optimisation) than using the 32-bit edition (coded in assembler). Normally the assembler implementation outperforms the C one by about a factor of two in speed.

Presumably because the Fibonacci calculation is dominated by 64-bit integer arithmetic (that's the size of my bigint limbs) the advantages of 64-bit registers and instructions are outweighing the shortcomings in code generation. But even so it only beats the 32-bit edition by a small margin.

ejolson
Posts: 3072
Joined: Tue Mar 18, 2014 11:47 am

Re: Introduction to BBC BASIC

Wed May 29, 2019 1:53 am

Heater wrote:
Tue May 28, 2019 5:06 pm
That is great and all. Perhaps from an algorithmic purist kind of view. However the only language system we have here that supports tail call optimization is Scheme (as far as I know).
A fairly transparent way of seeing how simple the calculation is when you use (a,b) to keep track of two values at a time is the Julia code on Rosetta:

Code: Select all

fib(n) = ([1 1 ; 1 0]^n)[1,2]
The fact that the algorithm is tail recursive means it is not really recursive at all and can be coded as a loop if desired.
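That loop form can be sketched directly by unrolling the doubling recursion over the bits of n, carrying the pair (a, b) = (F(k), F(k+1)) as state. A hypothetical C version with machine integers (so only meaningful up to F(93) in 64 bits; the bignum programs in this thread have the same control flow):

Code: Select all

```c
#include <stdint.h>

/* Fast doubling unrolled into a loop: scan the bits of n from the top,
   maintaining (a, b) = (F(k), F(k+1)). Each step doubles k and, if the
   current bit is set, advances by one. */
static uint64_t fib(uint64_t n) {
    uint64_t a = 0, b = 1;                       /* (F(0), F(1)) */
    for (int i = 63; i >= 0; i--) {
        uint64_t c = a * (2*b - a);              /* F(2k)   */
        uint64_t d = a*a + b*b;                  /* F(2k+1) */
        if ((n >> i) & 1) { a = d; b = c + d; }  /* k -> 2k+1 */
        else              { a = c; b = d;     }  /* k -> 2k   */
    }
    return a;
}
```

Because the state is just the pair (a, b), no stack is needed at all, which is the practical content of the tail-recursion observation.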

In C you can make the local variables static. That way even if the optimiser doesn't recognize the tail call, very little stack space is used per iteration compared to pushing all those million digit numbers.

The line-numbered Basic program still calls GOSUB recursively, but since the algorithm is tail recursive it is fine for a, b and all the temporary variables to be global. This is surprisingly useful because classic line-numbered Basic doesn't even have a notion of local variables.

Update: Changed to clarify the difference between modern Basic that accepts line numbers and classic line-numbered Basic which is close to the withdrawn ECMA-55 minimal Basic standard.
Last edited by ejolson on Wed May 29, 2019 5:11 pm, edited 3 times in total.

User avatar
RichardRussell
Posts: 531
Joined: Thu Jun 21, 2012 10:48 am

Re: Introduction to BBC BASIC

Wed May 29, 2019 8:24 am

ejolson wrote:
Wed May 29, 2019 1:53 am
This is surprisingly useful because line-numbered Basic doesn't even have a notion of local variables.
The term "line-numbered BASIC" could be misleading. BBC BASIC can use line numbers (they're optional) but of course has local variables (indeed my versions also have 'static' variables using the PRIVATE keyword); I'm sure that's also true of many other BASIC dialects. I'd be more inclined to stick to the term 'classic BASIC' or perhaps 'unstructured BASIC' to describe what you mean.

I chose not to use global bigints in my BBC BASIC version of the Fibonacci program. Although doing so would have reduced memory usage, it impairs code clarity in my opinion. Indeed I found visual.bas slightly harder to follow because of the use of global a() and b() to pass 'parameters' into the recursive function, especially as it wasn't immediately obvious to me that the tail recursion made it possible.

ejolson
Posts: 3072
Joined: Tue Mar 18, 2014 11:47 am

Re: Introduction to BBC BASIC

Wed May 29, 2019 5:08 pm

RichardRussell wrote:
Wed May 29, 2019 8:24 am
ejolson wrote:
Wed May 29, 2019 1:53 am
This is surprisingly useful because line-numbered Basic doesn't even have a notion of local variables.
The term "line-numbered BASIC" could be misleading.
That's a good point. I've updated my post to make the distinction more clear.

Heater
Posts: 12692
Joined: Tue Jul 17, 2012 3:02 pm

Re: Introduction to BBC BASIC

Wed May 29, 2019 6:15 pm

plugwash,
IIRC the C standard has a massive caveat that even a well-formed program may fail if it exceeds the resources available on the system. The resource usage of a program can absolutely depend on the optimization level.
Indeed. I would expect that to be so.

However, tail call optimization, or the lack of it, is a somewhat different matter.

A tail-recursive function in C is "well formed", both syntactically and semantically. If one were expecting to be able to make recursive solutions in C that don't use a ton of resources, as one can in Scheme, one would be sadly disappointed.

It's as well to assume C does not guarantee tail call optimization.

jahboater
Posts: 4462
Joined: Wed Feb 04, 2015 6:38 pm

Re: Introduction to BBC BASIC

Wed May 29, 2019 8:06 pm

Heater wrote:
Wed May 29, 2019 6:15 pm
If one were expecting to be able to make recursive solutions in C that don't use a ton of resources, as one can in Scheme, one would be sadly disappointed.
Why? Have you actually benchmarked recursive Scheme code compared to the equivalent recursive C code?
I would put money on C's simple loop being faster than any Scheme recursive code, using tail call optimization or not.

Take a look at the output of a modern C compiler and you will see it happily using tail call optimization most of the time, but it isn't always the fastest solution.

Tail call optimization is just one of many hundreds of different optimizations C compilers do.

As long as the result is correct, and it's fast enough, who cares how it's done?
Last edited by jahboater on Wed May 29, 2019 8:07 pm, edited 1 time in total.

Return to “Other programming languages”