see

https://numba.pydata.org/ — it's not as battle-hardened as the JS JITs, but I think it's aimed at scientific python users trying to streamline the odd bottleneck function without refactoring the problem into numpy, tensorflow or cython. It's also geared up to split work over parallel processors quite easily.
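As a minimal sketch of the sort of thing it's for (the function name and fallback shim are my own, and it degrades to plain Python if numba isn't installed):

```python
import numpy as np

try:
    from numba import jit
except ImportError:
    # fallback so the sketch still runs without numba: a no-op decorator
    def jit(**kwargs):
        def wrap(f):
            return f
        return wrap

@jit(nopython=True)
def sum_of_squares(a):
    # a plain Python loop; numba compiles it to machine code on first call
    total = 0.0
    for x in a:
        total += x * x
    return total

a = np.arange(1000, dtype=np.float64)
print(sum_of_squares(a))  # same answer as (a * a).sum()
```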

I will look at fibo(4784969) but I don't know what big number capacity is built into it.

EDIT

OK, running fibo(4784969) on this computer with vanilla python3 gave a timeit value of

~~0.69ms~~ 0.70s; with numba jit this worsened to

~~0.81ms~~ 0.71s. The reason is that numba can't convert the function to jitted code: it fails immediately with nopython=True because dict isn't yet supported, and I don't think it would be able to cope anyway without some bigint functionality.
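For reference, the kind of dict-memoized fast-doubling function that trips numba up looks something like this (my reconstruction, not necessarily the exact code that was timed) — plain python3 handles the big integers fine because its ints are arbitrary precision, but nopython mode chokes on the untyped dict:

```python
def fib(n, cache):
    # fast-doubling with a dict cache: F(2k-1) = F(k)^2 + F(k-1)^2,
    # F(2k) = (2*F(k-1) + F(k)) * F(k); recursion depth is only O(log n)
    if n in cache:
        return cache[n]
    k = (n + 1) // 2
    fk = fib(k, cache)
    fk1 = fib(k - 1, cache)
    result = fk ** 2 + fk1 ** 2 if n & 1 else (2 * fk1 + fk) * fk
    cache[n] = result
    return result

cache = {0: 0, 1: 1, 2: 1}
print(fib(100, cache))  # → 354224848179261915075
```

Passing the cache in explicitly (rather than a mutable default argument or a global) also avoids exactly the reused-dict pitfall described below.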

PS: for completeness I looked a bit more at this and a) spotted an error in which the same dict was reused between timeit runs,

so every call after the first just hit the cache and I underestimated the time to do the calculation by 1000x! b) I modified the python code to use an ndarray instead of the dict, which can calculate up to fibo(90) without overflowing int64. I also found that numba treats globals as compile-time constants, so fibs has to be passed in as a function argument. Using the code below, the numba @jit function works out fibo(90) in 2.8µs c.f. vanilla python3 in 9.6µs.

Code:

```
import timeit
setup = '''
import numpy as np
from numba import jit
N = 90
# N + 1 entries so that fibs[N] is a valid index
fibs = np.ones((N + 1,), dtype=np.int64)

@jit(nopython=True)
def fib(n, fibs):
    # fibs is passed in as an argument because numba freezes globals
    # into compile-time constants
    if fibs[n] > -1:
        return fibs[n]
    # fast-doubling identities:
    # F(2k-1) = F(k)^2 + F(k-1)^2, F(2k) = (2*F(k-1) + F(k)) * F(k)
    k = (n + 1) // 2
    fk = fib(k, fibs)
    fk1 = fib(k - 1, fibs)
    if n & 1:
        result = fk ** 2 + fk1 ** 2
    else:
        result = (2 * fk1 + fk) * fk
    fibs[n] = result
    return result
'''
fn = '''
# reset the cache each run so every timing does the full calculation
fibs[:3] = [0, 1, 1]
fibs[3:] = -1
fib(N, fibs)
'''
print(timeit.timeit(fn, setup=setup, number=1000000))
```
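As a sanity check on the array version (pure python, no numba needed — the function names here are mine), the same fast-doubling recurrence can be compared against a straightforward iterative reference:

```python
import numpy as np

def fib_fast(n, fibs):
    # same fast-doubling recurrence as the jitted version above
    if fibs[n] > -1:
        return fibs[n]
    k = (n + 1) // 2
    fk = fib_fast(k, fibs)
    fk1 = fib_fast(k - 1, fibs)
    result = fk ** 2 + fk1 ** 2 if n & 1 else (2 * fk1 + fk) * fk
    fibs[n] = result
    return result

def fib_iter(n):
    # simple O(n) reference implementation
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

N = 90
fibs = np.full(N + 1, -1, dtype=np.int64)
fibs[:3] = [0, 1, 1]
# every value up to N should agree with the iterative reference
print(all(int(fib_fast(i, fibs)) == fib_iter(i) for i in range(N + 1)))  # → True
```

This also confirms that all the int64 intermediates stay below 2**63 - 1 up to n = 90, so the ndarray version gives exact results in that range.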