How to Compute Fibonacci Numbers?

In early 2019, due to a silly flamewar, some friends in Taiwan and I took an interest in the computation of Fibonacci numbers. This post involves some inductive proofs and some light program derivation. If you think the fastest way to compute Fibonacci numbers is by a closed-form formula, you should read on!


Let Nat be the type of natural numbers. We shall all be familiar with the following definition of Fibonacci numbers:

   fib :: Nat -> Nat
   fib 0     = 0
   fib 1     = 1
   fib (n+2) = fib (n+1) + fib n

(When defining functions on natural numbers I prefer to see 0 and (+1) (and thus (+2) = (+1) . (+1)) as constructors that can appear on the LHS, while avoiding subtraction on the RHS. It makes some proofs more natural, and it is not hard to recover the Haskell definition anyway.)
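
For the record, one way to recover a runnable Haskell definition (a minimal sketch, with Integer standing in for Nat and the (n+2) pattern rewritten using subtraction):

   type Nat = Integer   -- Integer standing in for natural numbers

   fib :: Nat -> Nat
   fib 0 = 0
   fib 1 = 1
   fib n = fib (n - 1) + fib (n - 2)   -- assumes n >= 2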

Executing the definition directly, without other support (such as memoization), gives you a very slow algorithm, due to lots of re-computation. Some of my programming textbooks in the 80’s wrongly used this as evidence that “recursion is slow” (in such books, fib is usually one of only two examples in a lone chapter on recursion, the other being tree traversal).

By defining fib2 n = (fib n, fib (n+1)), one can easily derive an inductive definition of fib2,

   fib2 :: Nat -> (Nat, Nat)
   fib2 0     = (0, 1)
   fib2 (n+1) = (y, x+y)
     where (x, y) = fib2 n

which computes fib n (and fib (n+1)) in O(n) recursive calls. Be warned, however, that it does not imply that fib2 n runs in O(n) time, as we shall see soon.
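
In runnable Haskell (again a sketch, reusing the Nat = Integer synonym from the sketch above; the wrapper name fibLinear is mine), fib2 and a wrapper that extracts fib n could look like:

   fib2 :: Nat -> (Nat, Nat)
   fib2 0 = (0, 1)
   fib2 n = (y, x + y)
     where (x, y) = fib2 (n - 1)

   -- fib n is the first component of fib2 n, e.g. fibLinear 10 == 55.
   fibLinear :: Nat -> Nat
   fibLinear = fst . fib2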

Binet’s Formula

To be even faster, some might recall: do we not have a closed-form formula for Fibonacci numbers?

   fib n = (((1+√5)/2)^n - ((1-√5)/2)^n) / √5

It is believed that the formula was discovered by Jacques P. M. Binet in 1843, thus we call it Binet’s formula by convention, although it can be traced back earlier. Proving (or even discovering) the formula is a very good exercise in inductive proofs. On that I recommend this tutorial by Joe Halpern (CS 280 @ Cornell, 2005). Having a closed-form formula gives one the impression that it yields a quick algorithm. Some even claim that it delivers an O(1) algorithm for computing Fibonacci numbers. One shall not assume, however, that ((1+√5)/2)^n and ((1-√5)/2)^n can always be computed in a snap!
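
To see why, here is a naive floating-point rendering of the formula (my own sketch, with a hypothetical name fibBinet): with double precision, the rounded result already drifts from the true Fibonacci number at around n ≈ 70, long before n reaches 100.

   -- Binet's formula with double-precision floats; only reliable for small n.
   fibBinet :: Integer -> Integer
   fibBinet n = round ((phi ^ n - psi ^ n) / sqrt 5)
     where
       phi, psi :: Double
       phi = (1 + sqrt 5) / 2
       psi = (1 - sqrt 5) / 2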

When processing large numbers, we cannot assume that arithmetic operations such as addition and multiplication take constant time. In fact, it is fascinating to know that multiplying large numbers, something that appears to be most fundamental, is a research topic that could still see a new breakthrough in 2019 [HvdH19].

Vorobev’s Equation

There is another family of algorithms that manages to compute fib n in O(log n) recursive calls. To construct such algorithms, one might start by asking oneself: can we express fib (n+k) in terms of fib n and fib k (and some other nearby fib if necessary)? Given such a formula, we can perhaps compute fib (n+n) from fib n, and design an algorithm that uses only O(log n) recursive calls.

Indeed, for n >= 1, we have

   fib (n+k) = fib (n-1) * fib k + fib n * fib (k+1)      -- (Vor)

This property can be traced back to Nikolai N. Vorobev, and we therefore refer to it as Vorobev’s Equation. A proof will be given later. For now, let us see how it helps us.

With Vorobev’s equation we can derive a number of (similar) algorithms that compute fib n in O(log n) recursive calls. For example, letting n, k in (Vor) be n+1, n, we get

   fib (2n+1) = (fib (n+1))^2 + (fib n)^2                   -- (1)

Letting n, k be n+1, n+1, we get

   fib (2n+2) = 2 * fib n * fib (n+1) + (fib (n+1))^2       -- (2)

Subtracting (1) from (2), we get

   fib (2n)   = 2 * fib n * fib (n+1) - (fib n)^2           -- (3)


The indices on the LHSs of (1) and (3) are respectively odd and even, while their RHSs involve only fib n and fib (n+1). Defining fib2v n = (fib n, fib (n+1)), we can derive the program below, which uses only O(log n) recursive calls.

   fib2v :: Nat -> (Nat, Nat)
   fib2v 0 = (0, 1)
   fib2v n | n `mod` 2 == 0 = (c, d)
           | otherwise      = (d, c + d)
     where (a, b) = fib2v (div n 2)
           c      = 2 * a * b - a * a
           d      = a * a + b * b
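
As a usage note (a sketch reusing Nat = Integer from earlier, with a wrapper name fibLog of my own): the definition above compiles as-is, and fib n is the first component of fib2v n.

   fibLog :: Nat -> Nat
   fibLog = fst . fib2v
   -- e.g. fibLog 10 == 55 and fibLog 100 == 354224848179261915075,
   -- computed with only O(log n) recursive calls.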

Which Runs Faster?

With so many algorithms at hand, the ultimate question is: which runs faster?

Interestingly, in 1988, James L. Holloway devoted an entire Master’s thesis to analysis and benchmarking of algorithms computing Fibonacci numbers. The thesis reviewed algorithms including (counterparts of) all those mentioned in this post so far, and some more algorithms based on matrix multiplication. I will summarise some of his results below.

For a theoretical analysis, we need to know the number of bits needed to represent fib n. Holloway estimated that to represent fib n, we need approximately n * 0.69424 bits. We will denote this number by N n. That N n is linear in n is consistent with our impression that fib n grows exponentially in n.
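
The constant is no accident: fib n grows like φ^n / √5 where φ = (1+√5)/2, so its bit length is roughly n * logBase 2 φ ≈ 0.69424 * n. A back-of-the-envelope sketch of my own (not Holloway's notation):

   -- Bits per index: log2 of the golden ratio, approximately 0.69424.
   bitsPerIndex :: Double
   bitsPerIndex = logBase 2 ((1 + sqrt 5) / 2)

   -- Estimate of N n, the number of bits needed to represent fib n.
   estBits :: Integer -> Double
   estBits n = fromIntegral n * bitsPerIndex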

Algorithm fib2 makes O(n) recursive calls, but that does not mean its running time is O(n). Instead, fib2 n needs around N (n^2/2 - n/2) bit operations to compute. (Note that we are not talking about big-O here, but an approximate upper bound.)

What about Binet’s formula? We can compute √5 by Newton’s method. One can assume that each n-bit division needs n^2 bit operations. In each round, however, we need only the most significant N n + log n bits. Overall, the number of bit operations needed to evaluate Binet’s formula is dominated by log n * (N n + log n)^2, which is not faster than fib2.
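
For concreteness, here is one way the Newton step could look (my own sketch, not Holloway’s code): an integer Newton iteration for square roots, scaled by a power of 10 to obtain √5 to any desired number of digits.

   -- Integer square root by Newton's method: isqrt m = floor (sqrt m).
   isqrt :: Integer -> Integer
   isqrt 0 = 0
   isqrt m = go m
     where
       go x = let y = (x + m `div` x) `div` 2
              in  if y >= x then x else go y

   -- floor (√5 * 10^d), i.e. √5 to d decimal digits after scaling.
   sqrt5Digits :: Int -> Integer
   sqrt5Digits d = isqrt (5 * 10 ^ (2 * d))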

Holloway studied several matrix-based algorithms. Generally, they need around (N n)^2 bit operations, multiplied by different constants.

Meanwhile, algorithms based on Vorobev’s Equation perform quite well: it takes about 1/2 * (N n)^2 bit operations to compute fib2v n!

What about benchmarking? Holloway ran each algorithm for up to five minutes. In one of the experiments, the program based on Binet’s formula exceeded 5 minutes once log n = 7, while the program based on fib2 still terminated within 5 minutes up to log n = 15. In another experiment (using simpler programs that consider only the cases where n is a power of 2), the program based on Binet’s formula exceeded 5 minutes when log n = 13, the matrix-based algorithms terminated within 3 to 5 seconds, and the program based on Vorobev’s Equation terminated within around 2 seconds.

Proving Vorobev’s Equation

Finally, let us see how Vorobev’s Equation can be proved. We perform induction on n. The cases n := 1 and n := 2 can be easily established. Assuming the equation holds for n (that is, (Vor)) and for n := n+1 (abbreviating fib to f):

   f (n+1+k) = f n * f k + f (n+1) * f (k+1)      -- (Vor')

we prove the case for n := n+2:

     f (n+2+k)
   =   { definition of f }
     f (n+k) + f (n+k+1)
   =   { (Vor) & (Vor') }
     f (n-1) * f k + f n * f (k+1) +
       f n * f k + f (n+1) * f (k+1)
   =   { f (n+1) = f n + f (n-1) }
     f (n+1) * f k + f n * f (k+1) + f (n+1) * f (k+1)
   =   { f (n+2) = f (n+1) + f n }
     f (n+1) * f k + f (n+2) * f (k+1) .

This completes the proof.
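
The proof aside, a quick mechanical check of (Vor) on small inputs never hurts (a sketch using Integer and a locally defined naive fib; the names are mine):

   fibNaive :: Integer -> Integer
   fibNaive 0 = 0
   fibNaive 1 = 1
   fibNaive n = fibNaive (n - 1) + fibNaive (n - 2)

   -- Check fib (n+k) == fib (n-1) * fib k + fib n * fib (k+1) for small n, k.
   vorHolds :: Bool
   vorHolds = and [ fibNaive (n + k) ==
                      fibNaive (n - 1) * fibNaive k + fibNaive n * fibNaive (k + 1)
                  | n <- [1 .. 15], k <- [0 .. 15] ]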

Related Work

Dijkstra derived another algorithm that computes fib n in O(log n) recursive calls in EWD654 [Dij78].

Besides his master’s thesis, Holloway and his supervisor Paul Cull also published a journal version of their results [CH89]. I do not know the whereabouts of Holloway; it seems that he did not pursue a career in academia. I wish him all the best. It comforts me imagining that any thesis written with enthusiasm and love, whatever the topic, will eventually be found by some readers who are also enthusiastic about it, somewhere, sometime.

I found a lot of interesting information on this page hosted by Ron Knott at the University of Surrey, and would recommend it too.

After the flamewar, Yoda Lee (李祐棠) conducted many experiments computing Fibonacci numbers, taking into consideration things like the precision of floating-point computation and the choice of suitable floating-point libraries. It is worth a read too. (In Chinese.)

So, what was the flamewar about? It started with someone suggesting that we should store on the moon (yes, the moon. Don’t ask me why) some important constants such as π and e, so that, with the constants available to very high precision, many problems could be solved in constant time. Then people started arguing about what it means to compute something in constant time, whether Binet’s formula gives you a constant-time algorithm… and here we are. Silly, but we learned something fun.

References

[CH89] Paul Cull, James L. Holloway. Computing Fibonacci numbers quickly. Information Processing Letters, 32(3), pp. 143-149, 1989.

[Dij78] Edsger W. Dijkstra. In honor of Fibonacci. EWD654, 1978.

[Hol88] James L. Holloway. Algorithms for Computing Fibonacci Numbers Quickly. Master’s thesis, Oregon State University, 1988.

[HvdH19] David Harvey, Joris van der Hoeven. Integer multiplication in time O(n log n). 2019. hal-02070778.
