At the end of FLOLAC ’10 I had a chance to show the students, in 25 minutes, what functional program calculation is about. The students had been exposed to functional programming only a week before, in a three-hour course, after which they had written some simple programs handling concrete data but might have had trouble grasping more abstract concepts like folds. I have talked about maximum segment sum way too many times (in the context of imperative program derivation, though), and it is perhaps too complex to cover in 25 minutes. The steep list problem, on the other hand, can be dealt with in 5 minutes. Thus I needed another example.

This is what I eventually came up with: given a list of numbers `a₀, a₁, a₂ … aₙ` and a constant `X`, compute `a₀ + a₁X + a₂X² + … + aₙXⁿ`. In Haskell it can be specified as a one-liner:

```
poly as = sum (zipWith (×) as (iterate (×X) 1))
```
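To play with the specification in GHCi, one can make the constant explicit (a sketch: here the post's fixed constant `X` becomes an ordinary argument `x`, and `×` is the usual `*`):

```haskell
-- The specification with the constant X passed in as an argument x;
-- zipWith pairs each coefficient with the matching power of x.
poly :: Num a => a -> [a] -> a
poly x as = sum (zipWith (*) as (iterate (* x) 1))
```

For instance, `poly 2 [1, 2, 3]` computes `1 + 2·2 + 3·2²`.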

One problem with this example is that the specification is already good enough: it is a nice linear-time algorithm. To save some multiplications, perhaps, we may try to simplify it further.

It is immediate that `poly [] = 0`. For the non-empty case, we reason:

```
  poly (a : as)
=   { definition of poly }
  sum (zipWith (×) (a:as) (iterate (×X) 1))
=   { definition of iterate }
  sum (zipWith (×) (a:as) (1 : iterate (×X) X))
=   { definition of zipWith, and a × 1 = a }
  sum (a : zipWith (×) as (iterate (×X) X))
=   { definition of sum }
  a + sum (zipWith (×) as (iterate (×X) X))
```

The expression to the right of `a +` is unfortunately not `poly as` — the last argument to `iterate` is `X` rather than `1`. One possibility is to generalise `poly` to take another argument. For this problem, however, we can do slightly better:

```
  a + sum (zipWith (×) as (iterate (×X) X))
=   { since iterate f (f b) = map f (iterate f b) }
  a + sum (zipWith (×) as (map (×X) (iterate (×X) 1)))
=   { zipWith (⊗) as . map (⊗X) = map (⊗X) . zipWith (⊗) as,
      if ⊗ associative }
  a + sum (map (×X) (zipWith (×) as (iterate (×X) 1)))
=   { sum . map (×X) = (×X) . sum }
  a + (sum (zipWith (×) as (iterate (×X) 1))) × X
=   { definition of poly }
  a + (poly as) × X
```

We have thus come up with the program

```
poly [] = 0
poly (a : as) = a + (poly as) × X
```
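The specification and the derived program can be compared on a few inputs (a sketch: `polySpec` and `polyH` are names I am introducing here, with the constant `X` made an explicit argument):

```haskell
-- polySpec: the original specification; polyH: the derived Horner-style program.
polySpec, polyH :: Integer -> [Integer] -> Integer
polySpec x as = sum (zipWith (*) as (iterate (* x) 1))

polyH _ []       = 0
polyH x (a : as) = a + polyH x as * x
```

Both yield the same results on, e.g., `[]`, `[5]` and `[1,2,3]` with `x = 3`, but `polyH` performs one multiplication per coefficient instead of building the list of powers.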

Besides the definitions of `sum`, `zipWith`, `iterate`, etc., the rules used include:

1. `map f (iterate f x) = iterate f (f x)`;
2. `zipWith (⊗) as . map (⊗X) = map (⊗X) . zipWith (⊗) as`, if `⊗` is associative;
3. `sum . map (×X) = (×X) . sum`, a special case of `foldr (⊕) e . map (⊗X) = (⊗X) . foldr (⊕) e` if `(a ⊕ b) ⊗ X = (a ⊗ X) ⊕ (b ⊗ X)` and `e ⊗ X = e`.
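These rules can be spot-checked in GHCi on finite prefixes of the infinite lists (a sketch with arbitrarily chosen instances: `f = (* 2)`, `⊗ = (*)` and `X = 3`):

```haskell
-- Finite spot-checks of the three rules; each comparison yields True.
lawChecks :: [Bool]
lawChecks =
  [ take 10 (map f (iterate f 1)) == take 10 (iterate f (f 1))       -- rule 1
  , zipWith (*) as (map (* x) bs) == map (* x) (zipWith (*) as bs)   -- rule 2
  , sum (map (* x) as) == sum as * x                                 -- rule 3
  ]
  where
    f  = (* 2) :: Integer -> Integer
    x  = 3 :: Integer
    as = [1, 4, 5, 9]
    bs = [2, 6, 7, 8]
```

Of course a few test cases prove nothing; the laws themselves are proved by induction (or, for rule 1, by coinduction on the infinite lists).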

Well, this is not a very convincing example. Ideally I’d like to have a derivation, like the steep list, where we gain some improvement in complexity by calculation.

What is your favourite example for functional program calculation?


Dougal Stanton: I really liked that second example because you managed to derive Horner’s rule (e.g., from exercise 2.38 in SICP, http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-15.html) without much trouble.

Shin (post author): Thanks for the comment. Yes, it’s all about Horner’s rule!

In Algebra of Programming Bird and de Moor generalised Horner’s rule:

1. the part using `zipWith` and `iterate` is instead formulated using repeated `map`,
2. `sum` is generalised to a fold,
3. and the whole thing is generalised to structures other than lists.

They then proved the rule using fold-fusion. It was in fact the first Horner’s rule I learned about. Here I specialised Horner’s rule back to lists and try to prove it without fold-fusion.

dagit: I think I’m missing something. Doesn’t this program have a type error?

poly [] = 0

poly (a : as) = a + poly as × X

The first equation gives poly one parameter on the LHS, and the RHS is a value (instead of a function of one or more parameters). The second equation also gives poly one parameter on the LHS, but then it calls poly with 3 parameters. Am I missing something obvious here?

dagit: I think I see why I was confused. You’re using x instead of * for multiplication. So the second equation has this precedence:

poly (a : as) = a + ((poly as) * X)

Shin (post author): Sorry for the confusion. In fact it is not the letter x, but ×, a unicode character for multiplication. I’ve made it worse by using a typewriter font in which they look similar, and having capital `X` as the constant. I should perhaps have added the brackets to avoid confusion!

dagit: Thanks for the reply. I have another question.

Could you derive this form from your original?

poly as = foldl (\acc a -> a + acc * X) 0 as

The reason I ask is because I’m often told that using higher-order recursive functions is better than coding the recursion directly. Here I feel like it’s actually harder to see what’s going on when I express it as a fold than the explicitly recursive version. For example, I had to play with the Expr type and SimpleReflect to convince myself that I wanted foldl instead of foldr.

Doing a point-free transformation we can get the even more obscure:

poly = foldl ((+) . (X *)) 0

Shin (post author): Hmmm.. I’m not sure that’s right, dagit. Now that we have derived an inductive definition of `poly`, one can see that `poly` is in fact a `foldr`: `poly = foldr (λa b → a + b × X) 0`, which does not seem to be the same function as `foldl (λacc a → a + acc × X) 0`.

It is true for many people that definitions in terms of those structured recursion combinators tend to be more abstract and harder to understand. The advantage, on the other hand, is that once a program is expressed this way, we know more about its structure and properties. If a program turns out to be a fold, for example, we know that all laws about folds are applicable to the program.
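The difference between the two can also be seen by direct evaluation (a sketch: `polyR` is the recursive definition from the post with `X` passed in as an argument):

```haskell
-- The recursive definition, with the constant made explicit.
polyR :: Integer -> [Integer] -> Integer
polyR _ []       = 0
polyR x (a : as) = a + polyR x as * x

-- With x = 10 the digits make the bracketing visible:
--   polyR 10 [1,2,3,4]                      and
--   foldr (\a b -> a + b * 10) 0 [1,2,3,4]  agree (both treat a₀ as the
--   constant coefficient), while
--   foldl (\b a -> a + b * 10) 0 [1,2,3,4]  reverses the roles of the
--   coefficients and gives a different answer.
```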

The manipulation in the post is in fact an instance of `foldr`-fusion. You may try to express `(λas → zipWith (×) as (iterate (×X) X))` as a `foldr` (to do so you will need the law in the post about `iterate` and `map`), and fuse `sum` into it. Then you get a definition of `poly` as a `foldr`. To be extreme, you can even do the same starting from the original specification and perform a `foldr`-fusion. The derivation will be very similar to that in the post.

dagit: You’re right. I misinterpreted the output from lambdabot. I was thinking that the coefficient next to (0 * x) was the lowest degree, but due to the nesting of the parens, if I actually multiply things out I see that it has the highest degree. My intuition had been foldr, but I did too many steps in my head and tricked myself :)

I can even use lambdabot to see this expanded out for some examples:

> foldl (\acc a -> a + acc * x) 0 [1..4] :: Expr

4 + (3 + (2 + (1 + 0 * x) * x) * x) * x

> let { poly [] = 0; poly (a : as) = a + poly as * x } in poly [1..4] :: Expr

1 + (2 + (3 + (4 + 0 * x) * x) * x) * x

> foldr (\a acc -> a + acc * x) 0 [1..4] :: Expr

1 + (2 + (3 + (4 + 0 * x) * x) * x) * x

I use lower case x here as that is already in scope with type Expr. I see that indeed, poly matches the foldr version. Thanks again! I’ll check out some of your other posts that you point out.

Shin (post author): I didn’t know about SimpleReflect before. Looks like a very useful tool (and I’m curious how it works). Thanks!

dagit: I use it through lambdabot (the channel bot for #haskell on freenode). It’s also available as a standalone package on hackage:

http://hackage.haskell.org/package/simple-reflect

You can see most of the source code here:

http://hackage.haskell.org/packages/archive/simple-reflect/0.2/doc/html/src/Debug-SimpleReflect-Expr.html

It uses many of the standard type classes in clever ways to build up expressions and then pretty print them. Simple, but effective.