In a previous paper of mine, regrettably, I wrongly attributed the origin of the maximum segment sum problem to Dijkstra and Feijen’s *Een methode van programmeren*. In fact, the story behind the problem was told very well in Jon Bentley’s *Programming Pearls*.

### The Problem, and the Linear-Time Algorithm

Given a list of numbers, the task is to compute the largest possible sum of a consecutive segment. In a functional language the problem can be specified by:

```
mss = max . map sum . segments
```

where `segments = concat . map inits . tails` enlists all segments of the input list, `map sum` computes the sum of each segment, before `max :: Ord a ⇒ [a] → a` picks the maximum. The specification, if executed, is a cubic-time algorithm. Yet there is a linear-time algorithm that scans through the list only once:

```
mss = snd . foldr step (0,0)
  where step x (p,s) = (0 ↑ (x+p), (0 ↑ (x+p)) ↑ s)
```

where `a ↑ b` yields the maximum of `a` and `b`.

Both the specification and the linear-time program are short. The program is merely a `foldr` that can be implemented as a simple for-loop in an imperative language. Without some reasoning, however, it is not that trivial to see why the program is correct (hint: the `foldr` computes a pair of numbers, the first one being the maximum sum of all *prefixes* of the given list, while the second is the maximum sum of all segments). Derivation of the program (given below) is mostly mechanical, once you learn the basic principles of program calculation. Thus the problem has become a popular choice as the first non-trivial example of program derivation.
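As a sanity check, both programs can be transliterated into Python and compared (a sketch of mine; the names `mss_spec` and `mss_linear` are not from the original):

```python
from functools import reduce

# The cubic specification: enumerate all segments, sum each, take the max.
def mss_spec(xs):
    segments = [xs[i:j] for i in range(len(xs) + 1)
                        for j in range(i, len(xs) + 1)]
    return max(sum(s) for s in segments)   # the empty segment contributes 0

# The linear foldr: scan right-to-left carrying the pair
# (max prefix sum of the current suffix, max segment sum so far).
def mss_linear(xs):
    def step(x, pair):
        p, s = pair
        p2 = max(0, x + p)
        return (p2, max(p2, s))
    return reduce(lambda acc, x: step(x, acc), reversed(xs), (0, 0))[1]
```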

### Origin

Jon Bentley recorded in Programming Pearls that the problem was proposed by Ulf Grenander of Brown University. In a pattern-matching procedure he designed, a subarray having maximum sum is the most likely to yield a certain pattern in a digitised image. The two-dimensional problem took too much time to solve, so he simplified it to one dimension in order to understand its structure.

> In 1977 [Grenander] described the problem to Michael Shamos of UNILOGIC, Ltd. (then of Carnegie-Mellon University) who overnight designed Algorithm 3. When Shamos showed me the problem shortly thereafter, we thought that it was probably the best possible; … A few days later Shamos described the problem and its history at a Carnegie-Mellon seminar attended by statistician Jay Kadane, who designed Algorithm 4 within a minute.
>
> — Jon Bentley, Programming Pearls (1st edition), page 76.

Jay Kadane’s Algorithm 4 is the now well-known linear time algorithm, the imperative version of the functional program above:

```
maxpre, maxseg = 0, 0
for i in range(0, N):
    maxpre = 0 ↑ (maxpre + a[i])
    maxseg = maxpre ↑ maxseg
```

Algorithm 3, on the other hand, is a divide-and-conquer algorithm. An array `a` is split into two halves `a₁ ⧺ a₂`, and the algorithm recursively computes the maximum segment sums of `a₁` and `a₂`. However, there could be some segment crossing `a₁` and `a₂` that yields a good sum; therefore the algorithm performs two additional loops, respectively computing the maximum suffix sum of `a₁` and the maximum prefix sum of `a₂`, whose sum is the maximum sum of a segment crossing the boundary. The algorithm runs in `O(N log N)` time. (My pseudo-Python translation of the algorithm is given in the appendix below.)

In retrospect, Shamos did not have to compute the maximum prefix and suffix sums in two loops each time. The recursive function could have computed a ~~triple~~ quadruple of (maximum prefix sum, maximum segment sum, maximum suffix sum, sum of the whole array) for each array. The prefix and suffix sums could thus be computed bottom-up. I believe that would result in an `O(N)` algorithm. This linear time complexity might suggest that the “divide” is superficial — we do not have to divide the array in the middle. It is actually easier to divide the array into a head and a tail — which was perhaps how Kadane quickly came up with Algorithm 4!
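The quadruple idea can be sketched in Python (my own code, not Shamos’s Algorithm 3; the names are mine). Each recursive call returns (max prefix sum, max segment sum, max suffix sum, total sum) of a subarray, so no extra loops are needed:

```python
def mss_quad(xs):
    def go(l, u):                      # works on the slice xs[l:u]
        if u - l == 0:
            return (0, 0, 0, 0)        # empty array
        if u - l == 1:
            m = max(0, xs[l])
            return (m, m, m, xs[l])    # singleton array
        mid = (l + u) // 2
        p1, s1, f1, t1 = go(l, mid)
        p2, s2, f2, t2 = go(mid, u)
        return (max(p1, t1 + p2),      # prefix: left only, or all of left plus a right prefix
                max(s1, s2, f1 + p2),  # segment: in left, in right, or crossing the middle
                max(f2, t2 + f1),      # suffix: right only, or all of right plus a left suffix
                t1 + t2)
    return go(0, len(xs))[1]
```

Since the combine step takes constant time, the recurrence `T(N) = 2T(N/2) + O(1)` indeed gives linear time.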

### A Functional Derivation

I learned the functional derivation of the maximum segment sum problem from one of Jeremy’s papers [3] and was very amazed. It was perhaps one of the early incidents that inspired my interest in program calculation. The derivation does not appear to be very well known outside the program derivation circle — not even among functional programmers — so I would like to redo it here.

The first few steps of the derivation go:

```
max . map sum . segments
= { definition of segments }
max . map sum . concat . map inits . tails
= { since map f . concat = concat . map (map f) }
max . concat . map (map sum) . map inits . tails
= { since max . concat = max . map max }
max . map max . map (map sum) . map inits . tails
= { since map f . map g = map (f.g) }
max . map (max . map sum . inits) . tails
```

The purpose of the book-keeping transformation above is to push `max . map sum` closer to `inits`. The fragment `max . map sum . inits` is a function which, given a list of numbers, computes the maximum sum among all its prefixes. We denote it by `mps`, for maximum prefix sum. The specification has been transformed to:

` mss = max . map mps . tails `

This is a common strategy for segment problems: to solve a problem asking for an optimal segment, look for an optimal prefix of each suffix. (Symmetrically, we could process the list the other way round, looking for an optimal suffix of each prefix.)
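The transformed specification can be written directly in Python (a sketch; `mps` follows the text, `mss_via_tails` is my name):

```python
# maximum sum over all prefixes of xs (the empty prefix gives 0)
def mps(xs):
    return max(sum(xs[:j]) for j in range(len(xs) + 1))

# mss = max . map mps . tails: best prefix of each suffix, then the best of those
def mss_via_tails(xs):
    return max(mps(xs[i:]) for i in range(len(xs) + 1))
```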

We wish that `mps` for each of the suffixes can be computed efficiently, in an incremental manner. For example, to compute `mps [-1,3,3,-4]`, rather than actually enumerating all its prefixes, we wish that it can be computed from `-1` and `mps [3,3,-4] = 6`, which can in turn be computed from `3` and `mps [3,-4] = 3`, all in constant time. In other words, we wish that `mps` is a `foldr` using a constant-time step function. If this is true, one can imagine that we could implement `map mps . tails` in linear time. Indeed, `scanr f e = map (foldr f e) . tails`!
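The `scanr` property can be checked with a small Python model (my own helpers `foldr` and `scanr`, mimicking the Haskell functions):

```python
from functools import reduce

# foldr f e [x1,...,xn] = f x1 (f x2 (... (f xn e)))
def foldr(f, e, xs):
    return reduce(lambda acc, x: f(x, acc), reversed(xs), e)

# scanr f e xs lists foldr f e t for every tail t of xs,
# computed in a single right-to-left pass
def scanr(f, e, xs):
    out = [e]
    for x in reversed(xs):
        out.append(f(x, out[-1]))
    return out[::-1]
```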

The aim now is to turn `mps = max . map sum . inits` into a `foldr`. Luckily, `inits` is actually a `foldr`. In the following we will perform `foldr`-fusion twice, respectively fusing `map sum` and `max` into `inits`, thus turning the entire expression into a `foldr`.

The first fusion goes:

```
max . map sum . inits
= { definition of inits }
max . map sum . foldr (\x xss -> [] : map (x:) xss) [[]]
= { fold fusion, see below }
max . foldr zplus [0]
```

The fusion condition can be established as below, through which we also construct the definition of `zplus`:

```
map sum ([] : map (x:) xss)
= 0 : map (sum . (x:)) xss
= { by definition, sum (x : xs) = x + sum xs }
0 : map (x+) (map sum xss)
= { define zplus x xss = 0 : map (x+) xss }
zplus x (map sum xss)
```
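The fused fold can be tested in Python (a sketch; `prefix_sums` is my name for `foldr zplus [0]`): it computes `map sum . inits`, the sums of all prefixes, in one right-to-left pass:

```python
from functools import reduce

# zplus x xss = 0 : map (x+) xss, as constructed in the fusion condition
def zplus(x, xss):
    return [0] + [x + s for s in xss]

# foldr zplus [0]: starting from [0] (the prefix sums of the empty list),
# each step prepends 0 and adds x to every previously computed prefix sum
def prefix_sums(xs):
    return reduce(lambda acc, x: zplus(x, acc), reversed(xs), [0])
```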

We continue with the derivation and perform another fusion:

```
max . foldr zplus [0]
= { fold fusion, let zmax x y = 0 ↑ (x+y) }
foldr zmax 0
```

For the second fold fusion to work, we have to prove the following fusion condition:

```
max (0 : map (x+) xs)
= 0 ↑ max (map (x+) xs)
= { since max (map (x +) xs) = x + max xs }
0 ↑ (x + max xs)
```

The property `max (map (x+) xs) = x + max xs` in the last step follows from the fact that `(+)` distributes over `(↑)`, that is, `(x + y) ↑ (x + z) = x + (y ↑ z)`. This is the key property that allows the whole derivation to work.
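This distributivity is easy to check numerically (a small sketch of mine):

```python
# addition distributes over the binary maximum ↑, so a constant added to
# every element can be pulled out of the maximum of a (non-empty) list
def shift_out(x, xs):
    return max(x + y for y in xs) == x + max(xs)
```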

By performing `foldr`-fusion twice we have established that

`mps = foldr zmax 0`

In words, `mps (x : xs)`, the best prefix sum of `x : xs`, can be computed by `zmax x (mps xs)`. The definition of `zmax` says that if `x + mps xs` is positive, it is the maximum prefix sum; otherwise we return `0`, the sum of the empty prefix.

Therefore, `mss` can be computed by a `scanr`:

```
mss
= { reasoning so far }
max . map (foldr zmax 0) . tails
= { introducing scanr }
max . scanr zmax 0
```

We have derived `mss = max . scanr zmax 0`, where `zmax x y = 0 ↑ (x+y)`.
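The derived algorithm can be run as Python (a sketch; `mss_scan` is my name), building the maximum prefix sums of all suffixes in one right-to-left pass:

```python
# mss = max . scanr zmax 0: one right-to-left pass collecting the maximum
# prefix sum of every suffix, then a final maximum over them.
# Linear time, but also linear space for the list of prefix sums.
def mss_scan(xs):
    sums = [0]                                # mps of the empty suffix
    for x in reversed(xs):
        sums.append(max(0, x + sums[-1]))     # zmax x (previous mps)
    return max(sums)
```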

Many functional derivations stop here. This gives us an algorithm that runs in linear time but takes linear space. A tupling transformation eliminates the need for linear space:

` mss = snd . (head &&& max) . scanr zmax 0 `

where `(f &&& g) a = (f a, g a)`. The part `(head &&& max) . scanr zmax 0` returns a pair, the first component being the result of `mps` and the second that of `mss`. By some mechanical simplification we get the final algorithm:

```
mss = snd . foldr step (0,0)
  where step x (p,s) = (0 ↑ (x+p), (0 ↑ (x+p)) ↑ s)
```

### A Relational Derivation?

The maximum segment sum problem later turned out to be an example of Richard and Oege’s Greedy Theorem [2]. It is an exercise in the Algebra of Programming book, but I have not seen the solution given anywhere. For completeness, I recorded a relational derivation in a paper of mine about some other variations of the maximum segment sum problem [4].

### References

1. Bentley, Jon. Programming Pearls. Addison-Wesley, 1987.
2. Bird, Richard and de Moor, Oege. Algebra of Programming. Prentice-Hall, 1997.
3. Gibbons, Jeremy. Calculating Functional Programs. Proceedings of ISRG/SERG Research Colloquium, Oxford Brookes University, November 1997.
4. Mu, Shin-Cheng. Maximum segment sum is back: deriving algorithms for two segment problems with bounded lengths. Partial Evaluation and Program Manipulation (PEPM ’08), pp. 31-39. January 2008.

### Appendix: Algorithm 3

```
def mss(l, u):
    if l > u:
        return 0                     # empty array
    elif l == u:
        return 0 ↑ a[l]              # singleton array
    else:
        m = (l + u) // 2
        # compute maximum suffix sum of a[l..m]
        sum, maxToLeft = 0, 0
        for i in range(m, l-1, -1):
            sum = sum + a[i]
            maxToLeft = maxToLeft ↑ sum
        # compute maximum prefix sum of a[m+1..u]
        sum, maxToRight = 0, 0
        for i in range(m+1, u+1):
            sum = sum + a[i]
            maxToRight = maxToRight ↑ sum
        maxCrossing = maxToLeft + maxToRight
        # recursively compute mss of a[l..m] and a[m+1..u]
        maxInL = mss(l, m)
        maxInR = mss(m+1, u)
        return maxInL ↑ maxCrossing ↑ maxInR
```


**Smit Patel:** This algorithm is not working for the input: `{12, -1, -2, -1, -1, 13}`. Can you please check it?

**Shin:** Hmm.. the functional code (replace `↑` by `max` before you run the code in Haskell) returns the sum of the entire list, as expected. If you meant the Python code, I will have to double-check it again. The code for Algorithm 3 in the appendix was merely translated by hand for reference. It could be buggy!

**Jeremy Gibbons:** I’ve just reacquainted myself with the relational (and datatype-generic!) derivation of the solution in the paper *Generic functional programming with types and relations* by Richard, Oege, and Paul Hoogendijk (JFP 1996).

**Edon Kelmendi:** Very nice derivation.

Minor detail: you’re not handling the case where all elements are negative. I would suggest something like this:

```
mss xs = snd (foldr step (0, max xs) xs)
  where step x (p,s) = (0 ↑ (x+p), (x+p) ↑ s)
```

**Shin:** Sorry that the reply took so long!

If all the elements are negative, the segment having the maximum sum is the empty list — this is allowed by the specification, and the function `mss` should return `0`. We could have started with a different specification where `segments` computes only the non-empty segments, from which a slightly different algorithm would emerge.

**Sharon Curtis:** “The recursive function could have computed a triple of (max. prefix sum, max. seg. sum, max. suffix sum) for each array.” Surely it would be a quadruple that the recursive function returns? That is, also including the total sum of the whole array.

**Shin:** Welcome, Sharon!

Hmmm.. yes! Say we split a list into two halves `x ⧺ y`: we need the max. prefix sum of `x` and of `y`, and the sum of `x`, to compute the max. prefix sum of `x ⧺ y`.