# Modularity from Lazy Evaluation - Richardson Extrapolation

This is an F# example of how higher-order functions together with lazy evaluation can reduce complexity and lead to more modular software.

## Background

Richardson extrapolation is a method of combining multiple estimates of a value to increase their accuracy. This post covers estimating the derivative and the integral of a function to arbitrary accuracy.

```fsharp
/// Derivative estimate (central difference with half-width h).
let derivativeEstimate f x h = (f(x+h)-f(x-h))/h*0.5

/// Integral estimate by the trapezoidal rule (h = (b-a)/n where n is an integer).
let integralEstimate f a b h = h*(f a+f b)*0.5 + h*Seq.sumBy f {a+h..h..b-h}
```


Both the derivative and the integral estimate can be shown to have an error series containing only even powers of $$h$$.

$Actual = Estimate(h) + e_1 h^2 + e_2 h^4 + \cdots$

Richardson extrapolation combines multiple estimates to eliminate the lower power error terms.

$$R_{ij} = \begin{cases} Estimate\left(\dfrac{h}{2^i}\right) & j = 0 \\ \dfrac{4^j R_{i,j-1} - R_{i-1,j-1}}{4^j-1} & i \geq j,\ j > 0 \end{cases}$$

For small $$h$$ this rapidly improves the accuracy of the estimate.

$Actual = R_{ij} + \hat{e}_1 h^{2j+2} + \hat{e}_2 h^{2j+4} + \cdots$
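For example, the first extrapolated estimate $$R_{11}$$ comes from writing the error series at $$h$$ and $$h/2$$, then taking four times the second minus the first to cancel the $$h^2$$ term:

$Actual = Estimate\left(\frac{h}{2}\right) + e_1 \frac{h^2}{4} + e_2 \frac{h^4}{16} + \cdots$

$Actual = \frac{4 \, Estimate\left(\frac{h}{2}\right) - Estimate(h)}{3} - e_2 \frac{h^4}{4} + \cdots = R_{11} + O(h^4)$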

This produces a triangle of improving estimates.

$\begin{matrix} R_{00} & \\ & \searrow & \\ R_{10} & \rightarrow & R_{11} \\ & \searrow & & \searrow \\ R_{20} & \rightarrow & R_{21} & \rightarrow & R_{22} \\ \vdots & & \vdots & & \vdots & \ddots \\ \end{matrix}$

The stopping criterion is usually that both $$|R_{n-2,n-2}-R_{n-1,n-1}|$$ and $$|R_{n-1,n-1}-R_{n,n}|$$ are within a desired accuracy.

## First attempt

The first implementation will be attempted without using functional techniques.

```fsharp
open System.Collections.Generic

/// Stopping criteria for a given accuracy and list of Richardson estimates.
let stoppingCriteriaNonFunctional tol (rows:List<float array>) =
    let c = rows.Count
    c>2 &&
    abs(rows.[c-3].[c-3]-rows.[c-2].[c-2])<=tol &&
    abs(rows.[c-2].[c-2]-rows.[c-1].[c-1])<=tol
```

```fsharp
/// The Richardson formula for a function estimate that has even power error terms.
let richardsonFormula currentRowR previousRowR pow4 =
    (currentRowR*pow4-previousRowR)/(pow4-1.0)

/// Derivative accurate to tol using Richardson extrapolation.
let derivativeNonFunctional tol h0 f x =
    let richardsonRows = List<float array>()
    richardsonRows.Add [|derivativeEstimate f x h0|]
    let mutable h = h0*0.5
    let rec run () =
        let lastRow = Seq.last richardsonRows
        let row = Array.zeroCreate (Array.length lastRow+1)
        row.[0] <- derivativeEstimate f x h
        let mutable pow4 = 4.0
        for i = 0 to Array.length lastRow-1 do
            row.[i+1] <- richardsonFormula row.[i] lastRow.[i] pow4
            pow4 <- pow4*4.0
        richardsonRows.Add row
        if stoppingCriteriaNonFunctional tol richardsonRows then Array.last row
        else
            h <- h*0.5
            run()
    run()
```

```fsharp
/// Iterative integral estimate (h is half the value used in the previous estimate).
let integralEstimateIterative f a b previousEstimate h =
    previousEstimate*0.5 + h*Seq.sumBy f {a+h..h*2.0..b}
```

```fsharp
/// Integral accurate to tol using Richardson extrapolation.
let integralNonFunctional tol f a b =
    let richardsonRows = List<float array>()
    richardsonRows.Add [|(f a+f b)*(b-a)*0.5|]
    let mutable h = (b-a)*0.5
    let rec run () =
        let lastRow = Seq.last richardsonRows
        let row = Array.zeroCreate (Array.length lastRow+1)
        row.[0] <- integralEstimateIterative f a b lastRow.[0] h
        let mutable pow4 = 4.0
        for i = 0 to Array.length lastRow-1 do
            row.[i+1] <- richardsonFormula row.[i] lastRow.[i] pow4
            pow4 <- pow4*4.0
        richardsonRows.Add row
        if stoppingCriteriaNonFunctional tol richardsonRows then Array.last row
        else
            h <- h*0.5
            run()
    run()
```



## The refactor

There is a lot of duplicate code in the functions above.

The object-oriented solution to this is the Template Method design pattern. The downside of this approach is that it results in a lot of boilerplate code, with state shared across multiple classes.

Leif Battermann has a very good post on how this can be solved in a functional way using higher-order functions. This results in much more modular and testable code.

Unfortunately, in this case higher-order functions alone will not solve the problem. The integral estimate needs the previous estimate for its calculation. This difference in state means the higher-order function would need different signatures for the derivative and integral.
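The mismatch can be seen in miniature with toy stand-in estimators (hypothetical, for illustration only, not the estimators from this post): a stateless estimator fits `Seq.map`, while a stateful one needs `Seq.scan` to thread the previous result through.

```fsharp
// A sequence of halving step sizes.
let hs = [1.0; 0.5; 0.25]

// Stateless toy estimate: depends only on h, so Seq.map is enough.
let statelessEstimate h = h * h
let derivativeStyle = hs |> Seq.map statelessEstimate |> List.ofSeq
// derivativeStyle = [1.0; 0.25; 0.0625]

// Stateful toy estimate: also needs the previous result, so Seq.scan threads it through.
let statefulEstimate previous h = previous * 0.5 + h
let integralStyle = hs |> Seq.scan statefulEstimate 0.0 |> List.ofSeq
// integralStyle = [0.0; 1.0; 1.0; 0.75]
```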

The solution can be found in the excellent paper Why Functional Programming Matters by John Hughes. Lazy evaluation is a functional language feature that can greatly improve modularity.

Lazy evaluation allows us to cleanly split the implementation into three parts:

1. A function that produces an infinite sequence of function estimates.
2. A function that produces a sequence of Richardson estimates from a sequence of function estimates.
3. A function that iterates a sequence of Richardson estimates and stops at a desired accuracy.

Lazy evaluation can be achieved in F# using the `Seq` collection and the `lazy` keyword.
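The final code also relies on three small helpers that are not in FSharp.Core: `Seq.unfoldInf`, `Seq.triplewise` and `trd`. A minimal sketch of possible implementations (assumed here, matching the signatures the final code uses):

```fsharp
module Seq =
    /// Returns an infinite sequence generated by repeatedly applying f to a seed.
    let unfoldInf f s = Seq.unfold (fun x -> Some(x, f x)) s
    /// Returns a sequence of each element in the input and its two predecessors.
    let triplewise s =
        Seq.windowed 3 s |> Seq.map (fun a -> a.[0], a.[1], a.[2])

/// Returns the third item from a 3-tuple.
let trd (_,_,c) = c
```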

## Final code

```fsharp
/// Infinite sequence of derivative estimates.
let derivativeEstimates f x h0 =
    Seq.unfoldInf ((*)0.5) h0 |> Seq.map (derivativeEstimate f x)

/// Infinite sequence of integral estimates.
let integralEstimates f a b =
    let h0 = (b-a)*0.5
    let i0 = (f a+f b)*h0
    Seq.unfoldInf ((*)0.5) h0 |> Seq.scan (integralEstimateIterative f a b) i0

/// Richardson extrapolation for a given estimate sequence.
let richardsonExtrapolation s =
    let createRow previousRow estimate_i =
        let richardsonAndPow4 (current,pow4) previous =
            richardsonFormula current previous pow4, pow4*4.0
        Seq.scan richardsonAndPow4 (estimate_i,4.0) previousRow
        |> Seq.map fst |> Seq.cache
    Seq.scan createRow Seq.empty s |> Seq.tail

/// Stopping criteria for a given accuracy and sequence of Richardson estimates.
let stoppingCriteria tol s =
    Seq.map Seq.last s |> Seq.triplewise
    |> Seq.find (fun (a,b,c) -> abs(a-b)<=tol && abs(b-c)<=tol) |> trd

/// Derivative accurate to tol using Richardson extrapolation.
let derivative tol f x h0 =
    derivativeEstimates f x h0 |> richardsonExtrapolation |> stoppingCriteria tol

/// Integral accurate to tol using Richardson extrapolation.
let integral tol f a b =
    integralEstimates f a b |> richardsonExtrapolation |> stoppingCriteria tol
```


## Conclusion

Lazy evaluation makes it possible to modularise software into a producer that constructs a large number of possible answers, and a consumer that chooses the appropriate one.

Without it, either state has to be fully generated upfront or production and consumption have to be done in the same place.

Higher-order functions and lazy evaluation can be applied to all software layers. The Why Functional Programming Matters paper has examples of their use in game artificial intelligence and other areas. In my experience, the reduction in complexity they bring makes it easier to push software functionality further.

Modularity is the most important concept in software design. It makes software easier to write, understand, test and reuse. The features of functional languages enable greater modularity.
