Programming Languages (CSC-302 98S)
Outline of Class 29: Lazy Evaluation
Held: Monday, April 13, 1998
- Any more questions on the current assignment?
- Don't forget that it's due this Wednesday.
- I'm happy to discuss permutations if some of you are unsure how
  to develop permutations recursively.
- Today the brown-bag lunch series continues with
  "Encapsulation and Inheritance in C++". This is our
  penultimate film of the semester and is likely to be at least as
  good as many of the Titular Head films (no offence to Mr. Dale).
- In the current implementation of Haskell, the type of a numeric
  function like factorial is reported as Num t => t -> t,
  or "if type t is a numeric type, then this is a function from that type
  to that type."
- Sorry, we're not going to go outside today. However, I will continue
to try to write larger.
- The question for today was: Consider the following type definition.
What is interesting about it?
data List t =
Cons t (List t)
- How might we define a tree? A binary tree?
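One possible answer, as a sketch (the names Tree, Leaf, and Node are my own choices, not fixed by Haskell): a binary tree is either empty or a node with a value and two subtrees, and pattern matching lets us process it directly.

```haskell
-- A binary tree is either empty or a node holding a value
-- and two subtrees.
data Tree t = Leaf | Node t (Tree t) (Tree t)

-- Count the values stored in a tree, by pattern matching.
size :: Tree t -> Int
size Leaf = 0
size (Node _ left right) = 1 + size left + size right

main :: IO ()
main = print (size (Node 1 (Node 2 Leaf Leaf) Leaf))  -- prints 2
```

Note that, unlike the List definition above, this one includes a base case (Leaf), so finite trees exist.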
- One of the nicest things about Haskell is that it is pattern-based.
- In the equations you use to define functions, you can give patterns for
  the arguments.
- You can also use multiple equations to define one function.
- The patterns use variables and constructors.
- For example, to define factorial, we might instead use
factorial :: Int -> Int
factorial 0 = 1
factorial n = n * (factorial (n-1))
- Haskell attempts to apply the patterns in order. Some other functional
languages require that the patterns not overlap (that is, that
there be no expression for which both patterns apply).
- Patterns are very useful. For example, in the original definition of LISP,
  the following two equations appear in the report to define car and cdr.
car (Cons x xs) = x
cdr (Cons x xs) = xs
- As you read the data-structures literature, you'll soon realize that
many data structures are "defined" by a series of equations like this.
- For example, many people define stacks by five operations:
  new, push, pop, top, and empty.
- We can do the same in Haskell
data Stack a = NewStack | Push a (Stack a)
pop :: Stack a -> Stack a
pop (Push item stack) = stack
pop NewStack = error "pop: empty stack"
top :: Stack a -> a
top (Push item stack) = item
top NewStack = error "top: empty stack"
empty :: Stack a -> Bool
empty NewStack = True
empty (Push item stack) = False
- Of course, this does seem to require some way to signal an error; here
  we use Haskell's built-in error function, which aborts with a message.
- You can also leave out the error equations, and Haskell will report
an error when it encounters these patterns.
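To see the stack in use, here is a small self-contained sketch (the error messages are my own wording):

```haskell
-- A stack is either empty or a value pushed onto another stack.
data Stack a = NewStack | Push a (Stack a)

pop :: Stack a -> Stack a
pop (Push _ rest) = rest
pop NewStack = error "pop: empty stack"

top :: Stack a -> a
top (Push item _) = item
top NewStack = error "top: empty stack"

empty :: Stack a -> Bool
empty NewStack = True
empty (Push _ _) = False

main :: IO ()
main = do
  let s = Push 2 (Push 1 NewStack)
  print (top s)           -- 2
  print (top (pop s))     -- 1
  print (empty NewStack)  -- True
```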
- Patterns are very powerful. They can even be used to define control
  structures, such as if. (I'm writing it as iif so that
  we don't collide with the built-in version.)
iif :: Bool -> a -> a -> a
iif True x y = x
iif False x y = y
- Of course, we do need lazy evaluation to take advantage of this.
- There are two basic evaluation strategies in functional languages
(and similarly in the lambda calculus).
- In eager (innermost) evaluation, the arguments
are evaluated before the function is called.
- In lazy (outermost) evaluation, the arguments
are not evaluated until they are needed (in effect, an encapsulation
of the arguments is passed to the function).
- For example, if we were lazily evaluating
square (plus 2 3),
we might compute
square (plus 2 3) =>
times (plus 2 3) (plus 2 3) =>
times 5 (plus 2 3) =>
times 5 5 =>
25
- Doing this eagerly would result in
square (plus 2 3) =>
square 5 =>
times 5 5 =>
25
- Using examples like this, many have argued that lazy evaluation is
  inefficient, since a duplicated argument may be evaluated more than once.
- However, in a pure language, it may be possible to avoid the recomputation of
the duplicated argument (by remembering the relationship between the two).
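GHC's call-by-need implementation does exactly this sharing. In the sketch below, Debug.Trace.trace (a standard debugging hook) announces each time the argument is actually computed; the message appears only once, even though square uses its argument twice:

```haskell
import Debug.Trace (trace)

-- plus announces (on stderr) each time it actually computes a sum.
plus :: Int -> Int -> Int
plus a b = trace "computing plus" (a + b)

square :: Int -> Int
square x = x * x

main :: IO ()
main = print (square (plus 2 3))
-- "computing plus" appears once, not twice; the result printed is 25.
```

The unevaluated argument is passed as a shared thunk; once the first use forces it to 5, the second use reuses that value.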
- Surprisingly, lazy evaluation is required for recursive computation
  when conditionals, like our iif, are defined as ordinary functions.
- Consider the standard factorial function,
fact n = iif (n == 0) 1 (n * (fact (n - 1)))
- Let us compute the factorial of 1 using eager evaluation.
fact 1 =>
iif (1 == 0) 1 (1 * (fact (1 - 1))) =>
iif False 1 (1 * (fact (1 - 1))) =>
iif False 1 (1 * (fact 0)) =>
iif False 1 (1 * (iif (0 == 0) 1 (0 * (fact (0-1))))) =>
iif False 1 (1 * (iif True 1 (0 * (fact (0-1))))) =>
iif False 1 (1 * (iif True 1 (0 * (fact (-1))))) =>
iif False 1 (1 * (iif True 1 (0 * (iif ((-1) == 0) 1 ((-1) * (fact (-1-1))))))) =>
...
- For this and similar cases, lazy evaluation isn't just a better idea;
  it's a necessity.
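Under Haskell's lazy evaluation, the same definitions do terminate, because iif never touches the branch it discards. A runnable sketch:

```haskell
-- iif as an ordinary function; under lazy evaluation, only the
-- chosen branch is ever evaluated.
iif :: Bool -> a -> a -> a
iif True  x _ = x
iif False _ y = y

-- Recursive factorial written with iif rather than built-in if.
fact :: Integer -> Integer
fact n = iif (n == 0) 1 (n * fact (n - 1))

main :: IO ()
main = print (fact 5)  -- 120
```

In an eager language this same fact would loop forever, as the trace above shows, because the recursive branch would be evaluated even when the condition is True.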