Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. To refresh your memory, “applicative-order” means that procedure arguments get evaluated before being substituted into a procedure. “Normal-order” means that procedure arguments get evaluated later and only if
needed. He defines the following two procedures:
(define (p) (p))
(define (test x y) (if (= x 0) 0 y))
Then he evaluates the expression
(test 0 (p))
What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer. (Assume that the evaluation rule for the special form if is the same whether the interpreter is using normal or applicative order: The predicate expression is evaluated first, and the result determines whether to evaluate the consequent or the alternative expression.)
Consider applicative order first, where arguments are evaluated before being passed into the procedure. In
(test 0 (p)), the arguments are:
0, a literal that evaluates to itself (zero).
(p), the application of a procedure of no arguments whose body is just another application of itself,
p. Because each application of p reduces to yet another application of p, with no terminating condition, the interpreter keeps applying it indefinitely; the call to test never even begins.
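The applicative-order reduction can be sketched step by step (the arrows below are informal reduction steps, not Scheme syntax):

```scheme
;; Applicative order: evaluate every operand before applying test.
(test 0 (p))
;; operand 0   => 0                  ; a literal evaluates to itself
;; operand (p) => (p) => (p) => ...  ; the body of p is another call to p,
;;                                   ; so this operand never finishes evaluating
;; test is never applied at all
```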
Now with normal-order evaluation, where arguments are evaluated only when needed,
(test 0 (p)) expands to
(if (= 0 0) 0 (p)). The predicate is true, and since the if special form never evaluates the alternative expression unless the predicate is false, (p) is never evaluated and the result is 0. If we changed our procedure application to
(test 1 (p)), however, the predicate would be false, (p) would be evaluated, and the interpreter would enter an infinite loop.
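Spelling out the normal-order substitution for that second case (arrows are informal reduction steps):

```scheme
;; Normal order: substitute operands unevaluated, evaluate only when needed.
(test 1 (p))
;; => (if (= 1 0) 1 (p))   ; operands substituted into the body as-is
;; => (if #f 1 (p))        ; the predicate is false...
;; => (p)                  ; ...so the alternative must now be evaluated
;; => (p) => (p) => ...    ; and p loops forever
```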
This small example demonstrates a fairly deep idea: given the same arguments to a function, and using the substitution model to reason about how symbols get replaced with values, evaluation order matters. This is not an obvious concept, and it surprised me even though I have been programming for many years.
If most languages you have used implement applicative-order evaluation, then code that looks like it should behave identically can be surprising in a language that uses normal-order evaluation.
What are the pros and cons of normal order evaluation?
Are there any programming languages that use normal order evaluation?
How do you implement normal order evaluation in an interpreter?
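On that last question: one common technique is to wrap each operand in a thunk (a procedure of no arguments) and force it only when its value is actually needed. A real normal-order interpreter would do this wrapping automatically in its apply step; the sketch below simulates it by hand at the call site, using a hypothetical lazy-test in place of test:

```scheme
;; Manual simulation of normal order via thunks (a sketch, not an interpreter).
(define (p) (p))

;; lazy-test receives thunks instead of values and forces y only if needed.
(define (lazy-test x-thunk y-thunk)
  (if (= (x-thunk) 0)
      0
      (y-thunk)))

;; Wrapping (p) in (lambda () (p)) delays it, so it is never actually run:
(lazy-test (lambda () 0) (lambda () (p)))
;; => 0, even though evaluating (p) directly would loop forever
```

A full lazy evaluator would also typically memoize each thunk after its first force, giving call-by-need rather than plain normal order, so shared operands are not re-evaluated.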