Variations on a Scheme – Lazy Evaluation
Now that we have an evaluator expressed as a Lisp program, we can experiment with alternative choices in language design simply by modifying the evaluator. Indeed, new languages are often invented by first writing an evaluator that embeds the new language within an existing high-level language. For example, if we wish to discuss some aspect of a proposed modification to Lisp with another member of the Lisp community, we can supply an evaluator that embodies the change. The recipient can then experiment with the new evaluator and send back comments as further modifications. Not only does the high-level implementation base make it easier to test and debug the evaluator; in addition, the embedding enables the designer to snarf@footnote{Snarf: ``To grab, especially a large document or file for the purpose of using it either with or without the owner's permission.'' Snarf down: ``To snarf, sometimes with the connotation of absorbing, processing, or understanding.'' (These definitions were snarfed from Steele et al. 1983. See also Raymond 1993.)} features from the underlying language, just as our embedded Lisp evaluator uses primitives and control structure from the underlying Lisp. Only later (if ever) need the designer go to the trouble of building a complete implementation in a low-level language or in hardware. In this section and the next we explore some variations on Scheme that provide significant additional expressive power.
4.2.1 Normal Order and Applicative Order
In section 1.1, where we began our discussion of models of evaluation, we noted that Scheme is an applicative-order language, namely, that all the arguments to Scheme procedures are evaluated when the procedure is applied. In contrast, normal-order languages delay evaluation of procedure arguments until the actual argument values are needed. Delaying evaluation of procedure arguments until the last possible moment (e.g., until they are required by a primitive operation) is called lazy evaluation.@footnote{The difference between the ``lazy'' terminology and the ``normal-order'' terminology is somewhat fuzzy. Generally, ``lazy'' refers to the mechanisms of particular evaluators, while ``normal-order'' refers to the semantics of languages, independent of any particular evaluation strategy. But this is not a hard-and-fast distinction, and the two terminologies are often used interchangeably.} Consider the procedure
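A sketch of such a procedure, here a two-argument procedure try whose body uses only its first argument:

@lisp
(define (try a b)
  (if (= a 0) 1 b))
@end lisp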
Evaluating (try 0 (/ 1 0)) generates an error in Scheme. With lazy evaluation, there would be no error. Evaluating the expression would return 1, because the argument (/ 1 0) would never be evaluated.
An example that exploits lazy evaluation is the definition of a procedure unless that takes a condition, a usual value, and an exceptional value, and that can be used in expressions such as the one sketched below.
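One way to write unless, together with the kind of expression in which it might appear (the particular division example is illustrative; a and b are assumed to be bound in the surrounding context):

@lisp
(define (unless condition usual-value exceptional-value)
  (if condition exceptional-value usual-value))

(unless (= b 0)
        (/ a b)
        (begin (display "exception: returning 0")
               0))
@end lisp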
This won't work in an applicative-order language because both the usual value and the exceptional value will be evaluated before unless is called (compare Exercise 1-6). An advantage of lazy evaluation is that some procedures, such as unless, can do useful computation even if evaluation of some of their arguments would produce errors or would not terminate.
If the body of a procedure is entered before an argument has been evaluated we say that the procedure is non-strict in that argument. If the argument is evaluated before the body of the procedure is entered we say that the procedure is strict in that argument.@footnote{The ``strict'' versus ``non-strict'' terminology means essentially the same thing as ``applicative-order'' versus ``normal-order,'' except that it refers to individual procedures and arguments rather than to the language as a whole. At a conference on programming languages you might hear someone say, ``The normal-order language Hassle has certain strict primitives. Other procedures take their arguments by lazy evaluation.''} In a purely applicative-order language, all procedures are strict in each argument. In a purely normal-order language, all compound procedures are non-strict in each argument, and primitive procedures may be either strict or non-strict. There are also languages (see Exercise 4-31) that give programmers detailed control over the strictness of the procedures they define.
A striking example of a procedure that can usefully be made non-strict is cons (or, in general, almost any constructor for data structures). One can do useful computation, combining elements to form data structures and operating on the resulting data structures, even if the values of the elements are not known. It makes perfect sense, for instance, to compute the length of a list without knowing the values of the individual elements in the list. We will exploit this idea in section 4.2.3 to implement the streams of Chapter 3 as lists formed of non-strict cons pairs.
Exercise 4.25: Suppose that (in ordinary applicative-order Scheme) we define unless as shown above and then define factorial in terms of unless as
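A definition along the lines the exercise intends, using the unless procedure defined above:

@lisp
(define (factorial n)
  (unless (= n 1)
          (* n (factorial (- n 1)))
          1))
@end lisp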
What happens if we attempt to evaluate (factorial 5)? Will our definitions work in a normal-order language?
Exercise 4.26: Ben Bitdiddle and Alyssa P. Hacker disagree over the importance of lazy evaluation for implementing things such as unless. Ben points out that it's possible to implement unless in applicative order as a special form. Alyssa counters that, if one did that, unless would be merely syntax, not a procedure that could be used in conjunction with higher-order procedures. Fill in the details on both sides of the argument. Show how to implement unless as a derived expression (like cond or let), and give an example of a situation where it might be useful to have unless available as a procedure, rather than as a special form.
4.2.2 An Interpreter with Lazy Evaluation
In this section we will implement a normal-order language that is the same as Scheme except that compound procedures are non-strict in each argument. Primitive procedures will still be strict. It is not difficult to modify the evaluator of section 4.1.1 so that the language it interprets behaves this way. Almost all the required changes center around procedure application.
The basic idea is that, when applying a procedure, the interpreter must determine which arguments are to be evaluated and which are to be delayed. The delayed arguments are not evaluated; instead, they are transformed into objects called thunks.@footnote{The word thunk was invented by an informal working group that was discussing the implementation of call-by-name in Algol 60. They observed that most of the analysis of (``thinking about'') the expression could be done at compile time; thus, at run time, the expression would already have been ``thunk'' about (Ingerman et al. 1960).} The thunk must contain the information required to produce the value of the argument when it is needed, as if it had been evaluated at the time of the application. Thus, the thunk must contain the argument expression and the environment in which the procedure application is being evaluated.
The process of evaluating the expression in a thunk is called forcing.@footnote{This is analogous to the use of force on the delayed objects that were introduced in Chapter 3 to represent streams. The critical difference between what we are doing here and what we did in Chapter 3 is that we are building delaying and forcing into the evaluator, and thus making this uniform and automatic throughout the language.} In general, a thunk will be forced only when its value is needed: when it is passed to a primitive procedure that will use the value of the thunk; when it is the value of a predicate of a conditional; and when it is the value of an operator that is about to be applied as a procedure. One design choice we have available is whether or not to memoize thunks, as we did with delayed objects in section 3.5.1. With memoization, the first time a thunk is forced, it stores the value that is computed. Subsequent forcings simply return the stored value without repeating the computation. We'll make our interpreter memoize, because this is more efficient for many applications. There are tricky considerations here, however.@footnote{Lazy evaluation combined with memoization is sometimes referred to as call-by-need
argument passing, in contrast to call-by-name argument passing. (Call-by-name, introduced in Algol 60, is similar to non-memoized lazy evaluation.) As language designers, we can build our evaluator to memoize, not to memoize, or leave this an option for programmers (Exercise 4-31). As you might expect from Chapter 3, these choices raise issues that become both subtle and confusing in the presence of assignments. (See Exercise 4-27 and Exercise 4-29.) An excellent article by Clinger (1982) attempts to clarify the multiple dimensions of confusion that arise here.}
Modifying the evaluator
The main difference between the lazy evaluator and the one in section 4.1 is in the handling of procedure applications in eval and apply.
The application? clause of eval becomes
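A sketch of the modified clause, assuming the syntax procedures application?, operator, and operands from section 4.1.1 and the actual-value procedure introduced below:

@lisp
((application? exp)
 (apply (actual-value (operator exp) env)
        (operands exp)
        env))
@end lisp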
This is almost the same as the application? clause of eval in section 4.1.1. For lazy evaluation, however, we call apply with the operand expressions, rather than the arguments produced by evaluating them. Since we will need the environment to construct thunks if the arguments are to be delayed, we must pass this as well. We still evaluate the operator, because apply needs the actual procedure to be applied in order to dispatch on its type (primitive versus compound) and apply it.
Whenever we need the actual value of an expression, we use the procedure actual-value (shown below) instead of just eval, so that if the expression's value is a thunk, it will be forced.
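A minimal definition, in terms of the force-it procedure defined under ``Representing thunks'' below:

@lisp
(define (actual-value exp env)
  (force-it (eval exp env)))
@end lisp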
Our new version of apply is also almost the same as the version in section 4.1.1. The difference is that eval has passed in unevaluated operand expressions: For primitive procedures (which are strict), we evaluate all the arguments before applying the primitive; for compound procedures (which are non-strict) we delay all the arguments before applying the procedure.
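A sketch of the revised apply; the structure mirrors the version in section 4.1.1, with only the argument-processing calls changed:

@lisp
(define (apply procedure arguments env)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure
          procedure
          (list-of-arg-values arguments env)))    ; changed
        ((compound-procedure? procedure)
         (eval-sequence
          (procedure-body procedure)
          (extend-environment
           (procedure-parameters procedure)
           (list-of-delayed-args arguments env)   ; changed
           (procedure-environment procedure))))
        (else
         (error "Unknown procedure type -- APPLY" procedure))))
@end lisp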
The procedures that process the arguments are just like list-of-values from section 4.1.1, except that list-of-delayed-args delays the arguments instead of evaluating them, and list-of-arg-values uses actual-value instead of eval:
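Sketches of the two argument-processing procedures, following the shape of list-of-values and using the operand selectors of section 4.1:

@lisp
(define (list-of-arg-values exps env)
  (if (no-operands? exps)
      '()
      (cons (actual-value (first-operand exps) env)
            (list-of-arg-values (rest-operands exps) env))))

(define (list-of-delayed-args exps env)
  (if (no-operands? exps)
      '()
      (cons (delay-it (first-operand exps) env)
            (list-of-delayed-args (rest-operands exps) env))))
@end lisp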
The other place we must change the evaluator is in the handling of if, where we must use actual-value instead of eval to get the value of the predicate expression before testing whether it is true or false:
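A sketch of the modified eval-if:

@lisp
(define (eval-if exp env)
  (if (true? (actual-value (if-predicate exp) env))
      (eval (if-consequent exp) env)
      (eval (if-alternative exp) env)))
@end lisp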
Finally, we must change the driver-loop procedure (section 4.1.4) to use actual-value instead of eval, so that if a delayed value is propagated back to the read-eval-print loop, it will be forced before being printed. We also change the prompts to indicate that this is the lazy evaluator:
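One way to write the revised driver loop; the exact prompt strings are a matter of taste:

@lisp
(define input-prompt ";;; L-Eval input:")
(define output-prompt ";;; L-Eval value:")

(define (driver-loop)
  (prompt-for-input input-prompt)
  (let ((input (read)))
    (let ((output
           (actual-value input the-global-environment)))
      (announce-output output-prompt)
      (user-print output)))
  (driver-loop))
@end lisp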
With these changes made, we can start the evaluator and test it. The successful evaluation of the try expression discussed in section 4.2.1 indicates that the interpreter is performing lazy evaluation:
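A transcript along these lines, using the prompt strings defined above:

@lisp
(define the-global-environment (setup-environment))
(driver-loop)

;;; L-Eval input:
(define (try a b)
  (if (= a 0) 1 b))
;;; L-Eval value:
ok

;;; L-Eval input:
(try 0 (/ 1 0))
;;; L-Eval value:
1
@end lisp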
Representing thunks
Our evaluator must arrange to create thunks when procedures are applied to arguments and to force these thunks later. A thunk must package an expression together with the environment, so that the argument can be produced later. To force the thunk, we simply extract the expression and environment from the thunk and evaluate the expression in the environment. We use actual-value rather than eval so that in case the value of the expression is itself a thunk, we will force that, and so on, until we reach something that is not a thunk:
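A sketch of force-it for thunks that are not memoized; thunk?, thunk-exp, and thunk-env are the predicate and selectors defined below:

@lisp
(define (force-it obj)
  (if (thunk? obj)
      (actual-value (thunk-exp obj) (thunk-env obj))
      obj))
@end lisp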
One easy way to package an expression with an environment is to make a list containing the expression and the environment. Thus, we create a thunk as follows:
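A sketch of the constructor and selectors, using the tagged-list? predicate from section 4.1:

@lisp
(define (delay-it exp env)
  (list 'thunk exp env))

(define (thunk? obj)
  (tagged-list? obj 'thunk))

(define (thunk-exp thunk) (cadr thunk))
(define (thunk-env thunk) (caddr thunk))
@end lisp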
Actually, what we want for our interpreter is not quite this, but rather thunks that have been memoized. When a thunk is forced, we will turn it into an evaluated thunk by replacing the stored expression with its value and changing the thunk tag so that it can be recognized as already evaluated.@footnote{Notice that we also erase the env from the thunk once the expression's value has been computed. This makes no difference in the values returned by the interpreter. It does help save space, however, because removing the reference from the thunk to the env once it is no longer needed allows this structure to be garbage-collected and its space recycled, as we will discuss in section 5.3.
Similarly, we could have allowed unneeded environments in the memoized delayed objects of section 3.5.1 to be garbage-collected, by having memo-proc do something like (set! proc '()) to discard the procedure proc (which includes the environment in which the delay was evaluated) after storing its value.}
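A sketch of the memoizing version of force-it, which overwrites the thunk in place as described above:

@lisp
(define (evaluated-thunk? obj)
  (tagged-list? obj 'evaluated-thunk))

(define (thunk-value evaluated-thunk)
  (cadr evaluated-thunk))

(define (force-it obj)
  (cond ((thunk? obj)
         (let ((result (actual-value (thunk-exp obj)
                                     (thunk-env obj))))
           (set-car! obj 'evaluated-thunk)
           (set-car! (cdr obj) result)  ; replace exp with its value
           (set-cdr! (cdr obj) '())     ; forget unneeded env
           result))
        ((evaluated-thunk? obj)
         (thunk-value obj))
        (else obj)))
@end lisp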
Notice that the same delay-it procedure works both with and without memoization.
Exercise 4.27: Suppose we type in definitions of a variable count and a procedure id (sketched below) to the lazy evaluator. Give the missing values in the following sequence of interactions, and explain your answers.@footnote{This exercise demonstrates that the interaction between lazy evaluation and side effects can be very confusing. This is just what you might expect from the discussion in Chapter 3.}
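The definitions and the interaction the exercise has in mind are along these lines (the particular expressions are reconstructed for illustration); each ⟨response⟩ marks a value you are to supply:

@lisp
(define count 0)
(define (id x)
  (set! count (+ count 1))
  x)

(define w (id (id 10)))

;;; L-Eval input:
count
;;; L-Eval value:
⟨response⟩

;;; L-Eval input:
w
;;; L-Eval value:
⟨response⟩

;;; L-Eval input:
count
;;; L-Eval value:
⟨response⟩
@end lisp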
Exercise 4.28: Eval uses actual-value rather than eval to evaluate the operator before passing it to apply, in order to force the value of the operator. Give an example that demonstrates the need for this forcing.
Exercise 4.29: Exhibit a program that you would expect to run much more slowly without memoization than with memoization. Also, consider the following interaction, where the id procedure is defined as in Exercise 4-27 and count starts at 0:
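An interaction of the kind the exercise refers to; the use of square here is reconstructed for illustration:

@lisp
(define (square x) (* x x))

;;; L-Eval input:
(square (id 10))
;;; L-Eval value:
⟨response⟩

;;; L-Eval input:
count
;;; L-Eval value:
⟨response⟩
@end lisp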
Give the responses both when the evaluator memoizes and when it does not.
Exercise 4.30: Cy D. Fect, a reformed C programmer, is worried that some side effects may never take place, because the lazy evaluator doesn't force the expressions in a sequence. Since the value of an expression in a sequence other than the last one is not used (the expression is there only for its effect, such as assigning to a variable or printing), there can be no subsequent use of this value (e.g., as an argument to a primitive procedure) that will cause it to be forced. Cy thus thinks that when evaluating sequences, we must force all expressions in the sequence except the final one. He proposes to modify eval-sequence from section 4.1.1 to use actual-value rather than eval:
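Cy's proposed version, following the shape of eval-sequence in section 4.1.1 but forcing every expression except the last:

@lisp
(define (eval-sequence exps env)
  (cond ((last-exp? exps) (eval (first-exp exps) env))
        (else (actual-value (first-exp exps) env)
              (eval-sequence (rest-exps exps) env))))
@end lisp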
Ben Bitdiddle thinks Cy is wrong. He shows Cy the for-each procedure
described in Exercise 2-23, which gives an important example of a
sequence with side effects:
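One standard definition of for-each (Exercise 2-23 asks for such a definition), together with a use that displays each element of a list:

@lisp
(define (for-each proc items)
  (if (null? items)
      'done
      (begin (proc (car items))
             (for-each proc (cdr items)))))

(for-each (lambda (x) (newline) (display x))
          (list 57 321 88))
@end lisp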
Exercise 4.31: The approach taken in this section is somewhat unpleasant, because it makes an incompatible change to Scheme. It might be nicer to implement lazy evaluation as an upward-compatible extension, that is, so that ordinary Scheme programs will work as before. We can do this by extending the syntax of procedure declarations to let the user control whether or not arguments are to be delayed. While we're at it, we may as well also give the user the choice between delaying with and without memoization. For example, the definition
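A definition of the kind intended; the parameter names and the lazy marker for non-memoized delaying are illustrative (the surrounding text names only lazy-memo):

@lisp
(define (f a (b lazy) c (d lazy-memo))
  ...)
@end lisp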
would define f to be a procedure of four arguments, where the first and third arguments are evaluated when the procedure is called, the second argument is delayed, and the fourth argument is both delayed and memoized. Thus, ordinary procedure definitions will produce the same behavior as ordinary Scheme, while adding the lazy-memo declaration to each parameter of every compound procedure will produce the behavior of the lazy evaluator defined in this section. Design and implement the changes required to produce such an extension to Scheme. You will have to implement new syntax procedures to handle the new syntax for define. You must also arrange for eval or apply to determine when arguments are to be delayed, and to force or delay arguments accordingly, and you must arrange for forcing to memoize or not, as appropriate.
4.2.3 Streams as Lazy Lists
In section 3.5.1, we showed how to implement streams as delayed lists. We introduced special forms delay and cons-stream, which allowed us to construct a ``promise'' to compute the cdr of a stream, without actually fulfilling that promise until later. We could use this general technique of introducing special forms whenever we need more control over the evaluation process, but this is awkward. For one thing, a special form is not a first-class object like a procedure, so we cannot use it together with higher-order procedures.@footnote{This is precisely the issue with the unless procedure, as in Exercise 4-26.} Additionally, we were forced to create streams as a new kind of data object similar but not identical to lists, and this required us to reimplement many ordinary list operations (map, append, and so on) for use with streams.
With lazy evaluation, streams and lists can be identical, so there is no need for special forms or for separate list and stream operations. All we need to do is to arrange matters so that cons is non-strict. One way to accomplish this is to extend the lazy evaluator to allow for non-strict primitives, and to implement cons as one of these. An easier way is to recall (section 2.1.3) that there is no fundamental need to implement cons as a primitive at all. Instead, we can represent pairs as procedures:@footnote{This is the procedural representation described in Exercise 2-4. Essentially any procedural representation (e.g., a message-passing implementation) would do as well. Notice that we can install these definitions in the lazy evaluator simply by typing them at the driver loop. If we had originally included cons, car, and cdr as primitives in the global environment, they will be redefined. (Also see Exercise 4-33 and Exercise 4-34.)}
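A sketch of the procedural representation of pairs, in the style of Exercise 2-4:

@lisp
(define (cons x y)
  (lambda (m) (m x y)))

(define (car z)
  (z (lambda (p q) p)))

(define (cdr z)
  (z (lambda (p q) q)))
@end lisp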
In terms of these basic operations, the standard definitions of the list operations will work with infinite lists (streams) as well as finite ones, and the stream operations can be implemented as list operations. Here are some examples:
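A few such definitions, following the standard versions from Chapters 2 and 3: list-ref, map, scale-list, an element-wise addition procedure (here called add-lists), and the infinite lists ones and integers that they make possible under lazy evaluation:

@lisp
(define (list-ref items n)
  (if (= n 0)
      (car items)
      (list-ref (cdr items) (- n 1))))

(define (map proc items)
  (if (null? items)
      '()
      (cons (proc (car items))
            (map proc (cdr items)))))

(define (scale-list items factor)
  (map (lambda (x) (* x factor))
       items))

(define (add-lists list1 list2)
  (cond ((null? list1) list2)
        ((null? list2) list1)
        (else (cons (+ (car list1) (car list2))
                    (add-lists (cdr list1) (cdr list2))))))

(define ones (cons 1 ones))
(define integers (cons 1 (add-lists ones integers)))
@end lisp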
Note that these lazy lists are even lazier than the streams of Chapter 3: The car of the list, as well as the cdr, is delayed.@footnote{This permits us to create delayed versions of more general kinds of list structures, not just sequences. Hughes 1990 discusses some applications of ``lazy trees.''} In fact, even accessing the car or cdr of a lazy pair need not force the value of a list element. The value will be forced only when it is really needed -- e.g., for use as the argument of a primitive, or to be printed as an answer.
Lazy pairs also help with the problem that arose with streams in section 3.5.4, where we found that formulating stream models of systems with loops may require us to sprinkle our programs with explicit delay operations, beyond the ones supplied by cons-stream. With lazy evaluation, all arguments to procedures are delayed uniformly. For instance, we can implement procedures to integrate lists and solve differential equations as we originally intended in section 3.5.4:
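Sketches of integral and solve in this style, using the add-lists, scale-list, and map procedures defined above. Note that in solve the internal definitions of y and dy refer to each other; this works in the lazy evaluator, with no explicit delay, because all procedure arguments are delayed:

@lisp
(define (integral integrand initial-value dt)
  (define int
    (cons initial-value
          (add-lists (scale-list integrand dt)
                     int)))
  int)

(define (solve f y0 dt)
  (define y (integral dy y0 dt))
  (define dy (map f y))
  y)

;; Estimate e by solving dy/dt = y with y(0) = 1:
(list-ref (solve (lambda (x) x) 1 0.001) 1000)
;; value should be close to e (about 2.718)
@end lisp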
Exercise 4.32: Give some examples that illustrate the difference between the streams of Chapter 3 and the ``lazier'' lazy lists described in this section. How can you take advantage of this extra laziness?
Exercise 4.33: Ben Bitdiddle tests the lazy list implementation given above by evaluating the expression
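An expression of the kind Ben might try, applying the new car to a quoted list (the particular list is illustrative):

@lisp
(car '(a b c))
@end lisp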
To his surprise, this produces an error. After some thought, he realizes that the ``lists'' obtained by reading in quoted expressions are different from the lists manipulated by the new definitions of cons, car, and cdr. Modify the evaluator's treatment of quoted expressions so that quoted lists typed at the driver loop will produce true lazy lists.
Exercise 4.34: Modify the driver loop for the evaluator so that lazy pairs and lists will print in some reasonable way. (What are you going to do about infinite lists?) You may also need to modify the representation of lazy pairs so that the evaluator can identify them in order to print them.
Notes
Based on Structure and Interpretation of Computer Programs, a work at https://mitpress.mit.edu/sicp/.