Thinking is most mysterious, and by far the greatest light upon it that we have is thrown by the study of language.
In this post, I hope to defuse the pervasive myth that continuations are intrinsically a whole-computation device. They aren't. I'd originally meant to write about the relationship between continuations and delimited continuations, but find that defusing the myth is prerequisite to the larger discussion, and will fill up a post by itself.
To defuse the myth, I'll look at how continuations are handled in the vau-control-calculus. Explaining that calculus involves explaining the unconventional way vau-calculi handle variables. So, tracing back through the tangle of ideas to find a starting point, I'll begin with some remarks on the use of variables in term-rewriting calculi.
While I'm extracting this high-level insight from the lower-level math of the situation, I'll also defuse a second common misapprehension about continuations, that they are essentially function-like. This is a subtle point: continuations are invoked as if they were functions, and traditionally appear in the form of first-class functions, but their control-flow behavior is orthogonal to function application. This point is (as I've just demonstrated) difficult even to articulate without appealing to lower-level math; but it arises from the same lower-level math as the point about whole-computation, so I'll extract it here with essentially no additional effort.

Partial-evaluation variables
From the lasting evidence, Alonzo Church had a thing about variables. Not as much of a thing as Haskell Curry, who developed a combinatorial calculus with no variables at all; but Church did feel, apparently, that a meaningful logical proposition should not have unbound variables in it. He had an elegant insight into how this could be accomplished: have a single binding construct —which for some reason he called λ— for the variable parameter in a function definition, and then —I quite enjoyed this— you don't need additional binding constructs for the existential and universal quantifiers, because you can simply make them higher-order functions and leave the binding to their arguments. For his quantifiers Π and Σ, Π(F,G) meant for all values v such that F(v) is true, G(v) is true; and Σ(F) meant there exists some value v such that F(v) is true. The full elegance of this was lost because only the computational subset of his logic survived, under the name λ-calculus, so the quantifiers fell by the wayside; but the habit of a single binding construct has remained.
In computation, though, I suggest that the useful purpose of λ-bound variables is partial evaluation. This notion dawned on me when working out the details of elementary vau-calculus. Although I've blogged about elementary vau-calculus in an earlier post, there I was looking at a different issue (explicit evaluation), and took variables for granted. Suppose, though, that one were centrally concerned only with capturing the operational semantics of Lisp (with fexprs) in a term-rewriting calculus at all, rather than capturing it in a calculus that looks as similar as possible to λ-calculus. One might end up with something like this:
T ::= S | s | (T . T) | [wrap T] | A
A ::= [eval T T] | [combine T T T]
S ::= d | () | e | [operative T T T T]
e ::= ⟪ B* ⟫
B ::= s ← T

Most of this is the same as in my earlier post (explicit evaluation), but there are three differences: the internal structure of environments (e) is described; operatives have a different structure, which is fully described; and there are no variables.
Wait. No variables?
Here, a term (T) is either a self-evaluating term (S), a symbol (s), a pair, an applicative ([wrap T], where T is the underlying combiner), or an active term (A). An active term is the only kind of term that can be the left-hand side of a rewrite rule: it is either a plan to evaluate something in an environment, or a plan to invoke a combiner with some operands in an environment. A self-evaluating term is either an atomic datum such as a number (d), or nil, or an environment (e), or an operative — where an operative consists of a parameter tree, an environment parameter, a body, and a static environment. An environment is a delimited list of bindings (B), and a binding associates a symbol (s) with an assigned value (T).
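As a concrete reference point, here is one way the term syntax might be transcribed into Python dataclasses (a sketch; the class and field names are mine, not part of the calculus):

```python
from dataclasses import dataclass

# Terms: T ::= S | s | (T . T) | [wrap T] | A
@dataclass(frozen=True)
class Sym:               # a symbol s
    name: str

@dataclass(frozen=True)
class Pair:              # (T . T)
    car: object
    cdr: object

@dataclass(frozen=True)
class Wrap:              # [wrap T], an applicative over underlying combiner T
    underlying: object

# Active terms: A ::= [eval T T] | [combine T T T]
@dataclass(frozen=True)
class Eval:              # a plan to evaluate a term in an environment
    term: object
    env: object

@dataclass(frozen=True)
class Combine:           # a plan to invoke a combiner on operands in an env
    combiner: object
    operands: object
    env: object

# Self-evaluating: S ::= d | () | e | [operative T T T T]
@dataclass(frozen=True)
class Operative:         # parameter tree, env parameter, body, static env
    ptree: object
    eparam: object
    body: object
    senv: object

@dataclass(frozen=True)
class Env:               # e ::= << B* >>; bindings as (symbol, term) pairs
    bindings: tuple

# [eval (f . x) <<>>] as a term:
t = Eval(Pair(Sym("f"), Sym("x")), Env(()))
assert isinstance(t, Eval) and t.term.car.name == "f"
```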
The rewrite rules with eval on the left-hand side are essentially just the top-level logic of a Lisp evaluator:
[eval S e] → S
[eval s e] → lookup(s,e)   if lookup(s,e) is defined
[eval (T1 . T2) e] → [combine [eval T1 e] T2 e]
[eval [wrap T] e] → [wrap [eval T e]]

That leaves two rules with combine on the left-hand side: one for combining an applicative, and one for combining an operative. Combining applicatives is easy:

[combine [wrap T0] (T1 ... Tn) e] → [combine T0 ([eval T1 e] ... [eval Tn e]) e]

Combining operatives is a bit more complicated. (It will help if you're familiar with how Kernel's $vau works; see here.) The combination is rewritten as an evaluation of the body of the operative (its third element) in a local environment. The local environment starts as the static environment of the operative (its fourth element); then the ordinary parameters of the operative (its first element) are locally bound to the operands of the combination; and the environment parameter of the operative (its second element) is locally bound to the dynamic environment of the combination.

[combine [operative T1 T2 T3 T4] V e] → [eval T3 match(T1,V) · match(T2,e) · T4]   if the latter is defined

where match(X,Y) constructs an environment binding the symbols in definiend X to the corresponding subterms in Y; echild · eparent concatenates two environments, producing an environment that tries to look up symbols in echild, and failing that looks for them in eparent; and a value (V) is a term such that every active subterm is inside a self-evaluating subterm.
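To make match and environment concatenation concrete, here is a toy Python rendering (the representation is mine: symbols as strings, pairs as 2-tuples, environments as lists of bindings):

```python
def match(definiend, value):
    """Build a list of (symbol, term) bindings by walking a definiend
    tree and an operand tree in parallel; None signals a failed match."""
    if isinstance(definiend, str):          # a symbol binds whatever it faces
        return [(definiend, value)]
    if definiend == ():                     # nil matches only nil
        return [] if value == () else None
    if isinstance(definiend, tuple) and isinstance(value, tuple) \
            and len(definiend) == 2 and len(value) == 2:
        left = match(definiend[0], value[0])
        right = match(definiend[1], value[1])
        if left is None or right is None:
            return None
        return left + right
    return None

def concat(child, parent):
    """echild · eparent: child bindings shadow parent bindings."""
    return child + parent

def lookup(sym, env):
    """First binding for sym wins, so lookups try the child env first."""
    for s, t in env:
        if s == sym:
            return t
    return None

# (x . y) matched against (1 . (2 . ())) binds x to 1 and y to (2 . ())
env = concat(match(("x", "y"), (1, (2, ()))), [("z", 42)])
assert lookup("x", env) == 1
assert lookup("y", env) == (2, ())
assert lookup("z", env) == 42
```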
Sure enough, there are no variables here. This calculus behaves correctly. However, it has a weak equational theory. Consider evaluating the following two expressions in a standard environment e0.
[eval ($lambda (x) (+ 0 x)) e0]
[eval ($lambda (x) (* 1 x)) e0]

Clearly, these two expressions are equivalent; we can see that they are interchangeable. They both construct an applicative that takes one numerical argument and returns it unchanged. However, the rewriting rules of the calculus can't tell us this. These terms reduce to

[wrap [operative (x) #ignore (+ 0 x) e0]]
[wrap [operative (x) #ignore (* 1 x) e0]]

and both of these terms are irreducible! Whenever we call either of these combiners, its body is evaluated in a local environment that's almost like e0; but within the calculus, we can't even talk about what will happen when the body is evaluated. To do so we would have to construct an active evaluation term for the body; to build the active term we'd need to build a term for the local environment of the call; and to build a term for that local environment, we'd need to bind x to some sort of placeholder, meaning "some term, but we don't know what it is yet".
A variable is just the sort of placeholder we're looking for. So let's add some syntax. First, a primitive domain of variables. We call this domain xp, where the "p" stands for "partial evaluation", since that's what we want these variables for (and because, it turns out, we're going to want other variables that are for other purposes). We can't put this primitive domain under nonterminal S because, when we find out later what a variable stands for, what it stands for might not be self-evaluating; nor under nonterminal A because what it stands for might not be active. So xp has to go directly under nonterminal T.
T ::= xp

We also need a binding construct for these variables. It's best to use elementary devices in the calculus, to give lots of opportunities for provable equivalences, rather than big monolithic devices that we'd then be hard-put to analyze. So we'll use a traditional one-variable construct, and expect to introduce other devices to parse the compound definiends that were handled, in the variable-less calculus, by function match.
S ::= ⟨λ xp.T⟩

governed by, essentially, the usual β-rule of λ-calculus:

[combine ⟨λ xp.T⟩ V e] → T[xp ← V]

That is, combine a λ-expression by substituting its operand (V) for its parameter (xp) in its body (T). Having decided to bind our variables xp one at a time, we use three additional operative structures to deliver the various parts of the combination one at a time (a somewhat souped-up version of currying): one structure for processing a null list of operands, one for splitting a dotted-pair operand into its two halves, and one for capturing the dynamic environment of the combination.
S ::= ⟨λ0.T⟩ | ⟨λ2.T⟩ | ⟨ε.T⟩

The corresponding rewrite rules are
[combine ⟨λ0.T⟩ () e] → T
[combine ⟨λ2.T0⟩ (T1 . T2) e] → [combine [combine T0 T1 e] T2 e]
[combine ⟨ε.T0⟩ T1 e] → [combine [combine T0 e ⟪⟫] T1 e]
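These four rules (the β-rule plus the three delivery structures) can be sketched as a one-step rewriter in Python; the tagged-tuple representation, the tag names, and the naive capture-ignoring substitution are all mine, for illustration only:

```python
def subst(body, x, v):
    """Naive substitution body[x <- v], ignoring variable capture."""
    if body == ('var', x):
        return v
    if isinstance(body, tuple):
        return tuple(subst(t, x, v) for t in body)
    return body

def step(term):
    """One rewrite of an outermost [combine ...] redex.  Combiners are
    ('lam', x, body), ('lam0', body), ('lam2', body), ('eps', body)."""
    if term[0] != 'combine':
        return term
    _, comb, operand, env = term
    if comb[0] == 'lam':                     # [combine <λx.T> V e] → T[x←V]
        _, x, body = comb
        return subst(body, x, operand)
    if comb[0] == 'lam0' and operand == ():  # null operand list
        return comb[1]
    if comb[0] == 'lam2':                    # split a dotted-pair operand
        t1, t2 = operand
        return ('combine', ('combine', comb[1], t1, env), t2, env)
    if comb[0] == 'eps':                     # capture the dynamic env
        # () here stands for the empty environment <<>> in the ε-rule
        return ('combine', ('combine', comb[1], env, ()), operand, env)
    return term

# ⟨λ2.⟨λa.⟨λ0.a⟩⟩⟩ applied to the operand list (1): deliver, bind, finish.
body = ('lam', 'a', ('lam0', ('var', 'a')))
t = step(('combine', ('lam2', body), (('num', 1), ()), 'e0'))
assert t == ('combine', ('combine', body, ('num', 1), 'e0'), (), 'e0')
inner = step(('combine', body, ('num', 1), 'e0'))
assert inner == ('lam0', ('num', 1))
assert step(('combine', inner, (), 'e0')) == ('num', 1)
```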
Unlike the variable-less calculus, where the combine rewrite rule initiated evaluation of the body of an operative, here evaluation of the body must be built into the body when the operative is constructed. This would be handled by the δ-rules (specialized operative-call rewrite rules) for evaluating function definitions. For example, for variables x,y and standard environment e0,
[eval ($lambda (x) (+ 0 x)) e0]  →+  [wrap ⟨ε.⟨λy.⟨λ2.⟨λx.⟨λ0.[eval (+ 0 x) ⟪x ← x⟫ · e0]⟩⟩⟩⟩⟩]

Variable y is a dummy-variable used to discard the dynamic environment of the call, which is not used by ordinary functions. Variable x is our placeholder, in the constructed term to evaluate the body, for the unknown operand to be provided later.

The innermost redex (reducible expression) here, [eval (+ 0 x) ⟪x ← x⟫ · e0], can be rewritten through a series of steps,

[eval (+ 0 x) ⟪x ← x⟫ · e0]
→ [combine [eval + ⟪x ← x⟫ · e0] (0 x) ⟪x ← x⟫ · e0]
→ [combine [wrap +] (0 x) ⟪x ← x⟫ · e0]
→+ [combine + (0 x) ⟪x ← x⟫ · e0]

Where we can go from here depends on additional information of one or another kind. We may have a rule that tells us the addition operative + doesn't use its dynamic environment, so that we can garbage-collect the environment,

→ [combine + (0 x) ⟪⟫]

If we have some contextual information that the value of x will be numeric, and a rule that zero plus any number is that number back again, we'd have

→ x

At any rate, we only have the opportunity to even start the partial evaluation of the body, and contemplate these possible further steps, because the introduction of variables allowed us to write a term for the partial evaluation in the first place.
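The kind of contextual simplification used in that last step (zero plus any number is that number; likewise one times any number) is ordinary partial-evaluation fare. A toy Python analogue over a small expression type (the rules and the representation are mine, not the calculus's):

```python
def simplify(expr):
    """Bottom-up algebraic simplification of ('+', a, b) / ('*', a, b)
    trees whose leaves are numbers or variable names; assumes variables
    range over numbers, as the contextual reasoning in the post does."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = simplify(a), simplify(b)
    if op == '+' and a == 0:            # zero plus any number is that number
        return b
    if op == '+' and b == 0:
        return a
    if op == '*' and a == 1:            # one times any number is that number
        return b
    if op == '*' and b == 1:
        return a
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b if op == '+' else a * b
    return (op, a, b)

# Both bodies from the post's example simplify to the bare placeholder x:
assert simplify(('+', 0, 'x')) == 'x'
assert simplify(('*', 1, 'x')) == 'x'
```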
[edit: I'd goofed, in this post, on the combination rule for λ0; it does not of course induce evaluation of T. Fixed now.]

Continuations by global rewrite
The idea of using λ-calculus to model programming language semantics goes back at least to Peter Landin in the early 1960s, but there are a variety of programming language features that don't fit well with λ-calculus. In 1975, Gordon Plotkin proposed a remedy for one of these features — eager argument evaluation, whereas ordinary λ-calculus allows lazy argument evaluation and thereby has different termination properties. Plotkin designed a variant calculus, the λv-calculus, and proved that on one hand λv-calculus correctly models the semantics of a programming language with eager argument evaluation, while on the other hand it is comparably well-behaved to traditional λ-calculus. Particularly, the calculus rewriting relation is compatible and Church-Rosser, and satisfies soundness and completeness theorems relative to the intended operational semantics. (I covered those properties and theorems a bit more in an earlier post.)
In the late 1980s, Matthias Felleisen showed that a technique similar to Plotkin's could be applied to other, more unruly kinds of programming-language behavior traditionally described as "side-effects": sequential control (continuations), and sequential state (mutable variables). This bold plan didn't quite work, in that he had to slightly weaken the well-behavedness properties of the calculus. In both cases (control and state), the problem is to distribute the consequences of a side-effect to everywhere it needs to be known; and Felleisen did this by having special constructs that would "bubble up" through the term, carrying the side-effect with them, until they encompassed the whole term, at which point there would be a whole-term rewriting rule to distribute the side-effect to everywhere it needed to go. The whole-term rewriting rules were the measure by which the well-behavedness of the calculus would fail, as whole-term rewriting isn't compatible.
For sequential control (our central interest here), Felleisen added two operators, C and A, to λv-calculus. The syntax of λv-calculus, before the addition, is just that of λ-calculus:
T ::= x | (λx.T) | (TT)

In place of the classic β-rule of λ-calculus, λv-calculus has βv, which differs in that the operand in the rule is a value (redexes have to be inside λ-terms):

((λx.T)V) → T[x ← V]

The operational semantics, which acts only on whole terms, uses (per Felleisen) an evaluation context E to uniquely determine which subterm is reduced:
E ::= ⎕ | (ET) | ((λx.T)E)
E[((λx.T)V)] ↦ E[T[x ← V]]For the control calculus, the term syntax adds A and C,
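This whole-term semantics (decompose the term into an evaluation context E and a βv-redex, then contract) mechanizes directly. A Python sketch, with a representation of my own choosing (('var', x), ('lam', x, body), ('app', f, a)) and naive capture-ignoring substitution:

```python
def is_value(t):
    return t[0] in ('lam', 'var')       # V ::= x | (λx.T)

def subst(t, x, v):
    """Naive T[x <- v], ignoring variable capture (fine for this demo)."""
    if t == ('var', x):
        return v
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    if t[0] == 'app':
        return ('app', subst(t[1], x, v), subst(t[2], x, v))
    return t

def step(t):
    """One ↦ step: the redex position is fixed by E ::= ☐ | (E T) | ((λx.T) E)."""
    if t[0] != 'app':
        return None                           # values don't step
    f, a = t[1], t[2]
    if not is_value(f):                       # descend into (E T)
        s = step(f)
        return ('app', s, a) if s is not None else None
    if f[0] == 'lam' and not is_value(a):     # descend into ((λx.T) E)
        s = step(a)
        return ('app', f, s) if s is not None else None
    if f[0] == 'lam' and is_value(a):         # βv-contract
        return subst(f[2], f[1], a)
    return None

# ((λx.x) ((λy.y) (λz.z))): the argument must be reduced to a value first.
I = ('lam', 'z', ('var', 'z'))
t = ('app', ('lam', 'x', ('var', 'x')), ('app', ('lam', 'y', ('var', 'y')), I))
t = step(t)
assert t == ('app', ('lam', 'x', ('var', 'x')), I)
assert step(t) == I
```

Note that the decomposition into E and redex is unique, which is exactly why the function can find the redex deterministically.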
T ::= x | (λx.T) | (TT) | (AT) | (CT)

Neither of these operators has the semantics of call-with-current-continuation. Instead, (AT) means "abort the surrounding computation and just do T", while (CT) means "abort the surrounding computation and apply T to the (aborted) continuation". Although it's possible to build conventional call-with-current-continuation out of these primitive operators, the primitives themselves are obviously intrinsically whole-term operators. Operationally, evaluation contexts don't change at all, and the operational semantics has additional rules
E[(AT)] ↦ T
E[(CT)] ↦ T(λx.(AE[x]))   for unused variable x

The compatible rewrite relation, →, has rules to move the new operators upward through a term until they reach its top level. The compatible rules for A are dead easy:

(AT1)T2 → AT1
V(AT) → AT

Evidently, though, once the A operator reaches the top of the term, the only way to get rid of it, so that computation can proceed, is a whole-term rewrite rule,

AT ᐅ T

The whole-term rule for C is easy too,

CT ᐅ T(λx.(Ax))

but the compatible rewrite rules for C are, truthfully, just a bit frightening:

(CT1)T2 → C(λx1.(T1(λx2.(A(x1(x2T2))))))   for unused x1, x2
V(CT) → C(λx1.(T(λx2.(A(x1(Vx2))))))   for unused x1, x2

This does produce the right behavior (unless I've written it out wrong!), but it powerfully perturbs the term structure; Felleisen's description of this as "bubbling up" is apt. Imho, it's quite a marvelous achievement, especially given the prior expectation that nothing of the kind would be possible — an achievement in no way lessened by what can now be done with a great deal of hindsight.
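The bubbling of A, at least, is easy to mechanize. A Python sketch (representation mine; the two compatible rules only fire at applications, so the helper doesn't descend under λ):

```python
def is_value(t):
    return t[0] in ('lam', 'var')

def bubble(t):
    """Apply the compatible rules for A bottom-up until none applies."""
    if t[0] == 'app':
        f, a = bubble(t[1]), bubble(t[2])
        if f[0] == 'abort':                  # (A T1) T2 → A T1
            return f
        if is_value(f) and a[0] == 'abort':  # V (A T) → A T
            return a
        return ('app', f, a)
    return t

def run(t):
    """Bubble A to the top, then apply the whole-term rule A T ▷ T."""
    t = bubble(t)
    return t[1] if t[0] == 'abort' else t

# E[(A v)] with E = ((λx.x) (☐ (λy.y))): the abort discards the context.
t = ('app', ('lam', 'x', ('var', 'x')),
     ('app', ('abort', ('var', 'v')), ('lam', 'y', ('var', 'y'))))
assert run(t) == ('var', 'v')
```

The point of the demo is the shape of the computation: the abort travels upward one application at a time, and nothing happens to it until it owns the whole term.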
The perturbation effect appears to me, in retrospect, to be a consequence of describing the control flow of continuations using function-application structure. My own approach makes no attempt to imitate function-application and, seemingly as a result, its constructs move upward without the dramatic perturbation of Felleisen's C.
Various constraints can be tampered with to produce more well-behaved results. Felleisen later proposed to adjust the target behavior — the operational semantics — to facilitate well-behavedness, in work considered formative for the later notion of delimited continuations. The constraint I've tampered with isn't a formal condition, but rather a self-imposed limitation on what sort of answers can be considered: I introduce a new binding construct whose form doesn't resemble λ, and whose rewriting rules use a different substitution function than the β-rule.

Control variables
Consider the following Scheme expression:
(call/cc (lambda (c) (c 3)))

Presuming this is evaluated in an environment with the expected binding for call/cc, we can easily see it is operationally equivalent to 3. Moreover, our reasoning to deduce this is evidently local to the expression; so why should our formalism have to rewrite the whole surrounding term (perturbing it in the process) in order to deduce this?
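One way to check that equivalence concretely, outside any calculus, is a continuation-passing-style encoding; this Python rendering is my own (Python has no call/cc of its own). A CPS term takes its continuation k; call/cc hands its argument a reified continuation that, when invoked, discards the continuation at its own call site and jumps to k instead.

```python
def cps_call_cc(f):
    """CPS call/cc: f is a CPS-style function from a reified continuation
    to a CPS term; the reified continuation ignores its local k."""
    def with_k(k):
        def reified(v):
            return lambda _ignored_k: k(v)   # jumping discards the local k
        return f(reified)(k)
    return with_k

# (call/cc (lambda (c) (c 3))): the body immediately throws 3 to c,
# so the whole expression just delivers 3 to whatever surrounds it.
term = cps_call_cc(lambda c: lambda k: c(3)(k))
assert term(lambda v: v) == 3

# In a larger context, e.g. (+ 1 ...), the same throw reaches that context:
assert term(lambda v: v + 1) == 4
```

Note that verifying the equivalence needed no knowledge of the surrounding continuation k; the reasoning stayed local to the expression, which is exactly the point.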
Suppose, instead of Felleisen's strategy of bubbling-up a side-effect to the top level of a term and then distributing it from there, we were to bubble-up (or, at least, migrate up) a side-effect to some sort of variable-binding construct, and then distribute it from there by some sort of substitution function to all free occurrences of the variable within the binding scope. The only problem, then, would be what happens if the side-effect has to be distributed more widely than the given scope — such as if a first-class continuation gets carried out of the subterm in which it was originally bound — and that can be solved by allowing the binding construct itself to migrate upward in the term, expanding its scope as much as necessary to encompass all instances of the continuation.
I did this originally in vau-calculus, of course, but for comparison with Felleisen's A/C, let's use λv-calculus instead. Introduce a second domain of control variables, xc, disjoint from xp, and "catch" and "throw" term structures (κx.T) and (τx.T).
T ::= xp | (TT) | (λxp.T) | (κxc.T) | (τxc.T)

Partial-evaluation variables are bound by λ, control variables are bound by κ (catch). Control variables aren't terms; they can only occur free in τ-expressions, where they identify the destination continuation for the throw. κ and τ are evaluation contexts; that is,
E ::= ... | (κx.E) | (τx.E)
The rewrite rules for τ are pretty much the same as for Felleisen's A, except that there is now a compatible rewrite rule for what to do once the throw reaches its matching catch, rather than a whole-term rewrite for eliminating the A once it reaches the top of the term.
(τx.T1)T2 → τx.T1
V(τx.T) → τx.T
κx.(τx.T) → κx.T

What about rewrite rules for κ? One simple rule we need, in order to relate expressions with κ to expressions without, is "garbage collection":

κx.T → T   if x does not occur free in T

We also want rules for κ to migrate upward —non-destructively— when it occurs in an evaluation context; but κ may be the target of matching τ expressions, and if we move the κ without informing a matching τ, that τ will no longer do what it was meant to. Consider a κ, poised to move upward, with a matching τ somewhere in its body (embedded in some context C that doesn't capture the control variable).
V(κx.C[τx.T])

If C happens to be an evaluation context, then it is possible for the τ to move upward to meet the κ and disappear; and, supposing x doesn't occur free in T, we'd have (VT). Even if C isn't an evaluation context, (τx.T) thus represents the potential to form (VT). If we move the κ over the V, then, in order for the τ to still represent the same potential it did before, we'd have to change it to (τx.(VT)). And this has to happen for every matching τ. So let's fashion a substitution function for control variables, T[x ← C] where C doesn't capture any variables:
y[x ← C] → y
(T1T2)[x ← C] → ((T1[x ← C])(T2[x ← C]))
(λy.T)[x ← C] → (λy.(T[x ← C]))   where y isn't free in C
(κy.T)[x ← C] → (κy.(T[x ← C]))   where y isn't x or free in C
(τy.T)[x ← C] → (τy.(T[x ← C]))   if y isn't x
(τx.T)[x ← C] → (τx.C[T[x ← C]])

The "where" conditions are met by α-renaming as needed. Now we're ready to write our rewrite rules for moving κ upward:
(κx.T1)T2 → κx.(T1[x ← ⎕T2] T2)   where x isn't free in T2
V(κx.T) → κx.(V(T[x ← V⎕]))   where x isn't free in V
κy.(κx.T) → κy.(T[x ← y])

As advertised, κ moves upward without perturbing the term structure (contrast with the bubbling-up rules for C). If we need a first-class continuation, we can simply wrap τ in λ: (λy.(τx.y)). The δ-rule for call/cc would be
(call/cc T) → κx.(T(λy.(τx.y)))   for unused x

If this occurs in some larger context, and the first-class continuation escapes into that larger context, then the matching κ will have had to move outward before it over some evaluation context E, and substitutions will have transformed the continuation to (λy.(τx.E[y])).

Insights
The orthogonality of continuation control-flow to function application is, in the lower-level math, rather explicitly demonstrated by the way κ moves smoothly upward through the term, in contrast to the perturbations of the bubbling-up rules for C as it forcibly interacts with function-application structure. The encapsulation of a τ within a λ to form a first-class continuation seals the deal.
The notion that continuations are a whole-term phenomenon —or, indeed, that any side-effect is a whole-term phenomenon— breaks down under the use of floating binding-constructs such as κ, which doesn't require the side-effect to remain encapsulated within a particular subterm, but does allow it to do so and thus allows local reasoning about it to whatever extent its actual behavior remains local. Whether or not that makes traditional continuations "undelimited" is a question of word usage: the κ binding-construct is a delimiter, but a movable one.
As a matter of tangential interest here, the vau-calculus handling of sequential state involves two new kinds of variables and four new syntactic constructs (two of which are binders, one for each of the new kinds of variables). Here's a sketch: Mutable data is contained in symbol-value assignments, which in turn are attached to environments; the identity of an environment is a variable, and its binding construct defines the region of the term over which that environment may be accessible. Assignments are a separate, non-binding syntactic construct, which floats upward toward its matching environment-binding. When a symbol is evaluated in an environment, a pair of syntax elements are created at the point of evaluation: a query construct to seek an assignment for the given symbol in the given environment, which binds a query-variable, and within it a matching result construct with a free occurrence of the query-variable. The result construct is an indivisible term. The query is a compound, which migrates upward through the term looking for an assignment for the symbol in the environment. When the query encounters a matching assignment, the query is annihilated (rather as a τ meeting its matching κ), while performing a substitution that replaces all matching result-constructs with the assigned value (there may by this time be any number of matching result-constructs, since the result of the original lookup may have been passed about in anticipation of eventually finding out what its value is).
As a final, bemusing note, there's a curious analogy (which I footnoted in my dissertation) between variables in the full side-effectful vau-calculus, and fundamental forces in physics. The four forces of nature traditionally are gravity, electromagnetism, strong nuclear force, and weak nuclear force; one of these —gravity— is quite different from the others, with a peculiar sort of uniformity that the others lack (gravity is only attractive). Whilst in vau-calculus we have four kinds of variables (partial-evaluation, control, environment, and query), of which one —partial-evaluation— is quite different from the others, with a peculiar sort of uniformity that the others lack (each of the other kinds of variable has an α-renaming substitution to maintain hygiene, and one or more separate kinds of substitution to aid its purpose in rewriting; but partial-evaluation variables have only one kind of substitution, of which α-renaming is a special case).
[Note: I later explored the physics analogy in a separate post, here.]