[[!toc]]

## Coroutines ##

Recall [[the recent homework assignment|/exercises/assignment12]] where you solved the same-fringe problem with a `make_fringe_enumerator` function, or in the Scheme version using streams instead of zippers, with a `lazy-flatten` function.

The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.

With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our `same_fringe` function:

    main program            next1 thread            next2 thread
    ------------            ------------            ------------
    start next1
    (paused)                starting
    (paused)                calculate first leaf
    (paused)                <--- return it
    start next2             (paused)                starting
    (paused)                (paused)                calculate first leaf
    (paused)                (paused)                <-- return it
    compare leaves          (paused)                (paused)
    call loop again         (paused)                (paused)
    call next1 again        (paused)                (paused)
    (paused)                calculate next leaf     (paused)
    (paused)                <-- return it           (paused)
    ... and so on ...

If you want to read more about these kinds of threads, here are some links:

*   [[!wikipedia Coroutine]]
*   [[!wikipedia Iterator]]
*   [[!wikipedia Generator_(computer_science)]]
*   [[!wikipedia Fiber_(computer_science)]]

The way we built cooperative threads using `make_fringe_enumerator` crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it either required us to manually save and restore the thread's snapshotted state (a tree zipper); or else we had to use a mutable reference cell to save and restore that state for us. Using the saved state, the next invocation of the `next_leaf` function could start up again where the previous invocation left off.

It's possible to build cooperative threads without using those tools, however. Already our [[solution using streams|/exercises/assignment12#streams2]] uses neither zippers nor any mutation. Instead it saves the thread's state in the code of explicitly-created thunks, and resumes the thread by forcing the thunk. Some languages have a native syntax for coroutines.
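Before turning to a language with native coroutine syntax, here is a minimal OCaml sketch of that thunk-based strategy. It is our own reconstruction, not the assignment's exact code: the type and function names (`tree`, `stream`, `lazy_fringe`, `same_fringe`) are made up for illustration. The point is just that the enumeration's state lives entirely inside closures, with no zippers and no mutation.

    (* A sketch only: a lazy stream of leaves, where the rest of the
       traversal is frozen inside a thunk until someone forces it. *)
    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree
    type 'a stream = End | Next of 'a * (unit -> 'a stream)

    (* lazily put t's fringe in front of whatever stream `rest` produces *)
    let rec lazy_fringe (t : 'a tree) (rest : unit -> 'a stream) : 'a stream =
      match t with
      | Leaf a -> Next (a, rest)
      | Node (l, r) -> lazy_fringe l (fun () -> lazy_fringe r rest)

    let same_fringe t1 t2 =
      let rec loop s1 s2 =
        match s1, s2 with
        | End, End -> true
        | Next (a, rest1), Next (b, rest2) -> a = b && loop (rest1 ()) (rest2 ())
        | _ -> false
      in loop (lazy_fringe t1 (fun () -> End)) (lazy_fringe t2 (fun () -> End))

    (* same_fringe (Node (Leaf 1, Node (Leaf 2, Leaf 3)))
                   (Node (Node (Leaf 1, Leaf 2), Leaf 3))   evaluates to true *)

Each `Next` hands back one leaf plus a thunk; forcing the thunk resumes the suspended traversal exactly where it left off, which is the coroutine behavior we're after.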
Here's how we'd write the same-fringe solution using native coroutines in the language Lua:

    > function fringe_enumerator (tree)
        if tree.leaf then
          coroutine.yield (tree.leaf)
        else
          fringe_enumerator (tree.left)
          fringe_enumerator (tree.right)
        end
      end
    > function same_fringe (tree1, tree2)
        -- coroutine.wrap turns a function into a coroutine
        local next1 = coroutine.wrap (fringe_enumerator)
        local next2 = coroutine.wrap (fringe_enumerator)
        local function loop (leaf1, leaf2)
          if leaf1 or leaf2 then
            return leaf1 == leaf2 and loop( next1(), next2() )
          elseif not leaf1 and not leaf2 then
            return true
          else
            return false
          end
        end
        return loop (next1(tree1), next2(tree2))
      end
    > return same_fringe ( {leaf=1}, {leaf=2} )
    false
    > return same_fringe ( {leaf=1}, {leaf=1} )
    true
    > return same_fringe (
        {left = {leaf=1}, right = {left = {leaf=2}, right = {leaf=3}}},
        {left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}}
      )
    true

We're going to think about the underlying principles of this execution pattern, and learn how to implement it from scratch ourselves---without necessarily having zippers or dedicated native syntax to rely on.

##Exceptions and Aborts##

To get a better understanding of how that execution pattern works, we'll add a second execution pattern to our plate, and then think about what they have in common.

While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of several sorts. One sort comes about when your code has syntax errors, so that the OCaml interpreter isn't even able to parse it. A second sort of error is type errors, as in:

    # let lst = [1; 2] in
      "a" :: lst;;
    Error: This expression has type int list
           but an expression was expected of type string list

Type errors are also detected and reported before OCaml attempts to execute or evaluate your code. But you may also have encountered a third kind of error, one that arises while your program is running. For example:

    # 1/0;;
    Exception: Division_by_zero.
    # List.nth [1;2] 10;;
    Exception: Failure "nth".

These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are manually *raised* by code. For instance, here is the standard implementation of `List.nth`:

    let nth l n =
      if n < 0 then invalid_arg "List.nth" else
      let rec nth_aux l n =
        match l with
        | [] -> failwith "nth"
        | a::l -> if n = 0 then a else nth_aux l (n-1)
      in nth_aux l n

(The Juli8 version of `List.nth` only differs in sometimes raising a different error.) Notice the two clauses `invalid_arg "List.nth"` and `failwith "nth"`. These invoke two helper functions, and are shorthand for:

    raise (Invalid_argument "List.nth");;
    raise (Failure "nth");;

where `Invalid_argument "List.nth"` constructs a value of type `exn`, and so too does `Failure "nth"`. When you have some value `bad` of type `exn` and evaluate the expression:

    raise bad

the effect is for the program to immediately stop without evaluating any further code:

    # let xcell = ref 0;;
    val xcell : int ref = {contents = 0}
    # let bad = Failure "test" in
      let _ = raise bad in
      xcell := 1;;
    Exception: Failure "test".
    # !xcell;;
    - : int = 0

Notice that the line `xcell := 1` was never evaluated, so the contents of `xcell` are still `0`.

I said that when you evaluate the expression:

    raise bad

the effect is for the program to immediately stop. That's not exactly true. You can also programmatically arrange to *catch* errors, without the program necessarily stopping. In OCaml we do that with a `try ... with PATTERN -> ...` construct, analogous to the `match ... with PATTERN -> ...` construct. (In OCaml 4.02 and higher, there is also a more inclusive construct that combines these, `match ... with PATTERN -> ... | exception PATTERN -> ...`.)
    # let foo x =
        try
          (if x = 1 then 10
           else if x = 2 then raise (Failure "two")
           else raise (Failure "three")
          ) + 100
        with Failure "two" -> 20;;
    val foo : int -> int = <fun>
    # foo 1;;
    - : int = 110
    # foo 2;;
    - : int = 20
    # foo 3;;
    Exception: Failure "three".

Notice what happens here. If we call `foo 1`, then the code between `try` and `with` evaluates to `110`, with no exceptions being raised. That then is what the entire `try ... with ...` block evaluates to; and so too what `foo 1` evaluates to. If we call `foo 2`, then the code between `try` and `with` raises an exception `Failure "two"`. The pattern in the `with` clause matches that exception, so we get instead `20`. If we call `foo 3`, we again raise an exception. This exception isn't matched by the `with` block, so it percolates up to the top of the program, and then the program immediately stops.

So what I should have said is that when you evaluate the expression:

    raise bad

*and that exception is never caught*, then the effect is for the program to immediately stop.

**Trivia**: what's the type of the `raise (Failure "two")` in:

    if x = 1 then 10
    else raise (Failure "two")

What's its type in:

    if x = 1 then "ten"
    else raise (Failure "two")

So now what do you expect the type of this to be:

    fun x -> raise (Failure "two")

How about this:

    (fun x -> raise (Failure "two") : 'a -> 'a)

Remind you of anything we discussed earlier? (At one point earlier in the term we were asking whether you could come up with any functions of type `'a -> 'a` other than the identity function.) **/Trivia.**

Of course, it's possible to handle errors in other ways too. There's no reason why the implementation of `List.nth` *had* to raise an exception. It might instead have returned `Some a` when the list has an nth member `a`, and `None` when it does not. But it's pedagogically useful for us to think about the exception-raising pattern now.

When an exception is raised, it percolates up through the code that called it, until it finds a surrounding `try ... with ...` that matches it. That might not be the first `try ... with ...` that it encounters. For example:

    # try
        try
          (raise (Failure "blah")
          ) + 100
        with Failure "fooey" -> 10
      with Failure "blah" -> 20;;
    - : int = 20

The matching `try ... with ...` block need not *lexically surround* the site where the error was raised:

    # let foo b x =
        try
          (b x
          ) + 100
        with Failure "blah" -> 20 in
      let bar x = raise (Failure "blah") in
      foo bar 0;;
    - : int = 20

Here we call `foo bar 0`, and `foo` in turn calls `bar 0`, and `bar` raises the exception. Since there's no matching `try ... with ...` block in `bar`, we percolate back up the history of who called that function, and we find a matching `try ... with ...` block in `foo`. This catches the error, and so the `try ... with ...` block in `foo` (the code that called `bar` in the first place) evaluates to `20`.

OK, now this exception-handling apparatus does exemplify the second execution pattern we want to focus on. But it may come into clearer focus if we **simplify the pattern** even more. Imagine we could write code like this instead:

    # let foo x =
        try begin
          (if x = 1 then 10
           else abort 20
          ) + 100
        end;;

then if we called `foo 1`, we'd get the result `110`. If we called `foo 2`, on the other hand, we'd get `20` (note, not `120`).
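There is no bare `abort` in OCaml, but if you want to experiment with the behavior just described, here is one way to simulate it using the exception machinery we already have. This is only a sketch; the exception name `Abort` is our own invention.

    (* a sketch: simulating the imagined `abort 20` by raising and
       immediately catching a purpose-built exception *)
    exception Abort of int

    let foo x =
      try
        (if x = 1 then 10
         else raise (Abort 20)
        ) + 100
      with Abort n -> n

    let () =
      assert (foo 1 = 110);
      assert (foo 2 = 20)

Of course this simulation works precisely by reintroducing the raise-and-match details; the imagined `abort` is meant to strip those away so we can focus on the jumping-out behavior itself.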
The imagined `abort` exemplifies the same interesting "jump out of this part of the code" behavior that the `try ... raise ... with ...` code does, but without the details of matching which exception was raised, and handling the exception to produce a new result.

Many programming languages have this simplified execution pattern, either instead of or alongside a `try ... with ...`-like pattern. In Lua and many other languages, `abort` is instead called `return`. In Lua, the preceding example would be written:

    > function foo(x)
        local value
        if (x == 1) then
          value = 10
        else
          return 20 -- abort early
        end
        return value + 100
        -- in a language like Scheme, you could omit the `return` here
        -- but in Lua, a function's normal result must always be explicitly `return`ed
      end
    > return foo(1)
    110
    > return foo(2)
    20

Okay, so that's our second execution pattern.

##What do these have in common?##

In both of these patterns --- coroutines and exceptions/aborts --- we need to have some way to take a snapshot of where we are in the evaluation of a complex piece of code, so that we might later resume execution at that point. In the coroutine example, the two threads need to have a snapshot of where they were in the enumeration of their tree's leaves. In the abort example, we need to have a snapshot of where to pick up again if some embedded piece of code aborts. Sometimes we might distill that snapshot into a data structure like a zipper. But we might not always know how to do so; and learning how to think about these snapshots without the help of zippers will help us see patterns and similarities we might otherwise miss.

A more general way to think about these snapshots is to think of the code we're taking a snapshot of as a *function.* For example, in this code:

    let foo x = (* same definition as before *)
      try begin
        (if x = 1 then 10
         else abort 20
        ) + 100
      end
    in (foo 2) + 1000;; (* this line is new *)

we can imagine a box:

    let foo x =
        +---try begin----------------+
        |  (if x = 1 then 10         |
        |   else abort 20            |
        |  ) + 100                   |
        +---end----------------------+
    in (foo 2) + 1000;;

and as we're about to enter the box, we want to take a snapshot of the code *outside* the box. If we decide to abort, we'd be aborting *to* that snapshotted code.

What would a "snapshot of the code outside the box" look like? Well, let's rearrange the code somewhat. It should be equivalent to this:

    let x = 2
    in let foo_result =
        +---try begin----------------+
        |  (if x = 1 then 10         |
        |   else abort 20            |
        |  ) + 100                   |
        +---end----------------------+
    in (foo_result) + 1000;;

and we can think of the code starting with `let foo_result = ...` as a function, with the box being its parameter, like this:

    let foo_result = < >
    in (foo_result) + 1000

or, spelling out the gap `< >` as a bound variable:

    fun box ->
      let foo_result = box
      in (foo_result) + 1000

That function is our "snapshot". Normally what happens is that code *inside* the box delivers up a value, and that value gets supplied as an argument to the snapshot-function just described. That is, our code is essentially working like this:

    let x = 2
    in let snapshot = fun box ->
        let foo_result = box
        in (foo_result) + 1000
    in let foo_applied_to_x =
        (if x = 1 then 10
         else ... (* we'll come back to this part *)
        ) + 100
    in snapshot foo_applied_to_x;;

But now how should the `abort 20` part that we elided here work? What should happen when we try to evaluate that? Well, that's when we use the snapshot code in an unusual way.
If we encounter an `abort 20`, we should abandon the code we're currently executing, and instead just supply `20` to the snapshot we saved when we entered the box. That is, something like this:

    let x = 2
    in let snapshot = fun box ->
        let foo_result = box
        in (foo_result) + 1000
    in let foo_applied_to_x =
        (if x = 1 then 10
         else snapshot 20
        ) + 100
    in snapshot foo_applied_to_x;;

Except that isn't quite right, yet---in this fragment, after the `snapshot 20` code is finished, we'd pick up again inside `let foo_applied_to_x = (...) + 100 in snapshot foo_applied_to_x`. That's not what we want. We don't want to pick up again there. We want instead to do this:

    let x = 2
    in let snapshot = fun box ->
        let foo_result = box
        in (foo_result) + 1000
    in let foo_applied_to_x =
        (if x = 1 then 10
         else snapshot 20 THEN STOP
        ) + 100
    in snapshot foo_applied_to_x;;

We can get that by some further rearranging of the code:

    let x = 2
    in let snapshot = fun box ->
        let foo_result = box
        in (foo_result) + 1000
    in let continue_foo_normally = fun from_value ->
        let value = from_value + 100
        in snapshot value
    in (* start of foo_applied_to_x *)
       if x = 1 then continue_foo_normally 10
       else snapshot 20;;

And this is indeed what is happening, at a fundamental level, when you use an expression like `abort 20`. Here is the original code for comparison:

    let foo x =
        +---try begin----------------+
        |  (if x = 1 then 10         |
        |   else abort 20            |
        |  ) + 100                   |
        +---end----------------------+
    in (foo 2) + 1000;;

A similar kind of "snapshotting" lets coroutines keep track of where they left off, so that they can start up again at that same place.

##Continuations, finally##

These snapshots are called **continuations** because they represent how the computation will "continue" once some target code (in our example, the code in the box) delivers up a value.

You can think of them as functions that represent "how the rest of the computation proposes to continue." Except that, once we're able to get our hands on those functions, we can do exotic and unwholesome things with them. Like use them to suspend and resume a thread. Or to abort from deep inside a sub-computation: one function might pass the command to abort *it* to a subfunction, so that the subfunction has the power to jump directly to the outside caller. Or a function might *return* its continuation function to the outside caller, giving *the outside caller* the ability to "abort" the function (the function that has already returned its value---so what should happen then?). Or we may call the same continuation function *multiple times* (what should happen then?). All of these weird and wonderful possibilities await us.

The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code `(if x = 1 then ... else snapshot 20) + 100`---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.
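To make that inversion vivid, here is a tiny self-contained OCaml sketch. The names `inside_the_box` and `snapshot` are our own; the numbers come from the running example.

    (* the code "inside the box" no longer returns a value up to its
       context; it receives that context (the snapshot) as an argument *)
    let inside_the_box x (snapshot : int -> int) =
      if x = 1
      then snapshot (10 + 100)   (* continue normally: hand 110 to the context *)
      else snapshot 20           (* "abort": skip the + 100 entirely *)

    (* the snapshotted outer context from the running example is ... + 1000 *)
    let _ = inside_the_box 2 (fun foo_result -> foo_result + 1000)   (* = 1020 *)
    let _ = inside_the_box 1 (fun foo_result -> foo_result + 1000)   (* = 1110 *)

Nothing about the outer context changed; what changed is who gets handed to whom.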
In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler that wanted to use the pair's components had, in the first place, *to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]].

This inversion of control should also remind you of Montague's treatment of determiner phrases in ["The Proper Treatment of Quantification in Ordinary English"](http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631215417%5CPortner.pdf) (PTQ). A naive semantics for atomic sentences will say the subject term is of type `e`, and the predicate of type `e -> t`, and that the subject provides an argument to the function expressed by the predicate. Montague proposed we instead take the subject term to be of type `(e -> t) -> t`, and that now it'd be the predicate (still of type `e -> t`) that provides an argument to the function expressed by the subject. If all the subject did then was supply an `e` to the `e -> t` it receives as an argument, we wouldn't have gained anything we weren't already able to do. But of course, there are other things the subject can do with the `e -> t` it receives as an argument. For instance, it can check whether anything in the domain satisfies that `e -> t`; or whether most things do; and so on. This inversion of who is the argument and who is the function receiving the argument is paradigmatic of working with continuations.

Continuations come in many varieties. There are **undelimited continuations**, expressed in Scheme via `(call/cc (lambda (k) ...))` or the shorthand `(let/cc k ...)`. (`call/cc` is itself shorthand for `call-with-current-continuation`.) These capture "the entire rest of the computation." There are also **delimited continuations**, expressed in Scheme via `(reset ... (shift k ...) ...)` or `(prompt ... (control k ...) ...)` or any of several other operations. There are subtle differences between those that we won't be exploring in the seminar. Ken Shan has done terrific work exploring the relations of these operations to each other.

When working with continuations, it's easiest in the first place to write them out explicitly, the way that we explicitly wrote out the `snapshot` continuation when we transformed this:

    let foo x =
        +---try begin----------------+
        |  (if x = 1 then 10         |
        |   else abort 20            |
        |  ) + 100                   |
        +---end----------------------+
    in (foo 2) + 1000;;

into this:

    let x = 2
    in let snapshot = fun box ->
        let foo_result = box
        in (foo_result) + 1000
    in let continue_foo_normally = fun from_value ->
        let value = from_value + 100
        in snapshot value
    in (* start of foo_applied_to_x *)
       if x = 1 then continue_foo_normally 10
       else snapshot 20;;

Code written in the latter form is said to be written in **explicit continuation-passing style** or CPS. Later we'll talk about algorithms that mechanically convert an entire program into CPS.

There are also different kinds of "syntactic sugar" we can use to hide the continuation plumbing. Of course we'll be talking about how to manipulate continuations **with a Continuation monad.** We'll also talk about a style of working with continuations where they're **mostly implicit**, but special syntax allows us to distill the implicit continuation into a first-class value (the `k` in `(let/cc k ...)` and `(shift k ...)`).
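To give a taste of the Continuation monad just mentioned, here is a minimal preview sketch in OCaml. The type `('a, 'r) cont` and the names `mid`, `bind`, and `callcc` are our own choices for this sketch, not an interface we're committing to; the point is only that a snapshot can be threaded around as an ordinary value.

    (* a preview sketch of a Continuation monad; names are ours *)
    type ('a, 'r) cont = { run : ('a -> 'r) -> 'r }

    let mid a = { run = fun k -> k a }          (* inject a plain value *)
    let bind m f = { run = fun k -> m.run (fun a -> (f a).run k) }

    (* callcc hands its body a function that, when called, throws away
       whatever computation is in progress and jumps to callcc's own context *)
    let callcc body =
      { run = fun k -> (body (fun a -> { run = fun _ -> k a })).run k }

    (* the abort example again: `abort 20` jumps past the "+ 100" *)
    let foo x =
      callcc (fun abort ->
        bind (if x = 1 then mid 10 else abort 20)
             (fun v -> mid (v + 100)))

    let _ = (foo 2).run (fun foo_result -> foo_result + 1000)   (* = 1020 *)
    let _ = (foo 1).run (fun foo_result -> foo_result + 1000)   (* = 1110 *)

Running `foo 2` against the `... + 1000` context gives `1020` and `foo 1` gives `1110`, matching the behavior we reasoned our way to above.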
For reference, here's how the boxed `foo` example above looks, using Scheme's `abort` or `shift` operators:

    #lang racket

    (require racket/control)

    (let ([foo (lambda (x)
                 (reset
                  (+ (if (eqv? x 1) 10 (abort 20))
                     100)))])
      (+ (foo 2) 1000))

    (let ([foo (lambda (x)
                 (reset
                  (+ (shift k
                            (if (eqv? x 1) (k 10) 20))
                     100)))])
      (+ (foo 1) 1000))

Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation---that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper). Mutation, delimited continuations, and monads can also be defined in terms of each other in various ways. We find these connections fascinating but the seminar won't be able to explore them very far.

We recommend reading [the Yet Another Haskell Tutorial on Continuation Passing Style](http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics#Continuation_Passing_Style)---though the target language is Haskell, this discussion is especially close to material we're discussing in the seminar.
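To see the "defunctionalized continuation" idea in miniature, here is a toy OCaml sketch (entirely our own, and much simpler than a real zipper): the continuation `fun foo_result -> foo_result + 1000` is replaced by an inert data constructor, together with an `apply_kont` function that interprets it.

    (* a toy defunctionalization: the continuation becomes a data value *)
    type kont = AddThousand              (* stands for: fun r -> r + 1000 *)

    let apply_kont (k : kont) (r : int) : int =
      match k with
      | AddThousand -> r + 1000

    (* the inside-the-box code from earlier, now handed the reified continuation *)
    let inside_the_box x (k : kont) =
      if x = 1 then apply_kont k (10 + 100)
      else apply_kont k 20

    let _ = inside_the_box 2 AddThousand   (* = 1020, as before *)

A zipper plays the same role for tree traversals: it is an inert record of "the rest of the walk", which some interpreting code knows how to resume.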