## Same-fringe using a zipper-based coroutine

Recall that back in Assignment 4, we asked you to enumerate the "fringe" of a leaf-labeled tree. Both of these trees (here I am drawing the labels in the diagram):

    .                .
   / \              / \
  .   3            1   .
 / \                  / \
1   2                2   3


have the same fringe: [1; 2; 3]. We also asked you to write a function that determined when two trees have the same fringe. The way you approached that back then was to enumerate each tree's fringe, and then compare the two lists for equality. Today, and then again in a later class, we'll encounter new ways to approach the problem of determining when two trees have the same fringe.
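That earlier, list-based approach can be sketched like this (a minimal version of our own, using the leaf-labeled tree type we'll define below; the helper names are ours):

```ocaml
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)

(* enumerate a tree's fringe as a list, left to right *)
let rec fringe (t : 'a tree) : 'a list =
  match t with
  | Leaf a -> [a]
  | Node (l, r) -> fringe l @ fringe r

(* the two trees drawn above *)
let tree1 = Node (Node (Leaf 1, Leaf 2), Leaf 3)
let tree2 = Node (Leaf 1, Node (Leaf 2, Leaf 3))

(* naive same-fringe: build both lists in full, then compare them *)
let same_fringe_naive t1 t2 = (fringe t1 = fringe t2)
```

Here `fringe tree1` and `fringe tree2` both evaluate to `[1; 2; 3]`, so `same_fringe_naive tree1 tree2` is `true`.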

Supposing you did work out an implementation of the tree zipper, then one way to determine whether two trees have the same fringe would be: go downwards (and leftwards) in each tree as far as possible. Compare the targeted leaves. If they're different, stop because the trees have different fringes. If they're the same, then for each tree, move rightward if possible; if it's not (because you're at the rightmost position in a sibling list), move upwards and then try again to move rightwards. Repeat until you are able to move rightwards. Once you do move rightwards, go downwards (and leftwards) as far as possible. Then you'll be targeted on the next leaf in the tree's fringe. The operations it takes to get to "the next leaf" may be different for the two trees. For example, in these trees:

    .                .
   / \              / \
  .   3            1   .
 / \                  / \
1   2                2   3


you won't move upwards at the same steps. Keep comparing "the next leaves" until they are different, or you exhaust the leaves of only one of the trees (then again the trees have different fringes), or you exhaust the leaves of both trees at the same time, without having found leaves with different labels. In this last case, the trees have the same fringe.

If your trees are very big---say, millions of leaves---you can imagine how this would be quicker and more memory-efficient than traversing each tree to construct a list of its fringe, and then comparing the two lists so built to see if they're equal. For one thing, the zipper method can abort early if the fringes diverge early, without needing to traverse or build a list containing the rest of each tree's fringe.

Let's sketch the implementation of this. We won't provide all the details for an implementation of the tree zipper, but we will sketch an interface for it.

First, we define a type for leaf-labeled, binary trees:

type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)


Next, the interface for our tree zippers. We'll help ourselves to OCaml's record types. These are nothing more than tuples with a pretty interface. Instead of saying:

# type blah = Blah of (int * int * (char -> bool));;


and then having to remember which element in the triple was which:

# let b1 = Blah (1, (fun c -> c = 'M'), 2);;
Error: This expression has type int * (char -> bool) * int
but an expression was expected of type int * int * (char -> bool)
# (* damnit *)
# let b1 = Blah (1, 2, (fun c -> c = 'M'));;
val b1 : blah = Blah (1, 2, <fun>)


records let you attach descriptive labels to the components of the tuple:

# type blah_record = { height : int; weight : int; char_tester : char -> bool };;
# let b2 = { height = 1; weight = 2; char_tester = (fun c -> c = 'M') };;
val b2 : blah_record = {height = 1; weight = 2; char_tester = <fun>}
# let b3 = { height = 1; char_tester = (fun c -> c = 'K'); weight = 3 };; (* also works *)
val b3 : blah_record = {height = 1; weight = 3; char_tester = <fun>}


These were the strategies for extracting the components of an unlabeled tuple (for b1, we also have to strip off the Blah constructor):

let h = fst some_pair;; (* accessor functions fst and snd are only predefined for pairs *)

let Blah (h, w, test) = b1;; (* works for arbitrary tuples *)

match b1 with
| Blah (h, w, test) -> ...;; (* same as preceding *)


Here is how you can extract the components of a labeled record:

let h = b2.height;; (* handy! *)

let {height = h; weight = w; char_tester = test} = b2
in (* go on to use h, w, and test ... *)

match b2 with
| {height = h; weight = w; char_tester = test} ->
    (* same as preceding *)
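One more record convenience worth knowing (we won't need it below, but it's handy): OCaml's `with` syntax builds a new record from an old one, overriding only the named fields. Here's a small self-contained example, repeating the record type from above:

```ocaml
type blah_record = { height : int; weight : int; char_tester : char -> bool }

let b2 = { height = 1; weight = 2; char_tester = (fun c -> c = 'M') }

(* copy b2, overriding just the weight field; height and char_tester carry over *)
let b4 = { b2 with weight = 10 }
```

So `b4.weight` is `10`, while `b4.height` is still `1`.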


Anyway, using record types, we might define the tree zipper interface like so:

type 'a starred_level = Root | Starring_Left of 'a starred_nonroot | Starring_Right of 'a starred_nonroot
and 'a starred_nonroot = { parent : 'a starred_level; sibling: 'a tree };;

type 'a zipper = { level : 'a starred_level; filler: 'a tree };;

let rec move_botleft (z : 'a zipper) : 'a zipper =
  (* returns z if the targeted node in z has no children *)
  (* else returns move_botleft (zipper which results from moving down and left in z) *)

let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
  (* if it's possible to move right in z, returns Some (the result of doing so) *)
  (* else if it's not possible to move any further up in z, returns None *)
  (* else returns move_right_or_up (result of moving up in z) *)
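The notes deliberately leave those bodies as comments. For concreteness, here is one way to fill them in against the interface above (a self-contained sketch, repeating the type definitions; it is not the only possible implementation):

```ocaml
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)

type 'a starred_level =
  | Root
  | Starring_Left of 'a starred_nonroot
  | Starring_Right of 'a starred_nonroot
and 'a starred_nonroot = { parent : 'a starred_level; sibling : 'a tree }

type 'a zipper = { level : 'a starred_level; filler : 'a tree }

let rec move_botleft (z : 'a zipper) : 'a zipper =
  match z.filler with
  | Leaf _ -> z                (* no children: stay put *)
  | Node (l, r) ->             (* descend into the left child, remembering the right sibling *)
      move_botleft { level = Starring_Left { parent = z.level; sibling = r }; filler = l }

let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
  match z.level with
  | Root -> None               (* at the root: nowhere further to go *)
  | Starring_Left { parent; sibling } ->
      (* we're a left child: shift focus to our right sibling *)
      Some { level = Starring_Right { parent; sibling = z.filler }; filler = sibling }
  | Starring_Right { parent; sibling } ->
      (* we're a right child: reassemble the parent node and retry from there *)
      move_right_or_up { level = parent; filler = Node (sibling, z.filler) }
```

Starting from `{ level = Root; filler = Node (Node (Leaf 1, Leaf 2), Leaf 3) }`, repeated `move_botleft`/`move_right_or_up` calls visit `Leaf 1`, `Leaf 2`, `Leaf 3` in order, and then return `None`.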


The following function takes an 'a tree and returns an 'a zipper focused on its root:

let new_zipper (t : 'a tree) : 'a zipper =
  {level = Root; filler = t}
;;


Finally, we can use a mutable reference cell to define a function that enumerates a tree's fringe until it's exhausted:

let make_fringe_enumerator (t : 'a tree) =
  (* create a zipper targeting the botleft of t *)
  let zbotleft = move_botleft (new_zipper t)
  (* create a refcell initially pointing to zbotleft *)
  in let zcell = ref (Some zbotleft)
  (* construct the next_leaf function *)
  in let next_leaf () : 'a option =
    match !zcell with
    | Some z -> (
        (* extract label of currently-targeted leaf *)
        let Leaf current = z.filler
        (* update zcell to point to next leaf, if there is one *)
        in let () = zcell := match move_right_or_up z with
          | None -> None
          | Some z' -> Some (move_botleft z')
        (* return saved label *)
        in Some current
      )
    | None -> (* we've finished enumerating the fringe *)
        None
  (* return the next_leaf function *)
  in next_leaf
;;


Here's an example of make_fringe_enumerator in action:

# let tree1 = Leaf 1;;
val tree1 : int tree = Leaf 1
# let next1 = make_fringe_enumerator tree1;;
val next1 : unit -> int option = <fun>
# next1 ();;
- : int option = Some 1
# next1 ();;
- : int option = None
# next1 ();;
- : int option = None
# let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
# let next2 = make_fringe_enumerator tree2;;
val next2 : unit -> int option = <fun>
# next2 ();;
- : int option = Some 1
# next2 ();;
- : int option = Some 2
# next2 ();;
- : int option = Some 3
# next2 ();;
- : int option = None
# next2 ();;
- : int option = None


You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will keep returning None.

Using these fringe enumerators, we can write our same_fringe function like this:

let same_fringe (t1 : 'a tree) (t2 : 'a tree) : bool =
  let next1 = make_fringe_enumerator t1
  in let next2 = make_fringe_enumerator t2
  in let rec loop () : bool =
    match next1 (), next2 () with
    | Some a, Some b when a = b -> loop ()
    | None, None -> true
    | _ -> false
  in loop ()
;;


The auxiliary loop function will keep calling itself recursively until a difference in the fringes has manifested itself---either because one fringe is exhausted before the other, or because the next leaves in the two fringes have different labels. If we get to the end of both fringes at the same time (next1 (), next2 () matches the pattern None, None) then we've established that the trees do have the same fringe.
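Putting the pieces together, the whole pipeline runs. The block below is self-contained: it repeats the types and same_fringe from the notes, and fills in the zipper-movement bodies with one possible implementation (our own sketch; the notes leave those bodies open):

```ocaml
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)

type 'a starred_level =
  | Root
  | Starring_Left of 'a starred_nonroot
  | Starring_Right of 'a starred_nonroot
and 'a starred_nonroot = { parent : 'a starred_level; sibling : 'a tree }

type 'a zipper = { level : 'a starred_level; filler : 'a tree }

(* one possible implementation of the two movement operations *)
let rec move_botleft z = match z.filler with
  | Leaf _ -> z
  | Node (l, r) ->
      move_botleft { level = Starring_Left { parent = z.level; sibling = r }; filler = l }

let rec move_right_or_up z = match z.level with
  | Root -> None
  | Starring_Left { parent; sibling } ->
      Some { level = Starring_Right { parent; sibling = z.filler }; filler = sibling }
  | Starring_Right { parent; sibling } ->
      move_right_or_up { level = parent; filler = Node (sibling, z.filler) }

let make_fringe_enumerator t =
  let zcell = ref (Some (move_botleft { level = Root; filler = t })) in
  fun () ->
    match !zcell with
    | None -> None
    | Some z ->
        (* move_botleft guarantees the focus is a leaf here *)
        let current = (match z.filler with Leaf a -> a | Node _ -> assert false) in
        zcell := (match move_right_or_up z with
                  | None -> None
                  | Some z' -> Some (move_botleft z'));
        Some current

let same_fringe t1 t2 =
  let next1 = make_fringe_enumerator t1 in
  let next2 = make_fringe_enumerator t2 in
  let rec loop () = match next1 (), next2 () with
    | Some a, Some b when a = b -> loop ()
    | None, None -> true
    | _ -> false
  in loop ()
```

With this in place, `same_fringe (Node (Node (Leaf 1, Leaf 2), Leaf 3)) (Node (Leaf 1, Node (Leaf 2, Leaf 3)))` evaluates to `true`, matching the two example trees above.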

The technique illustrated here with our fringe enumerators is a powerful and important one. It's an example of what's sometimes called cooperative threading. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called coroutines or generators.

With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our same_fringe function:

main program        next1 thread        next2 thread
------------        ------------        ------------
start next1
(paused)            starting
(paused)            calculate first leaf
(paused)            <--- return it
start next2         (paused)            starting
(paused)            (paused)            calculate first leaf
(paused)            (paused)            <-- return it
compare leaves      (paused)            (paused)
call loop again     (paused)            (paused)
call next1 again    (paused)            (paused)
(paused)            calculate next leaf (paused)
(paused)            <-- return it       (paused)
... and so on ...


The way we built cooperative threads here crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it relied on our using mutable reference cells so that we could update what the current snapshot (that is, tree zipper) was, so that the next invocation of the next_leaf function could start up again where the previous invocation left off.

It's possible to build cooperative threads without using those tools, however. Some languages have a native syntax for them. Here's how we'd write the same-fringe solution above using native coroutines in the language Lua:

> function fringe_enumerator (tree)
    if tree.leaf then
      coroutine.yield (tree.leaf)
    else
      fringe_enumerator (tree.left)
      fringe_enumerator (tree.right)
    end
  end

> function same_fringe (tree1, tree2)
    local next1 = coroutine.wrap (fringe_enumerator)
    local next2 = coroutine.wrap (fringe_enumerator)
    local function loop (leaf1, leaf2)
      if leaf1 or leaf2 then
        return leaf1 == leaf2 and loop (next1(), next2())
      elseif not leaf1 and not leaf2 then
        return true
      else
        return false
      end
    end
    return loop (next1(tree1), next2(tree2))
  end

> return same_fringe ( {leaf=1}, {leaf=2} )
false

> return same_fringe ( {leaf=1}, {leaf=1} )
true

> return same_fringe ( {left = {leaf=1}, right = {left = {leaf=2}, right = {leaf=3}}},
                       {left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}} )
true


We're going to think about the underlying principles of this execution pattern, and learn how to implement it from scratch---without necessarily having zippers or dedicated native syntax to rely on.

## Exceptions and Aborts

To get a better understanding of how that execution pattern works, we'll add yet a second execution pattern to our plate, and then think about what they have in common.

While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of two sorts. One sort of error comes about when you've got syntax errors or type errors and the OCaml interpreter isn't even able to understand your code:

# let lst = [1; 2] in
"a" :: lst;;
Error: This expression has type int list
but an expression was expected of type string list


But you may also have encountered other kinds of error, that arise while your program is running. For example:

# 1/0;;
Exception: Division_by_zero.
# List.nth [1;2] 10;;
Exception: Failure "nth".


These "Exceptions" are run-time errors. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are raised by code. For instance, here is the implementation of List.nth:

let nth l n =
  if n < 0 then invalid_arg "List.nth" else
  let rec nth_aux l n =
    match l with
    | [] -> failwith "nth"
    | a::l -> if n = 0 then a else nth_aux l (n-1)
  in nth_aux l n


Notice the two clauses invalid_arg "List.nth" and failwith "nth". These are two helper functions which are shorthand for:

raise (Invalid_argument "List.nth");;
raise (Failure "nth");;


where Invalid_argument "List.nth" is a value of type exn, and so too Failure "nth". When you have some value bad of type exn and evaluate the expression:

raise bad


the effect is for the program to immediately stop without evaluating any further code:

# let xcell = ref 0;;
val xcell : int ref = {contents = 0}
# let bad = Failure "test"
in let _ = raise bad
in xcell := 1;;
Exception: Failure "test".
# !xcell;;
- : int = 0


Notice that the line xcell := 1 was never evaluated, so the contents of xcell are still 0.

I said when you evaluate the expression:

raise bad


the effect is for the program to immediately stop. That's not exactly true. You can also programmatically arrange to catch errors, without the program necessarily stopping. In OCaml we do that with a try ... with PATTERN -> ... construct, analogous to the match ... with PATTERN -> ... construct:

# let foo x =
    try
      (if x = 1 then 10
       else if x = 2 then raise (Failure "two")
       else raise (Failure "three")
      ) + 100
    with Failure "two" -> 20
  ;;
val foo : int -> int = <fun>
# foo 1;;
- : int = 110
# foo 2;;
- : int = 20
# foo 3;;
Exception: Failure "three".


Notice what happens here. If we call foo 1, then the code between try and with evaluates to 110, with no exceptions being raised. That then is what the entire try ... with ... block evaluates to; and so too what foo 1 evaluates to. If we call foo 2, then the code between try and with raises an exception Failure "two". The pattern in the with clause matches that exception, so we get instead 20. If we call foo 3, we again raise an exception. This exception isn't matched by the with block, so it percolates up to the top of the program, and then the program immediately stops.

So what I should have said is that when you evaluate the expression:

raise bad


and that exception is never caught, then the effect is for the program to immediately stop.

Trivia: what's the type of the raise (Failure "two") in:

if x = 1 then 10
else raise (Failure "two")


What's its type in:

if x = 1 then "ten"
else raise (Failure "two")


So now what do you expect the type of this to be:

fun x -> raise (Failure "two")


(fun x -> raise (Failure "two") : 'a -> 'b)


Remind you of anything we discussed earlier? /Trivia.

Of course, it's possible to handle errors in other ways too. There's no reason why the implementation of List.nth had to raise an exception. It might instead have returned Some a when the list has an nth member a, and None when it doesn't. But it's pedagogically useful for us to think about the exception-raising pattern now.
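For instance, an option-returning variant might look like this (our own sketch; the standard library now ships a similar List.nth_opt, though that one still raises Invalid_argument on a negative index, where our version simply returns None):

```ocaml
let rec nth_opt (lst : 'a list) (n : int) : 'a option =
  match lst, n with
  | _, n when n < 0 -> None               (* a negative index finds nothing *)
  | [], _ -> None                         (* ran off the end of the list *)
  | x :: _, 0 -> Some x                   (* found the nth member *)
  | _ :: rest, n -> nth_opt rest (n - 1)  (* keep counting down *)
```

So `nth_opt [1; 2] 1` is `Some 2`, and `nth_opt [1; 2] 10` is `None` rather than an exception.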

When an exception is raised, it percolates up through the code that called it, until it finds a surrounding try ... with ... that matches it. That might not be the first try ... with ... that it encounters. For example:

# try
    try
      (raise (Failure "blah")
      ) + 100
    with Failure "fooey" -> 10
  with Failure "blah" -> 20;;
- : int = 20


The matching try ... with ... block need not lexically surround the site where the error was raised:

# let foo b x =
    try
      (b x
      ) + 100
    with Failure "blah" -> 20
  in let bar x =
    raise (Failure "blah")
  in foo bar 0;;
- : int = 20


Here we call foo bar 0, and foo in turn calls bar 0, and bar raises the exception. Since there's no matching try ... with ... block in bar, we percolate back up the history of who called that function, and we find a matching try ... with ... block in foo. This catches the error and so then the try ... with ... block in foo (the code that called bar in the first place) will evaluate to 20.

OK, now this exception-handling apparatus does exemplify the second execution pattern we want to focus on. But it may bring it into clearer focus if we simplify the pattern even more. Imagine we could write code like this instead:

# let foo x =
    try begin
      (if x = 1 then 10
       else abort 20
      ) + 100
    end
  ;;


then if we called foo 1, we'd get the result 110. If we called foo 2, on the other hand, we'd get 20 (note, not 120). This exemplifies the same interesting "jump out of this part of the code" behavior that the try ... raise ... with ... code does, but without the details of matching which exception was raised, and handling the exception to produce a new result.
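OCaml has no abort, but we can simulate the behavior just described with an ordinary exception (the exception name Abort is our own):

```ocaml
exception Abort of int

let foo x =
  try
    (if x = 1 then 10
     else raise (Abort 20)   (* play the role of: abort 20 *)
    ) + 100
  with Abort n -> n          (* the aborted-with value becomes the result *)
```

As promised, `foo 1` gives `110`, while `foo 2` gives `20` (not `120`): raising Abort jumps straight past the `+ 100`.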

Many programming languages have this simplified execution pattern, either instead of or alongside a try ... with ...-like pattern. In Lua and many other languages, abort is instead called return. In Lua, the preceding example would be written:

> function foo(x)
    local value
    if (x == 1) then
      value = 10
    else
      return 20           -- abort early
    end
    return value + 100    -- in Lua, a function's normal value
                          -- must always also be explicitly returned
  end

> return foo(1)
110

> return foo(2)
20


Okay, so that's our second execution pattern.

## What do these have in common?

In both of these patterns, we need to have some way to take a snapshot of where we are in the evaluation of a complex piece of code, so that we might later resume execution at that point. In the coroutine example, the two threads need to have a snapshot of where they were in the enumeration of their tree's leaves. In the abort example, we need to have a snapshot of where to pick up again if some embedded piece of code aborts. Sometimes we might distill that snapshot into a data structure like a zipper. But we might not always know how to do so; and learning how to think about these snapshots without the help of zippers will help us see patterns and similarities we might otherwise miss.

A more general way to think about these snapshots is to think of the code we're taking a snapshot of as a function. For example, in this code:

let foo x =
  try begin
    (if x = 1 then 10
     else abort 20
    ) + 100
  end
in (foo 2) + 1000;;


we can imagine a box:

let foo x =
  +---try begin----------------+
  |       (if x = 1 then 10    |
  |       else abort 20        |
  |       ) + 100              |
  +---end----------------------+
in (foo 2) + 1000;;


and as we're about to enter the box, we want to take a snapshot of the code outside the box. If we decide to abort, we'd be aborting to that snapshotted code.

What would a "snapshot of the code outside the box" look like? Well, let's rearrange the code somewhat. It should be equivalent to this:

let x = 2
in let foo_result =
  +---try begin----------------+
  |       (if x = 1 then 10    |
  |       else abort 20        |
  |       ) + 100              |
  +---end----------------------+
in (foo_result) + 1000;;


and we can think of the code starting with let foo_result = ... as a function, with the box being its parameter, like this:

fun box ->
  let foo_result = box
  in (foo_result) + 1000


That function is our "snapshot". Normally what happens is that code inside the box delivers up a value, and that value gets supplied as an argument to the snapshot-function just described. That is, our code is essentially working like this:

let x = 2
in let snapshot = fun box ->
  let foo_result = box
  in (foo_result) + 1000
in let value =
  (if x = 1 then 10
   else ... (* we'll come back to this part *)
  ) + 100
in snapshot value;;


But now how should the abort 20 part, which we elided here, work? What should happen when we try to evaluate that?

Well, that's when we use the snapshot code in an unusual way. If we encounter an abort 20, we should abandon the code we're currently executing, and instead just supply 20 to the snapshot we saved when we entered the box. That is, something like this:

let x = 2
in let snapshot = fun box ->
  let foo_result = box
  in (foo_result) + 1000
in let value =
  (if x = 1 then 10
   else snapshot 20
  ) + 100
in snapshot value;;


Except that isn't quite right, yet---in this fragment, after the snapshot 20 code is finished, we'd pick up again inside let value = (...) + 100 in snapshot value. That's not what we want. We don't want to pick up again there. We want instead to do this:

let x = 2
in let snapshot = fun box ->
  let foo_result = box
  in (foo_result) + 1000
in let value =
  (if x = 1 then 10
   else snapshot 20 THEN STOP
  ) + 100
in snapshot value;;


We can get that by some further rearranging of the code:

let x = 2
in let snapshot = fun box ->
  let foo_result = box
  in (foo_result) + 1000
in let continue_normally = fun from_value ->
  let value = from_value + 100
  in snapshot value
in
  if x = 1 then continue_normally 10
  else snapshot 20;;


And this is indeed what is happening, at a fundamental level, when you use an expression like abort 20.
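If we abstract over x, that final rearrangement becomes an ordinary function we can run on both branches (the packaging as run is our own; the behavior matches the boxed original, where the outer context adds 1000):

```ocaml
let run (x : int) : int =
  (* the snapshot: the code outside the box *)
  let snapshot box =
    let foo_result = box
    in foo_result + 1000 in
  (* the normal path through the box: add 100, then continue outside *)
  let continue_normally from_value =
    let value = from_value + 100
    in snapshot value in
  if x = 1 then continue_normally 10
  else snapshot 20   (* "abort": feed 20 straight to the snapshot *)
```

`run 1` gives `1110`, that is, `(10 + 100) + 1000`; `run 2` gives `1020`: the 20 skips the `+ 100` inside the box but still meets the `+ 1000` outside it.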

A similar kind of "snapshotting" lets coroutines keep track of where they left off, so that they can start up again at that same place.

## Continuations, finally

These snapshots are called continuations because they represent how the computation will "continue" once some target code (in our example, the code in the box) delivers up a value.

You can think of them as functions that represent "how the rest of the computation proposes to continue." Except that, once we're able to get our hands on those functions, we can do exotic and unwholesome things with them. Like use them to suspend and resume a thread. Or to abort from deep inside a sub-computation: one function might pass the command to abort it to a subfunction, so that the subfunction has the power to jump directly to the outside caller. Or a function might return its continuation function to the outside caller, giving the outside caller the ability to "abort" the function (the function that has already returned its value---so what should happen then?) Or we may call the same continuation function multiple times (what should happen then?). All of these weird and wonderful possibilities await us.

The key idea behind working with continuations is that we're inverting control. In the fragment above, the code (if x = 1 then ... else snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself makes non-trivial use of that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an argument or bound variable. That is: the code that is written like it's supplying an argument to the outside context is instead getting that context as its own argument. He who is written as value-supplying slave is instead become the outer context's master.

In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had in the first place to be supplied to the pair as an argument. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with v2 and v5 lists. Version 5 lists were the ones that let us abort a fold early: go back and re-read the material on "Aborting a Search Through a List" in Week4.

This inversion of control should also remind you of Montague's treatment of determiner phrases in "The Proper Treatment of Quantification in Ordinary English" (PTQ).

A naive semantics for atomic sentences will say the subject term is of type e, and the predicate of type e -> t, and that the subject provides an argument to the function expressed by the predicate.

Montague proposed we instead take the subject term to be of type (e -> t) -> t, and that now it'd be the predicate (still of type e -> t) that provides an argument to the function expressed by the subject.

If all the subject did then was supply an e to the e -> t it receives as an argument, we wouldn't have gained anything we weren't already able to do. But of course, there are other things the subject can do with the e -> t it receives as an argument. For instance, it can check whether anything in the domain satisfies that e -> t; or whether most things do; and so on.
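Here is that type shift in miniature, with a toy domain (the entity and predicate names are our own illustrations):

```ocaml
(* a toy domain of entities, standing in for type e *)
type entity = Ann | Bill | Carol
let domain = [Ann; Bill; Carol]

(* a predicate, type e -> t *)
let left (x : entity) : bool = (x = Ann)

(* naive subject: just an entity; the predicate is the function *)
let naive_sentence = left Ann

(* Montague-style subjects, type (e -> t) -> t: now the subject is the function *)
let ann (p : entity -> bool) : bool = p Ann
let everyone (p : entity -> bool) : bool = List.for_all p domain
let someone (p : entity -> bool) : bool = List.exists p domain
```

`ann left` says the same thing as `left Ann`; but `everyone left` and `someone left` do things with the predicate that no plain entity-type subject could, which is exactly the payoff of the inversion.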

This inversion of who is the argument and who is the function receiving the argument is paradigmatic of working with continuations.

Continuations come in many varieties. There are undelimited continuations, expressed in Scheme via (call/cc (lambda (k) ...)) or the shorthand (let/cc k ...). (call/cc is itself shorthand for call-with-current-continuation.) These capture "the entire rest of the computation." There are also delimited continuations, expressed in Scheme via (reset ... (shift k ...) ...) or (prompt ... (control k ...) ...) or any of several other operations. There are subtle differences between those that we won't be exploring in the seminar. Ken Shan has done terrific work exploring the relations of these operations to each other.

When working with continuations, it's easiest in the first place to write them out explicitly, the way that we explicitly wrote out the snapshot continuation when we transformed this:

let foo x =
  try begin
    (if x = 1 then 10
     else abort 20
    ) + 100
  end
in (foo 2) + 1000;;


into this:

let x = 2
in let snapshot = fun box ->
  let foo_result = box
  in (foo_result) + 1000
in let continue_normally = fun from_value ->
  let value = from_value + 100
  in snapshot value
in
  if x = 1 then continue_normally 10
  else snapshot 20;;


Code written in the latter form is said to be written in explicit continuation-passing style or CPS. Later we'll talk about algorithms that mechanically convert an entire program into CPS.
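To preview what such a conversion does, here is a tiny hand-written example (the function names are our own): each CPS function takes an extra argument k, its continuation, and hands its result to k instead of returning it.

```ocaml
(* direct style *)
let square x = x * x
let pythagoras x y = square x + square y

(* the same computation in explicit continuation-passing style *)
let square_cps x k = k (x * x)
let add_cps x y k = k (x + y)
let pythagoras_cps x y k =
  square_cps x (fun x2 ->      (* x2 receives x * x *)
    square_cps y (fun y2 ->    (* y2 receives y * y *)
      add_cps x2 y2 k))        (* hand the sum to the outer continuation *)
```

Running the CPS version with the identity function as its continuation recovers the direct-style answer: `pythagoras_cps 3 4 (fun n -> n)` is `25`, the same as `pythagoras 3 4`.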

There are also different kinds of "syntactic sugar" we can use to hide the continuation plumbing. Of course we'll be talking about how to manipulate continuations with a Continuation monad. We'll also talk about a style of working with continuations where they're mostly implicit, but special syntax allows us to distill the implicit continuation into a first-class value (the k in (let/cc k ...) and (shift k ...)).

Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation---that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper).

Mutation, delimited continuations, and monads can also be defined in terms of each other in various ways. We find these connections fascinating but the seminar won't be able to explore them very far.

We recommend reading the Yet Another Haskell Tutorial on Continuation Passing Style---though the target language is Haskell, this discussion is especially close to material we're discussing in the seminar.