Recent changes to this wiki:

add Unreliable Guide OCaml Modules
diff --git a/readings.mdwn b/readings.mdwn
index 7d72fa7..74a726d 100644
@@ -274,4 +274,7 @@ in M. Broy, editor, *Marktoberdorf Summer School on Program Design Calculi*, Spr

*	[[!wikipedia Linear logic]]

+### Other ###
+
+*   [Unreliable Guide to OCaml Modules](http://lambdafoo.com/blog/2015/05/15/unreliable-guide-to-ocaml-modules/)


edits
diff --git a/content.mdwn b/content.mdwn
index 06fcfe2..d8e66ff 100644
--- a/content.mdwn
+++ b/content.mdwn
@@ -91,8 +91,8 @@ week in which they were introduced.
*   [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]
*   [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]
*   [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]
-    *   CPS transforms
-
+    *   [[Introducing continuations|/topics/week14_continuations]]
+    *   [[Continuation applications to natural language|/topics/week15_continuation_applications]]

## Topics by week ##


edits
diff --git a/content.mdwn b/content.mdwn
index be10a97..06fcfe2 100644
--- a/content.mdwn
+++ b/content.mdwn
@@ -190,4 +190,13 @@ Week 13:
*   [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]
*   [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]
*   [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]
-*   CPS transforms
+
+Week 14:
+
+*   [[Introducing continuations|topics/week14_continuations]] (includes CPS transforms from week 13)
+
+Week 15:
+
+*   [[Continuation applications|topics/week15_continuation_applications]]
+


move
diff --git a/topics/_manipulating_trees_with_monads.mdwn b/topics/_manipulating_trees_with_monads.mdwn
deleted file mode 100644
index 1a4a1dd..0000000
+++ /dev/null
@@ -1,465 +0,0 @@
-[[!toc]]
-
-Manipulating trees with monads
-------------------------------
-
-This topic develops an idea based on a suggestion of Ken Shan's.
-We'll build a series of functions that operate on trees, doing various
-things, including updating leaves with a Reader monad, counting nodes
-with a State monad, copying the tree with a List monad, and converting
-a tree into a list of leaves with a Continuation monad.  It will turn
-out that the continuation monad can simulate the behavior of each of
-the other monads.
-
-From an engineering standpoint, we'll build a tree machine that
-deals in monads.  We can modify the behavior of the system by swapping
-out a layer of functionality without disturbing the underlying system, for
-instance, in the way that the Reader monad allowed us to add a layer
-of intensionality to an extensional grammar. But we have not yet seen
-the utility of replacing one monad with another.
-
-First, we'll be needing a lot of trees for the remainder of the
-course.  Here again is a type constructor for leaf-labeled, binary trees:
-
-    type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree);;
-
-[How would you adjust the type constructor to allow for labels on the
-internal nodes?]
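-One possible answer to the bracketed question, as a sketch (the
-`ltree` name and constructors here are my own, not from the course
-code): parameterize the tree over a second type for node labels.
-
-    type ('a, 'b) ltree =
-      | LLeaf of 'a
-      | LNode of 'b * ('a, 'b) ltree * ('a, 'b) ltree
-
-    (* e.g., int leaves with string-labeled internal nodes *)
-    let example : (int, string) ltree =
-      LNode ("root", LLeaf 2, LNode ("right", LLeaf 3, LLeaf 5))
-
-    (* a map over leaves, analogous to the tree_map defined below *)
-    let rec ltree_map f = function
-      | LLeaf a -> LLeaf (f a)
-      | LNode (b, l, r) -> LNode (b, ltree_map f l, ltree_map f r)

```ocaml
type ('a, 'b) ltree =
  | LLeaf of 'a
  | LNode of 'b * ('a, 'b) ltree * ('a, 'b) ltree

(* e.g., int leaves with string-labeled internal nodes *)
let example : (int, string) ltree =
  LNode ("root", LLeaf 2, LNode ("right", LLeaf 3, LLeaf 5))

(* a map over leaves, analogous to the tree_map defined below *)
let rec ltree_map f = function
  | LLeaf a -> LLeaf (f a)
  | LNode (b, l, r) -> LNode (b, ltree_map f l, ltree_map f r)
```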
-
-We'll be using trees whose leaves are labeled with integers, e.g.,
-
-
-	let t1 = Node (Node (Leaf 2, Leaf 3),
-	               Node (Leaf 5, Node (Leaf 7,
-                                           Leaf 11)))
-	    .
-	 ___|___
-	 |     |
-	 .     .
-	_|_   _|__
-	|  |  |  |
-	2  3  5  .
-	        _|__
-	        |  |
-	        7  11
-
-Our first task will be to replace each leaf with its double:
-
-	let rec tree_map (leaf_modifier : 'a -> 'b) (t : 'a tree) : 'b tree =
-	  match t with
-	    | Leaf i -> Leaf (leaf_modifier i)
-	    | Node (l, r) -> Node (tree_map leaf_modifier l,
-	                           tree_map leaf_modifier r);;
-
-tree_map takes a function that transforms old leaves into
-new leaves, along with a tree, and maps that function over all the leaves
-in the tree, leaving the structure of the tree unchanged.  For instance:
-
-	let double i = i + i;;
-	tree_map double t1;;
-	- : int tree =
-	Node (Node (Leaf 4, Leaf 6), Node (Leaf 10, Node (Leaf 14, Leaf 22)))
-
-	    .
-	 ___|____
-	 |      |
-	 .      .
-	_|__  __|__
-	|  |  |   |
-	4  6  10  .
-	        __|___
-	        |    |
-	        14   22
-
-We could have built the doubling operation right into the tree_map
-code.  However, because we've made what to do to each leaf a
-parameter, we can decide to do something else to the leaves without
-needing to rewrite tree_map.  For instance, we can easily square
-each leaf instead, by supplying the appropriate int -> int operation
-in place of double:
-
-	let square i = i * i;;
-	tree_map square t1;;
-	- : int tree =
-	Node (Node (Leaf 4, Leaf 9), Node (Leaf 25, Node (Leaf 49, Leaf 121)))
-
-Note that what tree_map does is take some unchanging contextual
-information---what to do to each leaf---and supply that information
-to each subpart of the computation.  In other words, tree_map has the
-behavior of a Reader monad.  Let's make that explicit.
-
-In general, we're on a journey of making our tree_map function more and
-more flexible.  So the next step---combining the tree transformer with
-a Reader monad---is to have the tree_map function return a (monadized)
-tree that is ready to accept any int -> int function and produce the
-updated tree.
-
-	fun e ->    .
-	       _____|____
-	       |        |
-	       .        .
-	     __|___   __|___
-	     |    |   |    |
-	    e 2  e 3  e 5  .
-	                 __|___
-	                 |    |
-	                e 7  e 11
-
-That is, we want to transform the ordinary tree t1 (of type int
-tree) into a reader monadic object of type (int -> int) -> int
-tree: something that, when you apply it to an int -> int function
-e, returns an int tree in which each leaf i has been replaced
-with e i.
-
-[Application note: this kind of reader object could provide a model
-for Kaplan's characters.  It turns an ordinary tree into one that
-expects contextual information (here, the e) that can be
-used to compute the content of indexicals embedded arbitrarily deeply
-in the tree.]
-
-With our previous applications of the Reader monad, we always knew
-which kind of environment to expect: either an assignment function, as
-in the original calculator simulation; a world, as in the
-intensionality monad; an individual, as in the Jacobson-inspired link
-monad; etc.  In the present case, we expect that our "environment"
-will be some function of type int -> int. "Looking up" some int in
-the environment will return us the int that comes out the other side
-of that function.
-
-	type 'a reader = (int -> int) -> 'a;;
-	let reader_unit (a : 'a) : 'a reader = fun _ -> a;;
-	let reader_bind (u: 'a reader) (f : 'a -> 'b reader) : 'b reader =
-	  fun e -> f (u e) e;;
-
-It would be a simple matter to turn an *integer* into an int reader:
-
-	let asker : int -> int reader =
-	  fun (a : int) ->
-	    fun (modifier : int -> int) -> modifier a;;
-	asker 2 (fun i -> i + i);;
-	- : int = 4
-
-asker a is a monadic box that waits for an environment (here, the argument modifier) and returns what that environment maps a to.
-
-How do we do the analogous transformation when our ints are scattered over the leaves of a tree? How do we turn an int tree into a reader?
-A tree is not the kind of thing that we can apply a
-function of type int -> int to.
-
-But we can do this:
-
-	let rec tree_monadize (f : 'a -> 'b reader) (t : 'a tree) : 'b tree reader =
-	    match t with
-	    | Leaf a -> reader_bind (f a) (fun b -> reader_unit (Leaf b))
-	    | Node (l, r) -> reader_bind (tree_monadize f l) (fun l' ->
-	                       reader_bind (tree_monadize f r) (fun r' ->
-	                         reader_unit (Node (l', r'))));;
-
-This function says: give me a function f that knows how to turn
-something of type 'a into a 'b reader---this is a function of the same type that you could bind an 'a reader to, such as asker or reader_unit---and I'll show you how to
-turn an 'a tree into a 'b tree reader.  That is, if you show me how to do this:
-
-	              ------------
-	  1     --->  |    1     |
-	              ------------
-
-then I'll give you back the ability to do this:
-
-	              ____________
-	  .           |    .     |
-	__|___  --->  |  __|___  |
-	|    |        |  |    |  |
-	1    2        |  1    2  |
-	              ------------
-
-And how will that boxed tree behave? Whatever actions you perform on it will be transmitted down to corresponding operations on its leaves. For instance, our int reader expects an int -> int environment. If supplying environment e to our int reader doubles the contained int:
-
-	              ------------
-	  1     --->  |    1     |  applied to e  ~~>  2
-	              ------------
-
-Then we can expect that supplying it to our int tree reader will double all the leaves:
-
-	              ____________
-	  .           |    .     |                      .
-	__|___  --->  |  __|___  | applied to e  ~~>  __|___
-	|    |        |  |    |  |                    |    |
-	1    2        |  1    2  |                    2    4
-	              ------------
-
-In more fanciful terms, the tree_monadize function builds plumbing that connects all of the leaves of a tree into one connected monadic network; it threads the
-'b reader monad through the original tree's leaves.
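-To make this concrete, here is a self-contained sketch assembling the
-pieces above---the reader monad over int -> int environments, asker,
-and tree_monadize---and running it on t1 with the doubling environment
-(the expected output is reproduced from the earlier discussion):

```ocaml
(* The reader monad over int -> int environments, threaded through a tree. *)
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)

type 'a reader = (int -> int) -> 'a
let reader_unit (a : 'a) : 'a reader = fun _ -> a
let reader_bind (u : 'a reader) (f : 'a -> 'b reader) : 'b reader =
  fun e -> f (u e) e

let asker (a : int) : int reader = fun modifier -> modifier a

let rec tree_monadize (f : 'a -> 'b reader) (t : 'a tree) : 'b tree reader =
  match t with
  | Leaf a -> reader_bind (f a) (fun b -> reader_unit (Leaf b))
  | Node (l, r) ->
      reader_bind (tree_monadize f l) (fun l' ->
        reader_bind (tree_monadize f r) (fun r' ->
          reader_unit (Node (l', r'))))

let t1 = Node (Node (Leaf 2, Leaf 3),
               Node (Leaf 5, Node (Leaf 7, Leaf 11)))

(* supplying the doubling environment doubles every leaf *)
let doubled = tree_monadize asker t1 (fun i -> i + i)
(* doubled = Node (Node (Leaf 4, Leaf 6),
                   Node (Leaf 10, Node (Leaf 14, Leaf 22))) *)
```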
-
-	- : int tree =

(Diff truncated)

edits
diff --git a/topics/_manipulating_trees_with_monads.mdwn b/topics/_manipulating_trees_with_monads.mdwn
index 0d9e33d..1a4a1dd 100644
@@ -375,87 +375,6 @@ able to simulate any other monad (Google for "mother of all monads").

If you want to see how to parameterize the definition of the tree_monadize function, so that you don't have to keep rewriting it for each new monad, see [this code](/code/tree_monadize.ml).

-The idea of using continuations to characterize natural language meaning
-------------------------------------------------------------------------
-
-Why might a philosopher or a linguist be interested in continuations,
-especially when efficiency of computation is usually not an issue?
-Well, the application of continuations to the same-fringe problem
-shows that continuations can manage order of evaluation in a
-well-controlled manner.  In a series of papers, one of us (Barker) and
-Ken Shan have argued that a number of phenomena in natural language
-semantics are sensitive to the order of evaluation.  We can't
-reproduce all of the intricate arguments here, but we can give a sense
-of how the analyses use continuations to achieve an analysis of
-natural language meaning.
-
-**Quantification and default quantifier scope construal**.
-
-We saw in the copy-string example ("abSd") and in the same-fringe example that
-local properties of a structure (whether a character is 'S' or not, which
-integer occurs at some leaf position) can control global properties of
-the computation (whether the preceeding string is copied or not,
-whether the computation halts or proceeds).  Local control of
-surrounding context is a reasonable description of in-situ
-quantification.
-
-    (1) John saw everyone yesterday.
-
-This sentence means (roughly)
-
-    forall x . yesterday(saw x) john
-
-That is, the quantifier *everyone* contributes a variable in the
-direct object position, and a universal quantifier that takes scope
-over the whole sentence.  If we have a lexical meaning function like
-the following:
-
-	let lex (s:string) k = match s with
-	  | "everyone" -> Node (Leaf "forall x", k "x")
-	  | "someone" -> Node (Leaf "exists y", k "y")
-	  | _ -> k s;;
-
-Then we can crudely approximate quantification as follows:
-
-	# let sentence1 = Node (Leaf "John",
-						  Node (Node (Leaf "saw",
-									  Leaf "everyone"),
-								Leaf "yesterday"));;
-
-	# tree_monadize lex sentence1 (fun x -> x);;
-	- : string tree =
-	Node
-	 (Leaf "forall x",
-	  Node (Leaf "John", Node (Node (Leaf "saw", Leaf "x"), Leaf "yesterday")))
-
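-Because the diff above is truncated, the continuation-monadic
-tree_monadize that this example relies on doesn't appear here.  The
-following self-contained sketch reconstructs one standard version of it
-(the ('a, 'r) cont type and the cont_unit/cont_bind names are mine) and
-reproduces the sentence1 result shown above:

```ocaml
(* A reconstruction of tree_monadize over the continuation monad. *)
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
type ('a, 'r) cont = ('a -> 'r) -> 'r

let cont_unit (a : 'a) : ('a, 'r) cont = fun k -> k a
let cont_bind u f = fun k -> u (fun a -> f a k)

let rec tree_monadize f t = match t with
  | Leaf a -> cont_bind (f a) (fun b -> cont_unit (Leaf b))
  | Node (l, r) ->
      cont_bind (tree_monadize f l) (fun l' ->
        cont_bind (tree_monadize f r) (fun r' ->
          cont_unit (Node (l', r'))))

(* the lexical meaning function from the text *)
let lex (s : string) : (string, string tree) cont = fun k ->
  match s with
  | "everyone" -> Node (Leaf "forall x", k "x")
  | "someone" -> Node (Leaf "exists y", k "y")
  | _ -> k s

let sentence1 = Node (Leaf "John",
                      Node (Node (Leaf "saw", Leaf "everyone"),
                            Leaf "yesterday"))

let result = tree_monadize lex sentence1 (fun x -> x)
(* result = Node (Leaf "forall x",
                  Node (Leaf "John",
                        Node (Node (Leaf "saw", Leaf "x"),
                              Leaf "yesterday"))) *)
```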
-In order to see the effects of evaluation order,
-observe what happens when we combine two quantifiers in the same
-sentence:
-
-	# let sentence2 = Node (Leaf "everyone", Node (Leaf "saw", Leaf "someone"));;
-	# tree_monadize lex sentence2 (fun x -> x);;
-	- : string tree =
-	Node
-	 (Leaf "forall x",
-	  Node (Leaf "exists y", Node (Leaf "x", Node (Leaf "saw", Leaf "y"))))
-
-The universal takes scope over the existential.  If, however, we
-replace the usual tree_monadizer with tree_monadizer_rev, we get
-inverse scope:
-
-	# tree_monadize_rev lex sentence2 (fun x -> x);;
-	- : string tree =
-	Node
-	 (Leaf "exists y",
-	  Node (Leaf "forall x", Node (Leaf "x", Node (Leaf "saw", Leaf "y"))))
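-The tree_monadize_rev used here also falls outside the quoted diff.  A
-plausible reconstruction, on the assumption that the _rev version
-simply processes the right daughter before the left (names mine):

```ocaml
(* Sketch: a right-to-left tree_monadize over the continuation monad. *)
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
type ('a, 'r) cont = ('a -> 'r) -> 'r

let cont_unit (a : 'a) : ('a, 'r) cont = fun k -> k a
let cont_bind u f = fun k -> u (fun a -> f a k)

let rec tree_monadize_rev f t = match t with
  | Leaf a -> cont_bind (f a) (fun b -> cont_unit (Leaf b))
  | Node (l, r) ->
      (* evaluate the right daughter first *)
      cont_bind (tree_monadize_rev f r) (fun r' ->
        cont_bind (tree_monadize_rev f l) (fun l' ->
          cont_unit (Node (l', r'))))

let lex (s : string) : (string, string tree) cont = fun k ->
  match s with
  | "everyone" -> Node (Leaf "forall x", k "x")
  | "someone" -> Node (Leaf "exists y", k "y")
  | _ -> k s

let sentence2 = Node (Leaf "everyone", Node (Leaf "saw", Leaf "someone"))

let inverse = tree_monadize_rev lex sentence2 (fun x -> x)
(* inverse = Node (Leaf "exists y",
                   Node (Leaf "forall x",
                         Node (Leaf "x", Node (Leaf "saw", Leaf "y")))) *)
```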
-
-There are many crucially important details about quantification that
-are being simplified here, and the continuation treatment used here is not
-scalable for a number of reasons.  Nevertheless, it will serve to give
-an idea of how continuations can provide insight into the behavior of
-quantifiers.
-
-
==============


edits
diff --git a/topics/week14_continuations.mdwn b/topics/week14_continuations.mdwn
index 7788af9..ffc67a9 100644
--- a/topics/week14_continuations.mdwn
+++ b/topics/week14_continuations.mdwn
@@ -739,46 +739,6 @@ So too will examples. We'll give some examples, and show you how to try them out

<!-- GOTCHAS?? -->

--- cutting for control operators --
-
-3.	callcc was originally introduced in Scheme. There it's written call/cc and is an abbreviation of call-with-current-continuation. Instead of the somewhat bulky form:
-
-		(call/cc (lambda (k) ...))
-
-	I prefer instead to use the lighter, and equivalent, shorthand:
-
-		(let/cc k ...)
-
-
-Callcc/letcc examples
----------------------
-
-First, here are two examples in Scheme:
-
-	(+ 100 (let/cc k (+ 10 1)))
-	       |-----------------|
-
-This binds the continuation outk of the underlined expression to k, then computes (+ 10 1) and delivers that to outk in the normal way (not through k). No unusual behavior. It evaluates to 111.
-
-What if we do instead:
-
-	(+ 100 (let/cc k (+ 10 (k 1))))
-	       |---------------------|
-
-This time, during the evaluation of (+ 10 (k 1)), we supply 1 to k. So then the local continuation, which delivers the value up to (+ 10 [_]) and so on, is discarded. Instead, 1 gets supplied to the continuation that was in place when let/cc was invoked. That will be (+ 100 [_]). When (+ 100 1) is evaluated, there's no more of the computation left to evaluate. So the answer here is 101.
-
-You are not restricted to calling a bound continuation only once, nor are you restricted to calling it only inside of the call/cc (or let/cc) block. For example, you can do this:
-
-	(let ([p (let/cc k (cons 1 k))])
-  	  (cons (car p) ((cdr p) (cons 2 (lambda (x) x)))))
-	; evaluates to '(2 2 . #<procedure>)
-
-What happens here? First, we capture the continuation where p is about to be assigned a value. Inside the let/cc block, we create a pair consisting of 1 and the captured continuation. This pair is bound to p.
-
-We then proceed to extract the components of the pair. The head (car) goes into the start of a tuple we're building up. To get the next piece of the tuple, we extract the second component of p (this is the bound continuation k) and we apply it to a pair consisting of 2 and the identity function. Supplying arguments to k takes us back to the point where p is about to be assigned a value. The tuple we had formerly been building, starting with 1, will no longer be accessible, because we didn't bring along any way to refer to it, and we'll never get back to the context where we supplied an argument to k.
-
-Now p gets assigned not the result of (let/cc k (cons 1 k)) again, but instead the new pair that we provided: '(2 . #&lt;identity procedure&gt;). Again we proceed to build up a tuple: we take the first element 2, then we take the second element (now the identity function) and feed it a pair consisting of 2 and the identity function; since that pair is an argument to the identity procedure, it's also the result. So our final result is a nested pair, whose first element is 2 and whose second element is the pair '(2 . #&lt;identity procedure&gt;). Racket displays this nested pair like this:
-
-	'(2 2 . #<procedure>)
-
--- end of cut --
-
Ok, so now let's see how to perform these same computations via CPS.

In the lambda evaluator:
@@ -844,221 +804,7 @@ The third example is more difficult to make work with the monadic library, becau

<!-- FIXME -->

--- cutting following section for control operators --
-
-Some callcc/letcc exercises
----------------------------
-
-Here are a series of examples from *The Seasoned Schemer*, which we recommended at the start of term. It's not necessary to have the book to follow the exercises, though if you do have it, its walkthroughs will give you useful assistance.
-
-For reminders about Scheme syntax, see [here](/assignment8/) and [here](/week1/) and [here](/translating_between_ocaml_scheme_and_haskell). Other resources are on our [[Learning Scheme]] page.
-
-Most of the examples assume the following preface:
-
-	#lang racket
-
-	(define (atom? x)
-	  (and (not (pair? x)) (not (null? x))))
-
-Now try to figure out what this function does:
-
-	(define alpha
-	  (lambda (a lst)
-	    (let/cc k ; now what will happen when k is called?
-	      (letrec ([aux (lambda (l)
-	                      (cond
-	                        [(null? l) '()]
-	                        [(eq? (car l) a) (k (aux (cdr l)))]
-	                        [else (cons (car l) (aux (cdr l)))]))])
-	        (aux lst)))))
-
-Here is [the answer](/hints/cps_hint_1), but try to figure it out for yourself.
-
-Next, try to figure out what this function does:
-
-	(define beta
-	  (lambda (lst)
-	    (let/cc k ; now what will happen when k is called?
-	      (letrec ([aux (lambda (l)
-	                      (cond
-	                        [(null? l) '()]
-	                        [(atom? (car l)) (k (car l))]
-	                        [else (begin
-	                                ; what will the value of the next line be? why is it ignored?
-	                                (aux (car l))
-	                                (aux (cdr l)))]))])
-	        (aux lst)))))
-
-Here is [the answer](/hints/cps_hint_2), but try to figure it out for yourself.
-
-Next, try to figure out what this function does:
-
-	(define gamma
-	  (lambda (a lst)
-	    (letrec ([aux (lambda (l k)
-	                    (cond
-	                      [(null? l) (k 'notfound)]
-	                      [(eq? (car l) a) (cdr l)]
-	                      [(atom? (car l)) (cons (car l) (aux (cdr l) k))]
-	                      [else
-	                       ; what happens when (car l) exists but isn't an atom?
-	                       (let ([car2 (let/cc k2 ; now what will happen when k2 is called?
-	                                     (aux (car l) k2))])
-	                         (cond
-	                           ; when will the following condition be met? what happens then?
-	                           [(eq? car2 'notfound) (cons (car l) (aux (cdr l) k))]
-	                           [else (cons car2 (cdr l))]))]))]
-	             [lst2 (let/cc k1 ; now what will happen when k1 is called?
-	                     (aux lst k1))])
-	      (cond
-	        ; when will the following condition be met?
-	        [(eq? lst2 'notfound) lst]
-	        [else lst2]))))
-
-Here is [the answer](/hints/cps_hint_3), but try to figure it out for yourself.
-
-Here is the hardest example. Try to figure out what this function does:
-
-	(define delta
-	  (letrec ([yield (lambda (x) x)]
-	           [resume (lambda (x) x)]
-	           [walk (lambda (l)
-	                   (cond
-	                     ; is this the only case where walk returns a non-atom?
-	                     [(null? l) '()]
-	                     [(atom? (car l)) (begin
-	                                        (let/cc k2 (begin
-	                                          (set! resume k2) ; now what will happen when resume is called?
-	                                          ; when the next line is executed, what will yield be bound to?
-	                                          (yield (car l))))
-	                                        ; when will the next line be executed?
-	                                        (walk (cdr l)))]
-	                     [else (begin
-	                             ; what will the value of the next line be? why is it ignored?
-	                             (walk (car l))
-	                             (walk (cdr l)))]))]
-	           [next (lambda () ; next is a thunk
-	                   (let/cc k3 (begin
-	                     (set! yield k3) ; now what will happen when yield is called?
-	                     ; when the next line is executed, what will resume be bound to?
-	                     (resume 'blah))))]
-	           [check (lambda (prev)
-	                    (let ([n (next)])
-	                      (cond
-	                        [(eq? n prev) #t]
-	                        [(atom? n) (check n)]
-	                        ; when will n fail to be an atom?
-	                        [else #f])))])
-	    (lambda (lst)
-	      (let ([fst (let/cc k1 (begin
-	                   (set! yield k1) ; now what will happen when yield is called?
-	                   (walk lst)
-	                   ; when will the next line be executed?
-	                   (yield '())))])
-	        (cond
-	          [(atom? fst) (check fst)]
-	          ; when will fst fail to be an atom?
-	          [else #f])
-	        ))))
-
-Here is [the answer](/hints/cps_hint_4), but again, first try to figure it out for yourself.
-
-
-Delimited control operators
-===========================
-
-Here again is the CPS transform for callcc:
-
- 	[callcc (\k. body)] = \outk. (\k. [body] outk) (\v localk. outk v)
-
-callcc is what's known as an *undelimited control operator*. That is, the continuations outk that get bound into our ks include all the code from the call/cc ... out to *and including* the end of the program. Calling such a continuation will never return any value to the call site.
-
-(See the technique employed in the delta example above, with the (begin (let/cc k2 ...) ...), for a work-around. Also, if you've got a copy of *The Seasoned Schemer*, see the comparison of let/cc vs. "collector-using" (that is, partly CPS) functions at pp. 155-164.)
-
-Often it's more useful to use a different pattern, where we instead capture only the code from the invocation of our control operator out to a certain boundary, not including the end of the program. These are called *delimited control operators*. A variety of these have been formulated. The best-behaved, from where we're coming from, is the pair reset and shift. reset sets the boundary, and shift binds the continuation from the position where it's invoked out to that boundary.
-
-It works like this:
-
-	(1) outer code
-	------- reset -------
-	| (2)               |
-	| +----shift k ---+ |
-	| | (3)           | |
-	| |               | |
-	| |               | |
-	| +---------------+ |
-	| (4)               |
-	+-------------------+
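-For readers following along in OCaml rather than Scheme, the
-reset/shift pattern can be sketched with the continuation monad.  This
-encoding and its names are my own---a rough model of the operators'
-behavior on these examples, not a faithful implementation:

```ocaml
(* A monadic sketch of reset/shift: a value of type ('a, 'r) cont is a
   function from a continuation to a final result. *)
type ('a, 'r) cont = ('a -> 'r) -> 'r

let unit (a : 'a) : ('a, 'r) cont = fun k -> k a
let bind (u : ('a, 'r) cont) (f : 'a -> ('b, 'r) cont) : ('b, 'r) cont =
  fun k -> u (fun a -> f a k)

(* reset runs a computation against the identity continuation: the boundary *)
let reset (m : ('r, 'r) cont) : 'r = m (fun x -> x)

(* shift hands the continuation up to the nearest reset to its body *)
let shift (h : ('a -> 'r) -> ('r, 'r) cont) : ('a, 'r) cont =
  fun k -> h k (fun x -> x)

(* (+ 100 (reset (+ 10 (shift k (k 1))))): the captured k adds 10 *)
let example1 = 100 + reset (bind (shift (fun k -> unit (k 1)))
                              (fun v -> unit (10 + v)))
(* example1 = 111 *)

(* (+ 100 (reset (+ 10 (shift k 1)))): k is discarded, so the 10 is lost *)
let example2 = 100 + reset (bind (shift (fun _ -> unit 1))
                              (fun v -> unit (10 + v)))
(* example2 = 101 *)
```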

(Diff truncated)

edits
diff --git a/topics/week15_continuation_applications.mdwn b/topics/week15_continuation_applications.mdwn
index 328b506..dc486d8 100644
--- a/topics/week15_continuation_applications.mdwn
+++ b/topics/week15_continuation_applications.mdwn
@@ -1,5 +1,5 @@
<!-- λ ◊ ≠ ∃ Λ ∀ ≡ α β γ ρ ω φ ψ Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ⚫ 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ⇧ (U+2e17) ¢ -->
-[[!toc]]
+[[!toc levels=2]]

# Applications of continuations to natural language

@@ -599,7 +599,20 @@ spontaneous resets, as long as the types match the following recipe:
LOWER (---:---) == g[p]:α
p   S

-This will be easiest to explain by presenting our first complete
-example from natural language:
-
+At this point, it should be clear how the approach in the seminar
+relates to the system developed in Barker and Shan 2014.  Many
+applications of continuations to natural language are developed in
+detail there, including
+
+* Scope-taking
+* Quantificational binding
+* Weak crossover
+* Generalized coordination
+* Dynamic binding
+* WH-movement as delayed binding
+* Semantic reconstruction effects
+* Linear order effects in negative polarity licensing
+* Donkey anaphora
+
+and much more.


movements
diff --git a/topics/_cps.mdwn b/topics/_cps.mdwn
deleted file mode 100644
index 125bc3a..0000000
--- a/topics/_cps.mdwn
+++ /dev/null
@@ -1,3 +0,0 @@
-**Note to Chris**: [[don't forget this material to be merged in somehow|/topics/_cps_and_continuation_operators]]. I marked where I cut some material to put into week13_control_operators, but that page is still a work in progress in my browser...
-
-
diff --git a/topics/_cps_and_continuation_operators.mdwn b/topics/_cps_and_continuation_operators.mdwn
deleted file mode 100644
index f09f336..0000000
--- a/topics/_cps_and_continuation_operators.mdwn
+++ /dev/null
@@ -1,2 +0,0 @@
-[[!toc]]
-
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
deleted file mode 100644
index 328b506..0000000
--- a/topics/_week15_continuation_applications.mdwn
+++ /dev/null
@@ -1,605 +0,0 @@
-<!-- λ ◊ ≠ ∃ Λ ∀ ≡ α β γ ρ ω φ ψ Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ⚫ 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ⇧ (U+2e17) ¢ -->
-[[!toc]]
-
-# Applications of continuations to natural language
-
-We've seen a number of applications of monads to natural language,
-including presupposition projection, binding, intensionality, and the
-dynamics of the GSV fragment.
-
-In the past couple of weeks, we've introduced continuations, first as
-a functional programming technique, then in terms of list and tree
-zippers, then as a monad.  In this lecture, we will generalize
-continuations slightly beyond a monad, and then begin to outline some
-of the applications of the generalized continuations.
-
-Many (though not all) of the applications are discussed in detail in
-Barker and Shan 2014, *Continuations in Natural Language*, OUP.
-
-To review, in terms of list zippers, the continuation of a focused
-element in the list is the front part of the list.
-
-    list zipper for the list [a;b;c;d;e;f] with focus on d:
-
-        ([c;b;a], [d;e;f])
-         -------
-     defunctionalized
-     continuation
-
-In terms of tree zippers, the continuation is the entire context of
-the focused element--the entire rest of the tree.
-
-[drawing of a tree zipper]
-
-We explored continuations first in a list setting, then in a tree
-setting, using the doubling task as an example.
-
-    "abSd" ~~> "ababd"
-    "ab#deSfg" ~~> "abdedefg"
-
-The "S" functions like a shifty operator, and "#" functions like a reset.
-
-Although the list version of the doubling task was easy to understand
-thoroughly, the tree version was significantly more challenging.  In
-particular, it remained unclear why
-
-    "aScSe" ~~> "aacaceecaacaceecee"
-
-We'll burn through that conceptual fog today by learning more about
-how to work with continuations.
-
-The natural thing to try would have been to defunctionalize the
-continuation-based solution using a tree zipper.  But that would not
-have been easy, since the natural way to implement the doubling
-behavior of the shifty operator would have been to simply copy the
-context provided by the zipper.  This would have produced two
-uncoordinated copies of the other shifty operator, and we'd have been
-in the situation described in class of having a reduction strategy
-that never reduced the number of shifty operators below 2.  The
-limitation is that zippers by themselves don't provide a natural way
-to establish a dependency between two distant elements of a data
-structure.  (There are ways around this limitation of tree zippers,
-but they are essentially equivalent to the technique given just
-below.)
-
-Instead, we'll re-interpret what the continuation monad was doing
-in more or less defunctionalized terms by using Quantifier Raising, a
-technique from linguistics.
-
-But first, let's motivate quantifier scope as a linguistic application.
-
-# The primary application of continuations to natural language: scope-taking
-
-We have seen that continuations allow a deeply-embedded element to
-take control over (a portion of) the entire computation that contains
-it.  In natural language semantics, this is exactly what it means for
-a scope-taking expression to take scope.
-
-    1. [Ann put a copy of [everyone]'s homeworks in her briefcase]
-
-    2. For every x, [Ann put a copy of x's homeworks in her briefcase]
-
-The sentence in (1) can be paraphrased as in (2), in which the
-quantificational DP *everyone* takes scope over the rest of the sentence.
-Even if you suspect that there could be an analysis of (2) on which
-"every student's term paper" could denote some kind of mereological
-fusion of a set of papers, it is much more difficult to be satisfied
-with a referential analysis when *every student* is replaced with
-*no student*, or *fewer than three students*, and so on---see any
-semantics text book for abundant discussion.
-
-We can arrive at an analysis by expressing the meaning of
-a quantificational DP such as *everyone* using continuations:
-
-    3. everyone = shift (\k.∀x.kx)
-
-Assuming there is an implicit reset at the top of the sentence (we'll
-explicitly address determining where there is or isn't a reset), the
-reduction rules for shift will apply the handler function (\k.∀x.kx)
-to the remainder of the sentence after abstracting over the position
-of the shift expression:
-
-    [Ann put a copy of [shift (\k.∀x.kx)]'s homeworks in her briefcase]
-    ~~> (\k.∀x.kx) (\v. Ann put a copy of v's homeworks in her briefcase)
-    ~~> ∀x. Ann put a copy of x's homeworks in her briefcase
-
-(To be a bit pedantic, this reduction sequence is more suitable for
-shift0 than for shift, but we're not being fussy here about subflavors
-of shifty operators.)
-
-The standard technique for handling scope-taking in linguistics is
-Quantifier Raising (QR).  As you might suppose, the rule for Quantifier
-Raising closely resembles the reduction rule for shift:
-
-    Quantifier Raising: given a sentence of the form
-
-             [... [QDP] ...],
-
-    build a new sentence of the form
-
-    [QDP (\x.[... [x] ...])].
-
-Here, QDP is a scope-taking quantificational DP.
-
-Just to emphasize the similarity between QR and shift, we can use QR
-to provide insight into the tree version of the doubling task that
-mystified us earlier.  Here's the starting point:
-
-<!--
-\tree (. (a)((S)((d)((S)(e)))))
--->
-
-<pre>
-  .
-__|___
-|    |
-a  __|___
-   |    |
-   S  __|__
-      |   |
-      d  _|__
-         |  |
-         S  e
-</pre>
-
-First we QR the lower shift operator, replacing it with a variable and
-abstracting over that variable.
-
-<!--
-\tree (. (S) ((\\x) ((a)((S)((d)((x)(e)))))))
--->
-
-<pre>
-   .
-___|___
-|     |
-S  ___|___
-   |     |
-   \x  __|___
-       |    |
-       a  __|___
-          |    |
-          S  __|__
-             |   |
-             d  _|__
-                |  |
-                x  e
-</pre>
-
-Next, we QR the upper shift operator
-
-<!--
-\tree (. (S) ((\\y) ((S) ((\\x) ((a)((y)((d)((x)(e)))))))))
--->
-
-<pre>
-   .
-___|___

(Diff truncated)

fixes
diff --git a/topics/_cps.mdwn b/topics/_cps.mdwn
index 3873215..125bc3a 100644
--- a/topics/_cps.mdwn
+++ b/topics/_cps.mdwn
@@ -1,289 +1,3 @@
**Note to Chris**: [[don't forget this material to be merged in somehow|/topics/_cps_and_continuation_operators]]. I marked where I cut some material to put into week13_control_operators, but that page is still a work in progress in my browser...

-Gaining control over order of evaluation
-----------------------------------------
-
-We know that evaluation order matters.  We're beginning to learn how
-to gain some control over order of evaluation (think of Jim's abort handler).
-We continue to reason about order of evaluation.
-
-A lucid discussion of evaluation order in the
-context of the lambda calculus can be found here:
-[Sestoft: Demonstrating Lambda Calculus Reduction](http://www.itu.dk/~sestoft/papers/mfps2001-sestoft.pdf).
-Sestoft also provides a lovely on-line lambda evaluator:
-[Sestoft: Lambda calculus reduction workbench](http://www.itu.dk/~sestoft/lamreduce/index.html),
-which allows you to select multiple evaluation strategies,
-and to see reductions happen step by step.
-
-Evaluation order matters
-------------------------
-
-We've seen this many times.  For instance, consider the following
-reductions.  It will be convenient to use the abbreviation w =
-\x.xx.  I'll
-indicate which lambda is about to be reduced with a * underneath:
-
-<pre>
-(\x.y)(ww)
- *
-y
-</pre>
-
-Done!  We have a normal form.  But if we reduce using a different
-strategy, things go wrong:
-
-<pre>
-(\x.y)(ww) =
-(\x.y)((\x.xx)w) =
-        *
-(\x.y)(ww) =
-(\x.y)((\x.xx)w) =
-        *
-(\x.y)(ww)
-</pre>
-
-Etc.
-
-As a second reminder of when evaluation order matters, consider using
-Y = \f.(\h.f(hh))(\h.f(hh)) as a fixed point combinator to define a recursive function:
-
-<pre>
-Y (\f n. blah) =
-(\f.(\h.f(hh))(\h.f(hh))) (\f n. blah)
-     *
-(\f.f((\h.f(hh))(\h.f(hh)))) (\f n. blah)
-       *
-(\f.f(f((\h.f(hh))(\h.f(hh))))) (\f n. blah)
-         *
-(\f.f(f(f((\h.f(hh))(\h.f(hh)))))) (\f n. blah)
-</pre>
-
-And we never get the recursion off the ground.
-
-
-Using a Continuation Passing Style transform to control order of evaluation
----------------------------------------------------------------------------
-
-We'll present a technique for controlling evaluation order by transforming a lambda term
-using a Continuation Passing Style transform (CPS), then we'll explore
-what the CPS is doing, and how.
-
-In order for the CPS to work, we have to adopt a new restriction on
-beta reduction: beta reduction does not occur underneath a lambda.
-That is, (\x.y)z reduces to y, but \u.(\x.y)z does not reduce to
-\u.y, because the \u protects the redex in the body from
-reduction.  (In this context, a "redex" is a part of a term that matches
-the pattern ...((\xM)N)..., i.e., something that can potentially be
-the target of beta reduction.)
-
-Start with a simple form that has two different reduction paths:
-
-reducing the leftmost lambda first: (\x.y)((\x.z)u)  ~~> y
-
-reducing the rightmost lambda first: (\x.y)((\x.z)u)  ~~> (\x.y)z ~~> y
-
-After using the following call-by-name CPS transform---and assuming
-that we never evaluate redexes protected by a lambda---only the first
-reduction path will be available: we will have gained control over the
-order in which beta reductions are allowed to be performed.
-
-Here's the CPS transform defined:
-
-    [x] = x
-    [\xM] = \k.k(\x[M])
-    [MN] = \k.[M](\m.m[N]k)
-
-Here's the result of applying the transform to our simple example:
-
-    [(\x.y)((\x.z)u)] =
-    \k.[\x.y](\m.m[(\x.z)u]k) =
-    \k.(\k.k(\x.[y]))(\m.m(\k.[\x.z](\m.m[u]k))k) =
-    \k.(\k.k(\x.y))(\m.m(\k.(\k.k(\x.z))(\m.muk))k)
-
-Because the initial \k protects (i.e., takes scope over) the entire
-transformed term, we can't perform any reductions.  In order to watch
-the computation unfold, we have to apply the transformed term to a
-trivial continuation, usually the identity function I = \x.x.
-
-    [(\x.y)((\x.z)u)] I =
-    (\k.[\x.y](\m.m[(\x.z)u]k)) I
-     *
-    [\x.y](\m.m[(\x.z)u] I) =
-    (\k.k(\x.y))(\m.m[(\x.z)u] I)
-     *           *
-    (\x.y)[(\x.z)u] I           --A--
-     *
-    y I
-
-The application to I unlocks the leftmost functor.  Because that
-functor (\x.y) throws away its argument (consider the reduction in the
-line marked (A)), we never need to expand the
-CPS transform of the argument.  This means that we never bother to
-reduce redexes inside the argument.
-
-Compare with a call-by-value xform:
-
-    {x} = \k.kx
-    {\aM} = \k.k(\a{M})
-    {MN} = \k.{M}(\m.{N}(\n.mnk))
-
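Exercise 3 below asks for an OCaml implementation of the transform; as a cross-check, here is a sketch of both the CBN and the CBV transforms in Python over a tuple-based AST. The helper names (`cbn`, `cbv`, `show`) are mine, and the sketch assumes source terms avoid the fresh variables k, m, n:

```python
# Lambda terms as tuples: ('var', x), ('abs', x, body), ('app', M, N).
def cbn(t):
    tag = t[0]
    if tag == 'var':                      # [x] = x
        return t
    if tag == 'abs':                      # [\xM] = \k.k(\x[M])
        return ('abs', 'k', ('app', ('var', 'k'), ('abs', t[1], cbn(t[2]))))
    # [MN] = \k.[M](\m.m[N]k)
    return ('abs', 'k', ('app', cbn(t[1]),
            ('abs', 'm', ('app', ('app', ('var', 'm'), cbn(t[2])),
                          ('var', 'k')))))

def cbv(t):
    tag = t[0]
    if tag == 'var':                      # {x} = \k.kx
        return ('abs', 'k', ('app', ('var', 'k'), t))
    if tag == 'abs':                      # {\aM} = \k.k(\a{M})
        return ('abs', 'k', ('app', ('var', 'k'), ('abs', t[1], cbv(t[2]))))
    # {MN} = \k.{M}(\m.{N}(\n.mnk))
    return ('abs', 'k', ('app', cbv(t[1]), ('abs', 'm',
            ('app', cbv(t[2]), ('abs', 'n',
            ('app', ('app', ('var', 'm'), ('var', 'n')), ('var', 'k')))))))

def show(t):
    if t[0] == 'var': return t[1]
    if t[0] == 'abs': return '\\' + t[1] + '.' + show(t[2])
    return '(' + show(t[1]) + ' ' + show(t[2]) + ')'

# (\x.y)((\x.z)u), the running example:
term = ('app', ('abs', 'x', ('var', 'y')),
               ('app', ('abs', 'x', ('var', 'z')), ('var', 'u')))
print(show(cbn(term)))
print(show(cbv(term)))
```

Printing `show(cbn(term))` reproduces, up to parenthesization, the transform computed by hand just below.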
-This time the reduction unfolds in a different manner:
-
-    {(\x.y)((\x.z)u)} I =
-    (\k.{\x.y}(\m.{(\x.z)u}(\n.mnk))) I
-     *
-    {\x.y}(\m.{(\x.z)u}(\n.mnI)) =
-    (\k.k(\x.{y}))(\m.{(\x.z)u}(\n.mnI))
-     *             *
-    {(\x.z)u}(\n.(\x.{y})nI) =
-    (\k.{\x.z}(\m.{u}(\n.mnk)))(\n.(\x.{y})nI)
-     *
-    {\x.z}(\m.{u}(\n.mn(\n.(\x.{y})nI))) =
-    (\k.k(\x.{z}))(\m.{u}(\n.mn(\n.(\x.{y})nI)))
-     *             *
-    {u}(\n.(\x.{z})n(\n.(\x.{y})nI)) =
-    (\k.ku)(\n.(\x.{z})n(\n.(\x.{y})nI))
-     *      *
-    (\x.{z})u(\n.(\x.{y})nI)       --A--
-     *
-    {z}(\n.(\x.{y})nI) =
-    (\k.kz)(\n.(\x.{y})nI)
-     *      *
-    (\x.{y})zI
-     *
-    {y}I =
-    (\k.ky)I
-     *
-    I y
-
-In this case, the argument does get evaluated: consider the reduction
-in the line marked (A).
-
-Both xforms make the following guarantee: as long as redexes
-underneath a lambda are never evaluated, there will be at most one
-reduction available at any step in the evaluation.
-That is, all choice is removed from the evaluation process.
-
-Now let's verify that the CBN CPS avoids the infinite reduction path
-discussed above (remember that w = \x.xx):
-
-    [(\x.y)(ww)] I =
-    (\k.[\x.y](\m.m[ww]k)) I
-     *
-    [\x.y](\m.m[ww]I) =
-    (\k.k(\x.y))(\m.m[ww]I)
-     *             *
-    (\x.y)[ww]I
-     *
-    y I
-
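To see this concretely, the CPS'd term can be hand-encoded as Python closures. Python's own strict evaluation plays the role of the unlucky strategy: it diverges on the raw term but terminates on the CBN-CPS'd one. Treating the free variable y as the trivial computation \k.k "y" is an assumption made just for this demo:

```python
I = lambda a: a

# Untransformed: Python is call-by-value, so (\x.y)(ww) evaluates ww
# first and loops until the recursion limit trips.
w = lambda x: x(x)
try:
    (lambda x: "y")(w(w))
    diverged = False
except RecursionError:
    diverged = True

# CBN CPS transform, encoded directly:
# [w] = \k.k(\x.[xx]) where [xx] = \k'.x(\m.m x k') and [x] = x.
w_cps = lambda k: k(lambda x: (lambda k2: x(lambda m: m(x)(k2))))
ww_cps = lambda k: w_cps(lambda m: m(w_cps)(k))
# [\x.y], with free y modeled as \k.k "y" (assumption for the demo).
const_y = lambda k: k(lambda x: (lambda k2: k2("y")))
term = lambda k: const_y(lambda m: m(ww_cps)(k))    # [(\x.y)(ww)]
print(term(I))
```

The transformed term passes `ww_cps` around as an unevaluated value and never invokes it, so the divergent redex is never reduced.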
-
-Questions and exercises:
-
-1. Prove that {(\x.y)(ww)} does not terminate.
-
-2. Why is the CBN xform for variables [x] = x instead of something
-involving kappas (i.e., k's)?
-
-3. Write an OCaml function that takes a lambda term and returns a
-CPS-xformed lambda term.  You can use the following data declaration:
-
-    type form = Var of char | Abs of char * form | App of form * form;;
-
-4. The discussion above talks about the "leftmost" redex, or the
-"rightmost".  But these words apply accurately only in a special set

(Diff truncated)

name change
diff --git a/topics/_week14_continuations.mdwn b/topics/_week14_continuations.mdwn
deleted file mode 100644
index 42f2c11..0000000
--- a/topics/_week14_continuations.mdwn
+++ /dev/null
@@ -1,277 +0,0 @@
-<!-- λ ◊ ≠ ∃ Λ ∀ ≡ α β γ ρ ω φ ψ Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ⚫ 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ⇧ (U+2e17) ¢ -->
-
-[[!toc]]
-
-# Continuations
-
-Last week we saw how to turn a list zipper into a continuation-based
-list processor.  The function computed something we called "the task",
-which was a simplified language involving control operators.
-
-    abSdS ~~> ababdS ~~> ababdababd
-
-The task is to process the list from left to right, and at each "S",
-double the list so far.  Here, "S" is a kind of control operator, and
-captures the entire previous computation.  We also considered a
-variant in which '#' delimited the portion of the list to be copied:
-
-    ab#deSfg ~~> abdedefg
-
-In this variant, "S" and "#" correspond to shift and reset.
-
-The expository logic of starting with this simplified task is the
-notion that as lists are to trees, so is this task to full-blown
-continuations.  So to the extent that, say, list zippers are easier to
-grasp than tree zippers, the task is easier to grasp than full
-continuations.
-
-We then presented CPS transforms, and demonstrated how they provide
-an order-independent analysis of order of evaluation.
-
-In order to continue to explore continuations, we will proceed in the
-following fashion: we introduce the traditional continuation monad,
-and show how it solves the task, then generalize the task to
-include doubling of both the left and the right context.
-
-## The continuation monad
-
-In order to build a monad, we start with a Kleisli arrow.
-
-    Continuation monad: types: given some ρ, Mα => (α -> ρ) -> ρ
-                        ⇧ == \ak.ka : a -> Ma
-                        bind == \ufk. u(\x.fxk)
-
-We'll first show that this monad solves the task, then we'll consider
-the monad in more detail.
-
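As a quick sanity check, here is this Kleisli arrow rendered as Python closures, together with the left-identity law bind (⇧a) f == f a (the names `mid`, `bind`, `f` are mine):

```python
# mid (⇧) and bind for the continuation monad, as Python closures.
mid = lambda a: lambda k: k(a)                              # ⇧ == \ak.ka
bind = lambda u: lambda f: lambda k: u(lambda x: f(x)(k))   # \ufk.u(\x.fxk)

I = lambda x: x
f = lambda s: mid(s + "!")

# Left identity: bind (mid a) f and f a agree once run with a continuation.
print(bind(mid("abc"))(f)(I))   # "abc!"
```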
-The unmonadized computation (without the shifty "S" operator) is
-
-    t1 = + a (+ b (+ c d)) ~~> abcd
-
-where "+" is string concatenation and the symbol a is shorthand for
-the string "a".
-
-In order to use the continuation monad to solve the list task,
-we choose α = ρ = [Char].  So "abcd" is a list of characters, and
-a boxed list has type M[Char] == ([Char] -> [Char]) -> [Char].
-
-Writing ¢ in between its arguments, t1 corresponds to the following
-
-    mt1 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ ⇧c ¢ ⇧d))
-
-We have to lift each functor (+) and each object (e.g., "b") into the
-monad using mid (⇧), then combine them using monadic function
-application, where
-
-    ¢ M N = \k -> M (\f -> N (\x -> k(f x)))
-
-for the continuation monad.
-
-The way in which we extract a value from a continuation box is by
-applying it to a continuation; often, it is convenient to supply the
-trivial continuation, the identity function \k.k = I.  So in fact,
-
-    t1 = mt1 I
-
-That is, the original computation is the monadic version applied to
-the trivial continuation.
-
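Here is a sketch of this in Python, with ¢ written as a curried function `ap` and + as curried string concatenation (the names are mine; the lambda terms are transcribed directly):

```python
mid = lambda a: lambda k: k(a)                                        # ⇧
ap  = lambda M: lambda N: lambda k: M(lambda f: N(lambda x: k(f(x)))) # ¢
cat = lambda x: lambda y: x + y   # the functor +
I   = lambda x: x                 # trivial continuation

# mt1 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ ⇧c ¢ ⇧d))
mt1 = ap(ap(mid(cat))(mid("a")))(
          ap(ap(mid(cat))(mid("b")))(
              ap(ap(mid(cat))(mid("c")))(mid("d"))))
print(mt1(I))   # "abcd", the original computation t1
```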
-We can now add a shifty operator.  We would like to replace just the
-one element, and we will do just that in a moment; but in order to
-simulate the original task, we'll have to take a different strategy
-initially.  We'll start by imagining a shift operator that combined
-direction with the tail of the list, like this:
-
-    mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))
-
-We can now define a shift operator to perform the work of "S":
-
-    shift u k = u(\s.k(ks))
-
-Shift takes two arguments: a boxed string u of type M[Char],
-and a string continuation k of type [Char] -> [Char].  Since u is
-the argument to shift, it represents the tail of the list after the
-shift operator.  Then k is the continuation of the expression headed
-by shift.  So in order to execute the task, shift needs to invoke k
-twice.  The expression \s.k(ks) is just the composition of k with itself.
-
-    mt2 I == "ababd"
-
-just as desired.
-
-Let's just make sure that we have the left-to-right evaluation we were
-hoping for by evaluating "abSdeSf":
-
-    mt3 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ (⇧+ ¢ ⇧d ¢ (⇧+ ¢ ⇧e ¢ (shift ⇧f)))))
-
-Then
-
-    mt3 I = "ababdeababdef"   -- structure: (ababde)(ababde)f
-
-
-As expected.
-
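Both claims, mt2 I == "ababd" and mt3 I == "ababdeababdef", can be machine-checked in the same Python encoding. Here `shift ¢ ⇧d` is read as plain application of shift to the boxed tail ⇧d, which is one way to make the types come out:

```python
mid = lambda a: lambda k: k(a)                                        # ⇧
ap  = lambda M: lambda N: lambda k: M(lambda f: N(lambda x: k(f(x)))) # ¢
cat = lambda x: lambda y: x + y
I   = lambda x: x

# shift u k = u(\s.k(ks)): double the continuation of the shift site.
shift = lambda u: lambda k: u(lambda s: k(k(s)))

mt2 = ap(ap(mid(cat))(mid("a")))(
          ap(ap(mid(cat))(mid("b")))(shift(mid("d"))))
mt3 = ap(ap(mid(cat))(mid("a")))(
          ap(ap(mid(cat))(mid("b")))(shift(
              ap(ap(mid(cat))(mid("d")))(
                  ap(ap(mid(cat))(mid("e")))(shift(mid("f")))))))
print(mt2(I), mt3(I))   # ababd ababdeababdef
```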
-For a reset operator #, we can have
-
-    # u k = k(u(\k.k))   -- ex.: ab#deSf ~~> abdedef
-
-The reset operator executes the remainder of the list separately, by
-giving it the trivial continuation (\k.k), then feeds the result to
-the continuation corresponding to the position of the reset.
-
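A sketch of the reset operator in the same Python encoding, checking the example ab#deSf ~~> abdedef (the structure chosen for the monadic term is my reconstruction):

```python
mid = lambda a: lambda k: k(a)                                        # ⇧
ap  = lambda M: lambda N: lambda k: M(lambda f: N(lambda x: k(f(x)))) # ¢
cat = lambda x: lambda y: x + y
I   = lambda x: x
shift = lambda u: lambda k: u(lambda s: k(k(s)))
# # u k = k(u(\k.k)): run the delimited remainder u with the trivial
# continuation, then hand its result to the outer continuation k.
reset = lambda u: lambda k: k(u(I))

# ab#deSf: the reset sits after "ab" and delimits "deSf".
mt = ap(ap(mid(cat))(mid("a")))(
         ap(ap(mid(cat))(mid("b")))(reset(
             ap(ap(mid(cat))(mid("d")))(
                 ap(ap(mid(cat))(mid("e")))(shift(mid("f")))))))
print(mt(I))   # abdedef
```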
-So the continuation monad solves the list task using continuations in
-a way that conforms to our by-now familiar strategy of lifting a
-computation into a monad, and then writing a few key functions (shift,
-reset) that exploit the power of the monad.
-
-## Generalizing to the tree doubling task
-
-Now we should consider what happens when we write a shift operator
-that takes the place of a single letter.
-
-    mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))
-    mt4 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ shift' ¢ ⇧d))
-
-Instead of mt2 (copied from above), we have mt4.  So now the type of a
-leaf (a boxed string, type M[Char]) is the same as the type of the new
-shift operator, shift'.
-
-    shift' = \k.k(k"")
-
-This shift operator takes a continuation k of type [Char]->[Char], and
-invokes it twice.  Since k requires an argument of type [Char], we
-need to use the first invocation of k to construct a [Char]; we do
-this by feeding it a string.  Since the task does not replace the
-shift operator with any marker, we give the empty string "" as the
-argument.
-
-But now the new shift operator captures more than just the preceding
-part of the construction---it captures the entire context, including
-the portion of the sequence that follows it.  That is,
-
-    mt4 I = "ababdd"
-
-We have replaced "S" in "abSd" with "ab_d", where the underbar will be
-replaced with the empty string supplied in the definition of shift'.
-Crucially, not only is the prefix "ab" duplicated, so is the suffix
-"d".
-
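In the Python encoding this checks out: shift', unlike shift, has the type of a boxed string and sits where a leaf would, and both the prefix and the suffix get doubled:

```python
mid = lambda a: lambda k: k(a)                                        # ⇧
ap  = lambda M: lambda N: lambda k: M(lambda f: N(lambda x: k(f(x)))) # ¢
cat = lambda x: lambda y: x + y
I   = lambda x: x
# shift' = \k.k(k""): same type as a boxed string, M[Char].
shiftp = lambda k: k(k(""))

# mt4 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ shift' ¢ ⇧d))
mt4 = ap(ap(mid(cat))(mid("a")))(
          ap(ap(mid(cat))(mid("b")))(
              ap(ap(mid(cat))(shiftp))(mid("d"))))
print(mt4(I))   # ababdd: prefix "ab" and suffix "d" are both doubled
```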
-Things get interesting when we have more than one operator in the
-initial list.  What should we expect if we start with "aScSe"?
-If we assume that when we evaluate each S, all the other S's become
-temporarily inert, we expect a reduction path like
-
-    aScSe ~~> aacSecSe
-
-But note that the output has just as many S's as the input--if that is
-what our reduction strategy delivers, then any initial string with
-more than one S will never reach a normal form.
-
-But that's not what the continuation operator shift' delivers.
-
-    mt5 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ shift' ¢ (⇧+ ¢ ⇧c ¢ (⇧+ ¢ shift' ¢ ⇧e)))
-
-    mt5 I = "aacaceecaacaceecee" -- structure: a(acacee)ca(acacee)cee
-
-Huh?
-
-This is considerably harder to understand than the original list task.
-The key is figuring out in each case what function the argument k to
-the shift operator gets bound to.
-
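The surprising value of mt5 I can at least be confirmed mechanically with the same encoding (with the final leaf lifted as ⇧e, as the types require):

```python
mid = lambda a: lambda k: k(a)                                        # ⇧
ap  = lambda M: lambda N: lambda k: M(lambda f: N(lambda x: k(f(x)))) # ¢
cat = lambda x: lambda y: x + y
I   = lambda x: x
shiftp = lambda k: k(k(""))   # shift' = \k.k(k"")

# mt5 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ shift' ¢ (⇧+ ¢ ⇧c ¢ (⇧+ ¢ shift' ¢ ⇧e)))
mt5 = ap(ap(mid(cat))(mid("a")))(
          ap(ap(mid(cat))(shiftp))(
              ap(ap(mid(cat))(mid("c")))(
                  ap(ap(mid(cat))(shiftp))(mid("e")))))
print(mt5(I))   # aacaceecaacaceecee
```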
-Let's go back to a simple one-shift example, "aSc".  Let's trace what
-the shift' operator sees as its argument k by replacing ⇧ and ¢ with
-their definitions:
-
-<pre>
-      ⇧+ ¢ ⇧a ¢ (⇧+ ¢ shift' ¢ ⇧c) I
-   = \k.⇧+(\f.⇧a(\x.k(fx))) ¢ (⇧+ ¢ shift' ¢ ⇧c) I
-   = \k.(\k.⇧+(\f.⇧a(\x.k(fx))))(\f.(⇧+ ¢ shift' ¢ ⇧c)(\x.k(fx))) I
-   ~~> (\k.⇧+(\f.⇧a(\x.k(fx))))(\f.(⇧+ ¢ shift' ¢ ⇧c)(\x.I(fx)))
-   ~~> (\k.⇧+(\f.⇧a(\x.k(fx))))(\f.(⇧+ ¢ shift' ¢ ⇧c)(f))
-   ~~> ⇧+(\f.⇧a(\x.(\f.(⇧+ ¢ shift' ¢ ⇧c)(f))(fx))))

(Diff truncated)

edits
diff --git a/topics/_week14_continuations.mdwn b/topics/_week14_continuations.mdwn
index 0b99065..42f2c11 100644
--- a/topics/_week14_continuations.mdwn
+++ b/topics/_week14_continuations.mdwn
@@ -65,7 +65,7 @@ We have to lift each functor (+) and each object (e.g., "b") into the
monad using mid (⇧), then combine them using monadic function
application, where

-    ¢ mf mx = \k -> mf (\f -> mx (\a -> k(f x)))
+    ¢ M N = \k -> M (\f -> N (\x -> k(f x)))

for the continuation monad.

@@ -73,17 +73,16 @@ The way in which we extract a value from a continuation box is by
applying it to a continuation; often, it is convenient to supply the
trivial continuation, the identity function \k.k = I.  So in fact,

-   t1 = mt1 I
+    t1 = mt1 I

That is, the original computation is the monadic version applied to
the trivial continuation.

-We can now imagine replacing the third element ("c") with a shifty
-operator.  We would like to replace just the one element, and we will
-do just that in a moment; but in order to simulate the original task,
-we'll have to take a different strategy initially.  We'll start by
-imagining a shift operator that combined direction with the tail of
-the list, like this:
+We can now add a shifty operator.  We would like to replace just the
+one element, and we will do just that in a moment; but in order to
+simulate the original task, we'll have to take a different strategy
+initially.  We'll start by imagining a shift operator that combined
+direction with the tail of the list, like this:

mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))

@@ -96,7 +95,7 @@ and a string continuation k of type [Char] -> [Char].  Since u is the
the argument to shift, it represents the tail of the list after the
shift operator.  Then k is the continuation of the expression headed
by shift.  So in order to execute the task, shift needs to invoke k
-twice.
+twice.  The expression \s.k(ks) is just the composition of k with itself.

mt2 I == "ababd"

@@ -118,19 +117,26 @@ For a reset operator #, we can have

# u k = k(u(\k.k))   -- ex.: ab#deSf ~~> abdedef

+The reset operator executes the remainder of the list separately, by
+giving it the trivial continuation (\k.k), then feeds the result to
+the continuation corresponding to the position of the reset.
+
So the continuation monad solves the list task using continuations in
a way that conforms to our by-now familiar strategy of lifting a
computation into a monad, and then writing a few key functions (shift,
reset) that exploit the power of the monad.

+## Generalizing to the tree doubling task
+
Now we should consider what happens when we write a shift operator
that takes the place of a single letter.

mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))
mt4 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ shift' ¢ ⇧d))

-Instead of mt2, we have mt4.  So now the type of "c" (a boxed string,
-type M[Char]) is the same as the type of the new shift operator, shift'.
+Instead of mt2 (copied from above), we have mt4.  So now the type of a
+leaf (a boxed string, type M[Char]) is the same as the type of the new
+shift operator, shift'.

shift' = \k.k(k"")

@@ -179,6 +185,7 @@ Let's go back to a simple one-shift example, "aSc".  Let's trace what
the shift' operator sees as its argument k by replacing ⇧ and ¢ with
their definitions:

+<pre>
⇧+ ¢ ⇧a ¢ (⇧+ ¢ shift' ¢ ⇧c) I
= \k.⇧+(\f.⇧a(\x.k(fx))) ¢ (⇧+ ¢ shift' ¢ ⇧c) I
= \k.(\k.⇧+(\f.⇧a(\x.k(fx))))(\f.(⇧+ ¢ shift' ¢ ⇧c)(\x.k(fx))) I
@@ -199,106 +206,14 @@ their definitions:
~~> shift(\x.⇧c(\x'.(+a)((+x)x'))))
= shift(\x.(\k.kc)(\x'.(+a)((+x)x'))))
~~> shift(\x.(+a)((+x)c))
+</pre>

So now we see what the argument of shift will be: a function k from
-strings x to the string axc.  So shift k will be k(k "") = aacc.

Ok, this is ridiculous.  We need a way to get ahead of this deluge of
-lambda conversion.  We'll adapt the notational strategy developed in
-Barker and Shan 2014:
-
-
-    \k.g(kf): (α -> ρ) -> ρ
-
-we'll write
-
-    g[]    ρ
-    --- : ---
-     f     α
-
-Then
-             []
-    mid(x) = --
-             x
-
-and
-
-    g[]    ρ      h[]    ρ    g[h[]]    ρ
-    --- : ----  ¢ --- : --- = ------ : ---
-     f    α->β     x     α     fx       β
-
-Here's the justification:
-
-        (\FXk.F(\f.X(\x.k(fx)))) (\k.g(kf)) (\k.h(kx))
-    ~~> (\Xk.(\k.g(kf))(\f.X(\x.k(fx)))) (\k.h(kx))
-    ~~> \k.(\k.g(kf))(\f.(\k.h(kx))(\x.k(fx)))
-    ~~> \k.g((\f.(\k.h(kx))(\x.k(fx)))f)
-    ~~> \k.g((\k.h(kx))(\x.k(fx)))
-    ~~> \k.g(h((\x.k(fx))x))
-    ~~> \k.g(h(k(fx)))
-
-Then
-                          (\ks.k(ks))[]
-    shift = \k.k(k("")) = -------------
-                               ""
-
-Let 2 == \ks.k(ks).
-
-so aSc lifted into the monad is
-
-    []     2[]   []
-    -- ¢ ( --- ¢ --- ) =
-    a      ""    c
-
-First, we need map2 instead of map1.  Second, the type of the shift
-operator will be a string continuation, rather than a function from
-string continuations to string continuations.
-
-(\k.k(k1))(\s.(\k.k(k2))(\r.sr))
-(\k.k(k1))(\s.(\r.sr)((\r.sr)2))
-(\k.k(k1))(\s.(\r.sr)(s2))
-(\k.k(k1))(\s.s(s2))
-(\s.s(s2))((\s.s(s2))1)
-(\s.s(s2))(1(12))
-(1(12))((1(12))2)
-
-
-
-
-
-But here's the interesting part: from the point of view of the shift
-operator, the continuation that it will be fed will be the entire rest
-of the computation.  This includes not only what has come before, but
-what will come after it as well.  That means when the continuation is
-doubled (self-composed), the result duplicates not only the prefix
-(ab ~~> abab), but also the suffix (d ~~> dd).  In some sense, then,
-continuations allow functions to see into the future!
-
-What do you expect will be the result of executing "aScSe" under the
-second perspective?  The answer to this question is not determined by
-the informal specification of the task that we've been using, since
-under the new perspective, we're copying complete (bi-directional)
-contexts rather than just left contexts.
-
-It might be natural to assume that what execution does is choose an S,
-and double its context, temporarily treating all other shift operators
-as dumb letters, then choosing a remaining S to execute.  If that was
-our interpretation of the task, then no string with more than one S
-would ever terminate, since the number S's after each reduction step
-would always be 2(n-1), where n is the number before reduction.
-
-But there is another equally natural way to answer the question.
-Assume the leftmost S is executed first.  What will the value of its
-continuation k be?  It will be a function that maps a string s to the
-result of computing ascSe, which will be ascascee.  So k(k "")
-will be k(acacee), which will result in a(acacee)ca(acacee)cee
-(the parentheses are added to show stages in the construction of the
-final result).
-
-So the continuation monad behaves in a way that we expect a
-continuation to behave.  But where did the continuation monad come
-from?  Let's consider some ways of deriving the continuation monad.
+lambda conversion.  We'll see how to understand what is going on
+when we talk about quantifier raising in the next lecture.

## Viewing Montague's PTQ as CPS

@@ -325,80 +240,6 @@ quantificational DPs, as in *John and no dog left*.  Then generalized
quantifier corresponding to the proper name *John* is the quantifier
\k.kj.

-At this point, we have a type for our Kleisli arrow and a value for
-our mid.  Given some result type (such as t in the Montague application),
-

(Diff truncated)

edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index 3bdbd2a..328b506 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -377,7 +377,7 @@ hole in it.  For instance, we might have g[x] = \forall x.P[x].
We'll use a simply-typed system with two atomic types, DP (the type of
individuals) and S (the type of truth values).

-## LIFT
+## LIFT (mid)

Then in the spirit of monadic thinking, we'll have a way of lifting an
arbitrary value into the tower system:
@@ -569,7 +569,7 @@ We'll change these arrows into left-leaning and right-leaning versions
too, according to the following scheme:

γ|δ
-    --- == γ//((α/β) \\\\ δ)
+    --- == γ//((α/β) \\ δ)
α/β

As we'll see in a minute, each of these four implications (\\, /, \\\\,
@@ -580,3 +580,26 @@ As we'll see in a minute, each of these for implications (\\, /, \\\\,
\\   argument is surrounded by the functor
//  argument surrounds the functor

+## LOWER (reset)
+
+One more ingredient: one thing that shifty continuation operators
+require is an upper bound, a reset to show the extent of the remainder
+of the computation that the shift operator captures.  In the list
+version of the doubling task, we have
+
+    "a#bdeSfg" ~~> "abdebdefg"   continuation captured: bde
+    "ab#deSfg" ~~> "abdedefg"    continuation captured:  de
+
+In programming languages, resets are encoded in the computation
+explicitly.  In natural language, resets are always invisible.
+We'll deal with this in the natural language setting by allowing
+spontaneous resets, as long as the types match the following recipe:
+
+           g[] α|S
+    LOWER (---:---) == g[p]:α
+            p   S
+
+This will be easiest to explain by presenting our first complete
+example from natural language:
+
+


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index fd25cce..3bdbd2a 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -11,17 +11,13 @@ In the past couple of weeks, we've introduced continuations, first as
a functional programming technique, then in terms of list and tree
zippers, then as a monad.  In this lecture, we will generalize
continuations slightly beyond a monad, and then begin to outline some
-of the applications of monads.  In brief, the generalization can be
-summarized in terms of types: instead of using a Kleisli arrow mapping
-a type α to a continuized type (α -> ρ) -> ρ, we'll allow the result
-types to differ, i.e., we'll map α to (α -> β) -> γ.  This will be
-crucial for some natural language applications.
+of the applications of the generalized continuations.

Many (though not all) of the applications are discussed in detail in
Barker and Shan 2014, *Continuations in Natural Language*, OUP.

-In terms of list zippers, the continuation of a focused element in
-the list is the front part of the list.
+To review, in terms of list zippers, the continuation of a focused
+element in the list is the front part of the list.

list zipper for the list [a;b;c;d;e;f] with focus on d:

@@ -33,28 +29,42 @@ the list is the front part of the list.
In terms of tree zippers, the continuation is the entire context of
the focused element--the entire rest of the tree.

-[drawing of a broken tree]
+[drawing of a tree zipper]

-Last week we had trouble computing the doubling task when there was more
-than one shifty operator after moving from a list perspective to a
-tree perspective.  That is, it remained unclear why "aScSe" was
+We explored continuations first in a list setting, then in a tree
+setting, using the doubling task as an example.

-    "aacaceecaacaceecee"
+    "abSd" ~~> "ababd"
+    "ab#deSfg" ~~> "abdedefg"

-We'll burn through that conceptual fog today.  The natural thing to
-try would have been to defunctionalize the continuation-based solution
-using a tree zipper.  But that would not have been easy, since the
-natural way to implement the doubling behavior of the shifty operator
-would have been to simply copy the context provided by the zipper.
-This would have produced two uncoordinated copies of the other shifty
-operator, and we'd have been in the situation described in class of
-having a reduction strategy that never reduced the number of shifty
-operators below 2. (There are ways around this limitation of tree zippers,
-but they are essentially equivalent to the technique given just below.)
+The "S" functions like a shifty operator, and "#" functions like a reset.
+
+Although the list version of the doubling task was easy to understand
+thoroughly, the tree version was significantly more challenging.  In
+particular, it remained unclear why
+
+    "aScSe" ~~> "aacaceecaacaceecee"
+
+We'll burn through that conceptual fog today by learning more about
+how to work with continuations.
+
+The natural thing to try would have been to defunctionalize the
+continuation-based solution using a tree zipper.  But that would not
+have been easy, since the natural way to implement the doubling
+behavior of the shifty operator would have been to simply copy the
+context provided by the zipper.  This would have produced two
+uncoordinated copies of the other shifty operator, and we'd have been
+in the situation described in class of having a reduction strategy
+that never reduced the number of shifty operators below 2.  The
+limitation is that zippers by themselves don't provide a natural way
+to establish a dependency between two distant elements of a data
+structure.  (There are ways around this limitation of tree zippers,
+but they are essentially equivalent to the technique given just
+below.)

Instead, we'll reinterpret what the continuation monad was doing
-in more or less defunctionalized terms by using Quantifier Raising, a technique
-from linguistics.
+in more or less defunctionalized terms by using Quantifier Raising, a
+technique from linguistics.

But first, motivating quantifier scope as a linguistic application.

@@ -101,13 +111,19 @@ The standard technique for handling scope-taking in linguistics is
Quantifier Raising (QR).  As you might suppose, the rule for Quantifier
Raising closely resembles the reduction rule for shift:

-    Quantifier Raising: given a sentence [... [QDP] ...], build a new
-    sentence [QDP (\x.[... [x] ...])].
+    Quantifier Raising: given a sentence of the form
+
+             [... [QDP] ...],
+
+    build a new sentence of the form
+
+    [QDP (\x.[... [x] ...])].

Here, QDP is a scope-taking quantificational DP.

Just to emphasize the similarity between QR and shift, we can use QR
-to provide insight into the tree task that mystified us earlier.
+to provide insight into the tree version of the doubling task that
+mystified us earlier.  Here's the starting point:

<!--
\tree (. (a)((S)((d)((S)(e)))))
@@ -269,7 +285,8 @@ Three lessons:

* Generalizing from one-sided, list-based continuation
operators to two-sided, tree-based continuation operators is a
-  dramatic increase in power and complexity.
+  dramatic increase in power and complexity.  (De Groote's dynamic
+  Montague semantics continuations are the one-sided, list-based variety.)

* Operators that
compose multiple copies of a context can be hard to understand
@@ -287,13 +304,14 @@ involving control operators such as shift and reset: using a CPS
transform, lifting into a continuation monad, and by using QR.

QR is the traditional system in linguistics, but it will not be
-adequate for us in general.  The reason has to do with order.  As
-we've discussed, especially with respect to the CPS transform,
-continuations allow fine-grained control over the order of evaluation.
-One of the main empirical claims of Barker and Shan 2014 is that
-natural language is sensitive to evaluation order.  Unlike other
-presentations of continuations, QR does not lend itself to reasoning
-about evaluation order, so we will need to use a different strategy.
+adequate for us in general.  The reason has to do with evaluation
+order.  As we've discussed, especially with respect to the CPS
+transform, continuations allow fine-grained control over the order of
+evaluation.  One of the main empirical claims of Barker and Shan 2014
+is that natural language is sensitive to evaluation order.  Unlike
+other presentations of continuations, QR does not lend itself to
+reasoning about evaluation order, so we will need to use a different
+strategy.

[Note to self: it is interesting to consider what it would take to
reproduce the analyses given in Barker and Shan in purely QR terms.
@@ -331,9 +349,9 @@ into an at-issue (pre-monadic) computation with a layer at which
side-effects occur.

The tower notation is a precise way of articulating continuation-based
-computations into a payload and (potentially multiple) layers of side-effects.
-We won't keep the outer box, but we will keep the horizontal line
-dividing main effects from side-effects.
+computations into a payload and (potentially multiple) layers of
+side-effects.  Visually, we won't keep the outer box, but we will keep
+the horizontal line dividing main effects from side-effects.

Tower convention for types:
<pre>
@@ -342,6 +360,8 @@ Tower convention for types:
α
</pre>

+Read these types counter-clockwise starting at the bottom.
+
Tower convention for values:
<pre>
g[]
@@ -376,7 +396,7 @@ individual-denoting expression yields the generalized quantifier
proposed by Montague as the denotation for proper names:

[]   S|S
-    LIFT (j:DP) = \k.kx : (DP -> S) -> S == -- : ---
+    LIFT (j:DP) = \k.kj : (DP -> S) -> S == -- : ---
j    DP

So if the proper name *John* denotes the individual j, LIFT(j) is the
functional application (i.e., f:(α->β) (x:α) = fx:β).
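A minimal sketch of LIFT in Python, applying Montague's lifted *John* to a one-place predicate (the predicate and its string representation are illustrative assumptions, not from the text):

```python
# LIFT turns an individual into the set of its properties:
# LIFT(j) = \k.kj, a generalized quantifier of type (DP -> S) -> S.
lift = lambda x: lambda k: k(x)

j = "john"
left = lambda x: f"left({x})"   # a one-place predicate, type DP -> S
print(lift(j)(left))            # left(john)
```

Feeding the lifted name a predicate just hands the individual back to the predicate, which is why LIFT is the mid of this system.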

## Not quite a monad

+This discussion is based on Wadler's paper 'Composable continuations'.
+
The unit and the combination rule work very much like we are used to
from the various monads we've studied.

@@ -430,7 +452,7 @@ identical to the result type of the bind operator overall.  For the
continuation monad, this means that mbind has the following type:

((α -> γ) -> ρ)
-	     -> α -> ((β -> δ) -> ρ)
+             -> α -> ((β -> δ) -> ρ)
-> ((β -> δ) -> ρ)

But the type of the bind operator in our generalized continuation
@@ -438,26 +460,27 @@ system (call it "kbind") is

kbind :
((α -> γ) -> ρ)
-	     -> α -> ((β -> δ) -> γ)
+             -> α -> ((β -> δ) -> γ)
-> ((β -> δ) -> ρ)

Note that (β -> δ) -> γ is not the same as (β -> δ) -> ρ.

+These more general types work out fine when plugged into the

(Diff truncated)

edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index 001f8b1..fd25cce 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -403,8 +403,11 @@ functional application (i.e, f:(α->β) (x:α) = fx:β).

## Not quite a monad

-To demonstrate that this is indeed the continuation monad's ¢
-operator:
+The unit and the combination rule work very much like we are used to
+from the various monads we've studied.
+
+In particular, we can easily see that the ¢ operator defined just
+above is exactly the same as the ¢ operator from the continuation monad:

¢ (\k.g[kf]) (\k.h[kx])
= (\MNk.M(\m.N(\n.k(mn)))) (\k.g[kf]) (\k.h[kx])
@@ -416,20 +419,102 @@ operator:
== ------
fx

-However, these continuations do not form an official monad.  The
-reason is that (see Wadler's paper Composable continuations' for details).
+However, these continuations do not form an official monad.
+One way to see this is to consider the type of the bind operator.
+The type of the bind operator in a genuine monad is
+
+    mbind : Mα -> (α -> Mβ) -> Mβ
+
+In particular, the result type of the second argument (Mβ) must be
+identical to the result type of the bind operator overall.  For the
+continuation monad, this means that mbind has the following type:
+
+                     ((α -> γ) -> ρ)
+	     -> α -> ((β -> δ) -> ρ)
+                  -> ((β -> δ) -> ρ)
+
+But the type of the bind operator in our generalized continuation
+system (call it "kbind") is
+
+    kbind :
+                     ((α -> γ) -> ρ)
+	     -> α -> ((β -> δ) -> γ)
+                  -> ((β -> δ) -> ρ)
+
+Note that (β -> δ) -> γ is not the same as (β -> δ) -> ρ.
+
+    kbind u f = \k.   u           (\x.     f              x    k      )
+                 β->δ (α->γ)->ρ     α      α->(β->δ)->γ   α    β->δ
+                                           -----------------
+                                              (β->δ)->γ
+                                              ------------------------
+                                                      γ
+                                    -------------------
+                                        α->γ
+                       ---------------------
+                              ρ
+                 --------------
+                   (β->δ)->ρ
+
+See Wadler's paper 'Composable continuations' for discussion.
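As a quick sanity check on the typing tower above, here is kbind sketched in Python (chosen just for concreteness; Python is untyped, so the four distinct types live only in the comments, and the names below are ours):

```python
# kbind u f = \k. u (\x. f x k), with the answer types allowed to differ:
#   u : (a -> g) -> r,   f : a -> ((b -> d) -> g),   kbind u f : (b -> d) -> r
def kbind(u, f):
    return lambda k: u(lambda x: f(x)(k))

# A concrete instance where the answer types really do differ:
#   a = int, g = str, b = int, d = str, r = bool
u = lambda k: len(k(3)) > 0              # (int -> str) -> bool
f = lambda x: lambda k: k(2 * x) + "!"   # int -> ((int -> str) -> str)
print(kbind(u, f)(lambda n: "*" * n))    # -> True
```

Note that the final continuation has type int -> str, but the overall result is a bool, which is exactly the generalization that fails to fit the official monadic bind type.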
+
+Nevertheless, it's easy to see that the generalized continuation system
+obeys the monad laws.  We haven't spent much time proving monad laws,
+so this seems like a worthy occasion on which to give some details.
+Since we're working with bind here, we'll use the version of the monad
+laws that are expressed in terms of bind:
+
+    Prove u >>= ⇧ == u:
+
+       u >>= (\ak.ka)
+    == (\ufk.u(\x.fxk)) u (\ak.ka)
+    ~~> \k.u(\x.(\ak.ka)xk)
+    ~~> \k.u(\x.kx)
+    ~~> \k.uk
+    ~~> u
+
+The last two steps are eta reductions.
+
+    Prove ⇧a >>= f == f a:
+
+       ⇧a >>= f
+    == (\ak.ka)a >>= f
+    ~~> (\k.ka) >>= f
+    == (\ufk.u(\x.fxk)) (\k.ka) f
+    ~~> \k.(\k.ka)(\x.fxk)
+    ~~> \k.fak
+    ~~> fa
+
+    Prove u >>= (\a.fa >>= g) == (u >>= f) >>= g:
+
+       u >>= (\a.fa >>= g)
+    == u >>= (\a.(\k.fa(\x.gxk)))
+    == (\ufk.u(\x.fxk)) u (\a.(\k.fa(\x.gxk)))
+    ~~> \k.u(\x.(\a.(\k.fa(\x.gxk)))xk)
+    ~~> \k.u(\x.fx(\y.gyk))
+
+       (u >>= f) >>= g
+    == (\ufk.u(\x.fxk)) u f >>= g
+    ~~> (\k.u(\x.fxk)) >>= g
+    == (\ufk.u(\x.fxk)) (\k.u(\x.fxk)) g
+    ~~> \k.(\k.u(\x.fxk))(\y.gyk)
+    ~~> \k.u(\x.fx(\y.gyk))
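The three proofs above can also be spot-checked mechanically. Here is a sketch in Python: since boxed values are functions, we compare both sides extensionally by feeding them a sample continuation (the sample computations `u`, `f`, `g`, `probe` are invented for illustration):

```python
# Continuation monad: unit (⇧) and bind, as in the proofs above
def unit(a): return lambda k: k(a)
def bind(u, f): return lambda k: u(lambda x: f(x)(k))

u = lambda k: "[" + k("x") + "]"   # a sample boxed computation
f = lambda a: lambda k: k(a + "f")
g = lambda a: lambda k: k(a + "g")
probe = lambda s: s + "!"          # a sample continuation to compare against

# Right identity: u >>= ⇧ == u
assert bind(u, unit)(probe) == u(probe)
# Left identity: ⇧a >>= f == f a
assert bind(unit("a"), f)(probe) == f("a")(probe)
# Associativity: u >>= (\a.fa >>= g) == (u >>= f) >>= g
lhs = bind(u, lambda a: bind(f(a), g))
rhs = bind(bind(u, f), g)
assert lhs(probe) == rhs(probe)
print("monad laws hold on these samples")
```

This is of course only a test on particular values, not a proof; the equational derivations above are the real argument.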
+
+## Syntactic refinements: subtypes of implication
+
+Because natural language allows the functor to be on the left or on
+the right, we replace the type arrow -> with a left-leaning version \
+and a right-leaning version, as follows:
+
+    α/β   β    = α
+      β   β\α  = α

-Neverthless, obeys the monad laws.
+This means (without adding some fancy footwork, as in Charlow 2014) we
+need two versions of ¢ too, one for each direction for the unmonadized types.
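The two directional versions of ¢ can be sketched in Python as follows (the names `cap_r`, `cap_l`, `lift` are ours, and the toy VP meaning is invented for illustration):

```python
# Directional variants of ¢. In cap_r the functor tower is on the left
# (slash type α/β); in cap_l the argument tower is on the left (β β\α).
def cap_r(mf, mx):   # functor ¢ argument
    return lambda k: mf(lambda f: mx(lambda x: k(f(x))))

def cap_l(mx, mf):   # argument ¢ functor; left tower's effects still come first
    return lambda k: mx(lambda x: mf(lambda f: k(f(x))))

lift = lambda a: lambda k: k(a)
john = lift("j")
left = lift(lambda x: "left(%s)" % x)   # a VP of type DP\S

# "John left": the VP is the functor, sitting to the right of its argument
print(cap_l(john, left)(lambda s: s))   # -> left(j)
```

Note that both versions evaluate the left tower's side effects before the right tower's; only the direction of the functional application below the line differs.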

-Oh, one more thing: because natural language allows the functor to be
-on the left or on the right, we replace the type arrow -> with a
-left-leaning version \ and a right-leaning version, as follows:
+\\ //

-    α/β  β = α
-    β  β\α = α

-This means we need two versions of ¢ too (see Barker and Shan 2014
-chapter 1 for full details).

This is (almost) all we need to get some significant linguistic work
done.


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index d13efa7..001f8b1 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -300,6 +300,8 @@ reproduce the analyses giving in Barker and Shan in purely QR terms.
Simple quantificational binding using parasitic scope should be easy,
but how reconstruction would work is not so clear.]

+## Introducing the tower notation
+
We'll present tower notation, then comment and motivate several of its
features as we consider various applications.  For now, we'll motivate
the tower notation by thinking about box types.  In the discussion of
@@ -355,6 +357,8 @@ hole in it.  For instance, we might have g[x] = \forall x.P[x].
We'll use a simply-typed system with two atomic types, DP (the type of
individuals) and S (the type of truth values).

+## LIFT
+
Then in the spirit of monadic thinking, we'll have a way of lifting an
arbitrary value into the tower system:

@@ -383,9 +387,9 @@ Crucially for the discussion here, LIFT does not apply only to DPs, as
in Montague and Partee, but to any expression whatsoever.  For
instance, here is LIFT applied to a lexical verb phrase:

-                                                   []     S|S
-    LIFT (left:DP\S) = \k.kx : (DP\S -> S) -> S == ---- : ---
-                                                   left   DP
+                                                              []      S|S
+    LIFT (left:DP->S) = \k.k(left) : ((DP->S) -> S) -> S == ------ : -----
+                                                             left    DP->S

Once we have expressions of type (α -> β) -> γ, we'll need to combine
them.  We'll use the ¢ operator from the continuation monad:
@@ -397,6 +401,8 @@ them.  We'll use the ¢ operator from the continuation monad:
Note that the types below the horizontal line combine just like
functional application (i.e, f:(α->β) (x:α) = fx:β).

+## Not quite a monad
+
To demonstrate that this is indeed the continuation monad's ¢
operator:

@@ -410,7 +416,9 @@ operator:
== ------
fx

-Not a monad (Wadler); would be if the types were
+However, these continuations do not form an official monad.  The
+reason is that (see Wadler's paper 'Composable continuations' for details).
+
Neverthless, obeys the monad laws.

Oh, one more thing: because natural language allows the functor to be


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index ffe10ba..d13efa7 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -312,12 +312,12 @@ separating a box into two regions, the payload and the hidden scratch
space:

<pre>
-    _______________               _______________           _______________
+    _______________               _______________            _______________
| [x->2, y->3] |	          | [x->2, y->3] |          | [x->2, y->3] |
------------------- 	         ------------------        ------------------
|              |     ¢        |              |    =     |              |
-    |    +2        |	          |     y        |	    |     5        |
-    |______________|	          |______________|	    |______________|
+    |    +2        |	          |     y        |          |     5        |
+    |______________|	          |______________|          |______________|
</pre>

For people who are familiar with Discourse Representation Theory (Kamp


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index 6eeaf8f..ffe10ba 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -413,6 +413,16 @@ operator:
Not a monad (Wadler); would be if the types were
Neverthless, obeys the monad laws.

+Oh, one more thing: because natural language allows the functor to be
+on the left or on the right, we replace the type arrow -> with a
+left-leaning version \ and a right-leaning version, as follows:
+
+    α/β  β = α
+    β  β\α = α
+
+This means we need two versions of ¢ too (see Barker and Shan 2014
+chapter 1 for full details).
+
This is (almost) all we need to get some significant linguistic work
done.


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index b56fcc0..6eeaf8f 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -126,7 +126,8 @@ a  __|___
S  e
</pre>

-First we QR the lower shift operator
+First we QR the lower shift operator, replacing it with a variable and
+abstracting over that variable.

<!--
\tree (. (S) ((\\x) ((a)((S)((d)((x)(e)))))))
@@ -178,7 +179,7 @@ S  ___|____

We then evaluate, using the same value for the shift operator proposed before:

-    shift = \k.k(k "")
+    S = shift = \k.k(k "")

It will be easiest to begin evaluating this tree with the lower shift
operator (we get the same result if we start with the upper one).
@@ -261,8 +262,8 @@ a  ___|____           |      |
The yield of this tree (the sequence of leaf nodes) is

-Exercise: the result is different, by the way, if the QR occurs in a
-different order.
+Exercise: the result is different, by the way, if the QR occurs in the
+opposite order.

Three lessons:

@@ -271,7 +272,9 @@ Three lessons:
dramatic increase in power and complexity.

* Operators that
-  compose multiple copies of a context can be hard to understand.
+  compose multiple copies of a context can be hard to understand
+  (though keep this in mind when we see the continuations-based
+  analysis of coordination, which involves context doubling).

* When considering two-sided, tree-based continuation operators,
quantifier raising is a good tool for visualizing (defunctionalizing)
@@ -310,8 +313,8 @@ space:

<pre>
_______________               _______________           _______________
-    | [x->2, y->3] |	          | [x->2, y->3] |	    | [x->2, y->3] |
-  ------------------- 	        ------------------	   ------------------
+    | [x->2, y->3] |	          | [x->2, y->3] |          | [x->2, y->3] |
+  ------------------- 	         ------------------        ------------------
|              |     ¢        |              |    =     |              |
|    +2        |	          |     y        |	    |     5        |
|______________|	          |______________|	    |______________|
@@ -331,14 +334,18 @@ We won't keep the outer box, but we will keep the horizontal line
dividing main effects from side-effects.

Tower convention for types:
+<pre>
γ | β
(α -> β) -> γ can be equivalently written -----
α
+</pre>

Tower convention for values:
+<pre>
g[]
\k.g[k(x)] can be equivalently written ---
x
+</pre>

If \k.g[k(x)] has type (α -> β) -> γ, then k has type (α -> β).

@@ -351,12 +358,15 @@ individuals) and S (the type of truth values).
Then in the spirit of monadic thinking, we'll have a way of lifting an
arbitrary value into the tower system:

-                                           []    γ|β
-    LIFT (x:α) = \k.kx : (α -> β) -> γ ==  --- : ---
-                                           x      α
+                                           []   β|β
+    LIFT (x:α) = \k.kx : (α -> β) -> β ==  -- : ---
+                                           x     α

Obviously, LIFT is exactly the midentity (the unit) for the continuation monad.
-The name comes from Partee's 1987 theory of type-shifters for
+Notice that LIFT requires the result type of the continuation argument
+and the result type of the overall expression to match (here, both are β).
+
+The name LIFT comes from Partee's 1987 theory of type-shifters for
determiner phrases.  Importantly, LIFT applied to an
individual-denoting expression yields the generalized quantifier
proposed by Montague as the denotation for proper names:
@@ -369,6 +379,14 @@ So if the proper name *John* denotes the individual j, LIFT(j) is the
generalized quantifier that maps each property k of type DP -> S to true
just in case kj is true.
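In a toy Python model (the individuals and the predicate here are invented for illustration), LIFT and the resulting generalized quantifier look like this:

```python
lift = lambda x: lambda k: k(x)          # LIFT x = \k.kx

j = "john"
left = lambda x: x in {"john", "mary"}   # the property of leaving

john_gq = lift(j)    # the generalized quantifier \k.kj
print(john_gq(left)) # -> True, i.e. the property of leaving holds of j
```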

+Crucially for the discussion here, LIFT does not apply only to DPs, as
+in Montague and Partee, but to any expression whatsoever.  For
+instance, here is LIFT applied to a lexical verb phrase:
+
+                                                   []     S|S
+    LIFT (left:DP\S) = \k.kx : (DP\S -> S) -> S == ---- : ---
+                                                   left   DP
+
Once we have expressions of type (α -> β) -> γ, we'll need to combine
them.  We'll use the ¢ operator from the continuation monad:


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index cc5c9e2..b56fcc0 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -109,7 +109,9 @@ Here, QDP is a scope-taking quantificational DP.
Just to emphasize the similarity between QR and shift, we can use QR
to provide insight into the tree task that mystified us earlier.

+<!--
\tree (. (a)((S)((d)((S)(e)))))
+-->

<pre>
.
@@ -126,7 +128,9 @@ a  __|___

First we QR the lower shift operator

+<!--
\tree (. (S) ((\\x) ((a)((S)((d)((x)(e)))))))
+-->

<pre>
.
@@ -147,7 +151,9 @@ S  ___|___

Next, we QR the upper shift operator

+<!--
\tree (. (S) ((\\y) ((S) ((\\x) ((a)((y)((d)((x)(e)))))))))
+-->

<pre>
.
@@ -180,7 +186,9 @@ The relevant value for k is (\x.a(y(d(x e)))).  Then k "" is
a(y(d(""(e)))), and k(k "") is a(y(d((a(y(d(""(e)))))(e)))).  In tree
form:

+<!--
\tree (. (S) ((\\y) ((a)((y)((d)(((a)((y)((d)(("")(e)))))(e)))))))
+-->

<pre>
.
@@ -211,7 +219,9 @@ S  ___|____
Repeating the process for the upper shift operator replaces each
occurrence of y with a copy of the whole tree.

+<!--
\tree (. ((a)((((a)(("")((d)(((a)(("")((d)(("")(e)))))(e))))))((d)(((a)((((a)(("")((d)(((a)(("")((d)(("")(e)))))(e))))))((d)(("")(e)))))(e))))))
+-->

<pre>
.


edits
diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index ec0cb5d..cc5c9e2 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -104,11 +104,14 @@ Raising closely resembles the reduction rule for shift:
Quantifier Raising: given a sentence [... [QDP] ...], build a new
sentence [QDP (\x.[... [x] ...])].

+Here, QDP is a scope-taking quantificational DP.
+
Just to emphasize the similarity between QR and shift, we can use QR
to provide insight into the tree task that mystified us earlier.

\tree (. (a)((S)((d)((S)(e)))))

+<pre>
.
__|___
|    |
@@ -119,11 +122,13 @@ a  __|___
d  _|__
|  |
S  e
+</pre>

First we QR the lower shift operator

\tree (. (S) ((\\x) ((a)((S)((d)((x)(e)))))))

+<pre>
.
___|___
|     |
@@ -138,11 +143,13 @@ S  ___|___
d  _|__
|  |
x  e
+</pre>

Next, we QR the upper shift operator

\tree (. (S) ((\\y) ((S) ((\\x) ((a)((y)((d)((x)(e)))))))))

+<pre>
.
___|___
|     |
@@ -161,6 +168,7 @@ S  ___|____
d  _|__
|  |
x  e
+</pre>

We then evaluate, using the same value for the shift operator proposed before:

@@ -174,6 +182,7 @@ form:

\tree (. (S) ((\\y) ((a)((y)((d)(((a)((y)((d)(("")(e)))))(e)))))))

+<pre>
.
___|___
|     |
@@ -196,6 +205,7 @@ S  ___|____
d  __|__
|   |
""  e
+</pre>

Repeating the process for the upper shift operator replaces each
@@ -203,6 +213,7 @@ occurrence of y with a copy of the whole tree.

\tree (. ((a)((((a)(("")((d)(((a)(("")((d)(("")(e)))))(e))))))((d)(((a)((((a)(("")((d)(((a)(("")((d)(("")(e)))))(e))))))((d)(("")(e)))))(e))))))

+<pre>
.
|
______|______
@@ -235,8 +246,10 @@ a  ___|____           |      |
d  __|__
|   |
""  e
+</pre>

+The yield of this tree (the sequence of leaf nodes) is

Exercise: the result is different, by the way, if the QR occurs in a
different order.
@@ -285,15 +298,14 @@ manipulates a list of information.  It is natural to imagine
separating a box into two regions, the payload and the hidden scratch
space:

+<pre>
_______________               _______________           _______________
| [x->2, y->3] |	          | [x->2, y->3] |	    | [x->2, y->3] |
------------------- 	        ------------------	   ------------------
|              |     ¢        |              |    =     |              |
|    +2        |	          |     y        |	    |     5        |
|______________|	          |______________|	    |______________|
-
-
-(Imagine the + operation has been lifted into the Reader monad too.)
+</pre>

For people who are familiar with Discourse Representation Theory (Kamp
1981, Kamp and Reyle 1993), this separation of boxes into payload and


diff --git a/topics/_week15_continuation_applications.mdwn b/topics/_week15_continuation_applications.mdwn
index eeb397b..ec0cb5d 100644
--- a/topics/_week15_continuation_applications.mdwn
+++ b/topics/_week15_continuation_applications.mdwn
@@ -13,14 +13,14 @@ zippers, then as a monad.  In this lecture, we will generalize
continuations slightly beyond a monad, and then begin to outline some
of the applications of monads.  In brief, the generalization can be
summarized in terms of types: instead of using a Kleisli arrow mapping
-a type α to a continuized type α -> ρ -> ρ, we'll allow the result
-types to differ, i.e., we'll map α to α -> β -> γ.  This will be
+a type α to a continuized type (α -> ρ) -> ρ, we'll allow the result
+types to differ, i.e., we'll map α to (α -> β) -> γ.  This will be
crucial for some natural language applications.

Many (though not all) of the applications are discussed in detail in
Barker and Shan 2014, *Continuations in Natural Language*, OUP.

-In terms of list zippers, the continuation of a focussed element in
+In terms of list zippers, the continuation of a focused element in
the list is the front part of the list.

list zipper for the list [a;b;c;d;e;f] with focus on d:
@@ -31,7 +31,7 @@ the list is the front part of the list.
continuation

In terms of tree zippers, the continuation is the entire context of
-the focussed element--the entire rest of the tree.
+the focused element--the entire rest of the tree.

[drawing of a broken tree]

@@ -45,15 +45,16 @@ We'll burn through that conceptual fog today.  The natural thing to
try would have been to defunctionalize the continuation-based solution
using a tree zipper.  But that would not have been easy, since the
natural way to implement the doubling behavior of the shifty operator
-would have been to simply copy the context provided by the zipper.
+would have been to simply copy the context provided by the zipper.
This would have produced two uncoordinated copies of the other shifty
operator, and we'd have been in the situation described in class of
having a reduction strategy that never reduced the number of shifty
-operators below 2.
+operators below 2. (There are ways around this limitation of tree zippers,
+but they are essentially equivalent to the technique given just below.)

Instead, we'll re-interpret what the continuation monad was doing
-in defunctionalized terms by using Quantifier Raising (a technique
-from linguistics).
+in more or less defunctionalized terms by using Quantifier Raising, a technique
+from linguistics.

But first, motivating quantifier scope as a linguistic application.

@@ -69,7 +70,7 @@ a scope-taking expression to take scope.
2. For every x, [Ann put a copy of x's homeworks in her briefcase]

The sentence in (1) can be paraphrased as in (2), in which the
-quantificational DP *every student* takes scope over the rest of the sentence.
+quantificational DP *everyone* takes scope over the rest of the sentence.
Even if you suspect that there could be an analysis of (2) on which
"every student's term paper" could denote some kind of mereological
fusion of a set of papers, it is much more difficult to be satisfied


last lecture
diff --git a/topics/_week14_continuations.mdwn b/topics/_week14_continuations.mdwn
index bb68260..0b99065 100644
--- a/topics/_week14_continuations.mdwn
+++ b/topics/_week14_continuations.mdwn
@@ -30,119 +30,243 @@ We then presented CPS transforms, and demonstrated how they provide
an order-independent analysis of order of evaluation.

In order to continue to explore continuations, we will proceed in the
-followin fashion.
+following fashion: we introduce the traditional continuation monad,
+and show how it solves the task, then generalize the task to
+include doubling of both the left and the right context.

## The continuation monad

-Let's take a look at some of our favorite monads from the point of
-view of types.  Here, ==> is the Kleisli arrow.
-
-    Reader monad: types: Mα ==> β -> α
-                  ⇧: \ae.a : α -> Mα
-                  compose: \fghe.f(ghe)e : (Q->MR)->(P->MQ)->(P->MR)
-                  gloss: copy environment and distribute it to f and g
-
-    State monad: types: α ==> β -> (α x β)
-                 ⇧: \ae.(a,e) : α -> Mα
-                 compose: \fghe.let (x,s) = ghe in fxs
-                 thread the state through g, then through f
-
-    List monad: types: α ==> [α]
-                ⇧: \a.[a] : α -> Mα
-                compose: \fgh.concat(map f (gh))
-                gloss: compose f and g pointwise
-
-    Maybe monad: types: α ==> Nothing | Just α
-                ⇧: \a. Just a
-                compose: \fgh.
-		  case gh of Nothing -> Nothing
-		           | Just x -> case fx of Nothing -> Nothing
-                                                | Just y -> Just y
-                gloss: strong Kline
-
-Now we need a type for a continuation.  A continuized term is one that
-expects its continuation as an argument.  The continuation of a term
-is a function from the normal value of that term to a result.  So if
-the term has type continuized term has type α, the continuized version
-has type (α -> ρ) -> ρ:
-
-    Continuation monad: types: Mα => (α -> ρ) -> ρ
-                        ⇧: \ak.ka
-                        compose: \fghk.f(\f'.g h (\g'.f(g'h)
-			gloss: first give the continuation to f, then build
-                               a continuation out of the result to give to
-
-The first thing we should do is demonstrate that this monad is
-suitable for accomplishing the task.
-
-We lift the computation ("a" ++ ("b" ++ ("c" ++ "d"))) into
-the monad as follows:
-
-    t1 = (map1 ((++) "a") (map1 ((++) "b") (map1 ((++) "c") (mid "d"))))
-
-Here, (++) "a" is a function of type [Char] -> [Char] that prepends
-the string "a", so map1 ((++) "a") takes a string continuation k and
-returns a new string continuation that takes a string s returns "a" ++ k(s).
-So t1 (\k->k) == "abcd".
+In order to build a monad, we start with a Kleisli arrow.
+
+    Continuation monad: types: given some ρ, Mα => (α -> ρ) -> ρ
+                        ⇧ == \ak.ka : α -> Mα
+                        bind == \ufk. u(\x.fxk)
+
+We'll first show that this monad solves the task, then we'll consider
+the monad in more detail.
+
+The unmonadized computation (without the shifty "S" operator) is
+
+    t1 = + a (+ b (+ c d)) ~~> abcd
+
+where "+" is string concatenation and the symbol a is shorthand for
+the string "a".
+
+In order to use the continuation monad to solve the list task,
+we choose α = ρ = [Char].  So "abcd" is a list of characters, and
+a boxed list has type M[Char] == ([Char] -> [Char]) -> [Char].
+
+Writing ¢ in between its arguments, t1 corresponds to the following
+
+    mt1 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ ⇧c ¢ ⇧d))
+
+We have to lift each functor (+) and each object (e.g., "b") into the
+monad using mid (⇧), then combine them using monadic function
+application, where
+
+    ¢ mf mx = \k -> mf (\f -> mx (\x -> k(f x)))
+
+for the continuation monad.
+
+The way in which we extract a value from a continuation box is by
+applying it to a continuation; often, it is convenient to supply the
+trivial continuation, the identity function \k.k = I.  So in fact,
+
+    t1 = mt1 I
+
+That is, the original computation is the monadic version applied to
+the trivial continuation.
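Here is the lifted computation run end to end, sketched in Python (the helper names `mid`, `app`, `cat` are ours; `app` plays the role of ¢):

```python
def mid(a):                  # ⇧a = \k.ka
    return lambda k: k(a)

def app(mf, mx):             # ¢ mf mx = \k. mf(\f. mx(\x. k(f x)))
    return lambda k: mf(lambda f: mx(lambda x: k(f(x))))

plus = mid(lambda s: lambda t: s + t)   # ⇧+, lifted concatenation

def cat(l, r):               # shorthand for ⇧+ ¢ l ¢ r
    return app(app(plus, l), r)

# mt1 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ ⇧c ¢ ⇧d))
mt1 = cat(mid("a"), cat(mid("b"), cat(mid("c"), mid("d"))))

I = lambda s: s              # the trivial continuation
print(mt1(I))                # -> abcd
```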
+
+We can now imagine replacing the third element ("c") with a shifty
+operator.  We would like to replace just the one element, and we will
+do just that in a moment; but in order to simulate the original task,
+we'll have to take a different strategy initially.  We'll start by
+imagining a shift operator that is combined directly with the tail
+of the list, like this:
+
+    mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))

We can now define a shift operator to perform the work of "S":

shift u k = u(\s.k(ks))

-Shift takes two arguments: a string continuation u of type (String -> String) -> String,
-and a string continuation k of type String -> String.  Since u is the
-result returned by the argument to shift, it represents the tail of
-the list after the shift operator.  Then k is the continuation of the
-expression headed by shift.  So in order to execute the task, shift
-needs to invoke k twice.
+Shift takes two arguments: a string continuation u of type M[Char],
+and a string continuation k of type [Char] -> [Char].  Since u is
+the argument to shift, it represents the tail of the list after the
+shift operator.  Then k is the continuation of the expression headed
+by shift.  So in order to execute the task, shift needs to invoke k
+twice.

-Note that the shift operator constructs a new continuation by
-composing its second argument with itself (i.e., the new doubled
-continuation is \s.k(ks)).  Once it has constructed this
-new continuation, it delivers it as an argument to the remaining part
-of the computation!
+    mt2 I == "ababd"

-    (map1 ((++) "a") (map1 ((++) "b") (shift (mid "d")))) (\k->k) == "ababd"
+just as desired.
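The shift computation can be sketched in Python like this (helper names `mid`, `app`, `cat` are ours; following the definition shift u k = u(\s.k(ks)), shift is applied directly to the boxed tail of the list):

```python
def mid(a): return lambda k: k(a)
def app(mf, mx): return lambda k: mf(lambda f: mx(lambda x: k(f(x))))
plus = mid(lambda s: lambda t: s + t)
def cat(l, r): return app(app(plus, l), r)   # ⇧+ ¢ l ¢ r

def shift(u):                # shift u k = u(\s.k(ks))
    return lambda k: u(lambda s: k(k(s)))

mt2 = cat(mid("a"), cat(mid("b"), shift(mid("d"))))   # "abSd"
print(mt2(lambda s: s))      # -> ababd

mt3 = cat(mid("a"), cat(mid("b"),                     # "abSdeSf"
        shift(cat(mid("d"), cat(mid("e"), shift(mid("f")))))))
print(mt3(lambda s: s))      # -> ababdeababdef
```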

Let's just make sure that we have the left-to-right evaluation we were
hoping for by evaluating "abSdeSf":

-    t6 = map1 ((++) "a")
-              (map1 ((++) "b")
-                    (shift
-		      (map1 ((++) "d")
-                            (map1 ((++) "e")
-                                  (shift (mid "f"))))))
+    mt3 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ (⇧+ ¢ ⇧d ¢ (⇧+ ¢ ⇧e ¢ (shift ⇧f)))))

-    t6 (\k->k) == "ababdeababdef"
+Then
+
+    mt3 I = "ababdeababdef"   -- structure: (ababde)(ababde)f
+

As expected.

-In order to add a reset operator #, we can have
+For a reset operator #, we can have

-    # u k = k(u(\k.k))
-    ab#deSf ~~> abdedef
+    # u k = k(u(\k.k))   -- ex.: ab#deSf ~~> abdedef

-Note that the lifting of the original unmonadized computation treated
-prepending "a" as a one-place operation.  If we decompose this
-operation into a two-place operation of appending combined with a
-string "a", an interesting thing happens.
+So the continuation monad solves the list task using continuations in
+a way that conforms to our by-now familiar strategy of lifting a
+computation into a monad, and then writing a few key functions (shift,
+reset) that exploit the power of the monad.
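A sketch of reset in the same Python setting (helper names ours), checking the ab#deSf example:

```python
def mid(a): return lambda k: k(a)
def app(mf, mx): return lambda k: mf(lambda f: mx(lambda x: k(f(x))))
plus = mid(lambda s: lambda t: s + t)
def cat(l, r): return app(app(plus, l), r)
def shift(u): return lambda k: u(lambda s: k(k(s)))

def reset(u):                # # u k = k(u(\k.k))
    return lambda k: k(u(lambda s: s))

# "ab#deSf": the reset seals off the doubling, so only "de" is copied
mt = cat(mid("a"), cat(mid("b"),
       reset(cat(mid("d"), cat(mid("e"), shift(mid("f")))))))
print(mt(lambda s: s))       # -> abdedef
```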

+Now we should consider what happens when we write a shift operator
+that takes the place of a single letter.

-    map2 f u v k = u(\u' -> v (\v' -> k(f u' v')))
-    shift k = k (k "")
+    mt2 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (shift ¢ ⇧d))
+    mt4 = ⇧+ ¢ ⇧a ¢ (⇧+ ¢ ⇧b ¢ (⇧+ ¢ shift' ¢ ⇧d))

-    t2 = map2 (++) (mid "a")
-                   (map2 (++) (mid "b")
-                              (map2 (++) shift
-		 	                 (map2 (++) (mid "d")
- 				                    (mid []))))
-    t2 (\k->k) == "ababdd"
+Instead of mt2, we have mt4.  So now the type of "c" (a boxed string,
+type M[Char]) is the same as the type of the new shift operator, shift'.
+
+    shift' = \k.k(k"")
+
+This shift operator takes a continuation k of type [Char]->[Char], and
+invokes it twice.  Since k requires an argument of type [Char], we
+need to use the first invocation of k to construct a [Char]; we do

(Diff truncated)

edits
diff --git a/topics/_week14_continuations.mdwn b/topics/_week14_continuations.mdwn
index ba2475f..bb68260 100644
--- a/topics/_week14_continuations.mdwn
+++ b/topics/_week14_continuations.mdwn
@@ -148,34 +148,170 @@ operator, the continuation that it will be fed will be the entire rest
of the computation.  This includes not only what has come before, but
what will come after it as well.  That means when the continuation is
doubled (self-composed), the result duplicates not only the prefix
-(ab ~~> abab), but also the suffix (d ~~> dd).  In some sense, then,
+(ab ~~> abab), but also the suffix (d ~~> dd).  In some sense, then,
continuations allow functions to see into the future!

What do you expect will be the result of executing "aScSe" under the
-second perspective?
+second perspective?  The answer to this question is not determined by
+the informal specification of the task that we've been using, since
+under the new perspective, we're copying complete (bi-directional)
+contexts rather than just left contexts.
+
+It might be natural to assume that what execution does is choose an S,
+and double its context, temporarily treating all other shift operators
+as dumb letters, then choosing a remaining S to execute.  If that was
+our interpreation of the task, then no string with more than one S
+would ever terminate, since the number S's after each reduction step
+would always be 2(n-1), where n is the number before reduction.
+
+But there is another equally natural way to answer the question.
+Assume the leftmost S is executed first.  What will the value of its
+continuation k be?  It will be a function that maps a string s to the
+result of computing ascSe, which will be ascascee.  So k(k "")
+will be k(acacee), which will result in a(acacee)ca(acacee)cee
+(the parentheses are added to show stages in the construction of the
+final result).
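This result can be checked by running "aScSe" with the one-letter shift operator S = \k.k(k "") in the Python sketch (helper names ours):

```python
def mid(a): return lambda k: k(a)
def app(mf, mx): return lambda k: mf(lambda f: mx(lambda x: k(f(x))))
plus = mid(lambda s: lambda t: s + t)
def cat(l, r): return app(app(plus, l), r)

S = lambda k: k(k(""))       # the one-letter shift operator \k.k(k "")

# "aScSe", with each S occupying an ordinary letter position
mt = cat(mid("a"), cat(S, cat(mid("c"), cat(S, mid("e")))))
print(mt(lambda s: s))       # -> aacaceecaacaceecee, i.e. a(acacee)ca(acacee)cee
```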
+
+So the continuation monad behaves in a way that we expect a
+continuation to behave.  But where did the continuation monad come
+from?  Let's consider some ways of deriving the continuation monad.

-This depends on which S is executed first.  Assume the first S is
-executed first.  What will the value of its continuation k be?
-It will be a function that maps a string s to the result of computing
-ascSe, which will be ascascee.  So k(k "") will be k(acacee), which
-will result in a(acacee)ca(acacee)cee (the parenthesese are added to
-show stages in the construction of the final result).
+## Viewing Montague's PTQ as CPS

-Note that this is a different result than assuming that what execution
-does is choose an S, and double its context, treating all other shift
-operators as dumb letters, then choosing a remaining S to execute.  If
-that was our interpreation of the task, then no string with more than
-one S would ever terminate (on the bi-directional continuation
-perspective).
+Montague's conception of determiner phrases as generalized quantifiers
+is a limited form of continuation-passing.  (See, e.g., chapter 4 of
+Barker and Shan 2014.)  Start by assuming that ordinary DPs such as
+proper names denote objects of type e.  Then verb phrases denote
+functions from individuals to truth values, i.e., functions of type e
+-> t.
+
+The meaning of extraordinary DPs such as *every woman* or *no dog*
+can't be expressed as a simple individual.  As Montague argued, it
+works much better to view them as predicates on verb phrase meanings,
+i.e., as having type (e->t)->t.  Then *no woman left* is true just
+in case the property of leaving is true of no woman:
+
+    no woman:  \k.not \exists x . (woman x) & kx
+    left: \x.left x
+    (no woman) (left) = not \exists x . woman x & left x
+
+Montague also proposed that all determiner phrases should have the
+same type.  After all, we can coordinate proper names with
+quantificational DPs, as in *John and no dog left*.  Then the generalized
+quantifier corresponding to the proper name *John* is the quantifier
+\k.kj.
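In a toy Python model (the domain and predicates are invented for illustration), both kinds of DP meaning have the same generalized-quantifier shape:

```python
domain = ["john", "mary", "fido"]
woman = lambda x: x == "mary"
left = lambda x: x in {"john", "fido"}

# no woman: \k. not exists x. woman(x) & k(x)
no_woman = lambda k: not any(woman(x) and k(x) for x in domain)
john = lambda k: k("john")   # the generalized quantifier \k.kj

print(no_woman(left))        # -> True: no woman left
print(john(left))            # -> True: John left
```

Since both meanings map VP denotations to truth values, *John and no dog left* can coordinate them pointwise.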
+
+At this point, we have a type for our Kleisli arrow and a value for
+our mid.  Given some result type (such as t in the Montague application),
+
+    α ==> (α -> ρ) -> ρ
+    ⇧a = \k.ka
+
+It remains only to find a suitable value for bind.  Montague didn't
+provide one, but it's easy to find:
+
+    bind ::    Mα -> (α -> Mβ) -> Mβ
+
+given variables of the following types
+
+    u :: Mα == (α -> ρ) -> ρ
+    f :: α -> Mβ
+    k :: β -> ρ
+    x :: α
+
+We have
+
+    bind u f = \k.u(\x.fxk)
+
+Let's carefully make sure that this types out:
+
+    bind u f = \k.       u      (\x .   f       x     k)
+                                      --------  --
+                                      α -> Mβ   α
+                                     ------------  ------
+                                         Mβ        β -> ρ
+                                  --  --------------------
+                                  α            ρ
+                  -------------  ------------------------
+                  (α -> ρ) -> ρ             α -> ρ
+                  ---------------------------------
+                                ρ
+                -----------------------
+                    (β -> ρ) -> ρ
+
+Yep!
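The derived mid and bind can also be transcribed directly into runnable Haskell, using a plain type synonym for the boxed type (an expository sketch, not the library's Cont newtype):

```haskell
-- The derived operations, written with a plain type synonym
-- (an expository sketch; not the library's Cont newtype).
type K r a = (a -> r) -> r

mid :: a -> K r a
mid a = \k -> k a

bind :: K r a -> (a -> K r b) -> K r b
bind u f = \k -> u (\x -> f x k)

-- Left identity falls out by beta-reduction:
--   bind (mid a) f = \k -> (mid a) (\x -> f x k) = \k -> f a k
-- A concrete check, running with the trivial continuation id:
--   bind (mid 3) (\x -> mid (x + 1)) id == 4
```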
+
+Is there any other way of building a bind operator?  Well, the
+challenge is getting hold of the "a" that is buried inside of u.
+In the Reader monad, we could get at the a inside of the box by
+applying the box to an environment.  In the State monad, we could get
+at the a by applying the box to a state and deconstructing the
+resulting pair.  In the continuation case, the only way to do it is to
+apply the boxed a (i.e., u) to a function that takes an a as an
+argument.  That means that f must be invoked inside the scope of the
+functional argument to u.  So we've deduced the structure
+
+    ... u (\x. ... f x ...) ...
+
+At this point, in order to provide u with an argument of the
+appropriate type, the argument must not only take objects of type
+α as an argument, it must return a result of type ρ.  That means that
+we must apply fx to an object of type β -> ρ.  We can hypothesize such
+an object, as long as we eliminate that hypothesis later (by binding
+it), and we have the complete bind operation.
+
+The way in which the value of type α that is needed in order to unlock
+the function f is hidden inside of u is directly analogous to the
+concept of "data hiding" in object-oriented programming.  See Pierce's
+discussion of how to extend system F with existential type
+quantification by encoding the existential using continuations.
+
+So the Kleisli type pretty much determines the bind operation.
+
+## What continuations are doing
+
+Ok, we have a continuation monad.  We derived it from first
+principles, and we have seen that it behaves at least in some respects
+as we expect a continuation monad to behave (in that it allows us to
+give a good implementation of the task).

+## How continuations can simulate other monads

-## Viewing Montague's PTQ as CPS
+Because the continuation monad allows the result type ρ to be any
+type, we can choose ρ in clever ways that let us simulate other monads:

+    Reader: ρ = env -> α
+    State: ρ = s -> (α, s)
+    Maybe: ρ = Maybe α  (i.e., Just α | Nothing)
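As a quick sanity check of the first line, here is a sketch of the Reader simulation in Haskell.  The Env type and the names ask and runReaderish are illustrative assumptions of our own, not anything defined above.

```haskell
-- Simulating Reader: instantiate the result type ρ as env -> α.
-- Env, ask, and runReaderish are illustrative names of our own.
type K r a = (a -> r) -> r
type Env = [(String, Int)]

mid :: a -> K r a
mid a k = k a

bind :: K r a -> (a -> K r b) -> K r b
bind u f k = u (\x -> f x k)

-- ask grabs the environment by copying it, just as the Reader
-- monad's gloss said: copy the environment and distribute it.
ask :: K (Env -> r) Env
ask k e = k e e

-- Run with a final continuation that discards the environment.
runReaderish :: K (Env -> a) a -> Env -> a
runReaderish m = m (\a _ -> a)

-- runReaderish (bind ask (mid . length)) [("x",1),("y",2)] == 2
```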

+You see how this is going to go.  Let's see an example by adding an
+abort operator to our task language.  Abort represents what we want to
+have happen if, say, we divide by zero: the whole computation should
+return Nothing.
+
+    abort k = Nothing
+    mid a k = k a
+    map2 f u v k = u(\u' -> v (\v' -> k(f u' v')))
+    t13 = map2 (++) (mid "a")
+                    (map2 (++) (mid "b")
+                               (map2 (++) (mid "c")
+                                          (mid "d")))
+
+    t13 (\k->Just k) == Just "abcd"
+
+    t14 = map2 (++) (mid "a")
+                    (map2 (++) abort
+                               (map2 (++) (mid "c")
+                                          (mid "d")))
+
+
+    t14 (\k->Just k) == Nothing
+
+Super cool.
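Here are t13 and t14 as a self-contained Haskell sketch; the type synonym K and the explicit signatures are our additions:

```haskell
-- The abort example, self-contained; the result type is instantiated
-- as Maybe String so that abort can throw away its continuation.
type K r a = (a -> r) -> r

mid :: a -> K r a
mid a k = k a

map2 :: (a -> b -> c) -> K r a -> K r b -> K r c
map2 f u v k = u (\u' -> v (\v' -> k (f u' v')))

abort :: K (Maybe r) a
abort k = Nothing                -- discard the continuation entirely

t13, t14 :: K (Maybe String) String
t13 = map2 (++) (mid "a")
                (map2 (++) (mid "b")
                           (map2 (++) (mid "c")
                                      (mid "d")))
t14 = map2 (++) (mid "a")
                (map2 (++) abort
                           (map2 (++) (mid "c")
                                      (mid "d")))

-- t13 Just == Just "abcd"
-- t14 Just == Nothing
```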

-## How continuations can simulate other monads

## Delimited versus undelimited continuations


discussion of continuations
diff --git a/topics/_week14_continuations.mdwn b/topics/_week14_continuations.mdwn
new file mode 100644
index 0000000..ba2475f
--- /dev/null
+++ b/topics/_week14_continuations.mdwn
@@ -0,0 +1,188 @@
+<!-- λ ◊ ≠ ∃ Λ ∀ ≡ α β γ ρ ω φ ψ Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ⚫ 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ⇧ (U+2e17) ¢ -->
+
+[[!toc]]
+
+# Continuations
+
+Last week we saw how to turn a list zipper into a continuation-based
+list processor.  The function computed something we called "the task",
+which was a simplified language involving control operators.
+
+    abSdS ~~> ababdS ~~> ababdababd
+
+The task is to process the list from left to right, and at each "S",
+double the list so far.  Here, "S" is a kind of control operator, and
+captures the entire previous computation.  We also considered a
+variant in which '#' delimited the portion of the list to be copied:
+
+    ab#deSfg ~~> abdedefg
+
+In this variant, "S" and "#" correspond to shift and reset, which we
+will return to below.
+
+The expository logic of starting with this simplified task is that, as
+lists are to trees, so is this task to full-blown continuations.  So
+to the extent that, say, list zippers are easier to grasp than tree
+zippers, the task is easier to grasp than full continuations.
+
+We then presented CPS transforms, and demonstrated how they provide
+an order-independent analysis of order of evaluation.
+
+In order to continue to explore continuations, we will proceed in the
+following fashion.
+
+## The continuation monad
+
+Let's take a look at some of our favorite monads from the point of
+view of types.  Here, ==> is the Kleisli arrow.
+
+    Reader monad: types: α ==> β -> α
+                  ⇧: \ae.a : α -> Mα
+                  compose: \fghe.f(ghe)e : (Q->MR)->(P->MQ)->(P->MR)
+                  gloss: copy environment and distribute it to f and g
+
+    State monad: types: α ==> β -> (α x β)
+                 ⇧: \ae.(a,e) : α -> Mα
+                 compose: \fghe.let (x,s) = ghe in fxs
+                 thread the state through g, then through f
+
+    List monad: types: α ==> [α]
+                ⇧: \a.[a] : α -> Mα
+                compose: \fgh.concat(map f (gh))
+                gloss: compose f and g pointwise
+
+    Maybe monad: types: α ==> Nothing | Just α
+                 ⇧: \a. Just a
+                 compose: \fgh.
+                   case gh of Nothing -> Nothing
+                            | Just x -> case fx of Nothing -> Nothing
+                                                 | Just y -> Just y
+                 gloss: strong Kleene
+
+Now we need a type for a continuation.  A continuized term is one that
+expects its continuation as an argument.  The continuation of a term
+is a function from the normal value of that term to a result.  So if
+the term has type α, the continuized version has type (α -> ρ) -> ρ:
+
+    Continuation monad: types: α ==> (α -> ρ) -> ρ
+                        ⇧: \ak.ka
+                        compose: \fghk.g h (\g'.f g' k)
+                        gloss: run g first, then feed its normal value
+                               to f, which gets the final continuation k
+
+The first thing we should do is demonstrate that this monad is
+suitable for accomplishing the task.
+
+We lift the computation ("a" ++ ("b" ++ ("c" ++ "d"))) into
+the monad as follows:
+
+    t1 = (map1 ((++) "a") (map1 ((++) "b") (map1 ((++) "c") (mid "d"))))
+
+Here, (++) "a" is a function of type [Char] -> [Char] that prepends
+the string "a", so map1 ((++) "a") turns a continuized string into one
+that, given a string continuation k, runs the original computation
+with the new continuation \s.k("a" ++ s).
+So t1 (\k->k) == "abcd".
+
+We can now define a shift operator to perform the work of "S":
+
+    shift u k = u(\s.k(ks))
+
+Shift takes two arguments: a continuized string u of type
+(String -> String) -> String, and a string continuation k of type
+String -> String.  Since u is shift's argument, it represents the tail
+of the list after the shift operator.  Then k is the continuation of
+the expression headed by shift.  So in order to execute the task,
+shift needs to invoke k twice.
+
+Note that the shift operator constructs a new continuation by
+composing its second argument with itself (i.e., the new doubled
+continuation is \s.k(ks)).  Once it has constructed this
+new continuation, it delivers it as an argument to the remaining part
+of the computation!
+
+    (map1 ((++) "a") (map1 ((++) "b") (shift (mid "d")))) (\k->k) == "ababd"
+
+Let's just make sure that we have the left-to-right evaluation we were
+hoping for by evaluating "abSdeSf":
+
+    t6 = map1 ((++) "a")
+              (map1 ((++) "b")
+                    (shift
+                      (map1 ((++) "d")
+                            (map1 ((++) "e")
+                                  (shift (mid "f"))))))
+
+    t6 (\k->k) == "ababdeababdef"
+
+As expected.
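The pieces above can be collected into a runnable Haskell sketch.  Note that map1 was never defined explicitly; we assume it is the evident functorial map, map1 f u k = u(\s.k(f s)), which yields exactly the behavior claimed above.

```haskell
-- map1 was not defined explicitly above; here we assume it is the
-- evident functorial map for continuized values.
type K r a = (a -> r) -> r

mid :: a -> K r a
mid a k = k a

map1 :: (a -> b) -> K r a -> K r b
map1 f u k = u (\s -> k (f s))

shift :: K String String -> K String String
shift u k = u (\s -> k (k s))    -- double the continuation so far

t1, tS, t6 :: K String String
t1 = map1 ((++) "a") (map1 ((++) "b") (map1 ((++) "c") (mid "d")))
tS = map1 ((++) "a") (map1 ((++) "b") (shift (mid "d")))   -- "abSd"
t6 = map1 ((++) "a")
          (map1 ((++) "b")
                (shift (map1 ((++) "d")
                             (map1 ((++) "e")
                                   (shift (mid "f"))))))   -- "abSdeSf"

-- t1 id == "abcd"
-- tS id == "ababd"
-- t6 id == "ababdeababdef"
```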
+
+In order to add a reset operator #, we can have
+
+    # u k = k(u(\k.k))
+    ab#deSf ~~> abdedef
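A sketch of this reset operator in Haskell, again assuming the same functorial map1 as before (its definition wasn't given explicitly):

```haskell
-- The reset operator '#': run the enclosed computation with the
-- identity continuation, sealing off any shift inside it.  map1 is
-- the assumed functorial map.
type K r a = (a -> r) -> r

mid :: a -> K r a
mid a k = k a

map1 :: (a -> b) -> K r a -> K r b
map1 f u k = u (\s -> k (f s))

shift :: K String String -> K String String
shift u k = u (\s -> k (k s))

reset :: K String String -> K String String
reset u k = k (u id)             -- the '#': # u k = k (u (\k.k))

tReset :: K String String        -- "ab#deSf"
tReset = map1 ((++) "a")
              (map1 ((++) "b")
                    (reset (map1 ((++) "d")
                                 (map1 ((++) "e")
                                       (shift (mid "f"))))))

-- tReset id == "abdedef": the shift only doubles back to the '#'.
```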
+
+Note that the lifting of the original unmonadized computation treated
+prepending "a" as a one-place operation.  If we decompose this
+operation into a two-place operation of appending combined with a
+string "a", an interesting thing happens.
+
+
+    map2 f u v k = u(\u' -> v (\v' -> k(f u' v')))
+    shift k = k (k "")
+
+    t2 = map2 (++) (mid "a")
+                   (map2 (++) (mid "b")
+                              (map2 (++) shift
+                                         (map2 (++) (mid "d")
+                                                    (mid []))))
+    t2 (\k->k) == "ababdd"
+
+First, we need map2 instead of map1.  Second, the shift operator is
+now itself a continuized string, rather than a function from
+continuized strings to continuized strings.
+
+But here's the interesting part: from the point of view of the shift
+operator, the continuation that it will be fed will be the entire rest
+of the computation.  This includes not only what has come before, but
+what will come after it as well.  That means when the continuation is
+doubled (self-composed), the result duplicates not only the prefix
+(ab ~~> abab), but also the suffix (d ~~> dd).  In some sense, then,
+continuations allow functions to see into the future!
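Here is the two-place decomposition as a self-contained Haskell sketch; the type synonym and the explicit signatures are our additions:

```haskell
-- The two-place decomposition: now shift is itself a continuized
-- string, and its continuation includes the suffix of the computation.
type K r a = (a -> r) -> r

mid :: a -> K r a
mid a k = k a

map2 :: (a -> b -> c) -> K r a -> K r b -> K r c
map2 f u v k = u (\u' -> v (\v' -> k (f u' v')))

shift :: K String String
shift k = k (k "")               -- double the whole surrounding context

t2 :: K String String            -- "abSd"
t2 = map2 (++) (mid "a")
               (map2 (++) (mid "b")
                          (map2 (++) shift
                                     (map2 (++) (mid "d")
                                                (mid []))))

-- t2 id == "ababdd": both the prefix "ab" and the suffix "d" double.
```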
+
+What do you expect will be the result of executing "aScSe" under the
+second perspective?
+
+This depends on which S is executed first.  Assume the first S is
+executed first.  What will the value of its continuation k be?
+It will be a function that maps a string s to the result of computing
+ascSe, which will be ascascee.  So k(k "") will be k(acacee), which
+will result in a(acacee)ca(acacee)cee (the parentheses are added to
+show stages in the construction of the final result).
+
+Note that this is a different result than assuming that what execution
+does is choose an S, and double its context, treating all other shift
+operators as dumb letters, then choosing a remaining S to execute.  If
+that were our interpretation of the task, then no string with more than
+one S would ever terminate (on the bi-directional continuation
+perspective).
+
+
+## Viewing Montague's PTQ as CPS
+
+
+
+
+
+## How continuations can simulate other monads
+
+## Delimited versus undelimited continuations
+
+## Natural language requires delimited continuations
+
+
+
+
+
+


fix stuff
diff --git a/readings.mdwn b/readings.mdwn
index b056b7e..7d72fa7 100644
@@ -242,6 +242,8 @@ in M. Broy, editor, *Marktoberdorf Summer School on Program Design Calculi*, Spr
*	[Haskell wikibook on Continuation Passing Style](http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style)<p>

+<!-- -->
+
*	[[!wikipedia Delimited continuation]]
*	Ken's paper [Shift to Control](http://repository.readscheme.org/ftp/papers/sw2004/shan.pdf), comparing some of the different delimited continuation operators
*	Racket's documents on [the variety of continuation operators](http://docs.racket-lang.org/reference/cont.html?q=abort#%28mod-path._racket%2Fcontrol%29)


diff --git a/readings.mdwn b/readings.mdwn
index 9fdc3f7..b056b7e 100644
@@ -241,7 +241,10 @@ in M. Broy, editor, *Marktoberdorf Summer School on Program Design Calculi*, Spr
*	[Continuations for Curmudgeons](http://www.intertwingly.net/blog/2005/04/13/Continuations-for-Curmudgeons) [Commentary](http://lambda-the-ultimate.org/node/643)
*	[Haskell wikibook on Continuation Passing Style](http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style)<p>
+
*	[[!wikipedia Delimited continuation]]
+*	Ken's paper [Shift to Control](http://repository.readscheme.org/ftp/papers/sw2004/shan.pdf), comparing some of the different delimited continuation operators
+*	Racket's documents on [the variety of continuation operators](http://docs.racket-lang.org/reference/cont.html?q=abort#%28mod-path._racket%2Fcontrol%29)
*	[Composable Continuations Tutorial](http://community.schemewiki.org/?composable-continuations-tutorial) at SchemeWiki
*	[Post by Ken](http://lambda-the-ultimate.org/node/1197#comment-12927) on Lambda the Ultimate explaining difference between undelimited and delimited continuations
*	[shift, reset and streams](http://chneukirchen.org/blog/archive/2005/04/shift-reset-and-streams.html)


formatting
diff --git a/readings.mdwn b/readings.mdwn
index 4c10160..9fdc3f7 100644
@@ -230,7 +230,7 @@ in M. Broy, editor, *Marktoberdorf Summer School on Program Design Calculi*, Spr
*	[[!wikipedia Continuation]]
*	[[!wikipedia Continuation-passing style]]
* 	[[!wikipedia Call-with-current-continuation]]
-*       John C Reynolds, [The discoveries of continuations](http://cs.au.dk/~hosc/local/LaSC-6-34-pp233-248.pdf)
+* 	John C Reynolds, [The discoveries of continuations](http://cs.au.dk/~hosc/local/LaSC-6-34-pp233-248.pdf)
*	[Intro to call/cc](http://community.schemewiki.org/?call-with-current-continuation) at SchemeWiki
*	[Call With Current Continuation](http://www.c2.com/cgi/wiki?CallWithCurrentContinuation)
*	[Continuations Made Simple and Illustrated](http://www.ps.uni-saarland.de/~duchier/python/continuations.html)


diff --git a/readings.mdwn b/readings.mdwn
index 7810acd..4c10160 100644
@@ -230,6 +230,7 @@ in M. Broy, editor, *Marktoberdorf Summer School on Program Design Calculi*, Spr
*	[[!wikipedia Continuation]]
*	[[!wikipedia Continuation-passing style]]
* 	[[!wikipedia Call-with-current-continuation]]
+*       John C Reynolds, [The discoveries of continuations](http://cs.au.dk/~hosc/local/LaSC-6-34-pp233-248.pdf)
*	[Intro to call/cc](http://community.schemewiki.org/?call-with-current-continuation) at SchemeWiki
*	[Call With Current Continuation](http://www.c2.com/cgi/wiki?CallWithCurrentContinuation)
*	[Continuations Made Simple and Illustrated](http://www.ps.uni-saarland.de/~duchier/python/continuations.html)


style
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index df9c134..0246663 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -4,7 +4,7 @@

Recall [[the recent homework assignment|/exercises/assignment12]] where you solved the same-fringe problem with a make_fringe_enumerator function, or in the Scheme version using streams instead of zippers, with a lazy-flatten function.

-The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.
+The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this --- for example, it's determined by the operating system or the hardware in ways that the programmer can't predict --- that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.

With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our same_fringe function:

@@ -13,7 +13,7 @@ With cooperative threads, one typically yields control to the thread, and then b
start next1
(paused)            starting
(paused)            calculate first leaf
-    (paused)            <--- return it
+    (paused)            <-- return it
start next2         (paused)            starting
(paused)            (paused)            calculate first leaf
(paused)            (paused)            <-- return it
@@ -78,7 +78,7 @@ Some languages have a native syntax for coroutines. Here's how we'd write the sa
{left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}} )
true

-We're going to think about the underlying principles to this execution pattern, and instead learn how to implement it from scratch---without necessarily having zippers or dedicated native syntax to rely on.
+We're going to think about the underlying principles to this execution pattern, and instead learn how to implement it from scratch --- without necessarily having zippers or dedicated native syntax to rely on.

##Exceptions and Aborts##
@@ -327,7 +327,7 @@ Well, that's when we use the outer_snapshot code in an unusual way. If we enco
) + 100
in outer_snapshot foo_applied_to_x;;

-Except that isn't quite right, yet---in this fragment, after the outer_snapshot 20 code is finished, we'd pick up again inside let foo_applied_to_x = (...) + 100 in outer_snapshot foo_applied_to_x. That's not what we want. We don't want to pick up again there. We want instead to do this:
+Except that isn't quite right, yet --- in this fragment, after the outer_snapshot 20 code is finished, we'd pick up again inside let foo_applied_to_x = (...) + 100 in outer_snapshot foo_applied_to_x. That's not what we want. We don't want to pick up again there. We want instead to do this:

let x = 2
in let outer_snapshot = fun box ->
@@ -370,11 +370,11 @@ A similar kind of "snapshotting" lets coroutines keep track of where they left o

These snapshots are called **continuations** because they represent how the computation will "continue" once some target code (in our example, the code in the box) delivers up a value.

-You can think of them as functions that represent "how the rest of the computation proposes to continue." Except that, once we're able to get our hands on those functions, we can do exotic and unwholesome things with them. Like use them to suspend and resume a thread. Or to abort from deep inside a sub-computation: one function might pass the command to abort *it* to a subfunction, so that the subfunction has the power to jump directly to the outside caller. Or a function might *return* its continuation function to the outside caller, giving *the outside caller* the ability to "abort" the function (the function that has already returned its value---so what should happen then?) Or we may call the same continuation function *multiple times* (what should happen then?). All of these weird and wonderful possibilities await us.
+You can think of them as functions that represent "how the rest of the computation proposes to continue." Except that, once we're able to get our hands on those functions, we can do exotic and unwholesome things with them. Like use them to suspend and resume a thread. Or to abort from deep inside a sub-computation: one function might pass the command to abort *it* to a subfunction, so that the subfunction has the power to jump directly to the outside caller. Or a function might *return* its continuation function to the outside caller, giving *the outside caller* the ability to "abort" the function (the function that has already returned its value --- so what should happen then?) Or we may call the same continuation function *multiple times* (what should happen then?). All of these weird and wonderful possibilities await us.

-The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else outer_snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.
+The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else outer_snapshot 20) + 100 --- which is written as if it were to supply a value to the outside context that we snapshotted --- itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.

-In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]]. (The outer_snapshot corresponds to the "done" handler in those traversals; and the continue_foo_snapshot to the "keep_going" handler.)
+In fact you've already seen this several times this semester --- recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]]. (The outer_snapshot corresponds to the "done" handler in those traversals; and the continue_foo_snapshot to the "keep_going" handler.)

This inversion of control should also remind you of Montague's treatment of determiner phrases in ["The Proper Treatment of Quantification in Ordinary English"](http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631215417%5CPortner.pdf) (PTQ).

@@ -466,9 +466,9 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
-->

-Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation---that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper).
+Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation --- that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper).

Mutation, delimited continuations, and monads can also be defined in terms of each other in various ways. We find these connections fascinating but the seminar won't be able to explore them very far.

-We recommend reading [the Yet Another Haskell Tutorial on Continuation Passing Style](http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics#Continuation_Passing_Style)---though the target language is Haskell, this discussion is especially close to material we're discussing in the seminar.
+We recommend reading [the Yet Another Haskell Tutorial on Continuation Passing Style](http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics#Continuation_Passing_Style) --- though the target language is Haskell, this discussion is especially close to material we're discussing in the seminar.


various tweaks
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 366e3c1..ea5dce9 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -54,17 +54,19 @@ For our discussion, though, we'll just be looking at the full-strength continuat

The next issue is whether the continuations are _delimited_ or not. In [[our discussion of aborts|week13_coroutines_exceptions_and_aborts#index3h2]], we had a box, and what abort did was skip the rest of the code inside the box and resume execution at the outside border of the box. This is the pattern of a **delimited continuation**, with the box being the delimiter. There are a bunch of different operators that have been proposed for dealing with delimited continuations. Many of them are interdefinable (though the interdefinitions are sometimes complex). We won't be trying to survey them all. The ones we'll suggest as a paradigm are the pair of reset and shift. The first of these marks where the box goes, and the second has two roles: (i) it marks where you should start skipping (if you're going to "skip the rest of the code inside the box"), and (ii) it specifies a variable k that we bind to the continuation representing that skipped code. Thus we have:

-    initial outside code
-    +---reset--------------------+
-    | initial inside code        |
-    | shift k ( ... )            |
-    | remaining inside code      |
-    +----------------------------+
-    remaining outside code
+<pre>
+initial outside code
++---reset--------------------+
+| initial inside code        |
+| shift k. ( ... )           |
+| <i>remaining inside code</i>      |
++----------------------------+
+<i>remaining outside code</i>
+</pre>

-Really in the implementation of this there are _two_ continuations or snapshots being tracked. There's the potentially skipped code, represented by remaining inside code above; and there's also the continuation/snapshot that we resume with if we do skip that code, represented by remaining outside code. But only the first of these gets bound to a variable, k in the above diagram. What happens in this diagram is that initial outside code runs, then initial inside code runs, then remaining inside code is distilled into a function and bound to the variable k, then we run the ( ... ) code with k so bound. If that ( ... ) code invokes k by applying it to an argument, then remaining inside code is run as though the supplied argument were what the shift k ( ... ) bit evaluated to. If the ( ... ) code doesn't invoke k, but just ends with a normal result like 10, then the remaining inside code is skipped and we resume execution with the outside, implicitly snapshotted code remaining outside code.
+Really in the implementation of this there are _two_ continuations or snapshots being tracked. There's the potentially skipped code, represented by remaining inside code above; and there's also the continuation/snapshot that we resume with if we do skip that code, represented by remaining outside code. But only the first of these gets bound to a variable, k in the above diagram. What happens in this diagram is that initial outside code runs, then initial inside code runs, then remaining inside code is distilled into a function and bound to the variable k, then we run the ( ... ) code with k so bound. If that ( ... ) code invokes k by applying it to an argument, then remaining inside code is run as though the supplied argument were what the shift k. ( ... ) bit evaluated to. If the ( ... ) code doesn't invoke k, but just ends with a normal result like 10, then the remaining inside code is skipped and we resume execution with the outside, implicitly snapshotted code remaining outside code.

-You may encounter references to prompt and control. These are variants of reset and shift that differ in only subtle ways. As we said, there are lots of variants of these that we're not going to try to survey.
+You may encounter references to prompt and control. These are variants of reset and shift that differ in only subtle ways. As we said, there are [lots of variants of these](http://docs.racket-lang.org/reference/cont.html?q=abort#%28mod-path._racket%2Fcontrol%29) that we're not going to try to survey.

We talked before about abort. This can be expressed in terms of reset and shift. At the end of our discussion of abort, we said that this diagram:

@@ -101,7 +103,7 @@ or:
100)))])
(+ (foo 2) 1000))

-That shows you how abort can be expressed in terms of shift. Rewriting the Scheme code into a more OCaml-ish syntax, it might look something like this:
+That shows you how abort can be expressed in terms of shift. (Notice that with abort, there's a special keyword used in the aborting branch but no keyword in the "continue normally" branch; but with shift it's the converse.) Rewriting the Scheme code into a more OCaml-ish syntax, it might look something like this:

let foo x = reset (shift k -> if x = 1 then k 10 else 20) + 100) in
foo 2 + 1000
@@ -119,8 +121,13 @@ However, OCaml doesn't have any continuation operators in its standard deploymen
# let shift fun_k = match !reset_label with
| None -> failwith "shift must be inside reset"
| Some p -> Delimcc.shift p fun_k;;
+    # let abort x = match !reset_label with
+      | None -> failwith "abort must be inside reset"
+      | Some p -> Delimcc.abort p x;;

-Also, the previous code has to be massaged a bit to have the right syntax. What you really need to write is:
+(I've added that to my ~/.ocamlinit file so that it runs every time I start OCaml up. But note that the above code only works when the result types of your reset blocks are always the same throughout your whole OCaml session. For the toy examples we're working with, these result types are always int, so it's OK. But for more varied usage scenarios, you'd have to do something syntactically more complex.)
+
+Additionally, the previous code has to be massaged a bit to have the right syntax. What you really need to write is:

let foo x = reset (fun () -> shift (fun k -> if x = 1 then k 10 else 20) + 100) in
foo 2 + 1000
@@ -133,7 +140,7 @@ That was all *delimited* continuation operators. There's also the **undelimited

(call/cc (lambda (k) ...))

-(let/cc k ...) is a lot like (shift k ...) (or in the OCaml version, shift (fun k -> ...)), except that it doesn't need a surrounding reset ( ... ) (in OCaml, reset (fun () -> ...)). For the undelimited continuation operator, the box is understood to be *the whole rest of the top-level computation*. If you're running a file, that's all the rest of the file that would have been executed after the syntactic hole filled by (let/cc k ...). With (shift k ...), the code that gets bound to k doesn't get executed unless you specifically invoke k; but let/cc works differently in this respect. Thus:
+(let/cc k ...) is a lot like (shift k ...) (or in the OCaml version, shift (fun k -> ...)), except that it doesn't need a surrounding (reset ... ) (in OCaml, reset (fun () -> ...)). For the undelimited continuation operator, the box is understood to be *the whole rest of the top-level computation*. If you're running a file, that's all the rest of the file that would have been executed after the syntactic hole filled by (let/cc k ...). With (shift k ...), the code that gets bound to k doesn't get executed unless you specifically invoke k; but let/cc works differently in this respect. Thus:

(+ 100 (let/cc k 1))

@@ -141,9 +148,9 @@ returns 101, whereas:

(reset (+ 100 (shift k 1)))

-only returns 1. It is possible to duplicate the behavior of let/cc using reset/shift, but you have to structure your code in certain ways to do it. In order to duplicate the behavior of reset/shift using let/cc, you need to also make use of a mutable reference cell. So in that sense delimited continuations are more powerful and undelimited continuations are sort-of a special case.
+only returns 1. It is possible to duplicate the behavior of let/cc using reset/shift, but you have to structure your code in certain ways to do it.

-(In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)
+You can't duplicate the behavior of reset/shift using *only* let/cc, but you can do it if you *also* make use of a mutable reference cell. So in a way delimited continuations are more powerful, and undelimited continuations are sort-of a special case. (In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, and when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)

You may have noticed in some of our Scheme code we had the preface (require racket/control). You don't need to do anything special (in Racket) to use call/cc or let/cc, but you do need that preface to be able to use reset and shift and abort.

@@ -151,14 +158,15 @@ You may have noticed in some of our Scheme code we had the preface (require rac

Here are some examples of using these different continuation operators. The continuation that gets bound to k will be in bold. I'll use an OCaml-ish syntax because that's easiest to read, but these examples don't work as-is in OCaml. The reset/shift examples need to be massaged into the form displayed above for OCaml; and the let/cc examples don't work in OCaml because that's not provided. Alternatively, you could massage all of these into Scheme syntax. You shouldn't find that hard.

-1.  <pre><b>100 + </b>let/cc k (10 + 1)</pre>
-    This evaluates to 111. Nothing exotic happens here.
+1.  <pre><b>100 + </b>let/cc k. (10 + 1)</pre>
+    This evaluates to 111. Nothing exotic happens here. As mentioned above, let/cc automatically feeds any normal result from its body to its surrounding continuation. You'd get the same result if you invoked the continuation explicitly, as in:
+    <pre><b>100 + </b>let/cc k. (k (10 + 1))</pre>

-2.  <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    k is again bound to 100 + < >. Note that after invoking k 1, the rest of the body of let/cc k ( ... ) is discarded, so the result is simply 101. See example 11, below, for contrast with shift k ( ... ).
+2.  <pre><b>100 + </b>let/cc k. (10 + k 1)</pre>
+    k is again bound to 100 + < >. Note that after invoking k 1, the rest of the body of let/cc k. ( ... ) is discarded, so the result is simply 101. See example 11, below, for contrast with shift k. ( ... ).

3.  You aren't restricted to calling a full-strength continuation function only once; nor are you restricted to calling it only inside the let/cc block. For example:
-    <pre><b>let p = </b>let/cc k (1,k) <b>in
+    <pre><b>let p = </b>let/cc k. (1,k) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>
In the first line, we bind the continuation function (the bold code) to k and then bind the variable p to the pair of 1 and that function.
@@ -170,32 +178,32 @@ Here are some examples of using these different continuation operators. The cont
Notice how the first time through, when p's second element is a continuation, applying it to an argument is a bit like time-travel? The metaphysically impossible kind of time-travel, where you can change what happened. The second time through, p gets bound to a different pair, whose second element is the ordinary ident function.

4.  <pre><b>1000 + (100 + </b>abort 11<b>)</b></pre>
-    Here the box is implicit, understood to be the rest of the code. The result is just the abort value 11, because the bold code is skipped.
+    Here the box is implicit, understood to be the rest of the code. The result is just the abort value 11, because the bold code is skipped. (This will work in Scheme but not in OCaml.)

5.  <pre>1000 + reset <b>(100 + </b>abort 11<b>)</b></pre>
Here the box or delimiter is explicitly specified. The bold code is skipped, but the outside code 1000 + < > is still executed, so we get 1011.

-6.  <pre>1000 + reset <b>(100 + </b>shift k (10 + 1)<b>)</b></pre>
-    Equivalent to preceding. We bind the bold code to k but then never apply k, so the value 10 + 1 is supplied directly to the outside code 1000 + < >, resulting in 1011.
+6.  <pre>1000 + reset <b>(100 + </b>shift k. (10 + 1)<b>)</b></pre>
+    Equivalent to preceding. We bind the bold code to k but then never apply k, so the value 10 + 1 is supplied directly to the outside code 1000 + < >, resulting in 1011. (Contrast example 1.)

-7.  <pre>1000 + reset <b>(100 + </b>shift k (k (10 + 1))<b>)</b></pre>
+7.  <pre>1000 + reset <b>(100 + </b>shift k. (k (10 + 1))<b>)</b></pre>
Here we do invoke the captured continuation, so what gets passed to the outside code 1000 + < > is k (10 + 1), that is, (100 + (10 + 1)). Result is 1111.
-    In general, if the last thing that happens inside a shift k ( ... ) body is that k is applied to an argument, then we do continue running the bold code between shift k ( ... ) and the edge of the reset box.
+    In general, if the last thing that happens inside a shift k. ( ... ) body is that k is applied to an argument, then we do continue running the bold code between shift k. ( ... ) and the edge of the reset box.

-8.  <pre>1000 + reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
-    This also results in 1111, but via a different path than the preceding. First, note that k is bound to 100 + < >. So k 1 is 101. Then 10 + k 1 is 10 + 101. Then we exit the body of shift k ( ... ), without invoking k again, so we don't add 100 any more times. Thus we pass 10 + 101 to the outside code 1000 + < >. So the result is 1000 + (10 + 101) or 1111. (Whereas in the preceding example, the result was 1000 + (100 + 11). The order in which the operations are performed is different. If we used a non-commutative operation instead of +, the results of these two examples would be different from each other.)
+8.  <pre>1000 + reset <b>(100 + </b>shift k. (10 + k 1)<b>)</b></pre>
+    This also results in 1111, but via a different path than the preceding. First, note that k is bound to 100 + < >. So k 1 is 101. Then 10 + k 1 is 10 + 101. Then we exit the body of shift k. ( ... ), without invoking k again, so we don't add 100 any more times. Thus we pass 10 + 101 to the outside code 1000 + < >. So the result is 1000 + (10 + 101) or 1111. (Whereas in the preceding example, the result was 1000 + (100 + 11). The order in which 1 is added to 10 and 100 is different. If we used a non-commutative operation instead of +, the results of these two examples would be different from each other.)

-9.  <pre>1000 + reset <b>(100 + </b>shift k (k)<b>)</b> 1</pre>
-    Here k is bound to 100 + < >. That function k is what's returned by the shift k ( ... ) block, and since k isn't invoked (applied) when doing so, the rest of the bold reset block is skipped (for now). So we resume the outside code 1000 + < > 1, with what fills the gap < > being the function that was bound to k. Thus this is equivalent to 1000 + (fun x -> 100 + x) 1 or 1000 + 101 or 1101.
+9.  <pre>1000 + reset <b>(100 + </b>shift k. (k)<b>)</b> 1</pre>
+    Here k is bound to 100 + < >. That function k is what's returned by the shift k. ( ... ) block, and since k isn't invoked (applied) when doing so, the rest of the bold reset block is skipped (for now). So we resume the outside code 1000 + < > 1, with what fills the gap < > being the function that was bound to k. Thus this is equivalent to 1000 + (fun x -> 100 + x) 1 or 1000 + 101 or 1101.

-10. <pre>1000 + reset <b>(100 + </b>shift k (k (k 1))<b>)</b></pre>
+10. <pre>1000 + reset <b>(100 + </b>shift k. (k (k 1))<b>)</b></pre>
Here k is bound to 100 + < >. Thus k 1 is 101. Now there are two ways to think about what happens next. (Both are valid.) One way to think is that since the shift block ends with an additional outermost application of k, then as described in example 7 above, we continue through the bold code with the value k 1 or 101. Thus we get 100 + 101, and then we continue with the outermost code 1000 + < >, getting 1000 + (100 + 101), or 1201. The other way to think is that since k is 100 + < >, and k 1 is 101, then k (k 1) is 201. Now we leave the shift block *without* executing the bold code a third time (we've already taken account of the two applications of k), resuming with the outside code 1000 + < >, thereby getting 1000 + 201 as before.

-11. Here's a comparison of let/cc to shift. Recall example 2 above was:
-    <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    which evaluated to 101. The parallel code where we instead capture the continuation using shift k ( ... ) would look like this:
-    <pre>reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
-    But this evaluates differently. In the let/cc example, k is bound to the rest of the computation *including its termination*, so after executing k 1 we never come back and finish with 10 + < >. A let/cc-bound k never returns to the context where it was invoked. Whereas the shift-bound k only includes up to the edge of the reset box --- here, the rest of the computation, but *not including* its termination. So after k 1, if there is still code inside the body of shift, as there is here, we continue executing it. Thus the shift code evaluates to 111 not to 101.
+11. Here's another comparison of let/cc to shift. Recall example 2 above was:
+    <pre><b>100 + </b>let/cc k. (10 + k 1)</pre>
+    which evaluated to 101. The parallel code where we instead capture the continuation using shift k. ( ... ) would look like this:
+    <pre>reset <b>(100 + </b>shift k. (10 + k 1)<b>)</b></pre>
+    But this evaluates differently, as we saw in example 8. In the let/cc example, k is bound to the rest of the computation *including its termination*, so after executing k 1 we never come back and finish with 10 + < >. A let/cc-bound k never returns to the context where it was invoked. Whereas the shift-bound k only includes up to the edge of the reset box --- here, the rest of the computation, but *not including* its termination. So after k 1, if there is still code inside the body of shift, as there is here, we continue executing it. Thus the shift code evaluates to 111 not to 101.

Thus code using let/cc can't be *straightforwardly* translated into code using shift. It can be translated, but the algorithm will be more complex.

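(The shift/reset arithmetic in examples 7 and 8 above can be checked in plain OCaml, without Oleg's Delimcc library, by writing the captured continuation out explicitly. This is a sketch, not the library's API: in each case k stands for the bold code between shift and the edge of the reset box, i.e. fun v -> 100 + v.)

```ocaml
(* Hand-CPS'd versions of examples 7 and 8; plain OCaml, no Delimcc.
   The captured continuation k is the code from the shift site out to
   the edge of the reset box: fun v -> 100 + v. *)

(* example 7: 1000 + reset (100 + shift k. (k (10 + 1))) *)
let example7 =
  let k v = 100 + v in
  1000 + k (10 + 1)

(* example 8: 1000 + reset (100 + shift k. (10 + k 1)) *)
let example8 =
  let k v = 100 + v in
  1000 + (10 + k 1)

let () = Printf.printf "%d %d\n" example7 example8
```

Both print 1111, but as example 8's discussion notes, the additions happen in different orders: example 7 computes 1000 + (100 + 11), example 8 computes 1000 + (10 + 101).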

wording
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index c8caa0d..df9c134 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -388,7 +388,7 @@ This inversion of who is the argument and who is the function receiving the argu

Continuations come in many varieties. There are **undelimited continuations**, expressed in Scheme via (call/cc (lambda (k) ...)) or the shorthand (let/cc k ...). (call/cc is itself shorthand for call-with-current-continuation.) These capture "the entire rest of the computation." There are also **delimited continuations**, expressed in Scheme via (reset ... (shift k ...) ...) or (prompt ... (control k ...) ...) or any of several other operations. There are subtle differences between those that we won't be exploring in the seminar. Ken Shan has done terrific work exploring the relations of these operations to each other.

-When working with continuations, it's easiest in the first place to write them out explicitly, the way that we explicitly wrote out the "snapshot" continuations when we transformed this:
+When working with continuations, it's easiest in the beginning to write them out explicitly, the way that we explicitly wrote out the "snapshot" continuations when we transformed this:

let foo x =
+---try begin----------------+


formatting re Montague
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index f1d5925..c8caa0d 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -378,11 +378,11 @@ In fact you've already seen this several times this semester---recall how in our

This inversion of control should also remind you of Montague's treatment of determiner phrases in ["The Proper Treatment of Quantification in Ordinary English"](http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631215417%5CPortner.pdf) (PTQ).

-A naive semantics for atomic sentences will say the subject term is of type e, and the predicate of type e -> t, and that the subject provides an argument to the function expressed by the predicate.
+> A naive semantics for atomic sentences will say the subject term is of type e, and the predicate of type e -> t, and that the subject provides an argument to the function expressed by the predicate.

-Monatague proposed we instead take the subject term to be of type (e -> t) -> t, and that now it'd be the predicate (still of type e -> t) that provides an argument to the function expressed by the subject.
+> Montague proposed we instead take the subject term to be of type (e -> t) -> t, and that now it'd be the predicate (still of type e -> t) that provides an argument to the function expressed by the subject.

-If all the subject did then was supply an e to the e -> t it receives as an argument, we wouldn't have gained anything we weren't already able to do. But of course, there are other things the subject can do with the e -> t it receives as an argument. For instance, it can check whether anything in the domain satisfies that e -> t; or whether most things do; and so on.
+> If all the subject did then was supply an e to the e -> t it receives as an argument, we wouldn't have gained anything we weren't already able to do. But of course, there are other things the subject can do with the e -> t it receives as an argument. For instance, it can check whether anything in the domain satisfies that e -> t; or whether most things do; and so on.

This inversion of who is the argument and who is the function receiving the argument is paradigmatic of working with continuations.

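(The type-lifting described in the quoted passage can be sketched in a few lines of OCaml, using bool for Montague's type t. The toy domain and the names lift, someone, everyone, sleeps are illustrative assumptions, not anything from PTQ itself.)

```ocaml
(* Sketch of Montague's lift: the subject becomes type (e -> t) -> t
   and receives the predicate as its argument. Toy domain is assumed. *)
type e = Ann | Bill | Carol
let domain = [Ann; Bill; Carol]

(* an ordinary individual, lifted: all it does is feed itself to the predicate *)
let lift (x : e) : (e -> bool) -> bool = fun pred -> pred x

(* quantificational subjects do more with the e -> bool they receive *)
let someone  : (e -> bool) -> bool = fun pred -> List.exists pred domain
let everyone : (e -> bool) -> bool = fun pred -> List.for_all pred domain

let sleeps = function Ann -> true | _ -> false

let () =
  Printf.printf "%b %b %b\n" (lift Ann sleeps) (someone sleeps) (everyone sleeps)
```

Here lift Ann sleeps just recovers sleeps Ann, as the passage says; someone and everyone genuinely exploit having the predicate as their argument.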

comment about abortable traversals
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index a820bde..f1d5925 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -374,7 +374,7 @@ You can think of them as functions that represent "how the rest of the computati

The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else outer_snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.

-In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]].
+In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]]. (The outer_snapshot corresponds to the "done" handler in those traversals; and the continue_foo_snapshot to the "keep_going" handler.)

This inversion of control should also remind you of Montague's treatment of determiner phrases in ["The Proper Treatment of Quantification in Ordinary English"](http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631215417%5CPortner.pdf) (PTQ).


continue_foo_normally -> continue_foo_snapshot
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index de7e8c0..a820bde 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -345,11 +345,11 @@ We can get that by some further rearranging of the code:
in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
-    in let continue_foo_normally = fun from_value ->
+    in let continue_foo_snapshot = fun from_value ->
let value = from_value + 100
in outer_snapshot value
in (* start of foo_applied_to_x *)
-        if x = 1 then continue_foo_normally 10
+        if x = 1 then continue_foo_snapshot 10
else outer_snapshot 20;;

And this is indeed what is happening, at a fundamental level, when you use an expression like abort 20. Here is the original code for comparison:
@@ -404,11 +404,11 @@ into this:
in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
-    in let continue_foo_normally = fun from_value ->
+    in let continue_foo_snapshot = fun from_value ->
let value = from_value + 100
in outer_snapshot value
in (* start of foo_applied_to_x *)
-        if x = 1 then continue_foo_normally 10
+        if x = 1 then continue_foo_snapshot 10
else outer_snapshot 20;;

Code written in the latter form is said to be written in **explicit continuation-passing style** or CPS. Later we'll talk about algorithms that mechanically convert an entire program into CPS.
@@ -442,10 +442,10 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
-      in let continue_foo_normally = fun from_value ->
+      in let continue_foo_snapshot = fun from_value ->
let value = from_value + 100
in outer_snapshot value
-      in if x = 1 then continue_foo_normally 10
+      in if x = 1 then continue_foo_snapshot 10
else outer_snapshot 20;;

# let test_shift x =


snapshot -> outer_snapshot
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index a0b8d0d..de7e8c0 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -304,53 +304,53 @@ or, spelling out the gap < > as a bound variable:
That function is our "snapshot". Normally what happens is that code *inside* the box delivers up a value, and that value gets supplied as an argument to the snapshot-function just described. That is, our code is essentially working like this:

let x = 2
-    in let snapshot = fun box ->
+    in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let foo_applied_to_x =
(if x = 1 then 10
else ... (* we'll come back to this part *)
) + 100
-    in shapshot foo_applied_to_x;;
+    in outer_snapshot foo_applied_to_x;;

But now how should the abort 20 part, that we elided here, work? What should happen when we try to evaluate that?

-Well, that's when we use the snapshot code in an unusual way. If we encounter an abort 20, we should abandon the code we're currently executing, and instead just supply 20 to the snapshot we saved when we entered the box. That is, something like this:
+Well, that's when we use the outer_snapshot code in an unusual way. If we encounter an abort 20, we should abandon the code we're currently executing, and instead just supply 20 to the snapshot we saved when we entered the box. That is, something like this:

let x = 2
-    in let snapshot = fun box ->
+    in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let foo_applied_to_x =
(if x = 1 then 10
-        else snapshot 20
+        else outer_snapshot 20
) + 100
-    in shapshot foo_applied_to_x;;
+    in outer_snapshot foo_applied_to_x;;

-Except that isn't quite right, yet---in this fragment, after the snapshot 20 code is finished, we'd pick up again inside let foo_applied_to_x = (...) + 100 in snapshot foo_applied_to_x. That's not what we want. We don't want to pick up again there. We want instead to do this:
+Except that isn't quite right, yet---in this fragment, after the outer_snapshot 20 code is finished, we'd pick up again inside let foo_applied_to_x = (...) + 100 in outer_snapshot foo_applied_to_x. That's not what we want. We don't want to pick up again there. We want instead to do this:

let x = 2
-    in let snapshot = fun box ->
+    in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let foo_applied_to_x =
(if x = 1 then 10
-        else snapshot 20 THEN STOP
+        else outer_snapshot 20 THEN STOP
) + 100
-    in shapshot foo_applied_to_x;;
+    in outer_snapshot foo_applied_to_x;;

We can get that by some further rearranging of the code:

let x = 2
-    in let snapshot = fun box ->
+    in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let continue_foo_normally = fun from_value ->
let value = from_value + 100
-        in snapshot value
+        in outer_snapshot value
in (* start of foo_applied_to_x *)
if x = 1 then continue_foo_normally 10
-        else snapshot 20;;
+        else outer_snapshot 20;;

And this is indeed what is happening, at a fundamental level, when you use an expression like abort 20. Here is the original code for comparison:

@@ -372,7 +372,7 @@ These snapshots are called **continuations** because they represent how the comp

You can think of them as functions that represent "how the rest of the computation proposes to continue." Except that, once we're able to get our hands on those functions, we can do exotic and unwholesome things with them. Like use them to suspend and resume a thread. Or to abort from deep inside a sub-computation: one function might pass the command to abort *it* to a subfunction, so that the subfunction has the power to jump directly to the outside caller. Or a function might *return* its continuation function to the outside caller, giving *the outside caller* the ability to "abort" the function (the function that has already returned its value---so what should happen then?) Or we may call the same continuation function *multiple times* (what should happen then?). All of these weird and wonderful possibilities await us.

-The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.
+The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else outer_snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.

In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]].

@@ -388,7 +388,7 @@ This inversion of who is the argument and who is the function receiving the argu

Continuations come in many varieties. There are **undelimited continuations**, expressed in Scheme via (call/cc (lambda (k) ...)) or the shorthand (let/cc k ...). (call/cc is itself shorthand for call-with-current-continuation.) These capture "the entire rest of the computation." There are also **delimited continuations**, expressed in Scheme via (reset ... (shift k ...) ...) or (prompt ... (control k ...) ...) or any of several other operations. There are subtle differences between those that we won't be exploring in the seminar. Ken Shan has done terrific work exploring the relations of these operations to each other.

-When working with continuations, it's easiest in the first place to write them out explicitly, the way that we explicitly wrote out the snapshot continuation when we transformed this:
+When working with continuations, it's easiest in the first place to write them out explicitly, the way that we explicitly wrote out the "snapshot" continuations when we transformed this:

let foo x =
+---try begin----------------+
@@ -401,15 +401,15 @@ When working with continuations, it's easiest in the first place to write them o
into this:

let x = 2
-    in let snapshot = fun box ->
+    in let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let continue_foo_normally = fun from_value ->
let value = from_value + 100
-        in snapshot value
+        in outer_snapshot value
in (* start of foo_applied_to_x *)
if x = 1 then continue_foo_normally 10
-        else snapshot 20;;
+        else outer_snapshot 20;;

Code written in the latter form is said to be written in **explicit continuation-passing style** or CPS. Later we'll talk about algorithms that mechanically convert an entire program into CPS.

@@ -439,14 +439,14 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
# open Delimcc;;
# let reset body = let p = new_prompt () in push_prompt p (body p);;
# let test_cps x =
-      let snapshot = fun box ->
+      let outer_snapshot = fun box ->
let foo_result = box
in (foo_result) + 1000
in let continue_foo_normally = fun from_value ->
let value = from_value + 100
-          in snapshot value
+          in outer_snapshot value
in if x = 1 then continue_foo_normally 10
-      else snapshot 20;;
+      else outer_snapshot 20;;

# let test_shift x =
let foo x = reset(fun p () ->


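(Unlike the reset/shift version, the explicit-CPS test_cps in the hunk above needs no Delimcc at all; here is a self-contained runnable sketch of it, with a driver added for illustration.)

```ocaml
(* Runnable version of the explicit-CPS code from the diff above.
   outer_snapshot is the context outside the "box" (< > + 1000);
   continue_foo_snapshot is the normal rest of foo (< > + 100). *)
let test_cps x =
  let outer_snapshot box =
    let foo_result = box in
    foo_result + 1000 in
  let continue_foo_snapshot from_value =
    let value = from_value + 100 in
    outer_snapshot value in
  if x = 1 then continue_foo_snapshot 10  (* normal path: (10 + 100) + 1000 *)
  else outer_snapshot 20                  (* "abort": skip the + 100 step *)

let () = Printf.printf "%d %d\n" (test_cps 1) (test_cps 2)
```

test_cps 1 gives 1110 and test_cps 2 gives 1020, matching the behavior of the try/abort version it desugars.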
tweaks
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index a9d8bc2..a0b8d0d 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -212,17 +212,26 @@ Here we call foo bar 0, and foo in turn calls bar 0, and bar raises the

OK, now this exception-handling apparatus does exemplify the second execution pattern we want to focus on. But it may bring it into clearer focus if we **simplify the pattern** even more. Imagine we could write code like this instead:

-    # let foo x =
-        try begin
-            (if x = 1 then 10
-            else abort 20
-            ) + 100
-        end
-        ;;
+    let foo x =
+      try begin
+          (if x = 1 then 10
+          else abort 20
+          ) + 100
+      end

then if we called foo 1, we'd get the result 110. If we called foo 2, on the other hand, we'd get 20 (note, not 120). This exemplifies the same interesting "jump out of this part of the code" behavior that the try ... raise ... with ... code does, but without the details of matching which exception was raised, and handling the exception to produce a new result.

-Many programming languages have this simplified exceution pattern, either instead of or alongside a try ... with ...-like pattern. In Lua and many other languages, abort is instead called return. In Lua, the preceding example would be written:
+> If we had to write that using try...with..., it'd look something like this:
+
+>     exception Abort of int;; (* declare a new type of exception that can carry an int parameter *)
+>     let foo x =
+>       try
+>         (if x = 1 then 10
+>         else raise (Abort 20)
+>         ) + 100
+>       with Abort n -> n
+
+Many programming languages have this simplified execution pattern, either instead of or alongside a try ... with ...-like pattern. In Lua and many other languages, abort is instead called return. In Lua, the preceding example would be written:

> function foo(x)
local value

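(The try...with encoding of abort sketched in the blockquoted hunk above is ordinary OCaml and can be run as-is; here it is as a self-contained sketch.)

```ocaml
(* abort encoded with a declared exception, as in the hunk above *)
exception Abort of int  (* exception carrying an int payload *)

let foo x =
  try
    (if x = 1 then 10
     else raise (Abort 20)  (* jump out; the + 100 below is skipped *)
    ) + 100
  with Abort n -> n

let () = Printf.printf "%d %d\n" (foo 1) (foo 2)
```

foo 1 evaluates to 110 and foo 2 to 20 (not 120), as the surrounding text says.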

Chris' improvement to tc with #
diff --git a/code/refunctionalizing_zippers.rkt b/code/refunctionalizing_zippers.rkt
index cac0916..477daea 100644
--- a/code/refunctionalizing_zippers.rkt
+++ b/code/refunctionalizing_zippers.rkt
@@ -69,7 +69,8 @@
(define (tc3 l k)
(cond
[(null? l) (k '())]
-    [(eqv? #\# (car l)) (append (k '()) (tc3 (cdr l) identity))]
+    ; [(eqv? #\# (car l)) (append (k '()) (tc3 (cdr l) identity))]
+    [(eqv? #\# (car l)) (k (tc3 (cdr l) identity))]
[(eqv? #\S (car l)) (tc3 (cdr l) (compose k k))]
[else (tc3 (cdr l) (lambda (tail) (k (cons (car l) tail))))]))

@@ -78,7 +79,8 @@
(shift k
(cond
[(null? l) (identity (k '()))]
-      [(eqv? #\# (car l)) (append (k '()) (reset (tr3 (cdr l))))]
+      ; [(eqv? #\# (car l)) (append (k '()) (reset (tr3 (cdr l))))]
+      [(eqv? #\# (car l)) (k (reset (tr3 (cdr l))))]
[(eqv? #\S (car l)) ((compose k k) (tr3 (cdr l)))]
[else ((lambda (tail) (k (cons (car l) tail))) (tr3 (cdr l)))])))


tweaks
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 6f7a0eb..366e3c1 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -145,6 +145,8 @@ only returns 1. It is possible to duplicate the behavior of let/cc using re

(In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)

+You may have noticed in some of our Scheme code we had the preface (require racket/control). You don't need to do anything special (in Racket) to use call/cc or let/cc, but you do need that preface to be able to use reset and shift and abort.
+
## Examples of using these continuation operators ##

Here are some examples of using these different continuation operators. The continuation that gets bound to k will be in bold. I'll use an OCaml-ish syntax because that's easiest to read, but these examples don't work as-is in OCaml. The reset/shift examples need to be massaged into the form displayed above for OCaml; and the let/cc examples don't work in OCaml because that's not provided. Alternatively, you could massage all of these into Scheme syntax. You shouldn't find that hard.
@@ -153,13 +155,14 @@ Here are some examples of using these different continuation operators. The cont
This evaluates to 111. Nothing exotic happens here.

2.  <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    This evaluates to 101. See also example 11, below.
+    k is again bound to 100 + < >. Note that after invoking k 1, the rest of the body of let/cc k ( ... ) is discarded, so the result is simply 101. See example 11, below, for contrast with shift k ( ... ).

-3.  <pre><b>let p = </b>let/cc k (1,k) <b>in
+3.  You aren't restricted to calling a full-strength continuation function only once; nor are you restricted to calling it only inside the let/cc block. For example:
+    <pre><b>let p = </b>let/cc k (1,k) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>
In the first line, we bind the continuation function (the bold code) to k and then bind the variable p to the pair of 1 and that function.
-    In the second line, we extract the continuation function from the pair p and apply it to the argument (2, ident). That results in the following code being run:
+    In the second line, we extract the continuation function from the pair p and apply it to the argument (2, ident). That results in us discarding the rest of *that* computation and instead executing the following:
<pre><b>let p = </b>(2, ident) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>


tweak
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 95defac..6f7a0eb 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -139,7 +139,7 @@ That was all *delimited* continuation operators. There's also the **undelimited

returns 101, whereas:

-    (reset (+ 10 (shift k 1)))
+    (reset (+ 100 (shift k 1)))

only returns 1. It is possible to duplicate the behavior of let/cc using reset/shift, but you have to structure your code in certain ways to do it. In order to duplicate the behavior of reset/shift using let/cc, you need to also make use of a mutable reference cell. So in that sense delimited continuations are more powerful and undelimited continuations are sort-of a special case.
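The contrast this commit is tweaking can be run directly in Racket (a sketch; it needs the (require racket/control) preface for reset and shift, though let/cc works without it):

```scheme
#lang racket
(require racket/control) ; provides reset and shift

; let/cc: invoking k abandons the rest of the let/cc body and jumps back
; to the captured (undelimited) context, so this yields 101
(+ 100 (let/cc k (+ 10 (k 1))))

; shift: if k is never invoked, the whole reset block simply returns the
; value of the shift body, so this yields 1
(reset (+ 100 (shift k 1)))

; invoking k resumes the delimited context 100 + < >, so this yields 101
(reset (+ 100 (shift k (k 1))))
```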


diff --git a/topics/cps_hint_1.mdwn b/topics/cps_hint_1.mdwn
new file mode 100644
index 0000000..985ba18
--- /dev/null
+++ b/topics/cps_hint_1.mdwn
@@ -0,0 +1,27 @@
+This function is developed in *The Seasoned Schemer* pp. 55-60. It accepts an atom a and a list of atoms lst, and returns the part of lst following the last occurrence of a. If a is not in lst, it returns lst unaltered.
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+	(define alpha
+	  (lambda (a lst)
+	    (let/cc k ; calling k with val will immediately return val from the call to alpha
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(eq? (car l) a)
+	                         ; we abandon any waiting recursive (aux ...) calls, and instead immediately return (aux (cdr l))
+	                         ; ...since Scheme is call-by-value, (aux (cdr l)) will be evaluated first, and
+	                         ; any calls to k therein will come first (and the pending (k ...) here will be abandoned)
+	                         (k (aux (cdr l)))]
+	                        [else (cons (car l) (aux (cdr l)))]))])
+	        (aux lst)))))
+
+
+	(alpha 'a '(a b c a d e f)) ; ~~> '(d e f)
+	(alpha 'x '(a b c a d e f)) ; ~~> '(a b c a d e f)
+	(alpha 'f '(a b c a d e f)) ; ~~> '()
+	(alpha 'a '(a b c x d e f)) ; ~~> '(b c x d e f)
+
diff --git a/topics/cps_hint_2.mdwn b/topics/cps_hint_2.mdwn
new file mode 100644
index 0000000..d948a28
--- /dev/null
+++ b/topics/cps_hint_2.mdwn
@@ -0,0 +1,43 @@
+This function is developed in *The Seasoned Schemer* pp. 76-83. It accepts a list lst and returns the leftmost atom in it, even if that atom is embedded several levels deep. Any empty lists preceding the leftmost atom are ignored.
+
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+	(define beta
+	  (lambda (lst)
+	    (let/cc k ; calling k with val will immediately return val from the call to beta
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(atom? (car l)) (k (car l))]
+	                        [else (begin
+	                                ; each of the following lines will evaluate to '() iff no atom was found in the specified part of l
+	                                (aux (car l))
+	                                (aux (cdr l)))]))])
+	        (aux lst)))))
+
+	(beta '(((a b) ()) (c (d ()))))       ; ~~> 'a
+	(beta '((() (a b) ()) (c (d ()))))    ; ~~> 'a
+	(beta '(() (() (a b) ()) (c (d ())))) ; ~~> 'a
+	(beta '(() (() ())))                  ; no leftmost atom, returns '()
+
+This function could also be written like this:
+
+	(define leftmost
+	  (lambda (l)
+	    (cond
+	      [(null? l) '()]
+	      [(atom? (car l)) (car l)]
+	      [else (let ([found (leftmost (car l))])
+	              (cond
+	                ; here we check whether the recursive call found an atom in (car l)
+	                [(atom? found) found]
+	                ; if not, we search for an atom in (cdr l)
+	                [else (leftmost (cdr l))]))])))
+
+But in this version, when an atom is found, it is returned back the chain of recursive calls, one by one. The previous version, on the other hand, uses a captured continuation k to return the atom immediately upon finding it.
+
+
diff --git a/topics/cps_hint_3.mdwn b/topics/cps_hint_3.mdwn
new file mode 100644
index 0000000..fa8b373
--- /dev/null
+++ b/topics/cps_hint_3.mdwn
@@ -0,0 +1,40 @@
+This function is developed in *The Seasoned Schemer* pp. 84-89. It accepts an atom a and a list lst and returns lst with the leftmost occurrence of a, if any, removed. Occurrences of a will be found no matter how deeply embedded.
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+	(define gamma
+	  (lambda (a lst)
+	    (letrec ([aux (lambda (l k)
+	                    (cond
+	                      [(null? l) (k 'notfound)]
+	                      [(eq? (car l) a) (cdr l)]
+	                      [(atom? (car l)) (cons (car l) (aux (cdr l) k))]
+	                      [else
+	                       ; when (car l) exists but isn't an atom, we try to remove a from (car l)
+	                       ; if we succeed we prepend the result to (cdr l) and stop
+	                       (let ([car2 (let/cc k2
+	                                     ; calling k2 with val will bind car2 to val and continue with the (cond ...) block below
+	                                     (aux (car l) k2))])
+	                         (cond
+	                           ; if a wasn't found in (car l) then prepend (car l) to the result of removing a from (cdr l)
+	                           [(eq? car2 'notfound) (cons (car l) (aux (cdr l) k))]
+	                           ; else a was found in (car l)
+	                           [else (cons car2 (cdr l))]))]))]
+	             [lst2 (let/cc k1
+	                     ; calling k1 with val will bind lst2 to val and continue with the (cond ...) block below
+	                     (aux lst k1))])
+	      (cond
+	        ; was no atom found in lst?
+	        [(eq? lst2 'notfound) lst]
+	        [else lst2]))))
+
+	(gamma 'a '(((a b) ()) (c (d ()))))       ; ~~> '(((b) ()) (c (d ())))
+	(gamma 'a '((() (a b) ()) (c (d ()))))    ; ~~> '((() (b) ()) (c (d ())))
+	(gamma 'a '(() (() (a b) ()) (c (d ())))) ; ~~> '(() (() (b) ()) (c (d ())))
+	(gamma 'c '((() (a b) ()) (c (d ()))))    ; ~~> '((() (a b) ()) ((d ())))
+	(gamma 'c '(() (() (a b) ()) (c (d ())))) ; ~~> '(() (() (a b) ()) ((d ())))
+	(gamma 'x '((() (a b) ()) (c (d ()))))    ; ~~> '((() (a b) ()) (c (d ())))
+
diff --git a/topics/cps_hint_4.mdwn b/topics/cps_hint_4.mdwn
new file mode 100644
index 0000000..028504c
--- /dev/null
+++ b/topics/cps_hint_4.mdwn
@@ -0,0 +1,56 @@
+This function is developed in *The Seasoned Schemer* pp. 165-177. It accepts a list lst and returns #t or #f depending on whether any atom appears in lst twice in a row. The list is interpreted as though it were flattened: all embedded lists are collapsed into the topmost level, and empty list elements are ignored. However, no flattened copy of the list is ever constructed.
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+	(define delta
+	  (letrec ([yield (lambda (x) x)]
+	           [resume (lambda (x) x)]
+	           [walk (lambda (l)
+	                   (cond
+	                     ; this is the only case where walk terminates naturally
+	                     [(null? l) '()]
+	                     [(atom? (car l)) (begin
+	                                        (let/cc k2 (begin
+	                                          (set! resume k2) ; now calling resume with val will ignore val
+	                                                           ; and continue with the final line of (begin ... (walk (cdr l)))
+	                                          ; when the next line is executed, yield will be bound to k1 or k3
+	                                          (yield (car l))))
+	                                        ; the previous yield line will never return, but the following line will be executed when resume is called
+	                                        (walk (cdr l)))]
+	                     [else (begin
+	                             ; walk will only ever return when a '() is reached, and will in that case return a '()
+	                             (walk (car l))
+	                             (walk (cdr l)))]))]
+	           [next (lambda () ; next is a thunk
+	                   (let/cc k3 (begin
+	                     (set! yield k3) ; now calling yield with val will return val from the call to next
+	                     ; when the next line is executed, resume will be bound to k2
+	                     (resume 'blah))))]
+	           [check (lambda (prev)
+	                    (let ([n (next)])
+	                      (cond
+	                        [(eq? n prev) #t]
+	                        [(atom? n) (check n)]
+	                        ; n will fail to be an atom iff we've walked to the end of the list, and (resume 'blah) returned naturally
+	                        [else #f])))])
+	    (lambda (lst)
+	      (let ([fst (let/cc k1 (begin
+	                   (set! yield k1) ; now calling yield with val will bind fst to val and continue with the (cond ...) block below
+	                   (walk lst)
+	                   ; the next line will be executed when we've walked to the end of lst
+	                   (yield '())))])
+	        (cond
+	          [(atom? fst) (check fst)]
+	          [else #f])
+	        ))))
+
+	(delta '(((a b) ()) (c (d ()))))   ; ~~> #f
+	(delta '(((a b) ()) (b (d ()))))   ; ~~> #t
+	(delta '(((a b) ()) (c (d (d)))))  ; ~~> #t
+	(delta '(((a b c) ()) (c (d ())))) ; ~~> #t
+	(delta '(((a b) ()) (c (d ()) c))) ; ~~> #f
+	(delta '((() ()) ()))              ; ~~> #f
+


add headers and exercises from Seasoned Schemer
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 30c85ec..95defac 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -1,3 +1,7 @@
+[[!toc]]
+
+## Explicit and Implicit ##
+
Consider two kinds of video games. The first are 80s-style cabinets, that might suppress most awareness of your outside environment, but you can still directly perceive the controls, the "new game" button, and so on:

@@ -39,6 +43,9 @@ Here we explicitly pass around continuations in the k argument, beginning with

What the **continuation or control operators** like let/cc, reset, shift, abort, and so on do is give us a "magic gesture" alternative, where we can let the continuations usually be *implicit* in the way our code is structured, but when we perform the magic gesture (that is, use some of these special operators), the continuation gets converted from its implicit form into an explicit function that's bound to a variable we supply.

+
+## A Bestiary of operators for magically distilling implicit continuations into explicit functions ##
+
The continuation operators come in a variety of forms. You'll only be using a few of them (if any) in a single application. But here we'll present a couple of them side-by-side.

One issue is whether the continuation operators you're working with are "full-strength" or not. As we said, what these operators do is distill an implicit continuation into a function that you can explicitly invoke or manipulate (pass into or return from a function). If they're "full-strength", then there aren't constraints on _where_ or _how many times_ you can invoke that continuation function. Anywhere you have access to some variable that's bound to the continuation, you can invoke it as often as you like. More handicapped continuations are only invocable a single time, or only in certain regions of the code. Sometimes these handicapped continuations are provided because they're easier to implement, and the language designers haven't gotten around to implementing full-strength continuations yet. Or a language might provide _both_ handicapped and full-strength continuations, because the former can be implemented more efficiently. For applications like coroutines or exceptions/aborts, that we looked at before, typically all that's needed is a handicapped form of continuations. If your language has an abort operation, typically you'll only be invoking it once within a single execution path, and only inside the box that you want to abort from.
@@ -138,6 +145,8 @@ only returns 1. It is possible to duplicate the behavior of let/cc using re

(In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)

+## Examples of using these continuation operators ##
+
Here are some examples of using these different continuation operators. The continuation that gets bound to k will be in bold. I'll use an OCaml-ish syntax because that's easiest to read, but these examples don't work as-is in OCaml. The reset/shift examples need to be massaged into the form displayed above for OCaml; and the let/cc examples don't work in OCaml because that's not provided. Alternatively, you could massage all of these into Scheme syntax. You shouldn't find that hard.

1.  <pre><b>100 + </b>let/cc k (10 + 1)</pre>
@@ -186,3 +195,120 @@ Here are some examples of using these different continuation operators. The cont
But this evaluates differently. In the let/cc example, k is bound to the rest of the computation *including its termination*, so after executing k 1 we never come back and finish with 10 + < >. A let/cc-bound k never returns to the context where it was invoked. Whereas the shift-bound k only includes up to the edge of the reset box --- here, the rest of the computation, but *not including* its termination. So after k 1, if there is still code inside the body of shift, as there is here, we continue executing it. Thus the shift code evaluates to 111 not to 101.

Thus code using let/cc can't be *straightforwardly* translated into code using shift. It can be translated, but the algorithm will be more complex.
+
+
+## Some call/cc (or let/cc) exercises from The Seasoned Schemer ##
+
+Here are a series of examples from *The Seasoned Schemer*, which we recommended at the start of term. It's not necessary to have the book to follow the exercises, though if you do have it, its walkthroughs will give you useful assistance.
+
+For reminders about Scheme syntax, see [[here|/exercises/assignment12/#scheme]], and [[here|/rosetta1]] and [[here|/rosetta3]]. Other resources are on our [[Learning Scheme]] page.
+
+Most of the examples assume the following preface:
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+Now try to figure out what this function does:
+
+	(define alpha
+	  (lambda (a lst)
+	    (let/cc k ; now what will happen when k is called?
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(eq? (car l) a) (k (aux (cdr l)))]
+	                        [else (cons (car l) (aux (cdr l)))]))])
+	        (aux lst)))))
+
+Here is [[the answer|cps_hint_1]], but try to figure it out for yourself.
+
+Next, try to figure out what this function does:
+
+	(define beta
+	  (lambda (lst)
+	    (let/cc k ; now what will happen when k is called?
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(atom? (car l)) (k (car l))]
+	                        [else (begin
+	                                ; what will the value of the next line be? why is it ignored?
+	                                (aux (car l))
+	                                (aux (cdr l)))]))])
+	        (aux lst)))))
+
+Here is [[the answer|cps_hint_2]], but try to figure it out for yourself.
+
+Next, try to figure out what this function does:
+
+	(define gamma
+	  (lambda (a lst)
+	    (letrec ([aux (lambda (l k)
+	                    (cond
+	                      [(null? l) (k 'notfound)]
+	                      [(eq? (car l) a) (cdr l)]
+	                      [(atom? (car l)) (cons (car l) (aux (cdr l) k))]
+	                      [else
+	                       ; what happens when (car l) exists but isn't an atom?
+	                       (let ([car2 (let/cc k2 ; now what will happen when k2 is called?
+	                                     (aux (car l) k2))])
+	                         (cond
+	                           ; when will the following condition be met? what happens then?
+	                           [(eq? car2 'notfound) (cons (car l) (aux (cdr l) k))]
+	                           [else (cons car2 (cdr l))]))]))]
+	             [lst2 (let/cc k1 ; now what will happen when k1 is called?
+	                     (aux lst k1))])
+	      (cond
+	        ; when will the following condition be met?
+	        [(eq? lst2 'notfound) lst]
+	        [else lst2]))))
+
+Here is [[the answer|cps_hint_3]], but try to figure it out for yourself.
+
+Here is the hardest example. Try to figure out what this function does:
+
+	(define delta
+	  (letrec ([yield (lambda (x) x)]
+	           [resume (lambda (x) x)]
+	           [walk (lambda (l)
+	                   (cond
+	                     ; is this the only case where walk returns a non-atom?
+	                     [(null? l) '()]
+	                     [(atom? (car l)) (begin
+	                                        (let/cc k2 (begin
+	                                          (set! resume k2) ; now what will happen when resume is called?
+	                                          ; when the next line is executed, what will yield be bound to?
+	                                          (yield (car l))))
+	                                        ; when will the next line be executed?
+	                                        (walk (cdr l)))]
+	                     [else (begin
+	                             ; what will the value of the next line be? why is it ignored?
+	                             (walk (car l))
+	                             (walk (cdr l)))]))]
+	           [next (lambda () ; next is a thunk
+	                   (let/cc k3 (begin
+	                     (set! yield k3) ; now what will happen when yield is called?
+	                     ; when the next line is executed, what will resume be bound to?
+	                     (resume 'blah))))]
+	           [check (lambda (prev)
+	                    (let ([n (next)])
+	                      (cond
+	                        [(eq? n prev) #t]
+	                        [(atom? n) (check n)]
+	                        ; when will n fail to be an atom?
+	                        [else #f])))])
+	    (lambda (lst)
+	      (let ([fst (let/cc k1 (begin
+	                   (set! yield k1) ; now what will happen when yield is called?
+	                   (walk lst)
+	                   ; when will the next line be executed?
+	                   (yield '())))])
+	        (cond
+	          [(atom? fst) (check fst)]
+	          ; when will fst fail to be an atom?
+	          [else #f])
+	        ))))
+
+Here is [[the answer|cps_hint_4]], but again, first try to figure it out for yourself.


diff --git a/exercises/assignment12.mdwn b/exercises/assignment12.mdwn
index 58e5096..f2ddbe3 100644
--- a/exercises/assignment12.mdwn
+++ b/exercises/assignment12.mdwn
@@ -325,6 +325,7 @@ You can think of int_stream as a functional object that provides access to an

Okay, now armed with the idea of a stream, let's use a Scheme version of them to handle the same-fringe problem. This code is taken from <http://c2.com/cgi/wiki?SameFringeProblem>. It uses thunks to delay the evaluation of code that computes the tail of a list of a tree's fringe. It also involves passing "the rest of the enumeration of the fringe" as a thunk argument (tail-thunk below). Your assignment is to fill in the blanks in the code, **and also to supply comments to the code,** to explain what every significant piece is doing. Don't forget to supply the comments, this is an important part of the assignment.

+<a id=scheme></a>
This code uses Scheme's cond construct. That works like this:

(cond


tweak wrapper for delimcc
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 179b780..30c85ec 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -104,10 +104,11 @@ However, OCaml doesn't have any continuation operators in its standard deploymen
# #directory "+../delimcc";;
# let reset_label = ref None;;
-    # let reset body = let p = Delimcc.new_prompt () in
-      let oldp = !reset_label in
-      reset_label := Some p; let res = Delimcc.push_prompt p body in
-      reset_label := oldp; res;;
+    # let reset body =
+        let p = Delimcc.new_prompt () in
+        let oldp = !reset_label in
+        reset_label := Some p; let res = Delimcc.push_prompt p body in
+        reset_label := oldp; res;;
# let shift fun_k = match !reset_label with
| None -> failwith "shift must be inside reset"
| Some p -> Delimcc.shift p fun_k;;


tweak wrapper for delimcc
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index 8d64d2e..179b780 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -105,7 +105,9 @@ However, OCaml doesn't have any continuation operators in its standard deploymen
# let reset_label = ref None;;
# let reset body = let p = Delimcc.new_prompt () in
-      reset_label := Some p; let res = Delimcc.push_prompt p body in reset_label := None; res;;
+      let oldp = !reset_label in
+      reset_label := Some p; let res = Delimcc.push_prompt p body in
+      reset_label := oldp; res;;
# let shift fun_k = match !reset_label with
| None -> failwith "shift must be inside reset"
| Some p -> Delimcc.shift p fun_k;;


tweak
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
index ce68d11..8d64d2e 100644
--- a/topics/week13_native_continuation_operators.mdwn
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -146,7 +146,7 @@ Here are some examples of using these different continuation operators. The cont
3.  <pre><b>let p = </b>let/cc k (1,k) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>
-    In the first line, we bind the continuation function (the bold code) to k and then bind the pair of 1 and that function to the variable p.
+    In the first line, we bind the continuation function (the bold code) to k and then bind the variable p to the pair of 1 and that function.
In the second line, we extract the continuation function from the pair p and apply it to the argument (2, ident). That results in the following code being run:
<pre><b>let p = </b>(2, ident) <b>in
let y = snd p (2, ident) in


typo
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index cd26a3a..a9d8bc2 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -422,7 +422,7 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
(shift k
(if (eqv? x 1) (k 10) 20))
100)))])
-      (+ (foo 1) 1000))
+      (+ (foo 2) 1000))

<!--


formatting
diff --git a/content.mdwn b/content.mdwn
index e85b513..be10a97 100644
--- a/content.mdwn
+++ b/content.mdwn
@@ -186,6 +186,7 @@ Week 12:
*   [[Homework for week 12|exercises/assignment12]]

Week 13:
+
*   [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]
*   [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]
*   [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]


diff --git a/content.mdwn b/content.mdwn
index 9279e29..e85b513 100644
--- a/content.mdwn
+++ b/content.mdwn
@@ -38,8 +38,7 @@ week in which they were introduced.
*   [[Unit and its usefulness|topics/week3 unit]]
*   Combinatory evaluator ([[for home|topics/week7_combinatory_evaluator]])
*   [[Programming with mutable state|/topics/week9_mutable_state]]
-    *   [[Abortable list traversals|/topics/week12_abortable_traversals]]
-    *   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]

*   The Untyped Lambda Calculus
@@ -89,6 +88,11 @@ week in which they were introduced.
*   Continuations
*   [[Abortable list traversals|/topics/week12_abortable_traversals]]
*   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]
+    *   [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]
+    *   [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]
+    *   [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]
+    *   CPS transforms
+

## Topics by week ##
@@ -180,3 +184,9 @@ Week 12:
*   [[Abortable list traversals|/topics/week12_abortable_traversals]]
*   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]
*   [[Homework for week 12|exercises/assignment12]]
+
+Week 13:
+*   [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]
+*   [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]
+*   [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]
+*   CPS transforms


diff --git a/index.mdwn b/index.mdwn
index 0fda2df..27ce3a0 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,7 +197,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift<!-- [[Let/cc and reset/shift|topics/week13_native_continuation_operators]] -->; CPS transforms
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; [[Let/cc and reset/shift|topics/week13_native_continuation_operators]]; CPS transforms

(**Week 14**) Thursday May 7


update for rename of topics/week13_control_operators.mdwn to topics/week13_native_continuation_operators.mdwn
diff --git a/index.mdwn b/index.mdwn
index 980fde1..0fda2df 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,7 +197,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift<!-- [[Let/cc and reset/shift|topics/week13_control_operators]] -->; CPS transforms
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift<!-- [[Let/cc and reset/shift|topics/week13_native_continuation_operators]] -->; CPS transforms

(**Week 14**) Thursday May 7


rename topics/week13_control_operators.mdwn to topics/week13_native_continuation_operators.mdwn
diff --git a/topics/week13_control_operators.mdwn b/topics/week13_control_operators.mdwn
deleted file mode 100644
index ce68d11..0000000
--- a/topics/week13_control_operators.mdwn
+++ /dev/null
@@ -1,185 +0,0 @@
-Consider two kinds of video games. The first are 80s-style cabinets, that might suppress most awareness of your outside environment, but you can still directly perceive the controls, the "new game" button, and so on:
-
-
-The second are more immersive games with VR goggles and gloves:
-
-[[/images/virtual-reality.jpg]]
-
-In this second kind of game, you don't see or feel the goggles or gloves (anyway, you don't perceive them _as_ goggles or gloves), and you needn't normally perceive any "new game" button. But the game might have some "magic gesture" you can perform, such as holding your left elbow while simultaneously stamping your right foot twice, that would invoke a special menu in your visual display, containing among other things a "new game" button.
-
-I want to offer the contrast between these two kinds of games, and the ways that you can perceive and handle the "new game" button, as analogy for the contrast between explicit and implicit mutation, which we looked at earlier, and also the contrast between explicit and implicit continuations, which we're beginning to look at now.
-
-With explicit mutation operators in the language, our code looks like this:
-
-    let x = cell 0 in
-    ... get x ...
-    ... put 1 into x ...
-
-With implicit mutation operators in the language, it looks instead like this:
-
-    var x = 0 in
-    ... x ...
-    ... x := 1 ...
-
-The first two lines aren't very different from what we'd have without mutation:
-
-    let x = 0 in
-    ... x ...
-
-The first line used the keyword var instead of the more familiar let, but that's just to signal that the variable we're introducing is mutable. Syntactically it acts just like a variant spelling of let. Also we access the contents of the variable in the same way, with just x. Whereas with the explicit reference cells, we have to say get x. There we can "see" the reference cell and have to explicitly "look inside it" to get at its contents. That's like seeing the "new game" button and other controls during the normal use of the video game. In the third line of the implicit mutation code, we have the "magic gesture", x := 1, which does something you couldn't do in the code without mutation. This is like bringing up the "new game" display by the magic elbow-and-stomping gesture, which doesn't work in real life. This lets us achieve the same effect that we did in the explicit code using put 1 into x, but without us needing to (or being able to) explicitly inspect or manipulate the reference cell itself.
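-    The explicit/implicit contrast can be made concrete. Here is a small runnable sketch (in Python, for concreteness; the `Cell`, `get`, and `put` names just mirror the pseudocode above and aren't from any real library):
-
-        # Explicit mutation: the reference cell is a visible object we must
-        # explicitly look inside (get) or update (put), like `cell 0`,
-        # `get x`, and `put 1 into x` above.
-        class Cell:
-            def __init__(self, contents):
-                self.contents = contents
-
-        def get(cell):
-            return cell.contents
-
-        def put(value, cell):
-            cell.contents = value
-
-        x = Cell(0)
-        assert get(x) == 0      # explicitly "look inside" the cell
-        put(1, x)
-        assert get(x) == 1
-
-        # Implicit mutation: the cell is invisible; we just use the variable,
-        # and plain assignment plays the role of the magic gesture x := 1.
-        y = 0
-        y = 1
-        assert y == 1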
-
-Turning to continuations, so far we've seen how to explicitly manipulate them, as in:
-
-    let rec tc (xs : char list) (k : char list -> char list) =
-      ... tc xs' (fun tail -> ... k ... tail) in
-    tc some_list identity
-
-Here we explicitly pass around continuations in the k argument, beginning with the identity or do-nothing continuation, but then modifying the continuation at each recursive invocation of tc.
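-    That pattern can be run directly. Below is a Python sketch (not from the notes, which use OCaml) of the same idea: a traversal that threads an explicit continuation k, starting from the identity continuation and growing it at each recursive call. This particular `tc` just rebuilds the list, which makes the continuation plumbing easy to see:
-
-        def tc(xs, k):
-            # xs: a list; k: the explicit continuation, a function from the
-            # processed tail to the final answer
-            if not xs:
-                return k([])
-            x, rest = xs[0], xs[1:]
-            # grow the continuation: once `rest` is processed into `tail`,
-            # put x back on the front and hand the result to the old k
-            return tc(rest, lambda tail: k([x] + tail))
-
-        identity = lambda v: v
-        assert tc(['a', 'b', 'c'], identity) == ['a', 'b', 'c']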
-
-What the **continuation or control operators** like let/cc, reset, shift, abort, and so on do is give us a "magic gesture" alternative, where we can let the continuations usually be *implicit* in the way our code is structured, but when we perform the magic gesture (that is, use some of these special operators), the continuation gets converted from its implicit form into an explicit function that's bound to a variable we supply.
-
-The continuation operators come in a variety of forms. You'll only be using a few of them (if any) in a single application. But here we'll present a couple of them side-by-side.
-
-One issue is whether the continuation operators you're working with are "full-strength" or not. As we said, what these operators do is distill an implicit continuation into a function that you can explicitly invoke or manipulate (pass into or return from a function). If they're "full-strength", then there aren't constraints on _where_ or _how many times_ you can invoke that continuation function. Anywhere you have access to some variable that's bound to the continuation, you can invoke it as often as you like. More handicapped continuations are only invocable a single time, or only in certain regions of the code. Sometimes these handicapped continuations are provided because they're easier to implement, and the language designers haven't gotten around to implementing full-strength continuations yet. Or a language might provide _both_ handicapped and full-strength continuations, because the former can be implemented more efficiently. For applications like coroutines or exceptions/aborts, that we looked at before, typically all that's needed is a handicapped form of continuations. If your language has an abort operation, typically you'll only be invoking it once within a single execution path, and only inside the box that you want to abort from.
-
-For our discussion, though, we'll just be looking at the full-strength continuations. You can learn about different ways they might be handicapped later.
-
-The next issue is whether the continuations are _delimited_ or not. In [[our discussion of aborts|week13_coroutines_exceptions_and_aborts#index3h2]], we had a box, and what abort did was skip the rest of the code inside the box and resume execution at the outside border of the box. This is the pattern of a **delimited continuation**, with the box being the delimiter. There are a bunch of different operators that have been proposed for dealing with delimited continuations. Many of them are interdefinable (though the interdefinitions are sometimes complex). We won't be trying to survey them all. The ones we'll suggest as a paradigm are the pair of reset and shift. The first of these marks where the box goes, and the second has two roles: (i) it marks where you should start skipping (if you're going to "skip the rest of the code inside the box"), and (ii) it specifies a variable k that we bind to the continuation representing that skipped code. Thus we have:
-
-    initial outside code
-    +---reset--------------------+
-    | initial inside code        |
-    | shift k ( ... )            |
-    | remaining inside code      |
-    +----------------------------+
-    remaining outside code
-
-Really in the implementation of this there are _two_ continuations or snapshots being tracked. There's the potentially skipped code, represented by remaining inside code above; and there's also the continuation/snapshot that we resume with if we do skip that code, represented by remaining outside code. But only the first of these gets bound to a variable, k in the above diagram. What happens in this diagram is that initial outside code runs, then initial inside code runs, then remaining inside code is distilled into a function and bound to the variable k, then we run the ( ... ) code with k so bound. If that ( ... ) code invokes k by applying it to an argument, then remaining inside code is run as though the supplied argument were what the shift k ( ... ) bit evaluated to. If the ( ... ) code doesn't invoke k, but just ends with a normal result like 10, then the remaining inside code is skipped and we resume execution with the outside, implicitly snapshotted code remaining outside code.
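-    The two execution paths in that diagram can be checked by hand (a Python simulation with ordinary functions, not an implementation of shift/reset). Take remaining inside code to be `< > + 100` and remaining outside code to be `1000 + < >`:
-
-        k = lambda v: v + 100         # remaining inside code, distilled into k
-        outside = lambda v: 1000 + v  # remaining outside code
-
-        # Path 1: the ( ... ) body invokes k, so remaining inside code runs
-        # as though the argument were what `shift k ( ... )` evaluated to.
-        body_result = k(10 + 1)       # like `shift k (k (10 + 1))`
-        assert outside(body_result) == 1111
-
-        # Path 2: the body ends with a normal result, so remaining inside
-        # code is skipped and we resume with remaining outside code.
-        assert outside(10 + 1) == 1011   # like `shift k (10 + 1)`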
-
-You may encounter references to prompt and control. These are variants of reset and shift that differ in only subtle ways. As we said, there are lots of variants of these that we're not going to try to survey.
-
-We talked before about abort. This can be expressed in terms of reset and shift. At the end of our discussion of abort, we said that this diagram:
-
-    let foo x =
-    +---try begin----------------+
-    |       (if x = 1 then 10    |
-    |       else abort 20        |
-    |       ) + 100              |
-    +---end----------------------+
-    in (foo 2) + 1000;;
-
-could be written in Scheme with either:
-
-    #lang racket
-    (require racket/control)
-
-    (let ([foo (lambda (x)
-                 (reset
-                  (+
-                    (if (eqv? x 1) 10 (abort 20))
-                    100)))])
-      (+ (foo 2) 1000))
-
-or:
-
-    #lang racket
-    (require racket/control)
-
-    (let ([foo (lambda (x)
-                 (reset
-                  (+
-                    (shift k
-                      (if (eqv? x 1) (k 10) 20))
-                    100)))])
-      (+ (foo 2) 1000))
-
-That shows you how abort can be expressed in terms of shift. Rewriting the Scheme code into a more OCaml-ish syntax, it might look something like this:
-
-    let foo x = reset ((shift k -> if x = 1 then k 10 else 20) + 100) in
-    foo 2 + 1000
-
-However, OCaml doesn't have any continuation operators in its standard deployment. If you [[installed Oleg's delimcc library|/rosetta3/#delimcc]], you can use the previous code after first doing this:
-
-    # #directory "+../delimcc";;
-    # #load "delimcc.cma";;
-    # let reset_label = ref None;;
-    # let reset body = let p = Delimcc.new_prompt () in
-      reset_label := Some p; let res = Delimcc.push_prompt p body in reset_label := None; res;;
-    # let shift fun_k = match !reset_label with
-      | None -> failwith "shift must be inside reset"
-      | Some p -> Delimcc.shift p fun_k;;
-
-Also, the previous code has to be massaged a bit to have the right syntax. What you really need to write is:
-
-    let foo x = reset (fun () -> shift (fun k -> if x = 1 then k 10 else 20) + 100) in
-    foo 2 + 1000
-
-That will return 1020 just like the Scheme code does. If you said ... foo 1 + 1000, you'll instead get 1110.
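-    As a sanity check on those two results (a Python simulation of the control flow, not the delimcc library), we can play the captured continuation k and the outside code by hand:
-
-        def foo(x):
-            # inside the reset box: the remaining inside code is < > + 100,
-            # which shift captures as k
-            k = lambda v: v + 100
-            if x == 1:
-                return k(10)   # invoke k: finish the inside of the box with 10
-            else:
-                return 20      # normal result: skip the rest of the box
-
-        assert foo(2) + 1000 == 1020
-        assert foo(1) + 1000 == 1110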
-
-That was all *delimited* continuation operators. There's also the **undelimited continuation operators**, which historically were developed first. Here you don't see the same kind of variety that you do with the delimited continuation operators. Essentially, there is just one full-strength undelimited continuation operator. But there are several different syntactic forms for working with it. (Also, a language might provide handicapped continuation operators alongside, or instead of, the full-strength one. Some loser languages don't even do that much.) The historically best-known of these is expressed in Scheme as call-with-current-continuation, or call/cc for short. But we think it's a bit easier to instead use the variant let/cc. The following code is equivalent, and shows how these two forms relate to each other:
-
-    (let/cc k ...)
-
-    (call/cc (lambda (k) ...))
-
-(let/cc k ...) is a lot like (shift k ...) (or in the OCaml version, shift (fun k -> ...)), except that it doesn't need a surrounding reset ( ... ) (in OCaml, reset (fun () -> ...)). For the undelimited continuation operator, the box is understood to be *the whole rest of the top-level computation*. If you're running a file, that's all the rest of the file that would have been executed after the syntactic hole filled by (let/cc k ...). With (shift k ...), the code that gets bound to k doesn't get executed unless you specifically invoke k; but let/cc works differently in this respect. Thus:
-
-    (+ 100 (let/cc k 1))
-
-returns 101, whereas:
-
-    (reset (+ 10 (shift k 1)))
-
-only returns 1. It is possible to duplicate the behavior of let/cc using reset/shift, but you have to structure your code in certain ways to do it. In order to duplicate the behavior of reset/shift using let/cc, you need to also make use of a mutable reference cell. So in that sense delimited continuations are more powerful and undelimited continuations are sort-of a special case.
-
-(In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)
-
-Here are some examples of using these different continuation operators. The continuation that gets bound to k will be in bold. I'll use an OCaml-ish syntax because that's easiest to read, but these examples don't work as-is in OCaml. The reset/shift examples need to be massaged into the form displayed above for OCaml; and the let/cc examples don't work in OCaml because that's not provided. Alternatively, you could massage all of these into Scheme syntax. You shouldn't find that hard.
-
-1.  <pre><b>100 + </b>let/cc k (10 + 1)</pre>
-    This evaluates to 111. Nothing exotic happens here.
-
-2.  <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    This evaluates to 101. See also example 11, below.
-
-3.  <pre><b>let p = </b>let/cc k (1,k) <b>in
-    let y = snd p (2, ident) in
-    (fst p, y)</b></pre>
-    In the first line, we bind the continuation function (the bold code) to k and then bind the pair of 1 and that function to the variable p.
-    In the second line, we extract the continuation function from the pair p and apply it to the argument (2, ident). That results in the following code being run:
-    <pre><b>let p = </b>(2, ident) <b>in
-    let y = snd p (2, ident) in
-    (fst p, y)</b></pre>
-    which in turn results in the nested pair (2, (2, ident)).
-    Notice how the first time through, when p's second element is a continuation, applying it to an argument is a bit like time-travel? The metaphysically impossible kind of time-travel, where you can change what happened. The second time through, p gets bound to a different pair, whose second element is the ordinary ident function.
-
-4.  <pre><b>1000 + (100 + </b>abort 11<b>)</b></pre>
-    Here the box is implicit, understood to be the rest of the code. The result is just the abort value 11, because the bold code is skipped.
-
-5.  <pre>1000 + reset <b>(100 + </b>abort 11<b>)</b></pre>
-    Here the box or delimiter is explicitly specified. The bold code is skipped, but the outside code 1000 + < > is still executed, so we get 1011.
-
-6.  <pre>1000 + reset <b>(100 + </b>shift k (10 + 1)<b>)</b></pre>
-    Equivalent to preceding. We bind the bold code to k but then never apply k, so the value 10 + 1 is supplied directly to the outside code 1000 + < >, resulting in 1011.
-
-7.  <pre>1000 + reset <b>(100 + </b>shift k (k (10 + 1))<b>)</b></pre>
-    Here we do invoke the captured continuation, so what gets passed to the outside code 1000 + < > is k (10 + 1), that is, (100 + (10 + 1)). Result is 1111.
-    In general, if the last thing that happens inside a shift k ( ... ) body is that k is applied to an argument, then we do continue running the bold code between shift k ( ... ) and the edge of the reset box.
-
-8.  <pre>1000 + reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
-    This also results in 1111, but via a different path than the preceding. First, note that k is bound to 100 + < >. So k 1 is 101. Then 10 + k 1 is 10 + 101. Then we exit the body of shift k ( ... ), without invoking k again, so we don't add 100 any more times. Thus we pass 10 + 101 to the outside code 1000 + < >. So the result is 1000 + (10 + 101) or 1111. (Whereas in the preceding example, the result was 1000 + (100 + 11). The order in which the operations are performed is different. If we used a non-commutative operation instead of +, the results of these two examples would be different from each other.)
-
-9.  <pre>1000 + reset <b>(100 + </b>shift k (k)<b>)</b> 1</pre>
-    Here k is bound to 100 + < >. That function k is what's returned by the shift k ( ... ) block, and since k isn't invoked (applied) when doing so, the rest of the bold reset block is skipped (for now). So we resume the outside code 1000 + < > 1, with what fills the gap < > being the function that was bound to k. Thus this is equivalent to 1000 + (fun x -> 100 + x) 1 or 1000 + 101 or 1101.
-
-10. <pre>1000 + reset <b>(100 + </b>shift k (k (k 1))<b>)</b></pre>
-    Here k is bound to 100 + < >. Thus k 1 is 101. Now there are two ways to think about what happens next. (Both are valid.) One way to think is that since the shift block ends with an additional outermost application of k, then as described in example 7 above, we continue through the bold code with the value k 1 or 101. Thus we get 100 + 101, and then we continue with the outermost code 1000 + < >, getting 1000 + (100 + 101), or 1201. The other way to think is that since k is 100 + < >, and k 1 is 101, then k (k 1) is 201. Now we leave the shift block *without* executing the bold code a third time (we've already taken account of the two applications of k), resuming with the outside code 1000 + < >, thereby getting 1000 + 201 as before.
-
-11. Here's a comparison of let/cc to shift. Recall example 2 above was:
-    <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    which evaluated to 101. The parallel code where we instead capture the continuation using shift k ( ... ) would look like this:
-    <pre>reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
-    But this evaluates differently. In the let/cc example, k is bound to the rest of the computation *including its termination*, so after executing k 1 we never come back and finish with 10 + < >. A let/cc-bound k never returns to the context where it was invoked. Whereas the shift-bound k only includes up to the edge of the reset box --- here, the rest of the computation, but *not including* its termination. So after k 1, if there is still code inside the body of shift, as there is here, we continue executing it. Thus the shift code evaluates to 111 not to 101.
-
-    Thus code using let/cc can't be *straightforwardly* translated into code using shift. It can be translated, but the algorithm will be more complex.
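-    The arithmetic in examples 7 through 11 can be replayed with plain functions (a Python sketch; `k` stands for the captured `100 + < >`, `outside` for `1000 + < >`, and `Jump`/`letcc_k` are made-up names for simulating a non-returning let/cc jump with an exception):
-
-        k = lambda v: 100 + v         # the captured continuation 100 + < >
-        outside = lambda v: 1000 + v  # the outside code 1000 + < >
-
-        assert outside(k(10 + 1)) == 1111  # ex. 7: body ends by invoking k
-        assert outside(10 + k(1)) == 1111  # ex. 8: k invoked mid-body
-        assert 1000 + k(1) == 1101         # ex. 9: the box returns k itself;
-                                           # the outside code applies it to 1
-        assert outside(k(k(1))) == 1201    # ex. 10: two applications of k
-
-        # Ex. 11: the shift-bound k returns to its call site, so the shift
-        # body finishes with 10 + k 1:
-        assert 10 + k(1) == 111
-        # whereas a let/cc-bound k never returns; simulate that jump with an
-        # exception carrying the value past the abandoned 10 + < > context.
-        class Jump(Exception):
-            def __init__(self, value):
-                self.value = value
-
-        def letcc_k(v):
-            raise Jump(100 + v)        # k includes the 100 + < > context
-
-        try:
-            result = 100 + (10 + letcc_k(1))  # the 10 + < > part is abandoned
-        except Jump as j:
-            result = j.value
-        assert result == 101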
diff --git a/topics/week13_native_continuation_operators.mdwn b/topics/week13_native_continuation_operators.mdwn
new file mode 100644
index 0000000..ce68d11
--- /dev/null
+++ b/topics/week13_native_continuation_operators.mdwn
@@ -0,0 +1,185 @@
+Consider two kinds of video games. The first are 80s-style cabinets that might suppress most awareness of your outside environment, but you can still directly perceive the controls, the "new game" button, and so on:
+

(Diff truncated)

revisions
diff --git a/topics/week13_control_operators.mdwn b/topics/week13_control_operators.mdwn
index 325c2a2..ce68d11 100644
--- a/topics/week13_control_operators.mdwn
+++ b/topics/week13_control_operators.mdwn
@@ -27,7 +27,7 @@ The first two lines aren't very different from what we'd have without mutation:
let x = 0 in
... x ...

-The first line used the keyword var instead of the more familiar let, but that's just to signal that the variable we're introducing is mutable. Syntactically it acts just like a variant spelling of let. Also we access the contents of the variable in the same way, with just x. Whereas with the explicit reference cells, we have to say get x. We can "see" the reference cell and have to explicitly "look inside it" to get at its contents. That's like seeing the "new game" button and other controls during the normal use of the video game. Then in the third line of the implicit mutation code, we have the "magic gesture", x := 1, which does something you couldn't do in the code without mutation. This is like bringing up the "new game" display by the magic elbow-and-stomping gesture, which doesn't work in real life. This lets us achieve the same effect that we did in the explicit code using put 1 into x, but without us needing to (or being able to) explicitly inspect or manipulate the reference cell itself.
+The first line used the keyword var instead of the more familiar let, but that's just to signal that the variable we're introducing is mutable. Syntactically it acts just like a variant spelling of let. Also we access the contents of the variable in the same way, with just x. Whereas with the explicit reference cells, we have to say get x. There we can "see" the reference cell and have to explicitly "look inside it" to get at its contents. That's like seeing the "new game" button and other controls during the normal use of the video game. In the third line of the implicit mutation code, we have the "magic gesture", x := 1, which does something you couldn't do in the code without mutation. This is like bringing up the "new game" display by the magic elbow-and-stomping gesture, which doesn't work in real life. This lets us achieve the same effect that we did in the explicit code using put 1 into x, but without us needing to (or being able to) explicitly inspect or manipulate the reference cell itself.

Turning to continuations, so far we've seen how to explicitly manipulate them, as in:

@@ -52,7 +52,7 @@ The next issue is whether the continuations are _delimited_ or not. In [[our dis
| initial inside code        |
| shift k ( ... )            |
| remaining inside code      |
-    +---end----------------------+
+    +----------------------------+
remaining outside code

Really in the implementation of this there are _two_ continuations or snapshots being tracked. There's the potentially skipped code, represented by remaining inside code above; and there's also the continuation/snapshot that we resume with if we do skip that code, represented by remaining outside code. But only the first of these gets bound to a variable, k in the above diagram. What happens in this diagram is that initial outside code runs, then initial inside code runs, then remaining inside code is distilled into a function and bound to the variable k, then we run the ( ... ) code with k so bound. If that ( ... ) code invokes k by applying it to an argument, then remaining inside code is run as though the supplied argument were what the shift k ( ... ) bit evaluated to. If the ( ... ) code doesn't invoke k, but just ends with a normal result like 10, then the remaining inside code is skipped and we resume execution with the outside, implicitly snapshotted code remaining outside code.
@@ -92,27 +92,30 @@ or:
(shift k
(if (eqv? x 1) (k 10) 20))
100)))])
-      (+ (foo 1) 1000))
+      (+ (foo 2) 1000))

That shows you how abort can be expressed in terms of shift. Rewriting the Scheme code into a more OCaml-ish syntax, it might look something like this:

let foo x = reset (shift k -> if x = 1 then k 10 else 20) + 100) in
-    foo 1 + 1000
+    foo 2 + 1000

However, OCaml doesn't have any continuation operators in its standard deployment. If you [[installed Oleg's delimcc library|/rosetta3/#delimcc]], you can use the previous code after first doing this:

# #directory "+../delimcc";;
-    # let prompt = ref None;;
-    # let reset body = let p = Delimcc.new_prompt () in begin prompt := Some p; Delimcc.push_prompt p body end;;
-    # let shift fun_k = match !prompt with None -> failwith "shift must be inside reset" | Some p -> Delimcc.shift p fun_k;;
+    # let reset_label = ref None;;
+    # let reset body = let p = Delimcc.new_prompt () in
+      reset_label := Some p; let res = Delimcc.push_prompt p body in reset_label := None; res;;
+    # let shift fun_k = match !reset_label with
+      | None -> failwith "shift must be inside reset"
+      | Some p -> Delimcc.shift p fun_k;;

Also, the previous code has to be massaged a bit to have the right syntax. What you really need to write is:

let foo x = reset (fun () -> shift (fun k -> if x = 1 then k 10 else 20) + 100) in
-    foo 1 + 1000
+    foo 2 + 1000

-That will return 1110 just like the Scheme code does. If you said ... foo 2 + 1000, you'll instead get 1020.
+That will return 1020 just like the Scheme code does. If you said ... foo 1 + 1000, you'll instead get 1110.

That was all *delimited* continuation operators. There's also the **undelimited continuation operators**, which historically were developed first. Here you don't see the same kind of variety that you do with the delimited continuation operators. Essentially, there is just one full-strength undelimited continuation operator. But there are several different syntactic forms for working with it. (Also, a language might provide handicapped continuation operators alongside, or instead of, the full-strength one. Some loser languages don't even do that much.) The historically best-known of these is expressed in Scheme as call-with-current-continuation, or call/cc for short. But we think it's a bit easier to instead use the variant let/cc. The following code is equivalent, and shows how these two forms relate to each other:

@@ -130,7 +133,7 @@ returns 101, whereas:

only returns 1. It is possible to duplicate the behavior of let/cc using reset/shift, but you have to structure your code in certain ways to do it. In order to duplicate the behavior of reset/shift using let/cc, you need to also make use of a mutable reference cell. So in that sense delimited continuations are more powerful and undelimited continuations are sort-of a special case.

-(In the OCaml code above for using delimited continuations, there is a mutable reference cell, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)
+(In the OCaml code above for using delimited continuations, there is a mutable reference cell reset_label, but this is just for convenience. Oleg's library is designed for use with _multiple_ reset blocks having different labels, then when you invoke shift you have to specify which labeled reset block you want to potentially skip the rest of. We haven't introduced that complexity into our discussion, so for convenience we worked around it in showing you how to use reset and shift in OCaml. And the mutable reference cell was only playing the role of enabling us to work around the need to explicitly specify the reset block's label.)

Here are some examples of using these different continuation operators. The continuation that gets bound to k will be in bold. I'll use an OCaml-ish syntax because that's easiest to read, but these examples don't work as-is in OCaml. The reset/shift examples need to be massaged into the form displayed above for OCaml; and the let/cc examples don't work in OCaml because that's not provided. Alternatively, you could massage all of these into Scheme syntax. You shouldn't find that hard.

@@ -138,16 +141,18 @@ Here are some examples of using these different continuation operators. The cont
This evaluates to 111. Nothing exotic happens here.

2.  <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
-    This evaluates to 101; (+ 100 (let/cc k (+ 10 (k 1)))) is the same as (reset (+ 100 (shift k (k 1)))).
+    This evaluates to 101. See also example 11, below.

3.  <pre><b>let p = </b>let/cc k (1,k) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>
-    In the second line, we extract the continuation function (the bold part of the previous code) from the pair p and apply it to the argument (2, ident). That results in the following code being run:
+    In the first line, we bind the continuation function (the bold code) to k and then bind the pair of 1 and that function to the variable p.
+    In the second line, we extract the continuation function from the pair p and apply it to the argument (2, ident). That results in the following code being run:
<pre><b>let p = </b>(2, ident) <b>in
let y = snd p (2, ident) in
(fst p, y)</b></pre>
which in turn results in the nested pair (2, (2, ident)).
+    Notice how the first time through, when p's second element is a continuation, applying it to an argument is a bit like time-travel? The metaphysically impossible kind of time-travel, where you can change what happened. The second time through, p gets bound to a different pair, whose second element is the ordinary ident function.

4.  <pre><b>1000 + (100 + </b>abort 11<b>)</b></pre>
Here the box is implicit, understood to be the rest of the code. The result is just the abort value 11, because the bold code is skipped.
@@ -156,17 +161,25 @@ Here are some examples of using these different continuation operators. The cont
Here the box or delimiter is explicitly specified. The bold code is skipped, but the outside code 1000 + < > is still executed, so we get 1011.

6.  <pre>1000 + reset <b>(100 + </b>shift k (10 + 1)<b>)</b></pre>
-    Equivalent to preceding; results in 1011.
+    Equivalent to preceding. We bind the bold code to k but then never apply k, so the value 10 + 1 is supplied directly to the outside code 1000 + < >, resulting in 1011.

7.  <pre>1000 + reset <b>(100 + </b>shift k (k (10 + 1))<b>)</b></pre>
Here we do invoke the captured continuation, so what gets passed to the outside code 1000 + < > is k (10 + 1), that is, (100 + (10 + 1)). Result is 1111.
+    In general, if the last thing that happens inside a shift k ( ... ) body is that k is applied to an argument, then we do continue running the bold code between shift k ( ... ) and the edge of the reset box.

8.  <pre>1000 + reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
-    This also results in 1111, but via a different path than the preceding. First, note that k is bound to 100 + < >. So k 1 is 101. Then 10 + k 1 is 10 + 101. Then we exit the body of shift k ( ... ), without invoking k again, so we don't anymore add 100. Thus we pass 10 + 101 to the outside code 1000 + < >. So the result is 1000 + (10 + 101) or 1111. (Whereas in the preceding example, the result was 1000 + (100 + 11). The order in which the operations are performed is different. If we used a non-commutative operation instead of +, the results of these two examples would be different from each other.)
+    This also results in 1111, but via a different path than the preceding. First, note that k is bound to 100 + < >. So k 1 is 101. Then 10 + k 1 is 10 + 101. Then we exit the body of shift k ( ... ), without invoking k again, so we don't add 100 any more times. Thus we pass 10 + 101 to the outside code 1000 + < >. So the result is 1000 + (10 + 101) or 1111. (Whereas in the preceding example, the result was 1000 + (100 + 11). The order in which the operations are performed is different. If we used a non-commutative operation instead of +, the results of these two examples would be different from each other.)

9.  <pre>1000 + reset <b>(100 + </b>shift k (k)<b>)</b> 1</pre>
-    Here k is bound to 100 + < >. That's what's returned by the shift k ( ... ) block, and since k isn't invoked (applied) when doing so, the rest of the bold reset block is skipped (for now). So we resume the outside code 1000 + < > 1, with what fills the gap < > being the function that was bound to k. Thus this is equivalent to 1000 + (fun x -> 100 + x) 1 or 1000 + 101 or 1101.
+    Here k is bound to 100 + < >. That function k is what's returned by the shift k ( ... ) block, and since k isn't invoked (applied) when doing so, the rest of the bold reset block is skipped (for now). So we resume the outside code 1000 + < > 1, with what fills the gap < > being the function that was bound to k. Thus this is equivalent to 1000 + (fun x -> 100 + x) 1 or 1000 + 101 or 1101.

10. <pre>1000 + reset <b>(100 + </b>shift k (k (k 1))<b>)</b></pre>
-    Here k is bound to 100 + < >. Thus k 1 is 101. Now there are two ways to think about what happens next. (Both are valid.) One way to think is that since the shift block ends with an additional outermost application of k, we continue through the bold code with the value k 1 or 101. Thus we get 100 + 101, and then we continue with the outermost code 1000 + < >, getting 1000 + (100 + 101), or 1201. The other way to think is that since k is 100 + < >, and k 1 is 101, then k (k 1) is 201. Now we leave the shift block *without* executing the bold code a third time (we've already taken account of the two applications of k), resuming with the outside code 1000 + < >, thereby getting 1000 + 201 as before.
+    Here k is bound to 100 + < >. Thus k 1 is 101. Now there are two ways to think about what happens next. (Both are valid.) One way to think is that since the shift block ends with an additional outermost application of k, then as described in example 7 above, we continue through the bold code with the value k 1 or 101. Thus we get 100 + 101, and then we continue with the outermost code 1000 + < >, getting 1000 + (100 + 101), or 1201. The other way to think is that since k is 100 + < >, and k 1 is 101, then k (k 1) is 201. Now we leave the shift block *without* executing the bold code a third time (we've already taken account of the two applications of k), resuming with the outside code 1000 + < >, thereby getting 1000 + 201 as before.
+
+11. Here's a comparison of let/cc to shift. Recall example 2 above was:
+    <pre><b>100 + </b>let/cc k (10 + k 1)</pre>
+    which evaluated to 101. The parallel code where we instead capture the continuation using shift k ( ... ) would look like this:
+    <pre>reset <b>(100 + </b>shift k (10 + k 1)<b>)</b></pre>
+    But this evaluates differently. In the let/cc example, k is bound to the rest of the computation *including its termination*, so after executing k 1 we never come back and finish with 10 + < >. A let/cc-bound k never returns to the context where it was invoked. Whereas the shift-bound k only includes up to the edge of the reset box --- here, the rest of the computation, but *not including* its termination. So after k 1, if there is still code inside the body of shift, as there is here, we continue executing it. Thus the shift code evaluates to 111 not to 101.

+    Thus code using let/cc can't be *straightforwardly* translated into code using shift. It can be translated, but the algorithm will be more complex.
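
The arithmetic in that let/cc-versus-shift comparison can be checked by hand-encoding both captures. Here's a hedged Python sketch (our own encoding — Python has neither shift nor let/cc; the lambda `k` and the `_Done` exception are invented stand-ins):

```python
# reset (100 + shift k (10 + k 1)):
# the shift-bound k is the delimited context "100 + < >", and after
# invoking it we come back and finish the shift body.
k = lambda v: 100 + v
shift_version = 10 + k(1)      # k 1 is 101, then we finish with 10 + 101
assert shift_version == 111

# 100 + let/cc k (10 + k 1):
# the let/cc-bound k includes the computation's termination, so invoking
# it never returns to finish "10 + < >". We model that with an exception.
class _Done(Exception):
    def __init__(self, v):
        self.v = v

def letcc_version():
    def k(v):
        raise _Done(100 + v)   # k never returns to its call site
    try:
        return 100 + (10 + k(1))
    except _Done as e:
        return e.v

assert letcc_version() == 101
```

The two encodings differ exactly where the prose says: the shift-bound `k` returns to its call site, the let/cc-bound `k` does not.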


updated
diff --git a/topics/week13_control_operators.mdwn b/topics/week13_control_operators.mdwn
--- a/topics/week13_control_operators.mdwn
+++ b/topics/week13_control_operators.mdwn
@@ -1,388 +1,172 @@
-* [A more immersive game](http://upload.wikimedia.org/wikipedia/commons/7/78/AC89-0437-20_a.jpeg)
+Consider two kinds of video games. The first are 80s-style cabinets, which might suppress most awareness of your outside environment, though you can still directly perceive the controls, the "new game" button, and so on:

-3.	callcc was originally introduced in Scheme. There it's written call/cc and is an abbreviation of call-with-current-continuation. Instead of the somewhat bulky form:
+The second are more immersive games with VR goggles and gloves:

-		(call/cc (lambda (k) ...))
+[[/images/virtual-reality.jpg]]

-	I prefer instead to use the lighter, and equivalent, shorthand:
+In this second kind of game, you don't see or feel the goggles or gloves (anyway, you don't perceive them _as_ goggles or gloves), and you needn't normally perceive any "new game" button. But the game might have some "magic gesture" you can perform, such as holding your left elbow while simultaneously stamping your right foot twice, that would invoke a special menu in your visual display, containing among other things a "new game" button.

-		(let/cc k ...)
+I want to offer the contrast between these two kinds of games, and the ways that you can perceive and handle the "new game" button, as analogy for the contrast between explicit and implicit mutation, which we looked at earlier, and also the contrast between explicit and implicit continuations, which we're beginning to look at now.

+With explicit mutation operators in the language, our code looks like this:

-Callcc/letcc examples
----------------------
+    let x = cell 0 in
+    ... get x ...
+    ... put 1 into x ...

-First, here are two examples in Scheme:
+With implicit mutation operators in the language, it looks instead like this:

-	(+ 100 (let/cc k (+ 10 1)))
-	       |-----------------|
+    var x = 0 in
+    ... x ...
+    ... x := 1 ...

-This binds the continuation outk of the underlined expression to k, then computes (+ 10 1) and delivers that to outk in the normal way (not through k). No unusual behavior. It evaluates to 111.
+The first two lines aren't very different from what we'd have without mutation:

-What if we do instead:
+    let x = 0 in
+    ... x ...

-	(+ 100 (let/cc k (+ 10 (k 1))))
-	       |---------------------|
+The first line used the keyword var instead of the more familiar let, but that's just to signal that the variable we're introducing is mutable. Syntactically it acts just like a variant spelling of let. Also we access the contents of the variable in the same way, with just x. Whereas with the explicit reference cells, we have to say get x. We can "see" the reference cell and have to explicitly "look inside it" to get at its contents. That's like seeing the "new game" button and other controls during the normal use of the video game. Then in the third line of the implicit mutation code, we have the "magic gesture", x := 1, which does something you couldn't do in the code without mutation. This is like bringing up the "new game" display by the magic elbow-and-stomping gesture, which doesn't work in real life. This lets us achieve the same effect that we did in the explicit code using put 1 into x, but without us needing to (or being able to) explicitly inspect or manipulate the reference cell itself.

-This time, during the evaluation of (+ 10 (k 1)), we supply 1 to k. So then the local continuation, which delivers the value up to (+ 10 [_]) and so on, is discarded. Instead 1 gets supplied to the outer continuation in place when let/cc was invoked. That will be (+ 100 [_]). When (+ 100 1) is evaluated, there's no more of the computation left to evaluate. So the answer here is 101.
+Turning to continuations, so far we've seen how to explicitly manipulate them, as in:

-You are not restricted to calling a bound continuation only once, nor are you restricted to calling it only inside of the call/cc (or let/cc) block. For example, you can do this:
+    let rec tc (xs : char list) (k : char list -> char list) =
+      ... tc xs' (fun tail -> ... k ... tail) in
+    tc some_list identity

-	(let ([p (let/cc k (cons 1 k))])
-  	  (cons (car p) ((cdr p) (cons 2 (lambda (x) x)))))
-	; evaluates to '(2 2 . #<procedure>)
+Here we explicitly pass around continuations in the k argument, beginning with the identity or do-nothing continuation, but then modifying the continuation at each recursive invocation of tc.
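
As a concrete rendering of this explicit-k pattern, here's a hedged Python sketch of a tc-style walk. The toy task mirrors the course's Racket tc1 function: walking a list of characters where 'S' means "repeat everything processed so far":

```python
# Explicit continuation-passing: k accumulates the pending work as a
# function, and is modified at each recursive call.
def tc(xs, k):
    if not xs:
        return k([])                                  # run the built-up continuation
    head, rest = xs[0], xs[1:]
    if head == 'S':
        return tc(rest, lambda tail: k(k(tail)))      # compose k with itself
    return tc(rest, lambda tail: k([head] + tail))    # extend k with this char

identity = lambda tail: tail
assert tc(list("aSb"), identity) == list("aab")       # 'S' doubles the prefix
```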

-What happens here? First, we capture the continuation where p is about to be assigned a value. Inside the let/cc block, we create a pair consisting of 1 and the captured continuation. This pair is bound to p. We then proceed to extract the components of the pair. The head (car) goes into the start of a tuple we're building up. To get the next piece of the tuple, we extract the second component of p (this is the bound continuation k) and we apply it to a pair consisting of 2 and the identity function. Supplying arguments to k takes us back to the point where p is about to be assigned a value. The tuple we had formerly been building, starting with 1, will no longer be accessible because we didn't bring along with us any way to refer to it, and we'll never get back to the context where we supplied an argument to k. Now p gets assigned not the result of (let/cc k (cons 1 k)) again, but instead, the new pair that we provided: '(2 . #<identity procedure>). Again we proceed to build up a tuple: we take the first element 2, then we take the second element (now the identity function), and feed it a pair '(2 . #<identity procedure>), and since it's an argument to the identity procedure that's also the result. So our final result is a nested pair, whose first element is 2 and whose second element is the pair '(2 . #<identity procedure>). Racket displays this nested pair like this:
+What the **continuation or control operators** like let/cc, reset, shift, abort, and so on do is give us a "magic gesture" alternative, where we can let the continuations usually be *implicit* in the way our code is structured, but when we perform the magic gesture (that is, use some of these special operators), the continuation gets converted from its implicit form into an explicit function that's bound to a variable we supply.

-	'(2 2 . #<procedure>)
+The continuation operators come in a variety of forms. You'll only be using a few of them (if any) in a single application. But here we'll present a couple of them side-by-side.

+One issue is whether the continuation operators you're working with are "full-strength" or not. As we said, what these operators do is distill an implicit continuation into a function that you can explicitly invoke or manipulate (pass into or return from a function). If they're "full-strength", then there aren't constraints on _where_ or _how many times_ you can invoke that continuation function. Anywhere you have access to some variable that's bound to the continuation, you can invoke it as often as you like. More handicapped continuations are only invocable a single time, or only in certain regions of the code. Sometimes these handicapped continuations are provided because they're easier to implement, and the language designers haven't gotten around to implementing full-strength continuations yet. Or a language might provide _both_ handicapped and full-strength continuations, because the former can be implemented more efficiently. For applications like coroutines or exceptions/aborts, that we looked at before, typically all that's needed is a handicapped form of continuations. If your language has an abort operation, typically you'll only be invoking it once within a single execution path, and only inside the box that you want to abort from.

----
+For our discussion, though, we'll just be looking at the full-strength continuations. You can learn about different ways they might be handicapped later.

-Some callcc/letcc exercises
----------------------------
-
-Here are a series of examples from *The Seasoned Schemer*, which we recommended at the start of term. It's not necessary to have the book to follow the exercises, though if you do have it, its walkthroughs will give you useful assistance.
-
-For reminders about Scheme syntax, see [here](/assignment8/) and [here](/week1/) and [here](/translating_between_ocaml_scheme_and_haskell). Other resources are on our [[Learning Scheme]] page.
-
-Most of the examples assume the following preface:
-
-	#lang racket
-
-	(define (atom? x)
-	  (and (not (pair? x)) (not (null? x))))
-
-Now try to figure out what this function does:
-
-	(define alpha
-	  (lambda (a lst)
-	    (let/cc k ; now what will happen when k is called?
-	      (letrec ([aux (lambda (l)
-	                      (cond
-	                        [(null? l) '()]
-	                        [(eq? (car l) a) (k (aux (cdr l)))]
-	                        [else (cons (car l) (aux (cdr l)))]))])
-	        (aux lst)))))
-
-Here is [the answer](/hints/cps_hint_1), but try to figure it out for yourself.
+The next issue is whether the continuations are _delimited_ or not. In [[our discussion of aborts|week13_coroutines_exceptions_and_aborts#index3h2]], we had a box, and what abort did was skip the rest of the code inside the box and resume execution at the outside border of the box. This is the pattern of a **delimited continuation**, with the box being the delimiter. There are a bunch of different operators that have been proposed for dealing with delimited continuations. Many of them are interdefinable (though the interdefinitions are sometimes complex). We won't be trying to survey them all. The ones we'll suggest as a paradigm are the pair of reset and shift. The first of these marks where the box goes, and the second has two roles: (i) it marks where you should start skipping (if you're going to "skip the rest of the code inside the box"), and (ii) it specifies a variable k that we bind to the continuation representing that skipped code. Thus we have:

-Next, try to figure out what this function does:
+    initial outside code
+    +---reset--------------------+
+    | initial inside code        |
+    | shift k ( ... )            |
+    | remaining inside code      |
+    +---end----------------------+
+    remaining outside code

-	(define beta
-	  (lambda (lst)
-	    (let/cc k ; now what will happen when k is called?
-	      (letrec ([aux (lambda (l)
-	                      (cond
-	                        [(null? l) '()]
-	                        [(atom? (car l)) (k (car l))]
-	                        [else (begin
-	                                ; what will the value of the next line be? why is it ignored?
-	                                (aux (car l))
-	                                (aux (cdr l)))]))])
-	        (aux lst)))))
-
-Here is [the answer](/hints/cps_hint_2), but try to figure it out for yourself.
-
-Next, try to figure out what this function does:
+Really in the implementation of this there are _two_ continuations or snapshots being tracked. There's the potentially skipped code, represented by remaining inside code above; and there's also the continuation/snapshot that we resume with if we do skip that code, represented by remaining outside code. But only the first of these gets bound to a variable, k in the above diagram. What happens in this diagram is that initial outside code runs, then initial inside code runs, then remaining inside code is distilled into a function and bound to the variable k, then we run the ( ... ) code with k so bound. If that ( ... ) code invokes k by applying it to an argument, then remaining inside code is run as though the supplied argument were what the shift k ( ... ) bit evaluated to. If the ( ... ) code doesn't invoke k, but just ends with a normal result like 10, then the remaining inside code is skipped and we resume execution with the outside, implicitly snapshotted code remaining outside code.
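
The flow in this diagram can be hand-simulated for the concrete instance 1000 + reset (100 + shift k ( ... )). A hedged Python sketch (our own encoding, not a real shift/reset implementation):

```python
# k reifies "remaining inside code" (here: 100 + < >); the shift body's
# own result becomes the result of the whole reset box.
def run(invoke_k):
    k = lambda v: 100 + v          # remaining inside code, as a function
    if invoke_k:
        shift_result = k(1)        # invoke k: remaining inside code runs on 1
    else:
        shift_result = 10          # normal result: inside code is skipped
    return 1000 + shift_result     # remaining outside code

assert run(True) == 1101           # 1000 + (100 + 1)
assert run(False) == 1010          # 1000 + 10
```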
+
+You may encounter references to prompt and control. These are variants of reset and shift that differ in only subtle ways. As we said, there are lots of variants of these that we're not going to try to survey.
+
+We talked before about abort. This can be expressed in terms of reset and shift. At the end of our discussion of abort, we said that this diagram:
+
+    let foo x =
+    +---try begin----------------+
+    |       (if x = 1 then 10    |
+    |       else abort 20        |
+    |       ) + 100              |
+    +---end----------------------+
+    in (foo 2) + 1000;;
+
+could be written in Scheme with either:
+
+    #lang racket
+    (require racket/control)
+
+    (let ([foo (lambda (x)
+                 (reset
+                  (+
+                    (if (eqv? x 1) 10 (abort 20))
+                    100)))])
+      (+ (foo 2) 1000))
+
+or:
+
+    #lang racket
+    (require racket/control)
+
+    (let ([foo (lambda (x)
+                 (reset
+                  (+
+                    (shift k
+                      (if (eqv? x 1) (k 10) 20))
+                    100)))])
+      (+ (foo 1) 1000))
+
+That shows you how abort can be expressed in terms of shift. Rewriting the Scheme code into a more OCaml-ish syntax, it might look something like this:
+
+    let foo x = reset ((shift k -> if x = 1 then k 10 else 20) + 100) in
+    foo 1 + 1000
+
+However, OCaml doesn't have any continuation operators in its standard deployment. If you [[installed Oleg's delimcc library|/rosetta3/#delimcc]], you can use the previous code after first doing this:
+
+    # #directory "+../delimcc";;
+    # #load "delimcc.cma";;
+    # let prompt = ref None;;
+    # let reset body = let p = Delimcc.new_prompt () in begin prompt := Some p; Delimcc.push_prompt p body end;;
+    # let shift fun_k = match !prompt with None -> failwith "shift must be inside reset" | Some p -> Delimcc.shift p fun_k;;
+
+Also, the previous code has to be massaged a bit to have the right syntax. What you really need to write is:
+
+    let foo x = reset (fun () -> shift (fun k -> if x = 1 then k 10 else 20) + 100) in
+    foo 1 + 1000
+
+That will return 1110 just like the Scheme code does. If you said ... foo 2 + 1000, you'll instead get 1020.
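
As a sanity check on those numbers, foo can be mimicked in Python, using an exception for the path where k isn't invoked. A hedged sketch (our own encoding, not the delimcc library):

```python
# reset is a box that catches aborts; "k 10" continues through + 100,
# while returning 20 without invoking k skips the + 100.
class _Abort(Exception):
    def __init__(self, v):
        self.v = v

def reset(thunk):
    try:
        return thunk()
    except _Abort as e:
        return e.v                  # the aborting value becomes the box's result

def foo(x):
    def body():
        if x == 1:
            return 10 + 100         # k 10: the "+ 100" still runs
        raise _Abort(20)            # no k: skip the "+ 100"
    return reset(body)

assert foo(1) + 1000 == 1110
assert foo(2) + 1000 == 1020
```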
+
+That was all about *delimited* continuation operators. There are also the **undelimited continuation operators**, which historically were developed first. Here you don't see the same kind of variety that you do with the delimited continuation operators. Essentially, there is just one full-strength undelimited continuation operator. But there are several different syntactic forms for working with it. (Also, a language might provide handicapped continuation operators alongside, or instead of, the full-strength one. Some loser languages don't even do that much.) The historically best-known of these is expressed in Scheme as call-with-current-continuation, or call/cc for short. But we think it's a bit easier to instead use the variant let/cc. The following code is equivalent, and shows how these two forms relate to each other:
+
+    (let/cc k ...)
+
+    (call/cc (lambda (k) ...))
+
+(let/cc k ...) is a lot like (shift k ...) (or in the OCaml version, shift (fun k -> ...)), except that it doesn't need a surrounding reset ( ... ) (in OCaml, reset (fun () -> ...)). For the undelimited continuation operator, the box is understood to be *the whole rest of the top-level computation*. If you're running a file, that's all the rest of the file that would have been executed after the syntactic hole filled by (let/cc k ...). With (shift k ...), the code that gets bound to k doesn't get executed unless you specifically invoke k; but let/cc works differently in this respect. Thus:
+
+    (+ 100 (let/cc k 1))
+
+returns 101, whereas:
+

(Diff truncated)

formatting
diff --git a/rosetta3.mdwn b/rosetta3.mdwn
index 1501077..f533a02 100644
--- a/rosetta3.mdwn
+++ b/rosetta3.mdwn
@@ -70,7 +70,7 @@ This page covers how to do some OCaml-ish and Haskell-ish things in Scheme, and

These last three forms are also available in OCaml, but to use them you'll need to compile and install Oleg Kiselyov's "delimcc" or "caml-shift" library (these names refer to the same library), which you can find [here](http://okmij.org/ftp/continuations/implementations.html#caml-shift). You'll already need to have OCaml installed. It also helps if you already have the findlib package installed, too, [as we discuss here](http://lambda.jimpryor.net/how_to_get_the_programming_languages_running_on_your_computer/). If you're not familiar with how to compile software on your computer, this might be beyond your reach for the time being.

-        <a id=delimcc></a>
+	<a id=delimcc></a>
But assuming you do manage to compile and install Oleg's library, here's how you'd use it in an OCaml session:

#require "delimcc";; (* loading Oleg's library this way requires the findlib package *)


diff --git a/rosetta3.mdwn b/rosetta3.mdwn
index 070573b..1501077 100644
--- a/rosetta3.mdwn
+++ b/rosetta3.mdwn
@@ -70,6 +70,7 @@ This page covers how to do some OCaml-ish and Haskell-ish things in Scheme, and

These last three forms are also available in OCaml, but to use them you'll need to compile and install Oleg Kiselyov's "delimcc" or "caml-shift" library (these names refer to the same library), which you can find [here](http://okmij.org/ftp/continuations/implementations.html#caml-shift). You'll already need to have OCaml installed. It also helps if you already have the findlib package installed, too, [as we discuss here](http://lambda.jimpryor.net/how_to_get_the_programming_languages_running_on_your_computer/). If you're not familiar with how to compile software on your computer, this might be beyond your reach for the time being.

+        <a id=delimcc></a>
But assuming you do manage to compile and install Oleg's library, here's how you'd use it in an OCaml session:

#require "delimcc";; (* loading Oleg's library this way requires the findlib package *)


add vr game images
diff --git a/images/star-wars-arcade.gif b/images/star-wars-arcade.gif
new file mode 100644
index 0000000..8a0d703
Binary files /dev/null and b/images/star-wars-arcade.gif differ
diff --git a/images/virtual-reality.jpg b/images/virtual-reality.jpg
new file mode 100644
index 0000000..82697fd
Binary files /dev/null and b/images/virtual-reality.jpg differ


1->1000
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index 706050e..cd26a3a 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -255,7 +255,7 @@ A more general way to think about these snapshots is to think of the code we're
else abort 20
) + 100
end
-    in (foo 2) + 1;; (* this line is new *)
+    in (foo 2) + 1000;; (* this line is new *)

we can imagine a box:


re-conceal OCaml
This reverts commit e36eaae1de25d2ba2b42afe2f77f51f6a2470a7b
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index f0ddc61..706050e 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -425,9 +425,7 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
(+ (foo 1) 1000))

-And in OCaml:
-
-<pre>
+<!--
# #require "delimcc";;
# open Delimcc;;
# let reset body = let p = new_prompt () in push_prompt p (body p);;
@@ -456,7 +454,8 @@ And in OCaml:
- : int = 1020
# test_shift 2;;
- : int = 1020
-</pre>
+-->
+

Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation---that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper).


temporarily show OCaml
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index 706050e..f0ddc61 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -425,7 +425,9 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
(+ (foo 1) 1000))

-<!--
+And in OCaml:
+
+<pre>
# #require "delimcc";;
# open Delimcc;;
# let reset body = let p = new_prompt () in push_prompt p (body p);;
@@ -454,8 +456,7 @@ There are also different kinds of "syntactic sugar" we can use to hide the conti
- : int = 1020
# test_shift 2;;
- : int = 1020
--->
-
+</pre>

Various of the tools we've been introducing over the past weeks are inter-related. We saw coroutines implemented first with zippers; here we've talked in the abstract about their being implemented with continuations. Oleg says that "Zipper can be viewed as a delimited continuation reified as a data structure." Ken expresses the same idea in terms of a zipper being a "defunctionalized" continuation---that is, take something implemented as a function (a continuation) and implement the same thing as an inert data structure (a zipper).


update
diff --git a/topics/week13_coroutines_exceptions_and_aborts.mdwn b/topics/week13_coroutines_exceptions_and_aborts.mdwn
index 389e7d5..706050e 100644
--- a/topics/week13_coroutines_exceptions_and_aborts.mdwn
+++ b/topics/week13_coroutines_exceptions_and_aborts.mdwn
@@ -8,73 +8,75 @@ The technique illustrated in those solutions is a powerful and important one. It

With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our same_fringe function:

-	main program        next1 thread        next2 thread
-	------------        ------------        ------------
-	start next1
-	(paused)            starting
-	(paused)            calculate first leaf
-	(paused)            <--- return it
-	start next2         (paused)            starting
-	(paused)            (paused)            calculate first leaf
-	(paused)            (paused)            <-- return it
-	compare leaves      (paused)            (paused)
-	call loop again     (paused)            (paused)
-	call next1 again    (paused)            (paused)
-	(paused)            calculate next leaf (paused)
-	(paused)            <-- return it       (paused)
-	... and so on ...
+    main program        next1 thread        next2 thread
+    ------------        ------------        ------------
+    start next1
+    (paused)            starting
+    (paused)            calculate first leaf
+    (paused)            <--- return it
+    start next2         (paused)            starting
+    (paused)            (paused)            calculate first leaf
+    (paused)            (paused)            <-- return it
+    compare leaves      (paused)            (paused)
+    call loop again     (paused)            (paused)
+    call next1 again    (paused)            (paused)
+    (paused)            calculate next leaf (paused)
+    (paused)            <-- return it       (paused)
+    ... and so on ...

If you want to read more about these kinds of threads, here are some links:

-<!-- *	[[!wikipedia Computer_multitasking]]
-*	[[!wikipedia Thread_(computer_science)]] -->
+<!-- *   [[!wikipedia Computer_multitasking]]
+*   [[!wikipedia Thread_(computer_science)]] -->

-*	[[!wikipedia Coroutine]]
-*	[[!wikipedia Iterator]]
-*	[[!wikipedia Generator_(computer_science)]]
-*	[[!wikipedia Fiber_(computer_science)]]
-<!-- *	[[!wikipedia Green_threads]]
-*	[[!wikipedia Protothreads]] -->
+*   [[!wikipedia Coroutine]]
+*   [[!wikipedia Iterator]]
+*   [[!wikipedia Generator_(computer_science)]]
+*   [[!wikipedia Fiber_(computer_science)]]
+<!-- *   [[!wikipedia Green_threads]]
+*   [[!wikipedia Protothreads]] -->

The way we built cooperative threads using make_fringe_enumerator crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it either required us to manually save and restore the thread's snapshotted state (a tree zipper); or else we had to use a mutable reference cell to save and restore that state for us. Using the saved state, the next invocation of the next_leaf function could start up again where the previous invocation left off.

-It's possible to build cooperative threads without using those tools, however. Already our [[solution using streams|/exercises/assignment12#streams2]] uses neither zippers nor any mutation. Instead it saves the thread's state in explicitly-created thunks, and resumes the thread by forcing the thunk.
-
-Some languages have a native syntax for coroutines. Here's how we'd write the same-fringe solution above using native coroutines in the language Lua:
-
-	> function fringe_enumerator (tree)
-	    if tree.leaf then
-	        coroutine.yield (tree.leaf)
-	    else
-	        fringe_enumerator (tree.left)
-	        fringe_enumerator (tree.right)
-	    end
-	end
-
-	> function same_fringe (tree1, tree2)
-	    local next1 = coroutine.wrap (fringe_enumerator)
-	    local next2 = coroutine.wrap (fringe_enumerator)
-	    local function loop (leaf1, leaf2)
-	        if leaf1 or leaf2 then
-	            return leaf1 == leaf2 and loop( next1(), next2() )
-	        elseif not leaf1 and not leaf2 then
-	            return true
-	        else
-	            return false
-	        end
-	    end
-	    return loop (next1(tree1), next2(tree2))
-	end
-
-	> return same_fringe ( {leaf=1}, {leaf=2} )
-	false
-
-	> return same_fringe ( {leaf=1}, {leaf=1} )
-	true
-
-	> return same_fringe ( {left = {leaf=1}, right = {left = {leaf=2}, right = {leaf=3}}},
-	    {left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}} )
-	true
+It's possible to build cooperative threads without using those tools, however. Already our [[solution using streams|/exercises/assignment12#streams2]] uses neither zippers nor any mutation. Instead it saves the thread's state in the code of explicitly-created thunks, and resumes the thread by forcing the thunk.
+
+Some languages have a native syntax for coroutines. Here's how we'd write the same-fringe solution using native coroutines in the language Lua:
+
+
+    > function fringe_enumerator (tree)
+        if tree.leaf then
+            coroutine.yield (tree.leaf)
+        else
+            fringe_enumerator (tree.left)
+            fringe_enumerator (tree.right)
+        end
+    end
+
+    > function same_fringe (tree1, tree2)
+      -- coroutine.wrap turns a function into a coroutine
+        local next1 = coroutine.wrap (fringe_enumerator)
+        local next2 = coroutine.wrap (fringe_enumerator)
+        local function loop (leaf1, leaf2)
+            if leaf1 or leaf2 then
+                return leaf1 == leaf2 and loop( next1(), next2() )
+            elseif not leaf1 and not leaf2 then
+                return true
+            else
+                return false
+            end
+        end
+        return loop (next1(tree1), next2(tree2))
+    end
+
+    > return same_fringe ( {leaf=1}, {leaf=2} )
+    false
+
+    > return same_fringe ( {leaf=1}, {leaf=1} )
+    true
+
+    > return same_fringe ( {left = {leaf=1}, right = {left = {leaf=2}, right = {leaf=3}}},
+        {left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}} )
+    true

We're going to think about the underlying principles to this execution pattern, and instead learn how to implement it from scratch---without necessarily having zippers or dedicated native syntax to rely on.

@@ -85,97 +87,98 @@ To get a better understanding of how that execution pattern works, we'll add yet

While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of several sorts. One sort of error comes about when you've got syntax errors and the OCaml interpreter isn't even able to parse your code. A second sort of error is type errors, as in:

-	# let lst = [1; 2] in
-	  "a" :: lst;;
-	Error: This expression has type int list
-	       but an expression was expected of type string list
+    # let lst = [1; 2] in
+      "a" :: lst;;
+             ---
+    Error: This expression has type int list
+           but an expression was expected of type string list

Type errors are also detected and reported before OCaml attempts to execute or evaluate your code. But you may also have encountered a third kind of error, that arises while your program is running. For example:

-	# 1/0;;
-	Exception: Division_by_zero.
-	# List.nth [1;2] 10;;
-	Exception: Failure "nth".
+    # 1/0;;
+    Exception: Division_by_zero.
+    # List.nth [1;2] 10;;
+    Exception: Failure "nth".

-These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are *raised* by code. For instance, here is the standard implementation of List.nth:
+These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are manually *raised* by code. For instance, here is the standard implementation of List.nth:

-	let nth l n =
-	  if n < 0 then invalid_arg "List.nth" else
-	  let rec nth_aux l n =
-	    match l with
-	    | [] -> failwith "nth"
-	    | a::l -> if n = 0 then a else nth_aux l (n-1)
-	  in nth_aux l n
+    let nth l n =
+      if n < 0 then invalid_arg "List.nth" else
+      let rec nth_aux l n =
+        match l with
+        | [] -> failwith "nth"
+        | a::l -> if n = 0 then a else nth_aux l (n-1)
+      in nth_aux l n

(The Juli8 version of List.nth only differs in sometimes raising a different error.) Notice the two clauses invalid_arg "List.nth" and failwith "nth". These are two helper functions which are shorthand for:

-	raise (Invalid_argument "List.nth");;
-	raise (Failure "nth");;
+    raise (Invalid_argument "List.nth");;
+    raise (Failure "nth");;

-where Invalid_argument "List.nth" is a value of type exn, and so too Failure "nth". When you have some value bad of type exn and evaluate the expression:
+where Invalid_argument "List.nth" constructs a value of type exn, and so too Failure "nth". When you have some value bad of type exn and evaluate the expression:

the effect is for the program to immediately stop without evaluating any further code:

-	# let xcell = ref 0;;
-	val xcell : int ref = {contents = 0}

(Diff truncated)
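
The List.nth error behavior discussed in the entry above has a close Python analogue; a hedged sketch (our own, not OCaml's actual implementation), where raising an exception likewise stops evaluation immediately:

```python
# Mirrors List.nth's two error cases: a negative index is an invalid
# argument, and an index past the end is a runtime failure.
def nth(lst, n):
    if n < 0:
        raise ValueError("List.nth")   # like invalid_arg "List.nth"
    for i, x in enumerate(lst):
        if i == n:
            return x
    raise IndexError("nth")            # like failwith "nth"

assert nth([1, 2], 1) == 2
```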

update refunct zippers code
diff --git a/code/refunctionalizing_zippers.rkt b/code/refunctionalizing_zippers.rkt
index 3b51ce9..cac0916 100644
--- a/code/refunctionalizing_zippers.rkt
+++ b/code/refunctionalizing_zippers.rkt
@@ -17,19 +17,26 @@
[else (tz1 (pair (cons (nextchar z) (saved z)) (rest z)))]))

; using explicit continuations
-(define (tc1 l k)
+(define (tc0 l k)
(cond
[(null? l) (reverse (k '()))]
+    [(eqv? #\S (car l)) (tc0 (cdr l) (compose k k))]
+    [else (tc0 (cdr l) (lambda (tail) (cons (car l) (k tail))))]))
+
+; improvement: if we flip the order of cons and k in the last line, we can avoid the need to reverse
+(define (tc1 l k)
+  (cond
+    [(null? l) (k '())]
[(eqv? #\S (car l)) (tc1 (cdr l) (compose k k))]
-    [else (tc1 (cdr l) (lambda (tail) (cons (car l) (k tail))))]))
+    [else (tc1 (cdr l) (lambda (tail) (k (cons (car l) tail))))]))

; using implicit continuations (reset/shift)
(define (tr1 l)
(shift k
(cond
-      [(null? l) (reverse (k '()))]
+      [(null? l) (k '())]
[(eqv? #\S (car l)) ((compose k k) (tr1 (cdr l)))]
-      [else ((lambda (tail) (cons (car l) (k tail))) (tr1 (cdr l)))])))
+      [else ((lambda (tail) (k (cons (car l) tail))) (tr1 (cdr l)))])))

; wrapper functions, there's a (test) function at the end

@@ -59,23 +66,21 @@
[else (tz3 (pair (cons (nextchar z) (saved z)) (rest z)))]))

; using explicit continuations
-; there are several working solutions
-; but it's a bit tricky to get the reverses in the right place, and the order of appending right
(define (tc3 l k)
(cond
-    [(null? l) (reverse (k '()))]
-    [(eqv? #\# (car l)) (append (reverse (k '())) (tc3 (cdr l) identity))]
+    [(null? l) (k '())]
+    [(eqv? #\# (car l)) (append (k '()) (tc3 (cdr l) identity))]
[(eqv? #\S (car l)) (tc3 (cdr l) (compose k k))]
-    [else (tc3 (cdr l) (lambda (tail) (cons (car l) (k tail))))]))
+    [else (tc3 (cdr l) (lambda (tail) (k (cons (car l) tail))))]))

; using implicit continuations (reset/shift)
(define (tr3 l)
(shift k
(cond
-      [(null? l) (reverse (k '()))]
-      [(eqv? #\# (car l)) (append (reverse (k '())) (reset (tr3 (cdr l))))]
+      [(null? l) (identity (k '()))]
+      [(eqv? #\# (car l)) (append (k '()) (reset (tr3 (cdr l))))]
[(eqv? #\S (car l)) ((compose k k) (tr3 (cdr l)))]
-      [else ((lambda (tail) (cons (car l) (k tail))) (tr3 (cdr l)))])))
+      [else ((lambda (tail) (k (cons (car l) tail))) (tr3 (cdr l)))])))

(define (tz4 s)
(list->string (tz3 (cons '() (string->list s)))))
@@ -93,9 +98,11 @@
(equal? (t1 inp) (t2 inp)))
(and
(equal? (tz2 "abSd") "ababd")
+   (cmp (lambda (s) (list->string (tc0 (string->list s) identity))) tz2 "abSd")
(cmp tc2 tz2 "abSd")
(cmp tr2 tz2 "abSd")
(equal? (tz2 "aSbS") "aabaab")
+   (cmp (lambda (s) (list->string (tc0 (string->list s) identity))) tz2 "aSbS")
(cmp tc2 tz2 "aSbS")
(cmp tr2 tz2 "aSbS")
(equal? (tz4 "ab#ceSfSd") "abcecefcecefd")


post two images
diff --git a/topics/week13_control_operators.mdwn b/topics/week13_control_operators.mdwn
--- a/topics/week13_control_operators.mdwn
+++ b/topics/week13_control_operators.mdwn
@@ -1,3 +1,7 @@
+* [A more immersive game](http://upload.wikimedia.org/wikipedia/commons/7/78/AC89-0437-20_a.jpeg)
+
+
3.	callcc was originally introduced in Scheme. There it's written call/cc and is an abbreviation of call-with-current-continuation. Instead of the somewhat bulky form:

(call/cc (lambda (k) ...))


tweak glyphs
diff --git a/topics/week7_introducing_monads.mdwn b/topics/week7_introducing_monads.mdwn
index 06dd06e..e73fb36 100644
@@ -1,4 +1,4 @@
-<!-- λ Λ ∀ ≡ α β γ ρ ω Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ¢ ⇧ -->
+<!-- λ Λ ∀ ≡ α β γ ρ ω Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ¢ ⇧; rest aren't on office machine 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 -->

The [[tradition in the functional programming


wip
diff --git a/topics/week13_control_operators.mdwn b/topics/week13_control_operators.mdwn
new file mode 100644
index 0000000..a8fd115
--- /dev/null
+++ b/topics/week13_control_operators.mdwn
@@ -0,0 +1,384 @@
+3.	callcc was originally introduced in Scheme. There it's written call/cc and is an abbreviation of call-with-current-continuation. Instead of the somewhat bulky form:
+
+		(call/cc (lambda (k) ...))
+
+	I prefer instead to use the lighter, and equivalent, shorthand:
+
+		(let/cc k ...)
+
+
+Callcc/letcc examples
+---------------------
+
+First, here are two examples in Scheme:
+
+	(+ 100 (let/cc k (+ 10 1)))
+	       |-----------------|
+
+This binds the continuation outk of the underlined expression to k, then computes (+ 10 1) and delivers that to outk in the normal way (not through k). No unusual behavior. It evaluates to 111.
+
+What if we do instead:
+
+	(+ 100 (let/cc k (+ 10 (k 1))))
+	       |---------------------|
+
+This time, during the evaluation of (+ 10 (k 1)), we supply 1 to k. The local continuation, which would deliver the value up to (+ 10 [_]) and so on, is then discarded. Instead, 1 gets supplied to the outer continuation that was in place when let/cc was invoked. That will be (+ 100 [_]). When (+ 100 1) has been evaluated, there's no more of the computation left to evaluate. So the answer here is 101.
+
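Since a Scheme may not be at hand, here is a loose Python analogue of the contrast just described. Python has no call/cc, but an *escape* use of a continuation (jumping out, never back in) can be simulated with an exception; the helper name `let_cc` is ad hoc, invented for this sketch:

```python
def let_cc(body):
    """Call body with an escape procedure k. Calling k(v) aborts the
    rest of body and makes v the value of the whole let_cc expression.
    Only an approximation of let/cc: unlike a real captured continuation,
    k cannot usefully be called after let_cc has already returned."""
    class Escape(Exception):  # a fresh class per call, so nested let_cc's don't collide
        pass

    def k(v):
        e = Escape()
        e.value = v
        raise e

    try:
        return body(k)
    except Escape as e:
        return e.value

print(100 + let_cc(lambda k: 10 + 1))     # k unused: evaluates normally, prints 111
print(100 + let_cc(lambda k: 10 + k(1)))  # k(1) discards the pending "+ 10": prints 101
```

This only models one-shot, upward jumps; the multi-shot example below (where the continuation is called again later) is beyond what an exception can simulate.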
+You are not restricted to calling a bound continuation only once, nor are you restricted to calling it only inside of the call/cc (or let/cc) block. For example, you can do this:
+
+	(let ([p (let/cc k (cons 1 k))])
+  	  (cons (car p) ((cdr p) (cons 2 (lambda (x) x)))))
+	; evaluates to '(2 2 . #<procedure>)
+
+What happens here? First, we capture the continuation where p is about to be assigned a value. Inside the let/cc block, we create a pair consisting of 1 and the captured continuation. This pair is bound to p. We then proceed to extract the components of the pair. The head (car) goes into the start of a tuple we're building up. To get the next piece of the tuple, we extract the second component of p (this is the bound continuation k) and we apply it to a pair consisting of 2 and the identity function. Supplying arguments to k takes us back to the point where p is about to be assigned a value. The tuple we had formerly been building, starting with 1, will no longer be accessible because we didn't bring along with us any way to refer to it, and we'll never get back to the context where we supplied an argument to k. Now p gets assigned not the result of (let/cc k (cons 1 k)) again, but instead, the new pair that we provided: '(2 . #<identity procedure>). Again we proceed to build up a tuple: we take the first element 2, then we take the second element (now the identity function), and feed it a pair '(2 . #<identity procedure>), and since it's an argument to the identity procedure that's also the result. So our final result is a nested pair, whose first element is 2 and whose second element is the pair '(2 . #<identity procedure>). Racket displays this nested pair like this:
+
+	'(2 2 . #<procedure>)
+
+
+---
+
+Some callcc/letcc exercises
+---------------------------
+
+Here are a series of examples from *The Seasoned Schemer*, which we recommended at the start of term. It's not necessary to have the book to follow the exercises, though if you do have it, its walkthroughs will give you useful assistance.
+
+For reminders about Scheme syntax, see [here](/assignment8/) and [here](/week1/) and [here](/translating_between_ocaml_scheme_and_haskell). Other resources are on our [[Learning Scheme]] page.
+
+Most of the examples assume the following preface:
+
+	#lang racket
+
+	(define (atom? x)
+	  (and (not (pair? x)) (not (null? x))))
+
+Now try to figure out what this function does:
+
+	(define alpha
+	  (lambda (a lst)
+	    (let/cc k ; now what will happen when k is called?
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(eq? (car l) a) (k (aux (cdr l)))]
+	                        [else (cons (car l) (aux (cdr l)))]))])
+	        (aux lst)))))
+
+Here is [the answer](/hints/cps_hint_1), but try to figure it out for yourself.
+
+Next, try to figure out what this function does:
+
+	(define beta
+	  (lambda (lst)
+	    (let/cc k ; now what will happen when k is called?
+	      (letrec ([aux (lambda (l)
+	                      (cond
+	                        [(null? l) '()]
+	                        [(atom? (car l)) (k (car l))]
+	                        [else (begin
+	                                ; what will the value of the next line be? why is it ignored?
+	                                (aux (car l))
+	                                (aux (cdr l)))]))])
+	        (aux lst)))))
+
+Here is [the answer](/hints/cps_hint_2), but try to figure it out for yourself.
+
+Next, try to figure out what this function does:
+
+	(define gamma
+	  (lambda (a lst)
+	    (letrec ([aux (lambda (l k)
+	                    (cond
+	                      [(null? l) (k 'notfound)]
+	                      [(eq? (car l) a) (cdr l)]
+	                      [(atom? (car l)) (cons (car l) (aux (cdr l) k))]
+	                      [else
+	                       ; what happens when (car l) exists but isn't an atom?
+	                       (let ([car2 (let/cc k2 ; now what will happen when k2 is called?
+	                                     (aux (car l) k2))])
+	                         (cond
+	                           ; when will the following condition be met? what happens then?
+	                           [(eq? car2 'notfound) (cons (car l) (aux (cdr l) k))]
+	                           [else (cons car2 (cdr l))]))]))]
+	             [lst2 (let/cc k1 ; now what will happen when k1 is called?
+	                     (aux lst k1))])
+	      (cond
+	        ; when will the following condition be met?
+	        [(eq? lst2 'notfound) lst]
+	        [else lst2]))))
+
+Here is [the answer](/hints/cps_hint_3), but try to figure it out for yourself.
+
+Here is the hardest example. Try to figure out what this function does:
+
+	(define delta
+	  (letrec ([yield (lambda (x) x)]
+	           [resume (lambda (x) x)]
+	           [walk (lambda (l)
+	                   (cond
+	                     ; is this the only case where walk returns a non-atom?
+	                     [(null? l) '()]
+	                     [(atom? (car l)) (begin
+	                                        (let/cc k2 (begin
+	                                          (set! resume k2) ; now what will happen when resume is called?
+	                                          ; when the next line is executed, what will yield be bound to?
+	                                          (yield (car l))))
+	                                        ; when will the next line be executed?
+	                                        (walk (cdr l)))]
+	                     [else (begin
+	                             ; what will the value of the next line be? why is it ignored?
+	                             (walk (car l))
+	                             (walk (cdr l)))]))]
+	           [next (lambda () ; next is a thunk
+	                   (let/cc k3 (begin
+	                     (set! yield k3) ; now what will happen when yield is called?
+	                     ; when the next line is executed, what will resume be bound to?
+	                     (resume 'blah))))]
+	           [check (lambda (prev)
+	                    (let ([n (next)])
+	                      (cond
+	                        [(eq? n prev) #t]
+	                        [(atom? n) (check n)]
+	                        ; when will n fail to be an atom?
+	                        [else #f])))])
+	    (lambda (lst)
+	      (let ([fst (let/cc k1 (begin
+	                   (set! yield k1) ; now what will happen when yield is called?
+	                   (walk lst)
+	                   ; when will the next line be executed?
+	                   (yield '())))])
+	        (cond
+	          [(atom? fst) (check fst)]
+	          ; when will fst fail to be an atom?
+	          [else #f])
+	        ))))
+
+Here is [the answer](/hints/cps_hint_4), but again, first try to figure it out for yourself.
+
+
+Delimited control operators
+===========================
+
+Here again is the CPS transform for callcc:
+
+ 	[callcc (\k. body)] = \outk. (\k. [body] outk) (\v localk. outk v)
+
+callcc is what's known as an *undelimited control operator*. That is, the continuations outk that get bound into our ks include all the code from the call/cc ... out to *and including* the end of the program. Calling such a continuation will never return any value to the call site.
+
+(See the technique employed in the delta example above, with the (begin (let/cc k2 ...) ...), for a work-around. Also, if you've got a copy of *The Seasoned Schemer*, see the comparison of let/cc vs. "collector-using" (that is, partly CPS) functions at pp. 155-164.)
+
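To see that transform in action, here is a small Python rendering (the helper names lit, var, add, app, and callcc are ad hoc, chosen just for this sketch). Each CPS term is a function expecting a continuation:

```python
def lit(n):                     # constant: pass n to the continuation
    return lambda k: k(n)

def var(v):                     # reference to an already-evaluated value
    return lambda k: k(v)

def add(m, n):                  # CPS addition of two CPS terms
    return lambda k: m(lambda a: n(lambda b: k(a + b)))

def app(f, x):                  # CPS application
    return lambda k: f(lambda fv: x(lambda xv: fv(xv)(k)))

def callcc(body):
    # [callcc (\k. body)] = \outk. (\k. [body] outk) (\v localk. outk v)
    # body maps the reified continuation value to the CPS transform of its body
    return lambda outk: body(lambda v: lambda localk: outk(v))(outk)

# (+ 100 (callcc (\k. (+ 10 (k 1)))))
expr = add(lit(100), callcc(lambda k: add(lit(10), app(var(k), lit(1)))))
print(expr(lambda v: v))        # the reified outk discards the pending "+ 10": prints 101
```

Note how the reified continuation \v localk. outk v ignores its local continuation localk; that is exactly why everything between the call site and the end of the program gets thrown away.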
+Often it's more useful to use a different pattern, where we instead capture only the code from the invocation of our control operator out to a certain boundary, not including the end of the program. These are called *delimited control operators*. A variety of these have been formulated. The best-behaved for our purposes is the pair reset and shift. reset sets the boundary, and shift binds the continuation from the position where it's invoked out to that boundary.
+
+It works like this:
+
+	(1) outer code
+	------- reset -------
+	| (2)               |
+	| +----shift k ---+ |
+	| | (3)           | |
+	| |               | |
+	| |               | |
+	| +---------------+ |
+	| (4)               |
+	+-------------------+
+	(5) more outer code
+
+First, the code in position (1) runs. Ignore position (2) for the moment. When we hit the shift k, the continuation between the shift and the reset will be captured and bound to k. Then the code in position (3) will run, with k so bound. The code in position (4) will never run, unless it's invoked through k. If the code in position (3) just ends with a regular value, and doesn't apply k, then the value returned by (3) is passed to (5) and the computation continues.
+
+So it's as though the middle box---the (2) and (4) region---is by default not evaluated. This code is instead bound to k, and it's up to other code whether and when to apply k to any argument. If k is applied to an argument, then what happens? It will be as if that argument were the value supplied by (3), only now that value does go on to the code in (4) syntactically enclosing (3). When (4) is finished, its value also goes to (5) (just as (3)'s value did when it ended with a regular value). k can be applied repeatedly, and every time the computation will traverse that same path from (4) into (5).
+
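A concrete arithmetic illustration of that flow (a hand-evaluation, not an implementation of shift/reset): in (+ 1 (reset (+ 2 (shift k (k (k 3)))))), the captured k is the "+ 2" frame between the shift and the reset, and the "+ 1" frame is the outer code at (5). Writing k out as an ordinary function:

```python
# k is the delimited continuation from the shift out to the reset: "add 2"
k = lambda v: 2 + v

# the shift body applies k twice, and the resulting value then flows
# past the reset boundary into the outer "add 1" code
result = 1 + k(k(3))
print(result)  # 1 + (2 + (2 + 3)) = 8
```

Because k reaches only to the reset boundary, applying it returns a value to the call site, so it can sensibly be composed with itself, unlike the undelimited continuations above.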
+I set (2) aside a moment ago. The story we just told is a bit too simple because the code in (2) needs to be evaluated because some of it may be relied on in (3).
+
+For instance, in Scheme this:
+
+	(require racket/control)
+

(Diff truncated)

diff --git a/index.mdwn b/index.mdwn
index edbe9a9..980fde1 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,7 +197,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; [[Let/cc and reset/shift|topics/week13_control_operators]]; CPS transforms
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift<!-- [[Let/cc and reset/shift|topics/week13_control_operators]] -->; CPS transforms

(**Week 14**) Thursday May 7


diff --git a/topics/_cps.mdwn b/topics/_cps.mdwn
index d7752f4..3873215 100644
--- a/topics/_cps.mdwn
+++ b/topics/_cps.mdwn
@@ -1,3 +1,6 @@
+**Note to Chris**: [[don't forget this material to be merged in somehow|/topics/_cps_and_continuation_operators]]. I marked where I cut some material to put into week13_control_operators, but that page is still a work in progress in my browser...
+
+
Gaining control over order of evaluation
----------------------------------------


marked where I cut
diff --git a/topics/_cps_and_continuation_operators.mdwn b/topics/_cps_and_continuation_operators.mdwn
index 72b0034..679f0fb 100644
--- a/topics/_cps_and_continuation_operators.mdwn
+++ b/topics/_cps_and_continuation_operators.mdwn
@@ -175,6 +175,8 @@ So too will examples. We'll give some examples, and show you how to try them out

<!-- GOTCHAS?? -->

+-- cutting for control operators --
+
3.	callcc was originally introduced in Scheme. There it's written call/cc and is an abbreviation of call-with-current-continuation. Instead of the somewhat bulky form:

(call/cc (lambda (k) ...))
@@ -211,6 +213,8 @@ What happens here? First, we capture the continuation where p is about to be a

'(2 2 . #<procedure>)

+-- end of cut --
+
Ok, so now let's see how to perform these same computations via CPS.

In the lambda evaluator:
@@ -276,6 +280,8 @@ The third example is more difficult to make work with the monadic library, becau

<!-- FIXME -->

+-- cutting following section for control operators --
+
Some callcc/letcc exercises
---------------------------

@@ -464,6 +470,10 @@ The box is working like a reset. The abort is implemented with a shift. Earl

snapshot here corresponds to the code outside the reset. continue_normally is the middle block of code, between the shift and its surrounding reset. This is what gets bound to the k in our shift. The if... statement is inside a shift. Notice there that we invoke the bound continuation to "continue normally". We just invoke the outer continuation, saved in snapshot when we placed the reset, to skip the "continue normally" code and immediately abort to outside the box.

+
+-- end of cut --
+
+
Using shift and reset operators in OCaml, this would look like this:

#require "delimcc";;
@@ -515,6 +525,8 @@ In collecting these CPS transforms and implementing the monadic versions, we've
*	Sabry, "Note on axiomatizing the semantics of control operators" (1996)

+-- cutting some of the following for control operators --
+
Examples of shift/reset/abort
-----------------------------


diff --git a/index.mdwn b/index.mdwn
index 4a78f7e..edbe9a9 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,7 +197,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift; CPS transforms
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; [[Let/cc and reset/shift|topics/week13_control_operators]]; CPS transforms

(**Week 14**) Thursday May 7


diff --git a/index.mdwn b/index.mdwn
index d372440..4a78f7e 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,7 +197,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; Coroutines, exceptions, and aborts; Let/cc and reset/shift; CPS transforms
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; [[Coroutines, exceptions, and aborts|topics/week13_coroutines_exceptions_and_aborts]]; Let/cc and reset/shift; CPS transforms

(**Week 14**) Thursday May 7


rename topics/_coroutines_and_aborts.mdwn to topics/week13_coroutines_exceptions_and_aborts.mdwn
diff --git a/topics/_coroutines_and_aborts.mdwn b/topics/_coroutines_and_aborts.mdwn
deleted file mode 100644
index 389e7d5..0000000
--- a/topics/_coroutines_and_aborts.mdwn
+++ /dev/null
@@ -1,440 +0,0 @@
-[[!toc]]
-
-## Coroutines ##
-
-Recall [[the recent homework assignment|/exercises/assignment12]] where you solved the same-fringe problem with a make_fringe_enumerator function, or in the Scheme version using streams instead of zippers, with a lazy-flatten function.
-
-The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.
-
-With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our same_fringe function:
-
-	main program        next1 thread        next2 thread
-	------------        ------------        ------------
-	start next1
-	(paused)            starting
-	(paused)            calculate first leaf
-	(paused)            <--- return it
-	start next2         (paused)            starting
-	(paused)            (paused)            calculate first leaf
-	(paused)            (paused)            <-- return it
-	compare leaves      (paused)            (paused)
-	call loop again     (paused)            (paused)
-	call next1 again    (paused)            (paused)
-	(paused)            calculate next leaf (paused)
-	(paused)            <-- return it       (paused)
-	... and so on ...
-
-If you want to read more about these kinds of threads, here are some links:
-
-<!-- *	[[!wikipedia Computer_multitasking]]
-*	[[!wikipedia Thread_(computer_science)]] -->
-
-*	[[!wikipedia Coroutine]]
-*	[[!wikipedia Iterator]]
-*	[[!wikipedia Generator_(computer_science)]]
-*	[[!wikipedia Fiber_(computer_science)]]
-<!-- *	[[!wikipedia Green_threads]]
-*	[[!wikipedia Protothreads]] -->
-
-The way we built cooperative threads using make_fringe_enumerator crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it either required us to manually save and restore the thread's snapshotted state (a tree zipper); or else we had to use a mutable reference cell to save and restore that state for us. Using the saved state, the next invocation of the next_leaf function could start up again where the previous invocation left off.
-
-It's possible to build cooperative threads without using those tools, however. Already our [[solution using streams|/exercises/assignment12#streams2]] uses neither zippers nor any mutation. Instead it saves the thread's state in explicitly-created thunks, and resumes the thread by forcing the thunk.
-
-Some languages have a native syntax for coroutines. Here's how we'd write the same-fringe solution above using native coroutines in the language Lua:
-
-	> function fringe_enumerator (tree)
-	    if tree.leaf then
-	        coroutine.yield (tree.leaf)
-	    else
-	        fringe_enumerator (tree.left)
-	        fringe_enumerator (tree.right)
-	    end
-	end
-
-	> function same_fringe (tree1, tree2)
-	    local next1 = coroutine.wrap (fringe_enumerator)
-	    local next2 = coroutine.wrap (fringe_enumerator)
-	    local function loop (leaf1, leaf2)
-	        if leaf1 or leaf2 then
-	            return leaf1 == leaf2 and loop( next1(), next2() )
-	        elseif not leaf1 and not leaf2 then
-	            return true
-	        else
-	            return false
-	        end
-	    end
-	    return loop (next1(tree1), next2(tree2))
-	end
-
-	> return same_fringe ( {leaf=1}, {leaf=2} )
-	false
-
-	> return same_fringe ( {leaf=1}, {leaf=1} )
-	true
-
-	> return same_fringe ( {left = {leaf=1}, right = {left = {leaf=2}, right = {leaf=3}}},
-	    {left = {left = {leaf=1}, right = {leaf=2}}, right = {leaf=3}} )
-	true
-
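Python's generator functions are another native coroutine facility. Here is a sketch of the same solution in Python, using a different tree encoding than the Lua version's tables (nested tuples for branching nodes, bare values for leaves):

```python
from itertools import zip_longest

def fringe(tree):
    """Yield the leaves of a tree, left to right. The generator pauses
    after each leaf and resumes where it left off, playing the role of
    Lua's coroutine.yield."""
    if isinstance(tree, tuple):
        for branch in tree:
            yield from fringe(branch)
    else:
        yield tree

def same_fringe(t1, t2):
    missing = object()  # sentinel, so fringes of unequal length compare unequal
    return all(a == b for a, b in
               zip_longest(fringe(t1), fringe(t2), fillvalue=missing))

print(same_fringe((1, ((2, 3),)), ((1, 2), 3)))  # True: both fringes are 1, 2, 3
print(same_fringe((1,), (2,)))                   # False
```

As with the Lua version, the two enumerators advance in lockstep, each leaf being produced only on demand.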
-We're going to think about the underlying principles to this execution pattern, and instead learn how to implement it from scratch---without necessarily having zippers or dedicated native syntax to rely on.
-
-
-##Exceptions and Aborts##
-
-To get a better understanding of how that execution pattern works, we'll add yet a second execution pattern to our plate, and then think about what they have in common.
-
-While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of several sorts. One sort of error comes about when you've got syntax errors and the OCaml interpreter isn't even able to parse your code. A second sort of error is type errors, as in:
-
-	# let lst = [1; 2] in
-	  "a" :: lst;;
-	Error: This expression has type int list
-	       but an expression was expected of type string list
-
-Type errors are also detected and reported before OCaml attempts to execute or evaluate your code. But you may also have encountered a third kind of error, that arises while your program is running. For example:
-
-	# 1/0;;
-	Exception: Division_by_zero.
-	# List.nth [1;2] 10;;
-	Exception: Failure "nth".
-
-These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are *raised* by code. For instance, here is the standard implementation of List.nth:
-
-	let nth l n =
-	  if n < 0 then invalid_arg "List.nth" else
-	  let rec nth_aux l n =
-	    match l with
-	    | [] -> failwith "nth"
-	    | a::l -> if n = 0 then a else nth_aux l (n-1)
-	  in nth_aux l n
-
-(The Juli8 version of List.nth only differs in sometimes raising a different error.) Notice the two clauses invalid_arg "List.nth" and failwith "nth". These are two helper functions which are shorthand for:
-
-	raise (Invalid_argument "List.nth");;
-	raise (Failure "nth");;
-
-where Invalid_argument "List.nth" is a value of type exn, and so too Failure "nth". When you have some value bad of type exn and evaluate the expression:
-
-	raise bad
-
-the effect is for the program to immediately stop without evaluating any further code:
-
-	# let xcell = ref 0;;
-	val xcell : int ref = {contents = 0}
-	# let bad = Failure "test"
-	  in let _ = raise bad
-	  in xcell := 1;;
-	Exception: Failure "test".
-	# !xcell;;
-	- : int = 0
-
-Notice that the line xcell := 1 was never evaluated, so the contents of xcell are still 0.
-
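The same behavior can be mimicked loosely in Python (which has no ref cells, so a one-element list stands in for xcell here):

```python
xcell = [0]
try:
    bad = RuntimeError("test")
    raise bad
    xcell[0] = 1          # never reached: the raise aborts the rest of the block
except RuntimeError:
    pass                  # at the top level, OCaml instead just reports the exception
print(xcell[0])           # still 0
```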
-I said when you evaluate the expression:
-
-	raise bad
-
-the effect is for the program to immediately stop. That's not exactly true. You can also programmatically arrange to *catch* errors, without the program necessarily stopping. In OCaml we do that with a try ... with PATTERN -> ... construct, analogous to the match ... with PATTERN -> ... construct. (In OCaml 4.02 and higher, there is also a more inclusive construct that combines these, match ... with PATTERN -> ... | exception PATTERN -> ....)
-
-	# let foo x =
-	    try
-	        (if x = 1 then 10
-	        else if x = 2 then raise (Failure "two")
-	        else raise (Failure "three")
-	        ) + 100
-	    with Failure "two" -> 20
-	    ;;
-	val foo : int -> int = <fun>
-	# foo 1;;
-	- : int = 110
-	# foo 2;;
-	- : int = 20
-	# foo 3;;
-	Exception: Failure "three".
-
-Notice what happens here. If we call foo 1, then the code between try and with evaluates to 110, with no exceptions being raised. That then is what the entire try ... with ... block evaluates to; and so too what foo 1 evaluates to. If we call foo 2, then the code between try and with raises an exception Failure "two". The pattern in the with clause matches that exception, so we get instead 20. If we call foo 3, we again raise an exception. This exception isn't matched by the with block, so it percolates up to the top of the program, and then the program immediately stops.
-
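Translated loosely into Python, where an exception class plus a check on its message stands in for OCaml's exn constructors and payload patterns like Failure "two":

```python
def foo(x):
    try:
        if x == 1:
            r = 10
        elif x == 2:
            raise Exception("two")
        else:
            raise Exception("three")
        return r + 100
    except Exception as e:
        if str(e) == "two":   # plays the role of the pattern Failure "two"
            return 20
        raise                 # unmatched: re-raise, percolating upward

print(foo(1))  # 110
print(foo(2))  # 20
# foo(3) raises Exception("three"), like OCaml's uncaught Failure "three"
```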
-So what I should have said is that when you evaluate the expression:
-
-	raise bad
-
-*and that exception is never caught*, then the effect is for the program to immediately stop.
-
-**Trivia**: what's the type of the raise (Failure "two") in:
-
-	if x = 1 then 10
-	else raise (Failure "two")
-
-What's its type in:
-
-	if x = 1 then "ten"
-	else raise (Failure "two")
-
-So now what do you expect the type of this to be:
-
-	fun x -> raise (Failure "two")
-
-
-	(fun x -> raise (Failure "two") : 'a -> 'a)
-
-Remind you of anything we discussed earlier? (At one point earlier in term we were asking whether you could come up with any functions of type 'a -> 'a other than the identity function.)
-
-**/Trivia.**
-
-Of course, it's possible to handle errors in other ways too. There's no reason why the implementation of List.nth *had* to raise an exception. They might instead have returned Some a when the list had an nth member a, and None when it does not. But it's pedagogically useful for us to think about the exception-raising pattern now.
-
-When an exception is raised, it percolates up through the code that called it, until it finds a surrounding try ... with ... that matches it. That might not be the first try ... with ... that it encounters. For example:
-
-	# try
-	    try
-	        (raise (Failure "blah")
-	        ) + 100
-	    with Failure "fooey" -> 10
-	  with Failure "blah" -> 20;;
-	- : int = 20
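The analogous Python, with two distinct exception classes standing in for Failure "fooey" and Failure "blah" (Python dispatches handlers by exception class rather than by payload pattern):

```python
class Fooey(Exception):
    pass

class Blah(Exception):
    pass

def nested():
    try:
        try:
            raise Blah()
            # like the "+ 100" in the OCaml, code after the raise never runs
        except Fooey:     # the innermost handler doesn't match...
            return 10
    except Blah:          # ...so the exception percolates out to this one
        return 20

print(nested())  # 20
```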

(Diff truncated)

diff --git a/topics/_coroutines_and_aborts.mdwn b/topics/_coroutines_and_aborts.mdwn
index d7c8b03..389e7d5 100644
--- a/topics/_coroutines_and_aborts.mdwn
+++ b/topics/_coroutines_and_aborts.mdwn
@@ -1,5 +1,7 @@
[[!toc]]

+## Coroutines ##
+
Recall [[the recent homework assignment|/exercises/assignment12]] where you solved the same-fringe problem with a make_fringe_enumerator function, or in the Scheme version using streams instead of zippers, with a lazy-flatten function.

The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.


diff --git a/topics/_coroutines_and_aborts.mdwn b/topics/_coroutines_and_aborts.mdwn
index ce525b3..d7c8b03 100644
--- a/topics/_coroutines_and_aborts.mdwn
+++ b/topics/_coroutines_and_aborts.mdwn
@@ -1,7 +1,8 @@
[[!toc]]

+Recall [[the recent homework assignment|/exercises/assignment12]] where you solved the same-fringe problem with a make_fringe_enumerator function, or in the Scheme version using streams instead of zippers, with a lazy-flatten function.

-The technique illustrated here with our fringe enumerators is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.
+The technique illustrated in those solutions is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.

With cooperative threads, one typically yields control to the thread, and then back again to the main program, multiple times. Here's the pattern in which that happens in our same_fringe function:

@@ -33,9 +34,11 @@ If you want to read more about these kinds of threads, here are some links:
<!-- *	[[!wikipedia Green_threads]]
*	[[!wikipedia Protothreads]] -->

-The way we built cooperative threads here crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it relied on our using mutable reference cells so that we could update what the current snapshot (that is, tree zipper) was, so that the next invocation of the next_leaf function could start up again where the previous invocation left off.
+The way we built cooperative threads using make_fringe_enumerator crucially relied on two heavyweight tools. First, it relied on our having a data structure (the tree zipper) capable of being a static snapshot of where we left off in the tree whose fringe we're enumerating. Second, it either required us to manually save and restore the thread's snapshotted state (a tree zipper); or else we had to use a mutable reference cell to save and restore that state for us. Using the saved state, the next invocation of the next_leaf function could start up again where the previous invocation left off.

-It's possible to build cooperative threads without using those tools, however. Some languages have a native syntax for them. Here's how we'd write the same-fringe solution above using native coroutines in the language Lua:
+It's possible to build cooperative threads without using those tools, however. Already our [[solution using streams|/exercises/assignment12#streams2]] uses neither zippers nor any mutation. Instead it saves the thread's state in explicitly-created thunks, and resumes the thread by forcing the thunk.
+
+Some languages have a native syntax for coroutines. Here's how we'd write the same-fringe solution above using native coroutines in the language Lua:

> function fringe_enumerator (tree)
if tree.leaf then
@@ -78,21 +81,21 @@ We're going to think about the underlying principles to this execution pattern,

To get a better understanding of how that execution pattern works, we'll add yet a second execution pattern to our plate, and then think about what they have in common.

-While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of two sorts. One sort of error comes about when you've got syntax errors or type errors and the OCaml interpreter isn't even able to understand your code:
+While writing OCaml code, you've probably come across errors. In fact, you've probably come across errors of several sorts. One sort of error comes about when you've got syntax errors and the OCaml interpreter isn't even able to parse your code. A second sort of error is type errors, as in:

# let lst = [1; 2] in
"a" :: lst;;
Error: This expression has type int list
but an expression was expected of type string list

-But you may also have encountered other kinds of error, that arise while your program is running. For example:
+Type errors are also detected and reported before OCaml attempts to execute or evaluate your code. But you may also have encountered a third kind of error, that arises while your program is running. For example:

# 1/0;;
Exception: Division_by_zero.
# List.nth [1;2] 10;;
Exception: Failure "nth".

-These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are *raised* by code. For instance, here is the implementation of List.nth:
+These "Exceptions" are **run-time errors**. OCaml will automatically detect some of them, like when you attempt to divide by zero. Other exceptions are *raised* by code. For instance, here is the standard implementation of List.nth:

let nth l n =
if n < 0 then invalid_arg "List.nth" else
@@ -102,7 +105,7 @@ These "Exceptions" are **run-time errors**. OCaml will automatically detect some
| a::l -> if n = 0 then a else nth_aux l (n-1)
in nth_aux l n

-Notice the two clauses invalid_arg "List.nth" and failwith "nth". These are two helper functions which are shorthand for:
+(The Juli8 version of List.nth only differs in sometimes raising a different error.) Notice the two clauses invalid_arg "List.nth" and failwith "nth". These are two helper functions which are shorthand for:

raise (Invalid_argument "List.nth");;
raise (Failure "nth");;
@@ -128,7 +131,7 @@ I said when you evaluate the expression:

-the effect is for the program to immediately stop. That's not exactly true. You can also programmatically arrange to *catch* errors, without the program necessarily stopping. In OCaml we do that with a try ... with PATTERN -> ... construct, analogous to the match ... with PATTERN -> ... construct:
+the effect is for the program to immediately stop. That's not exactly true. You can also programmatically arrange to *catch* errors, without the program necessarily stopping. In OCaml we do that with a try ... with PATTERN -> ... construct, analogous to the match ... with PATTERN -> ... construct. (In OCaml 4.02 and higher, there is also a more inclusive construct that combines these, match ... with PATTERN -> ... | exception PATTERN -> ....)

# let foo x =
try
@@ -154,7 +157,7 @@ So what I should have said is that when you evaluate the expression:

*and that exception is never caught*, then the effect is for the program to immediately stop.

-Trivia: what's the type of the raise (Failure "two") in:
+**Trivia**: what's the type of the raise (Failure "two") in:

if x = 1 then 10
else raise (Failure "two")

(fun x -> raise (Failure "two") : 'a -> 'a)

-Remind you of anything we discussed earlier? /Trivia.
+Remind you of anything we discussed earlier? (At one point earlier in term we were asking whether you could come up with any functions of type 'a -> 'a other than the identity function.)
+
+**/Trivia.**

Of course, it's possible to handle errors in other ways too. There's no reason why the implementation of List.nth *had* to raise an exception. They might instead have returned Some a when the list has an nth member a, and None when it does not. But it's pedagogically useful for us to think about the exception-raising pattern now.

@@ -200,7 +205,7 @@ The matching try ... with ... block need not *lexically surround* the site whe

Here we call foo bar 0, and foo in turn calls bar 0, and bar raises the exception. Since there's no matching try ... with ... block in bar, we percolate back up the history of who called that function, and we find a matching try ... with ... block in foo. This catches the error and so then the try ... with ... block in foo (the code that called bar in the first place) will evaluate to 20.

-OK, now this exception-handling apparatus does exemplify the second execution pattern we want to focus on. But it may bring it into clearer focus if we simplify the pattern even more. Imagine we could write code like this instead:
+OK, now this exception-handling apparatus does exemplify the second execution pattern we want to focus on. But it may bring it into clearer focus if we **simplify the pattern** even more. Imagine we could write code like this instead:

# let foo x =
try begin
@@ -221,8 +226,8 @@ Many programming languages have this simplified execution pattern, either instea
else
return 20         -- abort early
end
-	    return value + 100    -- in Lua, a function's normal value
-	                          -- must always also be explicitly returned
+	    return value + 100    -- in a language like Scheme, you could omit the return here
+                                  -- but in Lua, a function's normal result must always be explicitly returned
end

> return foo(1)
@@ -235,7 +240,7 @@ Okay, so that's our second execution pattern.

##What do these have in common?##

-In both of these patterns, we need to have some way to take a snapshot of where we are in the evaluation of a complex piece of code, so that we might later resume execution at that point. In the coroutine example, the two threads need to have a snapshot of where they were in the enumeration of their tree's leaves. In the abort example, we need to have a snapshot of where to pick up again if some embedded piece of code aborts. Sometimes we might distill that snapshot into a data structure like a zipper. But we might not always know how to do so; and learning how to think about these snapshots without the help of zippers will help us see patterns and similarities we might otherwise miss.
+In both of these patterns --- coroutines and exceptions/aborts --- we need to have some way to take a snapshot of where we are in the evaluation of a complex piece of code, so that we might later resume execution at that point. In the coroutine example, the two threads need to have a snapshot of where they were in the enumeration of their tree's leaves. In the abort example, we need to have a snapshot of where to pick up again if some embedded piece of code aborts. Sometimes we might distill that snapshot into a data structure like a zipper. But we might not always know how to do so; and learning how to think about these snapshots without the help of zippers will help us see patterns and similarities we might otherwise miss.

A more general way to think about these snapshots is to think of the code we're taking a snapshot of as a *function.* For example, in this code:

@@ -273,6 +278,11 @@ What would a "snapshot of the code outside the box" look like? Well, let's rearr

and we can think of the code starting with let foo_result = ... as a function, with the box being its parameter, like this:

+    let foo_result = < >
+    in foo_result + 100
+
+or, spelling out the gap < > as a bound variable:
+
fun box ->
let foo_result = box
in (foo_result) + 1000
@@ -379,8 +389,7 @@ You can think of them as functions that represent "how the rest of the computati

The key idea behind working with continuations is that we're *inverting control*. In the fragment above, the code (if x = 1 then ... else snapshot 20) + 100---which is written as if it were to supply a value to the outside context that we snapshotted---itself *makes non-trivial use of* that snapshot. So it has to be able to refer to that snapshot; the snapshot has to somehow be available to our inside-the-box code as an *argument* or bound variable. That is: the code that is *written* like it's supplying an argument to the outside context is instead *getting that context as its own argument*. He who is written as value-supplying slave is instead become the outer context's master.

-In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with v2 and v5 lists. Version 5 lists were the ones that let us abort a fold early:
-go back and re-read the material on "Aborting a Search Through a List" in [[Week4]].
+In fact you've already seen this several times this semester---recall how in our implementation of pairs in the untyped lambda-calculus, the handler who wanted to use the pair's components had *in the first place to be supplied to the pair as an argument*. So the exotica from the end of the seminar was already on the scene in some of our earliest steps. Recall also what we did with our [[abortable list traversals|/topics/week12_abortable_traversals]].

This inversion of control should also remind you of Montague's treatment of determiner phrases in ["The Proper Treatment of Quantification in Ordinary English"](http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631215417%5CPortner.pdf) (PTQ).
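To make the exception-handling and abort patterns discussed in the diff above concrete, here is a small self-contained sketch (the names safe_div, foo, and head_or are hypothetical illustrations; the match ... with exception form requires OCaml 4.02 or higher, as the text notes):

```ocaml
exception Abort of int

(* catching a run-time error with try ... with *)
let safe_div (x : int) (y : int) : int option =
  try Some (x / y)
  with Division_by_zero -> None

(* using a raised exception to abort early out of an embedded
   computation, as in the simplified "abort" pattern *)
let foo (x : int) : int =
  try
    (if x = 1 then 10 else raise (Abort 20)) + 100
  with Abort n -> n

(* OCaml 4.02+: the combined match ... with exception construct;
   List.hd raises Failure "hd" on an empty list *)
let head_or (l : int list) (default : int) : int =
  match List.hd l with
  | x -> x
  | exception (Failure _) -> default
```

Here foo 1 evaluates the whole body, giving 110, while foo 2 aborts out of the inner computation before the + 100 is ever applied, giving 20.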


diff --git a/exercises/assignment12.mdwn b/exercises/assignment12.mdwn
index 1d135b4..58e5096 100644
--- a/exercises/assignment12.mdwn
+++ b/exercises/assignment12.mdwn
@@ -117,6 +117,7 @@ Here are the beginnings of functions to move from one focused tree to another:

1.  Your first assignment is to complete the definitions of move_botleft and move_right_or_up. (Really it should be move_right_or_up_..._and_right.)

+    <a id=enumerate1></a>
Having completed that, we can define a function that enumerates a tree's fringe, step by step, until it's exhausted:

let make_fringe_enumerator (t: 'a tree) =
@@ -192,6 +193,7 @@ Here are the beginnings of functions to move from one focused tree to another:
3.  Now we'll talk about another way to implement the make_fringe_enumerator function above (and so too the same_fringe function which uses it). Notice that the pattern given above is that the make_fringe_enumerator creates a next_leaf function and an initial state, and each time you want to advance the next_leaf by one step, you do so by calling it with the current state. It will return a leaf label plus a modified state, which you can use when you want to call it again and take another step. All of the next_leaf function's memory about where it is in the enumeration is contained in the state. If you saved an old state, took three steps, and then called the next_leaf function again with the saved old state, it would be back where it was three steps ago. But in fact, the way we use the next_leaf function and state above, there is no back-tracking. Neither do we "fork" any of the states and pursue different forward paths. Their progress is deterministic, and fixed independently of anything that same_fringe might do. All that's up to same_fringe is to take the decision of when (and whether) to take another step forward.

Given that usage pattern, it would be appropriate and convenient to make the next_leaf function remember its state itself, in a mutable variable. The client function same_fringe doesn't need to do anything with, or even be given access to, this variable. Here's how we might write make_fringe_enumerator according to this plan:
+<a id=enumerate2></a>

let make_fringe_enumerator (t: 'a tree) =
(* create a zipper focusing the botleft of t *)
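The stream-based alternative alluded to in the diffs above (saving the thread's state in explicitly-created thunks, with no zippers and no mutation) can be sketched in OCaml like this. This is a hypothetical reconstruction using binary int-labeled trees for brevity, not the assignment's exact code:

```ocaml
type tree = Leaf of int | Branch of tree * tree
type stream = End | Next of int * (unit -> stream)

(* The rest of the enumeration is saved in the thunk k, and the
   "thread" is resumed by forcing a thunk. *)
let rec flatten (t : tree) (k : unit -> stream) : stream =
  match t with
  | Leaf i -> Next (i, k)
  | Branch (l, r) -> flatten l (fun () -> flatten r k)

let fringe (t : tree) : stream = flatten t (fun () -> End)

let rec same_fringe (s1 : stream) (s2 : stream) : bool =
  match s1, s2 with
  | End, End -> true
  | Next (i, r1), Next (j, r2) -> i = j && same_fringe (r1 ()) (r2 ())
  | _ -> false
```

Note that same_fringe only forces as much of each stream as it needs: if the fringes differ at the first leaf, neither tree is traversed any further.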


formatting
diff --git a/exercises/assignment12.mdwn b/exercises/assignment12.mdwn
index 02a78ac..1d135b4 100644
--- a/exercises/assignment12.mdwn
+++ b/exercises/assignment12.mdwn
@@ -366,6 +366,7 @@ Some other Scheme details or reminders:
<!-- -->

<a id=streams2></a>
+
4.  Here is the Scheme code handling the same-fringe problem. You should fill in the blanks:

(define (lazy-flatten tree)


diff --git a/exercises/assignment12.mdwn b/exercises/assignment12.mdwn
index 58068c4..02a78ac 100644
--- a/exercises/assignment12.mdwn
+++ b/exercises/assignment12.mdwn
@@ -290,6 +290,7 @@ Here are the beginnings of functions to move from one focused tree to another:
# next2 ();;
- : int option = None

+<a id=streams1></a>
## Same-fringe using streams ##

Now we'll describe a different way to create "the little subprograms" that we built above with make_fringe_enumerator. This code will make use of a data structure called a "stream". A stream is like a list in that it wraps a series of elements of a single type. It differs from a list in that the tail of the series is left uncomputed until needed. We turn the stream off and on by thunking it, and by forcing the thunk.
@@ -364,6 +365,7 @@ Some other Scheme details or reminders:

<!-- -->

+<a id=streams2></a>
4.  Here is the Scheme code handling the same-fringe problem. You should fill in the blanks:

(define (lazy-flatten tree)


diff --git a/topics/week13_from_list_zippers_to_continuations.mdwn b/topics/week13_from_list_zippers_to_continuations.mdwn
index dcd11ce..8b0b431 100644
--- a/topics/week13_from_list_zippers_to_continuations.mdwn
+++ b/topics/week13_from_list_zippers_to_continuations.mdwn
@@ -215,8 +215,10 @@ and then the task would be to copy from the target 'S' only back to
the closest '#'.  This would allow our task to simulate delimited
continuations with embedded prompts (also called resets).

+<!--
The reason the task is well-suited to the list zipper is in part
because the List monad has an intimate connection with continuations.
We'll explore this next.
+-->

-
+Here is [[some Scheme code|/code/refunctionalizing_zippers.rkt]] implementing the tz and tc functions, first as presented above, and second with the variant just mentioned, using '#'. There's also a third kind of implementation, which is akin to the tc version, but doesn't explicitly pass a k argument, and instead uses these unfamiliar operations reset and shift. We'll be explaining what these do shortly. (The reason this code is in Scheme is because that's the language in which it's easiest to work with operations like reset and shift.)


tweak topics
diff --git a/index.mdwn b/index.mdwn
index 28b824d..d372440 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -197,11 +197,11 @@ We've posted a [[State Monad Tutorial]].

(**Week 13**) Thursday April 30

-> Topics: Continuations!
+> Topics: [[From list zippers to continuations|topics/week13_from_list_zippers_to_continuations]]; Coroutines, exceptions, and aborts; Let/cc and reset/shift; CPS transforms

(**Week 14**) Thursday May 7

-> Topics: More continuations!
+> Topics: Continuations (continued)

(**Makeup class**) Monday May 11, 2--5 pm


post topics
diff --git a/index.mdwn b/index.mdwn
index 38eb21b..28b824d 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -195,6 +195,18 @@ We've posted a [[State Monad Tutorial]].

> For amusement/tangential edification: [xkcd on code quality](https://xkcd.com/1513/); [turning a sphere inside out](https://www.youtube.com/watch?v=-6g3ZcmjJ7k)

+(**Week 13**) Thursday April 30
+
+> Topics: Continuations!
+
+(**Week 14**) Thursday May 7
+
+> Topics: More continuations!
+
+(**Makeup class**) Monday May 11, 2--5 pm
+
+> Topics: Linguistic applications of continuations
+

## Course Overview ##


refunct zippers code
diff --git a/code/refunctionalizing_zippers.rkt b/code/refunctionalizing_zippers.rkt
new file mode 100644
index 0000000..3b51ce9
--- /dev/null
+++ b/code/refunctionalizing_zippers.rkt
@@ -0,0 +1,107 @@
+#lang racket
+(require racket/control)
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; solutions to the "abSdS" etc task                                   ;;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; using a list zipper
+(define (tz1 z)
+  (define (saved z) (car z))
+  (define (nextchar z) (cadr z))
+  (define (rest z) (cddr z))
+  (define pair cons)
+  (cond
+    [(null? (cdr z)) (reverse (saved z))]
+    [(eqv? #\S (nextchar z)) (tz1 (pair (append (saved z) (saved z)) (rest z)))]
+    [else (tz1 (pair (cons (nextchar z) (saved z)) (rest z)))]))
+
+; using explicit continuations
+(define (tc1 l k)
+  (cond
+    [(null? l) (reverse (k '()))]
+    [(eqv? #\S (car l)) (tc1 (cdr l) (compose k k))]
+    [else (tc1 (cdr l) (lambda (tail) (cons (car l) (k tail))))]))
+
+; using implicit continuations (reset/shift)
+(define (tr1 l)
+  (shift k
+    (cond
+      [(null? l) (reverse (k '()))]
+      [(eqv? #\S (car l)) ((compose k k) (tr1 (cdr l)))]
+      [else ((lambda (tail) (cons (car l) (k tail))) (tr1 (cdr l)))])))
+
+; wrapper functions, there's a (test) function at the end
+
+(define (tz2 s)
+  (list->string (tz1 (cons '() (string->list s)))))
+
+(define (tc2 s)
+  (list->string (tc1 (string->list s) identity)))
+
+(define (tr2 s)
+  (list->string (reset (tr1 (string->list s)))))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; here are variants that only repeat from S back to the most recent # ;;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; using a list zipper
+(define (tz3 z)
+  (define (saved z) (car z))
+  (define (nextchar z) (cadr z))
+  (define (rest z) (cddr z))
+  (define (pair x y) (cons x y))
+  (cond
+    [(null? (cdr z)) (reverse (saved z))]
+    [(eqv? #\# (nextchar z)) (append (reverse (saved z)) (tz3 (pair '() (rest z))))]
+    [(eqv? #\S (nextchar z)) (tz3 (pair (append (saved z) (saved z)) (rest z)))]
+    [else (tz3 (pair (cons (nextchar z) (saved z)) (rest z)))]))
+
+; using explicit continuations
+; there are several working solutions
+; but it's a bit tricky to get the reverses in the right place, and the order of appending right
+(define (tc3 l k)
+  (cond
+    [(null? l) (reverse (k '()))]
+    [(eqv? #\# (car l)) (append (reverse (k '())) (tc3 (cdr l) identity))]
+    [(eqv? #\S (car l)) (tc3 (cdr l) (compose k k))]
+    [else (tc3 (cdr l) (lambda (tail) (cons (car l) (k tail))))]))
+
+; using implicit continuations (reset/shift)
+(define (tr3 l)
+  (shift k
+    (cond
+      [(null? l) (reverse (k '()))]
+      [(eqv? #\# (car l)) (append (reverse (k '())) (reset (tr3 (cdr l))))]
+      [(eqv? #\S (car l)) ((compose k k) (tr3 (cdr l)))]
+      [else ((lambda (tail) (cons (car l) (k tail))) (tr3 (cdr l)))])))
+
+(define (tz4 s)
+  (list->string (tz3 (cons '() (string->list s)))))
+
+(define (tc4 s)
+  (list->string (tc3 (string->list s) identity)))
+
+(define (tr4 s)
+  (list->string (reset (tr3 (string->list s)))))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+(define (test)
+  (define (cmp t1 t2 inp)
+    (equal? (t1 inp) (t2 inp)))
+  (and
+   (equal? (tz2 "abSd") "ababd")
+   (cmp tc2 tz2 "abSd")
+   (cmp tr2 tz2 "abSd")
+   (equal? (tz2 "aSbS") "aabaab")
+   (cmp tc2 tz2 "aSbS")
+   (cmp tr2 tz2 "aSbS")
+   (equal? (tz4 "ab#ceSfSd") "abcecefcecefd")
+   (cmp tc4 tz4 "ab#ceSfSd")
+   (cmp tr4 tz4 "ab#ceSfSd")
+   (equal? (tz4 "ab#ceS#fSd") "abceceffd")
+   (cmp tc4 tz4 "ab#ceS#fSd")
+   (cmp tr4 tz4 "ab#ceS#fSd")
+   ))


edits
diff --git a/topics/week12_list_and_tree_zippers.mdwn b/topics/week12_list_and_tree_zippers.mdwn
index 3f7ca7a..89d83b6 100644
--- a/topics/week12_list_and_tree_zippers.mdwn
+++ b/topics/week12_list_and_tree_zippers.mdwn
@@ -1,3 +1,5 @@
+<!-- λ ◊ ≠ ∃ Λ ∀ ≡ α β γ ρ ω φ ψ Ω ○ μ η δ ζ ξ ⋆ ★ • ∙ ● ⚫ 𝟎 𝟏 𝟐 𝟘 𝟙 𝟚 𝟬 𝟭 𝟮 ⇧ (U+2e17) ¢ -->
+
[[!toc]]

##List Zippers##
@@ -266,50 +268,33 @@ probably not represent siblings with a list zipper, but with something
more special-purpose and economical.

With these functions, we can refocus on any part of the tree.
-Here's a complete tour:
+Let's abbreviate a tree zipper like this:
+
+    [2;3], ([] [4] ([1], [], Root))
+
+    ≡ (Branch [Leaf 2; Leaf 3],
+       Context ([], [Leaf 4], Context ([Leaf 1], [], Root)))
+
+Then we can take a tour of the original tree like this:

<pre>
-# let z1 = (t1, Root);;
-val z1 : zipper =
-  (Branch [Leaf 1; Branch [Branch [Leaf 2; Leaf 3]; Leaf 4]], Root)
-# let z2 = downleft z1;;
-val z2 : zipper =
-  (Leaf 1, Context ([], [Branch [Branch [Leaf 2; Leaf 3]; Leaf 4]], Root))
-# let z3 = right z2;;
-val z3 : zipper =
-  (Branch [Branch [Leaf 2; Leaf 3]; Leaf 4], Context ([Leaf 1], [], Root))
-# let z4 = downleft z3;;
-val z4 : zipper =
-  (Branch [Leaf 2; Leaf 3],
-   Context ([], [Leaf 4], Context ([Leaf 1], [], Root)))
-# let z5 = downleft z4;;
-val z5 : zipper =
-  (Leaf 2,
-   Context ([], [Leaf 3],
-    Context ([], [Leaf 4], Context ([Leaf 1], [], Root))))
-# let z6 = right z5;;
-val z6 : zipper =
-  (Leaf 3,
-   Context ([Leaf 2], [],
-    Context ([], [Leaf 4], Context ([Leaf 1], [], Root))))
-# let z7 = up z6;;
-val z7 : zipper =
-  (Branch [Leaf 2; Leaf 3],
-   Context ([], [Leaf 4], Context ([Leaf 1], [], Root)))
-# let z8 = right z7;;
-val z8 : zipper =
-  (Leaf 4,
-   Context ([Branch [Leaf 2; Leaf 3]], [], Context ([Leaf 1], [], Root)))
-# let z9 = up z8;;
-val z9 : zipper =
-  (Branch [Branch [Leaf 2; Leaf 3]; Leaf 4], Context ([Leaf 1], [], Root))
-# let z10 = up z9;;
-val z10 : zipper =
-  (Branch [Leaf 1; Branch [Branch [Leaf 2; Leaf 3]; Leaf 4]], Root)
-# z10 = z1;;
-- : bool = true
-# z10 == z1;;
-- : bool = false
+_|__
+|  |
+1  |
+  _|__
+  |  |
+  |  4
+ _|__
+ |  |
+ 2  3
+
+    [1;[[2;3];4]],Root           =                                                              [1;[[2;3];4]],Root
+  downleft                                                                                               up
+1, ([],[[2;3];4],Root) right [[2;3];4],([1],[],Root)                                            [[2;3];4],([1],[],Root)
+                           downleft                                                                      up
+                        [2;3],([],[4],([1],[],Root))         [2;3],([],[4],([1],[],Root)) right 4,([2;3],[],([1],[],Root))
+                      downleft                                           up
+                    2, ([],[3],([],[4],([1],[],Root))) right 3, ([2],[],([],[4],([1],[],Root)))
</pre>
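The tour abbreviated above can be reproduced with a short runnable sketch of the zipper moves. The tree type is an assumption here (leaf-labeled, n-ary), chosen to match values like Branch [Leaf 2; Leaf 3] used in these diffs; recall that the left-siblings list is stored nearest-first, hence the List.rev when moving up:

```ocaml
type tree = Leaf of int | Branch of tree list
type context = Root | Context of (tree list) * (tree list) * context
type zipper = tree * context

(* focus on the leftmost child, pushing the remaining children
   into the right-siblings list of a new context *)
let downleft ((t, ctx) : zipper) : zipper =
  match t with
  | Branch (first :: rest) -> (first, Context ([], rest, ctx))
  | _ -> (t, ctx)   (* no children: stay put *)

(* shift focus to the next right sibling *)
let right ((t, ctx) : zipper) : zipper =
  match ctx with
  | Context (ls, r :: rs, parent) -> (r, Context (t :: ls, rs, parent))
  | _ -> (t, ctx)   (* no right sibling: stay put *)

(* rebuild the parent Branch from the siblings and the focus *)
let up ((t, ctx) : zipper) : zipper =
  match ctx with
  | Context (ls, rs, parent) -> (Branch (List.rev ls @ (t :: rs)), parent)
  | Root -> (t, ctx)
```

Two downlefts and a right from the root reach the [2;3] subtree with exactly the context displayed above, and moving up twice restores the original zipper.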

Here's more on zippers:


edits
diff --git a/topics/week12_list_and_tree_zippers.mdwn b/topics/week12_list_and_tree_zippers.mdwn
index 2f5d5dc..3f7ca7a 100644
--- a/topics/week12_list_and_tree_zippers.mdwn
+++ b/topics/week12_list_and_tree_zippers.mdwn
@@ -213,7 +213,7 @@ siblings of the focussed tree:

Branch (Leaf 2, Leaf 3), ([], [Leaf 4])

-We still need to add the rest of the context.  But just computed that
+We still need to add the rest of the context.  But we just computed that
context a minute ago.  It was ([Leaf 1], []).  If we add it here, we get:

Branch (Leaf 2, Leaf 3), ([], [Leaf 4], ([Leaf 1], []))
@@ -223,13 +223,13 @@ Here's the type suggested by this idea:
type context = Root | Context of (tree list) * (tree list) * context
type zipper = tree * context

-We can gloss Context of (tree list) * (tree list) * context as
-Context of (left siblings) * (right siblings) * (context of parent).
+We can gloss the triple (tree list) * (tree list) * context as
+(left siblings) * (right siblings) * (context of parent).

Here, then, is the full tree zipper we've been looking for:

-  (Branch [Leaf 2; Leaf 3],
-   Context ([], [Leaf 4], Context ([Leaf 1], [], Root)))
+    (Branch [Leaf 2; Leaf 3],
+     Context ([], [Leaf 4], Context ([Leaf 1], [], Root)))

Just as with the simple list zipper, note that elements that are near
the focussed element in the tree are near the focussed element in the


edits
diff --git a/topics/week12_list_and_tree_zippers.mdwn b/topics/week12_list_and_tree_zippers.mdwn
index 093b9b0..2f5d5dc 100644
--- a/topics/week12_list_and_tree_zippers.mdwn
+++ b/topics/week12_list_and_tree_zippers.mdwn
@@ -92,140 +92,229 @@ Alternatively, we could present it in a form more like we used in the seminar fo

in_focus = 40, context = (before = [30; 20; 10], after = [50; 60; 70; 80; 90])

+In order to facilitate the discussion of tree zippers,
+let's consolidate a bit with a concrete implementation.
+
+<pre>
+type int_list_zipper = int * (int list) * (int list)
+let zip_open (z:int_list_zipper):int_list_zipper = match z with
+    focus, ls, r::rs -> r, focus::ls, rs
+  | _ -> z
+let zip_closed (z:int_list_zipper):int_list_zipper = match z with
+    focus, l::ls, rs -> l, ls, focus::rs
+  | _ -> z
+</pre>
+
+Here, an int list zipper is an int list with one element in focus.
+The context of that element is divided into two subparts: the left
+context, which gives the elements adjacent to the focussed element
+first (so looks reversed relative to the original list); and the right
+context, which is just the remainder of the list to the right of the
+focussed element.
+
+Then we have the following behavior:
+
+<pre>
+# let z1:int_list_zipper = 1, [], [2;3;4];;
+val z1 : int_list_zipper = (1, [], [2; 3; 4])
+# let z2 = zip_open z1;;
+val z2 : int_list_zipper = (2, [1], [3; 4])
+# let z3 = zip_open z2;;
+val z3 : int_list_zipper = (3, [2; 1], [4])
+# let z4 = zip_closed (zip_closed z3);;
+val z4 : int_list_zipper = (1, [], [2; 3; 4])
+# z4 = z1;;
+- : bool = true
+</pre>

-##Tree Zippers##
-
-Now how could we translate a zipper-like structure over to trees? What we're aiming for is a way to keep track of where we are in a tree, in the same way that the "broken" lists let us keep track of where we are in the base list.
-
-It's important to set some ground rules for what will follow. If you don't understand these ground rules you will get confused. First off, for many uses of trees one wants some of the nodes or leaves in the tree to be *labeled* with additional information. It's important not to conflate the label with the node itself. Numerically one and the same piece of information --- for example, the same int --- could label two nodes of the tree without those nodes thereby being identical, as here:
-
-	        root
-	        / \
-	      /     \
-	    /  \    label 10
-	  /      \
-	label 10 label 20
-
-The leftmost leaf and the rightmost leaf have the same label; but they are different leaves. The leftmost leaf has a sibling leaf with the label 20; the rightmost leaf has no siblings that are leaves. Sometimes when one is diagramming trees, one will annotate the nodes with the labels, as above. Other times, when one is diagramming trees, one will instead want to annotate the nodes with tags to make it easier to refer to particular parts of the tree. So for instance, I could diagram the same tree as above as:
-
-	         1
-	        / \
-	      2     \
-	    /  \     5
-	  /      \
-	 3        4
-
-Here I haven't drawn what the labels are. The leftmost leaf, the node tagged "3" in this diagram, doesn't have the label 3. It has the label 10, as we said before. I just haven't put that into the diagram. The node tagged "2" doesn't have the label 2. It doesn't have any label. The tree in this example only has information labeling its leaves, not any of its inner nodes. The identity of its inner nodes is exhausted by their position in the tree.
-
-That is a second thing to note. In what follows, we'll only be working with *leaf-labeled* trees. In some uses of trees, one also (or sometimes, only) wants labels on inner nodes. But we won't be discussing any such trees now. Our trees only have labels on their leaves. The diagrams below will tag all of the nodes, as in the second diagram above, and won't display what the leaves' labels are.
-
-Final introductory comment: in particular applications, you may only need to work with binary trees---trees where internal nodes always have exactly two subtrees. That is what we'll work with in the homework, for example. But to get the guiding idea of how tree zippers work, it's helpful first to think about trees that permit nodes to have many subtrees. So that's how we'll start.
-
-Suppose we have the following tree:
-
-	                         9200
-	                    /      |  \
-	                 /         |    \
-	              /            |      \
-	           /               |        \
-	        /                  |          \
-	       500                920          950
-	    /   |    \          /  |  \      /  |  \
-	 20     50     80      91  92  93   94  95  96
-	1 2 3  4 5 6  7 8 9
-
-This is a leaf-labeled tree whose labels aren't displayed. The 9200 and so on are tags to make it easier for us to refer to particular parts of the tree.
-
-Suppose we want to represent that we're *at* the node marked 50. We might use the following metalanguage notation to specify this:
-
-    in_focus = subtree rooted at 50,
-    context = (up = ..., left_siblings = [subtree rooted at 20], right_siblings = [subtree rooted at 80])
-
-This is modeled on the notation suggested above for list zippers. Here "subtree rooted at 20" means the whole subtree underneath node 20:
-
-      20
-     / | \
-    1  2  3
-
-For brevity, we'll just call this subtree 20; and similarly for subtree 50 and subtree 80. We'll also abbreviate left_siblings = [subtree 20], right_siblings = [subtree 80] to just:
-
-    siblings = [subtree 20; *; subtree 80]
-
-The * marks where the left siblings stop and the right siblings start.

-We haven't said yet what goes in the up = ... slot. But if you think about it, the parent of the context centered on node 50 should intuitively be the context centered on node 500:

-	(up = ..., siblings = [*; subtree 920; subtree 950])
-
-And the parent of that context should intuitively be a context centered on node 9200. This context has no left or right siblings, and there is no going further up from it. So let's mark it as a special context that we'll call:
-
-	Root
-
-Fully spelled out, then, our tree focused on node 50 would look like this:
-
-    in_focus = subtree 50,
-    context = (up = (up = Root,
-                     siblings = [*; subtree 920; subtree 950]),
-               siblings = [subtree 20; *; subtree 80])
-
-For brevity, we may sometimes write like this, using ellipsis and such:
-
-	up = ..., siblings = [subtree 20; *; subtree 80], * filled by subtree 50
-
-But that should be understood as standing for the more fully-spelled-out structure.
-
-Structures of this sort are called **tree zippers**. They should already seem intuitively similar to list zippers, at least in what we're using them to represent. It may also be helpful to call them **focused trees**, though, and so we will be switching back and forth between these different terms.
-
-Moving left in our tree focused on node 50 would be a matter of shifting the * leftwards:
-
-    up = ..., siblings = [*; subtree 50; subtree 80], * filled by subtree 20
-
-and similarly for moving right. If the sibling list is implemented as a list zipper, you should already know how to do that. If one were designing a tree zipper for a more restricted kind of tree, however, such as a binary tree, one would probably not represent siblings with a list zipper, but with something more special-purpose and economical.
-
-Moving downward in the tree would be a matter of constructing a tree focused on some child of node 20, with the context part of the focused tree above (everything but the specification of the element in focus) as its up:
-
-	up = (up = ..., siblings = [*; subtree 50; subtree 80]),
-	siblings = [*; leaf 2; leaf 3],
-	* filled by leaf 1
-
-How would we move upward in a tree? Well, to move up from the focused tree just displayed (focused on leaf 1), we'd build a regular, unfocused tree with a root node --- let's call it 20' --- whose children are given by the outermost sibling list in the focused tree above ([*; leaf 2; leaf 3]), after inserting the currently focused subtree (leaf 1) into the * position:
-
-	       node 20'
-	    /     |    \
-	 /        |      \
-	leaf 1  leaf 2  leaf 3
-
-Call the unfocused tree just specified subtree 20'. (It's the same as subtree 20 was before. We just give it a different name because subtree 20 wasn't a component we could extract from the previous zipper. We had to rebuild it from the information the previous zipper encoded.) The result of moving upward from our previous focused tree, focused on leaf 1, would be a tree focused on the subtree just described, with the context being the outermost up element of the previous focused tree (what's written above as (up = ..., siblings = [*; subtree 50; subtree 80])). That is:
+## Tree Zippers ##

-    up = ...,
-    siblings = [*; subtree 50; subtree 80],
-    * filled by subtree 20'
+Now how could we translate a zipper-like structure over to trees? What we're aiming for is a way to keep track of where we are in a tree, in the same way that the "broken" lists let us keep track of where we are in the base list.

-Or, spelling that structure out fully:
+Thus tree zippers are analogous to list zippers, but with one
+additional dimension to deal with: in addition to needing to shift
+focus to the left or to the right, we want to be able to shift the
+focus up or down.

-    in_focus = subtree 20',
-    context = (up = (up = Root,
-                     siblings = [*; subtree 920; subtree 950]),
-               siblings = [*; subtree 50; subtree 80])
+In order to emphasize the similarity with list zippers, we'll use
+trees that are conceived of as lists of lists:

-Moving upwards yet again would get us:
+    type tree = Leaf of int | Branch of tree list

-    in_focus = subtree 500',
-    context = (up = Root,
-               siblings = [*; subtree 920; subtree 950])
+On this conception, a tree is nothing more than a list of subtrees.
+For instance, we might have

-where subtree 500' refers to a subtree built from a root node whose children are given by the list [*; subtree 50; subtree 80], with subtree 20' inserted into the * position. Moving upwards yet again would get us:
+    let t1 = Branch [Leaf 1; Branch [Branch [Leaf 2; Leaf 3]; Leaf 4]];;

-    in_focus = subtree 9200',
-    context = Root
+    _|__
+    |  |
+    1  |
+      _|__
+      |  |
+      |  4
+     _|__
+     |  |
+     2  3

-where the focused node is exactly the root of our complete tree. Like the "moving backward" operation for the list zipper, this "moving upward" operation is supposed to be reminiscent of closing a zipper, and that's why these data structures are called zippers.
+For simplicity, we'll work with trees that don't have labels on their
+internal nodes.  Note that there can be any number of siblings, though
+we'll work with binary trees here to prevent clutter.

-We haven't given you an executable implementation of the tree zipper, but only a suggestive notation. We have however told you enough that you should be able to implement it yourself. Or if you're lazy, you can read:
+    _*__
+    |  |
+    1  |
+      _|__
+      |  |

(Diff truncated)
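The diff above cuts off just as the notes begin relating the `tree` type to a zipper. As a cross-check on the discussion in the deleted and added text, here is a minimal self-contained OCaml sketch of a zipper over that `tree` type; the particular names (`Ctx`, `move_down`, `move_right`, `move_up`) are our own choices, not taken from the notes.

```ocaml
(* Sketch of a zipper for the tree type from the notes above.
   A context records, for each step down, the siblings passed on the
   left (reversed, for cheap pushing) and those remaining on the right. *)
type tree = Leaf of int | Branch of tree list

type context =
  | Root
  | Ctx of context * tree list * tree list  (* up, rev'd left sibs, right sibs *)

type zipper = { in_focus : tree; context : context }

let move_down (z : zipper) : zipper option =
  match z.in_focus with
  | Leaf _ | Branch [] -> None
  | Branch (child :: rest) ->
      Some { in_focus = child; context = Ctx (z.context, [], rest) }

let move_right (z : zipper) : zipper option =
  match z.context with
  | Ctx (up, ls, r :: rs) ->
      Some { in_focus = r; context = Ctx (up, z.in_focus :: ls, rs) }
  | _ -> None

(* Moving up rebuilds the parent node from the focused subtree and its
   siblings: the "subtree 20'" reconstruction the notes walk through. *)
let move_up (z : zipper) : zipper option =
  match z.context with
  | Root -> None
  | Ctx (up, ls, rs) ->
      Some { in_focus = Branch (List.rev_append ls (z.in_focus :: rs));
             context = up }
```

Moving down, then right, then back up restores the original tree, which is the "closing the zipper" behavior the notes describe.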

moved notes
diff --git a/topics/_from_list_zippers_to_continuations.mdwn b/topics/_from_list_zippers_to_continuations.mdwn
deleted file mode 100644
index dcd11ce..0000000
--- a/topics/_from_list_zippers_to_continuations.mdwn
+++ /dev/null
@@ -1,222 +0,0 @@
-Refunctionalizing zippers: from lists to continuations
-------------------------------------------------------
-
-If zippers are continuations reified (defunctionalized), then one route
-to continuations is to re-functionalize a zipper.  Then the
-concreteness and understandability of the zipper provides a way of
-understanding an equivalent treatment using continuations.
-
-Let's work with lists of chars for a change.  We'll sometimes write
-"abSd" as an abbreviation for
-['a'; 'b'; 'S'; 'd'].
-
-We will set out to compute a deceptively simple-seeming **task: given a
-string, replace each occurrence of 'S' in that string with a copy of
-the string up to that point.**
-
-We'll define a function t (for "task") that maps strings to their
-updated version.
-
-Expected behavior:
-
-	t "abSd" ~~> "ababd"
-
-
-In linguistic terms, this is a kind of anaphora
-resolution, where 'S' is functioning like an anaphoric element, and
-the preceding string portion is the antecedent.
-
-This task can give rise to considerable complexity.
-Note that it matters which 'S' you target first (the position of the *
-indicates the targeted 'S'):
-
-	    t "aSbS"
-	        *
-	~~> t "aabS"
-	          *
-	~~> "aabaab"
-
-versus
-
-	    t "aSbS"
-	          *
-	~~> t "aSbaSb"
-	        *
-	~~> t "aabaSb"
-	           *
-	~~> "aabaaabab"
-
-versus
-
-	    t "aSbS"
-	          *
-	~~> t "aSbaSb"
-	           *
-	~~> t "aSbaaSbab"
-	            *
-	~~> t "aSbaaaSbaabab"
-	             *
-	~~> ...
-
-Apparently, this task, as simple as it is, is a form of computation,
-and the order in which the 'S's get evaluated can lead to divergent
-behavior.
-
-For now, we'll agree to always evaluate the leftmost 'S', which
-guarantees termination, and a final string without any 'S' in it.
-
-This is a task well-suited to using a zipper.  We'll define a function
-tz (for task with zippers), which accomplishes the task by mapping a
-char list zipper to a char list.  We'll call the two parts of the
-zipper unzipped and zipped; we start with a fully zipped list, and
-move elements to the unzipped part by pulling the zipper down until the
-entire list has been unzipped, at which point the zipped half of the
-zipper will be empty.
-
-	type 'a list_zipper = ('a list) * ('a list);;
-
-	let rec tz (z : char list_zipper) =
-          match z with
-            | (unzipped, []) -> List.rev(unzipped) (* Done! *)
-            | (unzipped, 'S'::zipped) -> tz ((List.append unzipped unzipped), zipped)
-            | (unzipped, target::zipped) -> tz (target::unzipped, zipped);; (* Pull zipper *)
-
-	# tz ([], ['a'; 'b'; 'S'; 'd']);;
-	- : char list = ['a'; 'b'; 'a'; 'b'; 'd']
-
-	# tz ([], ['a'; 'S'; 'b'; 'S']);;
-	- : char list = ['a'; 'a'; 'b'; 'a'; 'a'; 'b']
-
-Note that the direction in which the zipper unzips enforces the
-evaluate-leftmost rule.  Task completed.
-
-One way to see exactly what is going on is to watch the zipper in
-action by tracing the execution of tz.  By using the #trace
-directive in the OCaml interpreter, the system will print out the
-arguments to tz each time it is called, including when it is called
-recursively within one of the match clauses.  Note that the
-lines with left-facing arrows (<--) show (both initial and recursive) calls to tz,
-giving the value of its argument (a zipper), and the lines with
-right-facing arrows (-->) show the output of each recursive call, a
-simple list.
-
-	# #trace tz;;
-	tz is now traced.
-	# tz ([], ['a'; 'b'; 'S'; 'd']);;
-	tz <-- ([], ['a'; 'b'; 'S'; 'd'])       (* Initial call *)
-	tz <-- (['a'], ['b'; 'S'; 'd'])         (* Pull zipper *)
-	tz <-- (['b'; 'a'], ['S'; 'd'])         (* Pull zipper *)
-	tz <-- (['b'; 'a'; 'b'; 'a'], ['d'])    (* Special 'S' step *)
-	tz <-- (['d'; 'b'; 'a'; 'b'; 'a'], [])  (* Pull zipper *)
-	tz --> ['a'; 'b'; 'a'; 'b'; 'd']        (* Output reversed *)
-	tz --> ['a'; 'b'; 'a'; 'b'; 'd']
-	tz --> ['a'; 'b'; 'a'; 'b'; 'd']
-	tz --> ['a'; 'b'; 'a'; 'b'; 'd']
-	tz --> ['a'; 'b'; 'a'; 'b'; 'd']
-	- : char list = ['a'; 'b'; 'a'; 'b'; 'd']
-
-The nice thing about computations involving lists is that it's so easy
-to visualize them as a data structure.  Eventually, we want to get to
-a place where we can talk about more abstract computations.  In order
-to get there, we'll first do the exact same thing we just did with
-concrete zipper using procedures instead.
-
-Think of a list as a procedural recipe: ['a'; 'b'; 'c'; 'd'] is the result of
-the computation 'a'::('b'::('c'::('d'::[]))) (or, in our old style,
-make_list 'a' (make_list 'b' (make_list 'c' (make_list 'd' empty)))). The
-recipe for constructing the list goes like this:
-
->	(0)  Start with the empty list []
->	(1)  make a new list whose first element is 'd' and whose tail is the list constructed in step (0)
->	(2)  make a new list whose first element is 'c' and whose tail is the list constructed in step (1)
->	-----------------------------------------
->	(3)  make a new list whose first element is 'b' and whose tail is the list constructed in step (2)
->	(4)  make a new list whose first element is 'a' and whose tail is the list constructed in step (3)
-
-What is the type of each of these steps?  Well, it will be a function
-from the result of the previous step (a list) to a new list: it will
-be a function of type char list -> char list.  We'll call each step
-(or group of steps) a **continuation** of the previous steps.  So in this
-context, a continuation is a function of type char list -> char
-list.  For instance, the continuation corresponding to the portion of
-the recipe below the horizontal line is the function fun (tail : char
-list) -> 'a'::('b'::tail). What is the continuation of the 4th step? That is, after we've built up 'a'::('b'::('c'::('d'::[]))), what more has to happen to that for it to become the list ['a'; 'b'; 'c'; 'd']? Nothing! Its continuation is the function that does nothing: fun tail -> tail.
-
-In what follows, we'll be thinking about the result list that we're building up in this procedural way. We'll treat our input list just as a plain old static list data structure, that we recurse through in the normal way we're accustomed to. We won't need a zipper data structure, because the continuation-based representation of our result list will take over the same role.
-
-So our new function tc (for task with continuations) takes an input list (not a zipper) and also takes a continuation k (it's conventional to use k for continuation variables). k is a function that represents how the result list is going to continue being built up after this invocation of tc delivers up a value. When we invoke tc for the first time, we expect it to deliver as a value the very de-S'd list we're seeking, so the way for the list to continue being built up is for nothing to happen to it. That is, our initial invocation of tc will supply fun tail -> tail as the value for k. Here is the whole tc function. Its structure and behavior follow tz from above, which we've repeated here to facilitate detailed comparison:
-
-	let rec tz (z : char list_zipper) =
-	    match z with
-	    | (unzipped, []) -> List.rev(unzipped) (* Done! *)
-	    | (unzipped, 'S'::zipped) -> tz ((List.append unzipped unzipped), zipped)
-	    | (unzipped, target::zipped) -> tz (target::unzipped, zipped);; (* Pull zipper *)
-
-	let rec tc (l: char list) (k: (char list) -> (char list)) =
-	    match l with
-	    | [] -> List.rev (k [])
-	    | 'S'::zipped -> tc zipped (fun tail -> k (k tail))
-	    | target::zipped -> tc zipped (fun tail -> target::(k tail));;
-
-	# tc ['a'; 'b'; 'S'; 'd'] (fun tail -> tail);;
-	- : char list = ['a'; 'b'; 'a'; 'b'; 'd']
-
-	# tc ['a'; 'S'; 'b'; 'S'] (fun tail -> tail);;
-	- : char list = ['a'; 'a'; 'b'; 'a'; 'a'; 'b']
-
-To emphasize the parallel, we've re-used the names zipped and
-target.  The trace of the procedure will show that these variables
-take on the same values in the same series of steps as they did during
-the execution of tz above: there will once again be one initial and
-four recursive calls to tc, and zipped will take on the values
-"bSd", "Sd", "d", and "" (and, once again, on the final call,
-the first match clause will fire, so the variable zipped will
-not be instantiated).
-
-We have not named the continuation argument unzipped, although that is
-what the parallel would suggest.  The reason is that unzipped (in
-tz) is a list, but k (in tc) is a function.  That's the most crucial
-difference between the solutions---it's the
-point of the exercise, and it should be emphasized.  For instance,
-you can see this difference in the fact that in tz, we have to glue
-together the two instances of unzipped with an explicit (and,
-computationally speaking, relatively inefficient) List.append.
-In the tc version of the task, we simply compose k with itself:
-k o k = fun tail -> k (k tail).
-
-A call tc ['a'; 'b'; 'S'; 'd'] would yield a partially-applied function; it would still wait for another argument, a continuation of type char list -> char list. So we have to give it an "initial continuation" to get started. As mentioned above, we supply *the identity function* as the initial continuation. Why did we choose that? Again, if
-you have already constructed the result list "ababd", what's the desired continuation? What's the next step in the recipe to produce the desired result, i.e., the very same list, "ababd"?  Clearly, the identity function.
-
-A good way to test your understanding is to figure out what the
-continuation function k must be at the point in the computation when
-tc is applied to the argument "Sd".  Two choices: is it
-fun tail -> 'a'::'b'::tail, or is it fun tail -> 'b'::'a'::tail?  The way to see if you're right is to execute the following command and see what happens:
-

(Diff truncated)
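Since the deleted notes develop tz and tc in deliberate parallel, it may help to have both in one self-contained snippet. The definitions below are copied from the notes; only the type alias is inlined so the snippet compiles on its own, letting the zipper version and the continuation version be run side by side.

```ocaml
type 'a list_zipper = ('a list) * ('a list)

let rec tz (z : char list_zipper) : char list =
  match z with
  | (unzipped, []) -> List.rev unzipped                              (* Done! *)
  | (unzipped, 'S' :: zipped) -> tz (List.append unzipped unzipped, zipped)
  | (unzipped, target :: zipped) -> tz (target :: unzipped, zipped)  (* Pull zipper *)

let rec tc (l : char list) (k : char list -> char list) : char list =
  match l with
  | [] -> List.rev (k [])
  | 'S' :: rest -> tc rest (fun tail -> k (k tail))       (* compose k with itself *)
  | target :: rest -> tc rest (fun tail -> target :: k tail)
```

Both yield "ababd" on "abSd" and "aabaab" on "aSbS", as the evaluate-leftmost rule requires; the only difference is whether the processed prefix lives in a list (unzipped) or in a function (k).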

move "count from 0", thanks Kyle
diff --git a/topics/week12_list_and_tree_zippers.mdwn b/topics/week12_list_and_tree_zippers.mdwn
index c59a87d..093b9b0 100644
--- a/topics/week12_list_and_tree_zippers.mdwn
+++ b/topics/week12_list_and_tree_zippers.mdwn
@@ -15,7 +15,7 @@ Say you've got some moderately-complex function for searching through a list, fo
| x :: xs -> helper (position + 1) n xs
in helper 0 n lst;;

-This searches for the nth element of a list that satisfies the predicate test, and returns a pair containing the position of that element, and the element itself. Good. But now what if you wanted to retrieve a different kind of information, such as the nth element matching test, together with its preceding and succeeding elements? In a real situation, you'd want to develop some good strategy for reporting when the target element doesn't have a predecessor and successor; but we'll just simplify here and report them as having some default value:
+This searches for the nth element of a list that satisfies the predicate test, and returns a pair containing the position of that element, and the element itself. (We follow the dominant convention of counting list positions from the left starting at 0.) Good. But now what if you wanted to retrieve a different kind of information, such as the nth element matching test, together with its preceding and succeeding elements? In a real situation, you'd want to develop some good strategy for reporting when the target element doesn't have a predecessor and successor; but we'll just simplify here and report them as having some default value:

let find_nth' (test : 'a -> bool) (n : int) (lst : 'a list) (default : 'a) : ('a * 'a * 'a) ->
let rec helper (predecessor : 'a) n lst =
@@ -36,7 +36,7 @@ Here's an idea. What if we had some way of representing a list as "broken" at a

[10; 20; 30; 40; 50; 60; 70; 80; 90]

-we might imagine the list "broken" at position 3 like this (we follow the dominant convention of counting list positions from the left starting at 0):
+we might imagine the list "broken" at position 3 like this:

40;
30;     50;


rename exercises/_assignment12.mdwn to exercises/assignment12.mdwn
diff --git a/exercises/_assignment12.mdwn b/exercises/_assignment12.mdwn
deleted file mode 100644
index 58068c4..0000000
--- a/exercises/_assignment12.mdwn
+++ /dev/null
@@ -1,392 +0,0 @@
-## Same-fringe using zippers ##
-
-Recall back in [[Assignment 4|assignment4#fringe]], we asked you to enumerate the "fringe" of a leaf-labeled tree. Both of these trees (here I *am* drawing the labels in the diagram):
-
-        .                .
-       / \              / \
-      .   3            1   .
-     / \                  / \
-    1   2                2   3
-
-have the same fringe: [1; 2; 3]. We also asked you to write a function that determined when two trees have the same fringe. The way you approached that back then was to enumerate each tree's fringe, and then compare the two lists for equality. Today, and then again in a later class, we'll encounter new ways to approach the problem of determining when two trees have the same fringe.
-
-
-Supposing you did work out an implementation of the tree zipper, then one way to determine whether two trees have the same fringe would be: go downwards (and leftwards) in each tree as far as possible. Compare the focused leaves. If they're different, stop because the trees have different fringes. If they're the same, then for each tree, move rightward if possible; if it's not (because you're at the rightmost leaf in a subtree), move upwards then try again to move rightwards. Repeat until you are able to move rightwards. Once you do move rightwards, go downwards (and leftwards) as far as possible. Then you'll be focused on the next leaf in the tree's fringe. The operations it takes to get to "the next leaf" may be different for the two trees. For example, in these trees:
-
-        .                .
-       / \              / \
-      .   3            1   .
-     / \                  / \
-    1   2                2   3
-
-you won't move upwards at the same steps. Keep comparing "the next leaves" until they are different, or you exhaust the leaves of only one of the trees (then again the trees have different fringes), or you exhaust the leaves of both trees at the same time, without having found leaves with different labels. In this last case, the trees have the same fringe.
-
-If your trees are very big---say, millions of leaves---you can imagine how this would be quicker and more memory-efficient than traversing each tree to construct a list of its fringe, and then comparing the two lists so built to see if they're equal. For one thing, the zipper method can abort early if the fringes diverge early, without needing to traverse or build a list containing the rest of each tree's fringe.
-
-Let's sketch the implementation of this. We won't provide all the details for an implementation of the tree zipper (you'll need to fill those in), but we will sketch an interface for it.
-
-In these exercises, we'll help ourselves to OCaml's **record types**. These are nothing more than tuples with a pretty interface. Instead of saying:
-
-    # type blah = Blah of int * int * (char -> bool);;
-
-and then having to remember which element in the triple was which:
-
-    # let b1 = Blah (1, (fun c -> c = 'M'), 2);;
-    Error: This expression has type int * (char -> bool) * int
-    but an expression was expected of type int * int * (char -> bool)
-    # (* damnit *)
-    # let b1 = Blah (1, 2, (fun c -> c = 'M'));;
-    val b1 : blah = Blah (1, 2, <fun>)
-
-records let you attach descriptive labels to the components of the tuple:
-
-    # type blah_record = { height : int; weight : int; char_tester : char -> bool };;
-    # let b2 = { height = 1; weight = 2; char_tester = (fun c -> c = 'M') };;
-    val b2 : blah_record = {height = 1; weight = 2; char_tester = <fun>}
-    # let b3 = { height = 1; char_tester = (fun c -> c = 'K'); weight = 3 };; (* also works *)
-    val b3 : blah_record = {height = 1; weight = 3; char_tester = <fun>}
-
-These were the strategies to extract the components of an unlabeled tuple:
-
-    let h = fst some_pair (* accessor functions fst and snd are only predefined for pairs *)
-
-    let (h, w, test) = b1 (* works for arbitrary tuples *)
-
-    match b1 with
-    | (h, w, test) -> ... (* same as preceding *)
-
-Here is how you can extract the components of a labeled record:
-
-    let h = b2.height (* handy! *)
-
-    let {height = h; weight = w; char_tester = test} = b2 in
-    (* go on to use h, w, and test ... *)
-
-    match b2 with
-    | {height = h; weight = w; char_tester = test} ->
-    (* same as preceding *)
-
-Anyway, using record types, we might define the tree zipper interface like so. First, we define a type for leaf-labeled, binary trees:
-
-    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree
-
-Next, the types for our tree zippers:
-
-    type 'a zipper = { in_focus: 'a tree; context : 'a context }
-    and 'a context = Root | Nonroot of 'a nonroot_context
-    and 'a nonroot_context = { up : 'a context; left: 'a tree option; right: 'a tree option }
-
-Unlike in seminar, here we represent the siblings as 'a tree options rather than 'a tree lists. Since we're dealing with binary trees, each context will have exactly one sibling, either to the right or to the left.
-
-The following function takes an 'a tree and returns an 'a zipper focused on its root:
-
-    let new_zipper (t : 'a tree) : 'a zipper =
-      {in_focus = t; context = Root}
-
-Here are the beginnings of functions to move from one focused tree to another:
-
-    let rec move_botleft (z : 'a zipper) : 'a zipper =
-      (* returns z if the focused node in z has no children *)
-      (* else returns move_botleft (zipper which results from moving down from z's focused node to its leftmost child) *)
-      _____ (* YOU SUPPLY THE DEFINITION *)
-
-<!--
-    match z.in_focus with
-    | Leaf _ -> z
-    | Node(left, right) ->
-        move_botleft {in_focus = left; context = Nonroot {up = z.context; left = None; right = Some right}}
--->
-
-
-    let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
-      (* if it's possible to move right in z, returns Some (the result of doing so) *)
-      (* else if it's not possible to move any further up in z, returns None *)
-      (* else returns move_right_or_up (result of moving up in z) *)
-      _____ (* YOU SUPPLY THE DEFINITION *)
-
-<!--
-    match z.context with
-    | Nonroot {up; left= None; right = Some right} ->
-        Some {in_focus = right; context = Nonroot {up; left = Some z.in_focus; right = None}}
-    | Root -> None
-    | Nonroot {up; left = Some left; right = None} ->
-        move_right_or_up {in_focus = Node(left, z.in_focus); context = up}
-    | _ -> assert false
--->
-
-
-1.  Your first assignment is to complete the definitions of move_botleft and move_right_or_up. (Really it should be move_right_or_up_..._and_right.)
-
-    Having completed that, we can define a function that enumerates a tree's fringe, step by step, until it's exhausted:
-
-        let make_fringe_enumerator (t: 'a tree) =
-          (* create a zipper focusing the botleft of t *)
-          let zbotleft = move_botleft (new_zipper t) in
-          (* create initial state, pointing to zbotleft *)
-          let initial_state : 'a zipper option = Some zbotleft in
-          (* construct the next_leaf function *)
-          let next_leaf : 'a zipper option -> ('a * 'a zipper option) option =
-            fun state -> match state with
-            | Some z -> (
-              (* extract label of currently-focused leaf *)
-              let Leaf current = z.in_focus in
-              (* create next_state pointing to next leaf, if there is one *)
-              let next_state : 'a zipper option = match move_right_or_up z with
-                | None -> None
-                | Some z' -> Some (move_botleft z') in
-              (* return saved label and next_state *)
-              Some (current, next_state)
-              )
-            | None -> (* we've finished enumerating the fringe *)
-              None in
-          (* return the next_leaf function and initial state *)
-          next_leaf, initial_state
-
-    Here's an example of make_fringe_enumerator in action:
-
-        # let tree1 = Leaf 1;;
-        val tree1 : int tree = Leaf 1
-        # let next1, state1 = make_fringe_enumerator tree1;;
-        val next1 : int zipper option -> (int * int zipper option) option = <fun>
-        val state1 : int zipper option = Some ...
-        # let Some (res1, state1') = next1 state1;;
-        val res1 : int = 1
-        val state1' : int zipper option = None
-        # next1 state1';;
-        - : (int * int zipper option) option = None
-        # let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
-        val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
-        # let next2, state2 = make_fringe_enumerator tree2;;
-        val next2 : int zipper option -> (int * int zipper option) option = <fun>
-        val state2 : int zipper option = Some ...
-        # let Some (res2, state2') = next2 state2;;
-        val res2 : int = 1
-        val state2' : int zipper option = Some ...
-        # let Some (res2, state2'') = next2 state2';;
-        val res2 : int = 2
-        val state2'' : int zipper option = Some ...
-        # let Some (res2, state2''') = next2 state2'';;
-        val res2 : int = 3
-        val state2''' : int zipper option = None
-        # next2 state2''';;
-        - : (int * int zipper option) option = None
-
-    You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will return None. The subprogram's memory of where it is and what steps to perform next are stored in the next_state variables that are part of its input and output.
-
-    Using these fringe enumerators, we can write our same_fringe function like this:
-
-        let same_fringe (t1 : 'a tree) (t2 : 'a tree) : bool =
-          let next1, initial_state1 = make_fringe_enumerator t1 in
-          let next2, initial_state2 = make_fringe_enumerator t2 in
-          let rec loop state1 state2 : bool =
-            match next1 state1, next2 state2 with
-            | Some (a, state1'), Some (b, state2') when a = b -> loop state1' state2'
-            | None, None -> true
-            | _ -> false in
-          loop initial_state1 initial_state2
-
-    The auxiliary loop function will keep calling itself recursively until a difference in the fringes has manifested itself---either because one fringe is exhausted before the other, or because the next leaves in the two fringes have different labels. If we get to the end of both fringes at the same time (next1 state1, next2 state2 matches the pattern None, None) then we've established that the trees do have the same fringe.
-
-2.  Test your implementations of move_botleft and move_right_or_up against some example trees to see if the resulting make_fringe_enumerator and same_fringe functions work as expected. Show us some of your tests.
-
-3.  Now we'll talk about another way to implement the make_fringe_enumerator function above (and so too the same_fringe function which uses it). Notice that the pattern given above is that the make_fringe_enumerator creates a next_leaf function and an initial state, and each time you want to advance the next_leaf by one step, you do so by calling it with the current state. It will return a leaf label plus a modified state, which you can use when you want to call it again and take another step. All of the next_leaf function's memory about where it is in the enumeration is contained in the state. If you saved an old state, took three steps, and then called the next_leaf function again with the saved old state, it would be back where it was three steps ago. But in fact, the way we use the next_leaf function and state above, there is no back-tracking. Neither do we "fork" any of the states and pursue different forward paths. Their progress is deterministic, and fixed independently of anything that same_fringe might do. All that's up to same_fringe is to take the decision of when (and whether) to take another step forward.
-
-    Given that usage pattern, it would be appropriate and convenient to make the next_leaf function remember its state itself, in a mutable variable. The client function same_fringe doesn't need to do anything with, or even be given access to, this variable. Here's how we might write make_fringe_enumerator according to this plan:

(Diff truncated)
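The truncated assignment above asks students to supply move_botleft and move_right_or_up, but candidate definitions appear in its hidden HTML comments. For checking purposes, here they are assembled into one self-contained sketch (spoiler warning); the loop below is our own simplification that drives the zippers directly rather than through the make_fringe_enumerator plumbing.

```ocaml
(* Types and zipper moves from the assignment (moves taken from the
   hidden comments); same_fringe walks the two fringes in lockstep. *)
type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

type 'a zipper = { in_focus : 'a tree; context : 'a context }
and 'a context = Root | Nonroot of 'a nonroot_context
and 'a nonroot_context = { up : 'a context; left : 'a tree option; right : 'a tree option }

let rec move_botleft (z : 'a zipper) : 'a zipper =
  match z.in_focus with
  | Leaf _ -> z
  | Node (l, r) ->
      move_botleft { in_focus = l;
                     context = Nonroot { up = z.context; left = None; right = Some r } }

let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
  match z.context with
  | Nonroot { up; left = None; right = Some r } ->
      Some { in_focus = r;
             context = Nonroot { up; left = Some z.in_focus; right = None } }
  | Root -> None
  | Nonroot { up; left = Some l; right = None } ->
      move_right_or_up { in_focus = Node (l, z.in_focus); context = up }
  | _ -> assert false

let same_fringe (t1 : 'a tree) (t2 : 'a tree) : bool =
  (* advance a zipper to the next leaf of the fringe, if any *)
  let next z = match move_right_or_up z with
    | None -> None
    | Some z' -> Some (move_botleft z') in
  let rec loop z1 z2 =
    match z1.in_focus, z2.in_focus with
    | Leaf a, Leaf b when a = b ->
        (match next z1, next z2 with
         | None, None -> true
         | Some z1', Some z2' -> loop z1' z2'
         | _ -> false)
    | _ -> false in
  loop (move_botleft { in_focus = t1; context = Root })
       (move_botleft { in_focus = t2; context = Root })
```

As in the assignment's description, this aborts as soon as the fringes diverge, without ever materializing either fringe as a list.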

add week 12 stuff
diff --git a/content.mdwn b/content.mdwn
index e6a02af..9279e29 100644
--- a/content.mdwn
+++ b/content.mdwn
@@ -9,6 +9,8 @@ week in which they were introduced.

*   [[Kaplan on Plexy|topics/week6_plexy]]
*   [[Groenendijk, Stokhof, and Veltman|/topics/week10_gsv]]
+*   Mutation and hyper-synonymy (no notes)
+

*   Functional Programming

@@ -26,6 +28,7 @@ week in which they were introduced.
*   [[Installing and Using the Juli8 Libraries|/juli8]]
*   [[Programming with mutable state|/topics/week9_mutable_state]]
+    *   Mutation and hyper-synonymy (no notes)

*   Order, "static versus dynamic"
@@ -35,6 +38,9 @@ week in which they were introduced.
*   [[Unit and its usefulness|topics/week3 unit]]
*   Combinatory evaluator ([[for home|topics/week7_combinatory_evaluator]])
*   [[Programming with mutable state|/topics/week9_mutable_state]]
+    *   [[Abortable list traversals|/topics/week12_abortable_traversals]]
+    *   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]
+

*   The Untyped Lambda Calculus

@@ -48,6 +54,7 @@ week in which they were introduced.
*   [[Arithmetic with Church numbers|topics/week3_church_arithmetic]]
*   [[How to get the tail of v1 lists?|topics/week3 lists#tails]]
*   [[Some other list encodings|topics/week3 lists#other-lists]]
+        *   [[Abortable list traversals|/topics/week12_abortable_traversals]]
*   [[Reduction Strategies and Normal Forms|topics/week3_evaluation_order]]
*   [[Fixed point combinators|topics/week4_fixed_point_combinators]]
@@ -79,6 +86,11 @@ week in which they were introduced.
*   [[Groenendijk, Stokhof, and Veltman|/topics/week10_gsv]]

+*   Continuations
+    *   [[Abortable list traversals|/topics/week12_abortable_traversals]]
+    *   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]
+
+
## Topics by week ##

Week 1:
@@ -158,7 +170,13 @@ Week 9:

Week 10:

-*    Groenendijk, Stokhof, and Veltman, "[[Coreference and Modality|/readings/coreference-and-modality.pdf]]" (1996)
-*    [[Notes on GSV|/topics/week10_gsv]], with links to code
+*   Groenendijk, Stokhof, and Veltman, "[[Coreference and Modality|/readings/coreference-and-modality.pdf]]" (1996)
+*   [[Notes on GSV|/topics/week10_gsv]], with links to code
+

+Week 12:

+*   Mutation and hyper-synonymy (no notes)
+*   [[Abortable list traversals|/topics/week12_abortable_traversals]]
+*   [[List and tree zippers|/topics/week12_list_and_tree_zippers]]
+*   [[Homework for week 12|exercises/assignment12]]


post homework
diff --git a/index.mdwn b/index.mdwn
index b1f5599..38eb21b 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -191,7 +191,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 12**) Thursday April 23

-> Topics: Mutation and hyper-synonymy (no notes); [[Abortable list traversals|/topics/week12_abortable_traversals]]; [[List and tree zippers|/topics/week12_list_and_tree_zippers]]; Homework <!-- [[Homework|exercises/assignment12]] -->
+> Topics: Mutation and hyper-synonymy (no notes); [[Abortable list traversals|/topics/week12_abortable_traversals]]; [[List and tree zippers|/topics/week12_list_and_tree_zippers]]; [[Homework|exercises/assignment12]]

> For amusement/tangential edification: [xkcd on code quality](https://xkcd.com/1513/); [turning a sphere inside out](https://www.youtube.com/watch?v=-6g3ZcmjJ7k)


overhaul
diff --git a/exercises/_assignment12.mdwn b/exercises/_assignment12.mdwn
index 017a8d3..58068c4 100644
--- a/exercises/_assignment12.mdwn
+++ b/exercises/_assignment12.mdwn
@@ -2,22 +2,22 @@

Recall back in [[Assignment 4|assignment4#fringe]], we asked you to enumerate the "fringe" of a leaf-labeled tree. Both of these trees (here I *am* drawing the labels in the diagram):

-	    .                .
-	   / \              / \
-	  .   3            1   .
-	 / \                  / \
-	1   2                2   3
+        .                .
+       / \              / \
+      .   3            1   .
+     / \                  / \
+    1   2                2   3

have the same fringe: [1; 2; 3]. We also asked you to write a function that determined when two trees have the same fringe. The way you approached that back then was to enumerate each tree's fringe, and then compare the two lists for equality. Today, and then again in a later class, we'll encounter new ways to approach the problem of determining when two trees have the same fringe.

Supposing you did work out an implementation of the tree zipper, then one way to determine whether two trees have the same fringe would be: go downwards (and leftwards) in each tree as far as possible. Compare the focused leaves. If they're different, stop because the trees have different fringes. If they're the same, then for each tree, move rightward if possible; if it's not (because you're at the rightmost leaf in a subtree), move upwards then try again to move rightwards. Repeat until you are able to move rightwards. Once you do move rightwards, go downwards (and leftwards) as far as possible. Then you'll be focused on the next leaf in the tree's fringe. The operations it takes to get to "the next leaf" may be different for the two trees. For example, in these trees:

-	    .                .
-	   / \              / \
-	  .   3            1   .
-	 / \                  / \
-	1   2                2   3
+        .                .
+       / \              / \
+      .   3            1   .
+     / \                  / \
+    1   2                2   3

you won't move upwards at the same steps. Keep comparing "the next leaves" until they are different, or you exhaust the leaves of only one of the trees (then again the trees have different fringes), or you exhaust the leaves of both trees at the same time, without having found leaves with different labels. In this last case, the trees have the same fringe.

@@ -27,7 +27,7 @@ Let's sketch the implementation of this. We won't provide all the details for an

In these exercises, we'll help ourselves to OCaml's **record types**. These are nothing more than tuples with a pretty interface. Instead of saying:

-    # type blah = Blah of (int * int * (char -> bool));;
+    # type blah = Blah of int * int * (char -> bool);;

and then having to remember which element in the triple was which:

@@ -48,16 +48,16 @@ records let you attach descriptive labels to the components of the tuple:

These were the strategies to extract the components of an unlabeled tuple:

-    let h = fst some_pair;; (* accessor functions fst and snd are only predefined for pairs *)
+    let h = fst some_pair (* accessor functions fst and snd are only predefined for pairs *)

-    let (h, w, test) = b1;; (* works for arbitrary tuples *)
+    let (h, w, test) = b1 (* works for arbitrary tuples *)

match b1 with
-    | (h, w, test) -> ...;; (* same as preceding *)
+    | (h, w, test) -> ... (* same as preceding *)

Here is how you can extract the components of a labeled record:

-    let h = b2.height;; (* handy! *)
+    let h = b2.height (* handy! *)

let {height = h; weight = w; char_tester = test} = b2 in
(* go on to use h, w, and test ... *)
@@ -68,7 +68,7 @@ Here is how you can extract the components of a labeled record:

Anyway, using record types, we might define the tree zipper interface like so. First, we define a type for leaf-labeled, binary trees:

-    type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
+    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

Next, the types for our tree zippers:

@@ -111,20 +111,22 @@ Here are the beginnings of functions to move from one focused tree to another:
| Root -> None
| Nonroot {up; left = Some left; right = None} ->
move_right_or_up {in_focus = Node(left, z.in_focus); context = up}
+    | _ -> assert false
-->

-1.  Your first assignment is to complete the definitions of move_botleft and move_right_or_up.
+1.  Your first assignment is to complete the definitions of move_botleft and move_right_or_up. (Really it should be move_right_or_up_..._and_right.)

Having completed that, we can define a function that enumerates a tree's fringe, step by step, until it's exhausted:

-        let make_fringe_enumerator (t: 'a tree) : 'b * 'a zipper option =
+        let make_fringe_enumerator (t: 'a tree) =
(* create a zipper focusing the botleft of t *)
let zbotleft = move_botleft (new_zipper t) in
(* create initial state, pointing to zbotleft *)
-          let initial_state = Some zbotleft in
+          let initial_state : 'a zipper option = Some zbotleft in
(* construct the next_leaf function *)
-          let next_leaf : 'a zipper option -> ('a * 'a zipper option) option = function
+          let next_leaf : 'a zipper option -> ('a * 'a zipper option) option =
+            fun state -> match state with
| Some z -> (
(* extract label of currently-focused leaf *)
let Leaf current = z.in_focus in
@@ -143,27 +145,33 @@ Here are the beginnings of functions to move from one focused tree to another:
Here's an example of make_fringe_enumerator in action:

# let tree1 = Leaf 1;;
-	val tree1 : int tree = Leaf 1
-	# let next1, state1 = make_fringe_enumerator tree1;;
-	val next1 : unit -> int option = <fun>
-	# let res1, state1' = next1 state1;;
-	- : int option = Some 1
-	# next1 state1';;
-	- : int option = None
-	# let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
-	val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
-	# let next2, state2 = make_fringe_enumerator tree2;;
-	val next2 : unit -> int option = <fun>
-	# let res2, state2' = next2 state2;;
-	- : int option = Some 1
-	# let res2, state2'' = next2 state2';;
-	- : int option = Some 2
-	# let res2, state2''' = next2 state2'';;
-	- : int option = Some 3
-	# let res2, state2'''' = next2 state2''';;
-	- : int option = None
-
-    You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will return None. The subprogram's memory of where it is and what steps to perform next are stored in the state variables that are part of its input and output.
+        val tree1 : int tree = Leaf 1
+        # let next1, state1 = make_fringe_enumerator tree1;;
+        val next1 : int zipper option -> (int * int zipper option) option = <fun>
+        val state1 : int zipper option = Some ...
+        # let Some (res1, state1') = next1 state1;;
+        val res1 : int = 1
+        val state1' : int zipper option = None
+        # next1 state1';;
+        - : (int * int zipper option) option = None
+        # let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
+        val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
+        # let next2, state2 = make_fringe_enumerator tree2;;
+        val next2 : int zipper option -> (int * int zipper option) option = <fun>
+        val state2 : int zipper option = Some ...
+        # let Some (res2, state2') = next2 state2;;
+        val res2 : int = 1
+        val state2' : int zipper option = Some ...
+        # let Some (res2, state2'') = next2 state2';;
+        val res2 : int = 2
+        val state2'' : int zipper option = Some ...
+        # let Some (res2, state2''') = next2 state2'';;
+        val res2 : int = 3
+        val state2''' : int zipper option = None
+        # next2 state2''';;
+        - : (int * int zipper option) option = None
+
+    You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will return None. The subprogram's memory of where it is and what steps to perform next are stored in the next_state variables that are part of its input and output.

Using these fringe enumerators, we can write our same_fringe function like this:

@@ -177,21 +185,21 @@ Here are the beginnings of functions to move from one focused tree to another:
| _ -> false in
loop initial_state1 initial_state2

-    The auxiliary loop function will keep calling itself recursively until a difference in the fringes has manifested itself---either because one fringe is exhausted before the other, or because the next leaves in the two fringes have different labels. If we get to the end of both fringes at the same time (next1 (), next2 () matches the pattern None, None) then we've established that the trees do have the same fringe.
+    The auxiliary loop function will keep calling itself recursively until a difference in the fringes has manifested itself---either because one fringe is exhausted before the other, or because the next leaves in the two fringes have different labels. If we get to the end of both fringes at the same time (next1 state1, next2 state2 matches the pattern None, None) then we've established that the trees do have the same fringe.

2.  Test your implementations of move_botleft and move_right_or_up against some example trees to see if the resulting make_fringe_enumerator and same_fringe functions work as expected. Show us some of your tests.

-3.  Now we'll talk about another way to implement the make_fringe_enumerator function above (and so too the same_fringe function which uses it). Notice that the pattern given above is that the make_fringe_enumerator creates a next_leaf function and an initial state, and each time you want to advance the next_leaf by one step, you do so by calling it with the current state. It will return a result plus a modified state, which you can use when you want to call it again and take another step. All of the next_leaf function's memory about where it is in the enumeration is contained in the state. If you saved an old state, took three steps, and then called the next_leaf function again with the saved old state, it would be back where it was three steps ago. But in fact, the way we use the process and state above, there is no back-tracking. Neither do we "fork" any of the states and pursue different forward paths. Their progress is deterministic, and fixed independently of anything that same_fringe might do. All that's up to same_fringe is to take the decision of when (and whether) to take another step forward.
+3.  Now we'll talk about another way to implement the make_fringe_enumerator function above (and so too the same_fringe function which uses it). Notice that the pattern given above is that the make_fringe_enumerator creates a next_leaf function and an initial state, and each time you want to advance the next_leaf by one step, you do so by calling it with the current state. It will return a leaf label plus a modified state, which you can use when you want to call it again and take another step. All of the next_leaf function's memory about where it is in the enumeration is contained in the state. If you saved an old state, took three steps, and then called the next_leaf function again with the saved old state, it would be back where it was three steps ago. But in fact, the way we use the next_leaf function and state above, there is no back-tracking. Neither do we "fork" any of the states and pursue different forward paths. Their progress is deterministic, and fixed independently of anything that same_fringe might do. All that's up to same_fringe is to take the decision of when (and whether) to take another step forward.

Given that usage pattern, it would be appropriate and convenient to make the next_leaf function remember its state itself, in a mutable variable. The client function same_fringe doesn't need to do anything with, or even be given access to, this variable. Here's how we might write make_fringe_enumerator according to this plan:

let make_fringe_enumerator (t: 'a tree) =
(* create a zipper focusing the botleft of t *)
let zbotleft = move_botleft (new_zipper t) in
-          (* create refcell, initially pointing to zbotleft *)
+          (* create a refcell, initially pointing to zbotleft *)
let zcell = ref (Some zbotleft) in
(* construct the next_leaf function *)
-          let next_leaf () : 'a option =
+          let next_leaf : unit -> 'a option = fun () ->
match !zcell with
| Some z -> (
(* extract label of currently-focused leaf *)
@@ -227,7 +235,7 @@ Here are the beginnings of functions to move from one focused tree to another:
(* create refcell, initially pointing to zbotleft *)
let zcell = ref (Some zbotleft) in
(* construct the next_leaf function *)
-          let next_leaf () : 'a option =
+          let next_leaf : unit -> 'a option = fun () ->
match !zcell with
| Some z -> (
(* extract label of currently-focused leaf *)
@@ -255,99 +263,108 @@ Here are the beginnings of functions to move from one focused tree to another:
loop ()
-->

+    Here's an example of our new make_fringe_enumerator in action:
+
+        # let tree1 = Leaf 1;;

(Diff truncated)
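
The ref-cell plan discussed in the diff above can be illustrated with a much smaller example. This is only a sketch of the pattern, using a plain list in place of the tree zipper (make_enumerator and its list argument are invented here for illustration): the returned next function consults and updates its own hidden ref cell, so the caller never threads any state explicitly.

```ocaml
(* Sketch of the ref-cell enumerator pattern, with a list standing in
   for the tree zipper. The mutable state lives in a cell that only
   next can see. *)
let make_enumerator (xs : 'a list) : unit -> 'a option =
  let cell = ref xs in
  fun () ->
    match !cell with
    | [] -> None
    | x :: rest -> cell := rest; Some x

let () =
  let next = make_enumerator [1; 2; 3] in
  assert (next () = Some 1);
  assert (next () = Some 2);
  assert (next () = Some 3);
  assert (next () = None)
```

As with the mutable make_fringe_enumerator, repeated calls to next () step through the elements and then return None forever after; re-running make_enumerator makes a fresh, independent cell.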

tweaks
diff --git a/topics/week12_abortable_traversals.mdwn b/topics/week12_abortable_traversals.mdwn
index 59d5517..7cccacb 100644
--- a/topics/week12_abortable_traversals.mdwn
+++ b/topics/week12_abortable_traversals.mdwn
@@ -63,11 +63,10 @@ Here's how to get the last element of such a list:

This is similar to getting the first element, except that each step delivers its output to the keep_going handler rather than to the done handler. That ensures that we will only get the output of the last step, when the traversal function is applied to the last member of the list. If the list is empty, then we'll get the err value, just as with the function that tries to extract the list's head.

-One thing to note is that there are limits to how much you can immunize yourself against doing unnecessary work. A demon evaluator who got to custom-pick the evaluation order (including doing reductions underneath lambdas when he wanted to) could ensure that lots of unnecessary computations got performed, despite your best efforts. We don't yet have any way to prevent that. (Later we will see some ways to *computationally force* the evaluation order we prefer. Of course, in any real computing environment you'll mostly know what evaluation order you're dealing with, and you can mostly program efficiently around that.) The current scheme at least makes our result not *computationally depend on* what happens further on in the traversal, once we've passed a result to the done_handler. We don't even depend on the later steps in the traversal cooperating to pass our result through.
+One thing to note is that there are limits to how much you can immunize yourself against doing unnecessary work. A demon evaluator who got to custom-pick the evaluation order (including doing reductions underneath lambdas when he wanted to) could ensure that lots of unnecessary computations got performed, despite your best efforts. We don't yet have any way to prevent that. (Later we will see some ways to *computationally force* the evaluation order we prefer. Of course, in any real computing environment you'll mostly know what evaluation order you're dealing with, and you can mostly program efficiently around that.) The current scheme at least makes our result not *computationally depend on* what happens further on in the traversal, once we've passed a result to the done_handler. We don't even rely on the later steps in the traversal cooperating to pass our result through.

-All of that gave us a left-fold implementation of lists. (Perhaps if you were _aiming_ for a left-fold implementation of lists, you would make the traversal function f take its current_list_element and seed_value arguments in the flipped order, but let's not worry about that.)
-
-Now, let's think about how to get a right-fold implementation. It's not profoundly different, but it does require us to change our interface a little. Our left-fold implementation of [10,20,30,40], above, looked like this (now we abbreviate some of the variables):
+All of that gave us a *left*-fold implementation of lists. (Perhaps if you were _aiming_ for a left-fold implementation of lists, you would make the traversal function f take its current_list_element and seed_value arguments in the flipped order, but let's not worry about that.)
+
+Now, let's think about how to get a *right*-fold implementation. It's not profoundly different, but it does require us to change our interface a little. Our left-fold implementation of [10,20,30,40], above, looked like this (now we abbreviate some of the variables):

\f z d. f 10 z d (\z. [20,30,40] f z d)

@@ -85,7 +84,7 @@ Now suppose we had just the implementation of the tail of the list, [20,30,40]

How should we take that value and transform it into the preceding value, which represents 10 consed onto that tail? I can't see how to do it in a general way, and I expect it's just not possible. Essentially, we want to replace the second d in the innermost function \z. f 20 z d d with something like (\z. f 10 z d d). But how can we replace just the second d without also replacing the first d, and indeed all the other bound occurrences of d in the expansion of [20,30,40]?

-The difficulty here is that our traversal function f expects two handlers, but we are only giving the fold function we implement the list as a single handler. That single handler gets fed twice to the traversal function. One time it may be transformed, but at the end of the traversal, as with \z. f 20 z d d, there's nothing left to do to "keep going", so here it's just the single handler d fed to f twice. But we can see that in order to implement cons for a right-folding traversal, we don't want it to be the single handler d fed to f twice. It'd work better if we implemented [20,30,40] like this:
+The difficulty here is that our traversal function f expects two handlers, but the fold function with which we implement the list is given only a single handler. That single handler gets fed twice to the traversal function. One time it may be transformed, but at the end of the traversal, as with \z. f 20 z d d, there's nothing left to do to "keep going", so here it's just the single handler d fed to f twice. But we can see that in order to implement cons for a right-folding traversal, we don't want it to be the single handler d fed to f twice. It'd work better if we implemented [20,30,40] like this:

\f z d g. f 40 z d (\z. f 30 z d (\z. f 20 z d g))
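
For concreteness, here is one way the handler-based encoding discussed in the diff above might be transcribed into typed OCaml. This is a hedged sketch, not the seminar's official code: the type alist and the names from_list, head_opt, and last_opt are invented here. Each step hands the element, the current seed, the abort ("done") handler, and a keep-going continuation to the traversal function f, mirroring \f z d. f 10 z d (\z. [20,30,40] f z d):

```ocaml
(* A list is a function expecting a traversal function f, a seed z, and
   a done handler d. At each element, f receives the element, the seed,
   the abort handler, and a continuation for the rest of the traversal. *)
type ('a, 'b, 'r) alist =
  { run : ('a -> 'b -> ('b -> 'r) -> ('b -> 'r) -> 'r) -> 'b -> ('b -> 'r) -> 'r }

let from_list (xs : 'a list) =
  { run = fun f z d ->
      let rec go xs z k =
        match xs with
        | [] -> k z
        | x :: rest -> f x z d (fun z' -> go rest z' k)
      in
      go xs z d }

(* head: abort at the first element by calling the done handler. *)
let head_opt l = l.run (fun x _ d _ -> d (Some x)) None (fun r -> r)

(* last: deliver each element to the keep-going continuation instead,
   so only the final element survives. *)
let last_opt l = l.run (fun x _ _ k -> k (Some x)) None (fun r -> r)

let () =
  assert (head_opt (from_list [10; 20; 30; 40]) = Some 10);
  assert (last_opt (from_list [10; 20; 30; 40]) = Some 40);
  assert (head_opt (from_list []) = None)
```

Note that on an empty list the seed None falls straight through to the done handler, playing the role of the err value mentioned above; and head_opt's result does not computationally depend on the rest of the traversal, since it invokes d immediately.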


some refactoring, including exposition
diff --git a/exercises/_assignment12.mdwn b/exercises/_assignment12.mdwn
index 2af6513..017a8d3 100644
--- a/exercises/_assignment12.mdwn
+++ b/exercises/_assignment12.mdwn
@@ -1,49 +1,133 @@
-1.  Complete the definitions of move_botleft and move_right_or_up from the same-fringe solution in [[this week's notes|/topics/week12_list_and_tree_zippers]]. **Test your attempts** against some example trees to see if the resulting make_fringe_enumerator and same_fringe functions work as expected. Show us some of your tests.
+## Same-fringe using zippers ##

-        type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
+Recall back in [[Assignment 4|assignment4#fringe]], we asked you to enumerate the "fringe" of a leaf-labeled tree. Both of these trees (here I *am* drawing the labels in the diagram):

-        type 'a starred_level = Root | Starring_left of 'a starred_nonroot | Starring_right of 'a starred_nonroot
-        and 'a starred_nonroot = { parent : 'a starred_level; sibling: 'a tree };;
+	    .                .
+	   / \              / \
+	  .   3            1   .
+	 / \                  / \
+	1   2                2   3

-        type 'a zipper = { level : 'a starred_level; filler: 'a tree };;
+have the same fringe: [1; 2; 3]. We also asked you to write a function that determined when two trees have the same fringe. The way you approached that back then was to enumerate each tree's fringe, and then compare the two lists for equality. Today, and then again in a later class, we'll encounter new ways to approach the problem of determining when two trees have the same fringe.

-        let new_zipper (t : 'a tree) : 'a zipper =
-          {level = Root; filler = t}

-        let rec move_botleft (z : 'a zipper) : 'a zipper =
-          (* returns z if the targetted node in z has no children *)
-          (* else returns move_botleft (zipper which results from moving down from z to the leftmost child) *)
-          _____ (* YOU SUPPLY THE DEFINITION *)
+Supposing you did work out an implementation of the tree zipper, then one way to determine whether two trees have the same fringe would be: go downwards (and leftwards) in each tree as far as possible. Compare the focused leaves. If they're different, stop because the trees have different fringes. If they're the same, then for each tree, move rightward if possible; if it's not (because you're at the rightmost leaf in a subtree), move upwards then try again to move rightwards. Repeat until you are able to move rightwards. Once you do move rightwards, go downwards (and leftwards) as far as possible. Then you'll be focused on the next leaf in the tree's fringe. The operations it takes to get to "the next leaf" may be different for the two trees. For example, in these trees:

-    <!--
-    match z.filler with Leaf _ -> z | Node (l, r) -> move_botleft { level = Starring_left { parent = z.level; sibling = r }; filler = l }
-    -->
+	    .                .
+	   / \              / \
+	  .   3            1   .
+	 / \                  / \
+	1   2                2   3

-        let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
-          (* if it's possible to move right in z, returns Some (the result of doing so) *)
-          (* else if it's not possible to move any further up in z, returns None *)
-          (* else returns move_right_or_up (result of moving up in z) *)
-          _____ (* YOU SUPPLY THE DEFINITION *)
+you won't move upwards at the same steps. Keep comparing "the next leaves" until they are different, or you exhaust the leaves of only one of the trees (then again the trees have different fringes), or you exhaust the leaves of both trees at the same time, without having found leaves with different labels. In this last case, the trees have the same fringe.

-    <!--
-    match z.level with
-    | Starring_left { parent = p; sibling = right } -> Some { level = Starring_right { parent = p; sibling = z.filler }; filler = right }
-    | Starring_right { parent = p; sibling = left } -> let new_tree = Node (left, z.filler) in move_right_or_up { level = p; filler = new_tree}
+If your trees are very big---say, millions of leaves---you can imagine how this would be quicker and more memory-efficient than traversing each tree to construct a list of its fringe, and then comparing the two lists so built to see if they're equal. For one thing, the zipper method can abort early if the fringes diverge early, without needing to traverse or build a list containing the rest of each tree's fringe.
+
+Let's sketch the implementation of this. We won't provide all the details for an implementation of the tree zipper (you'll need to fill those in), but we will sketch an interface for it.
+
+In these exercises, we'll help ourselves to OCaml's **record types**. These are nothing more than tuples with a pretty interface. Instead of saying:
+
+    # type blah = Blah of (int * int * (char -> bool));;
+
+and then having to remember which element in the triple was which:
+
+    # let b1 = Blah (1, (fun c -> c = 'M'), 2);;
+    Error: This expression has type int * (char -> bool) * int
+    but an expression was expected of type int * int * (char -> bool)
+    # (* damnit *)
+    # let b1 = Blah (1, 2, (fun c -> c = 'M'));;
+    val b1 : blah = Blah (1, 2, <fun>)
+
+records let you attach descriptive labels to the components of the tuple:
+
+    # type blah_record = { height : int; weight : int; char_tester : char -> bool };;
+    # let b2 = { height = 1; weight = 2; char_tester = (fun c -> c = 'M') };;
+    val b2 : blah_record = {height = 1; weight = 2; char_tester = <fun>}
+    # let b3 = { height = 1; char_tester = (fun c -> c = 'K'); weight = 3 };; (* also works *)
+    val b3 : blah_record = {height = 1; weight = 3; char_tester = <fun>}
+
+These were the strategies to extract the components of an unlabeled tuple:
+
+    let h = fst some_pair;; (* accessor functions fst and snd are only predefined for pairs *)
+
+    let (h, w, test) = b1;; (* works for arbitrary tuples *)
+
+    match b1 with
+    | (h, w, test) -> ...;; (* same as preceding *)
+
+Here is how you can extract the components of a labeled record:
+
+    let h = b2.height;; (* handy! *)
+
+    let {height = h; weight = w; char_tester = test} = b2 in
+    (* go on to use h, w, and test ... *)
+
+    match test with
+    | {height = h; weight = w; char_tester = test} ->
+    (* same as preceding *)
+
+Anyway, using record types, we might define the tree zipper interface like so. First, we define a type for leaf-labeled, binary trees:
+
+    type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
+
+Next, the types for our tree zippers:
+
+    type 'a zipper = { in_focus: 'a tree; context : 'a context }
+    and 'a context = Root | Nonroot of 'a nonroot_context
+    and 'a nonroot_context = { up : 'a context; left: 'a tree option; right: 'a tree option }
+
+Unlike in seminar, here we represent the siblings as 'a tree options rather than 'a tree lists. Since we're dealing with binary trees, each context will have exactly one sibling, either to the right or to the left.
+
+The following function takes an 'a tree and returns an 'a zipper focused on its root:
+
+    let new_zipper (t : 'a tree) : 'a zipper =
+      {in_focus = t; context = Root}
+
+Here are the beginnings of functions to move from one focused tree to another:
+
+    let rec move_botleft (z : 'a zipper) : 'a zipper =
+      (* returns z if the focused node in z has no children *)
+      (* else returns move_botleft (zipper which results from moving down from z's focused node to its leftmost child) *)
+      _____ (* YOU SUPPLY THE DEFINITION *)
+
+<!--
+    match z.in_focus with
+    | Leaf _ -> z
+    | Node(left, right) ->
+        move_botleft {in_focus = left; context = Nonroot {up = z.context; left = None; right = Some right}}
+-->
+
+
+    let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
+      (* if it's possible to move right in z, returns Some (the result of doing so) *)
+      (* else if it's not possible to move any further up in z, returns None *)
+      (* else returns move_right_or_up (result of moving up in z) *)
+      _____ (* YOU SUPPLY THE DEFINITION *)
+
+<!--
+    match z.context with
+    | Nonroot {up; left= None; right = Some right} ->
+        Some {in_focus = right; context = Nonroot {up; left = Some z.in_focus; right = None}}
| Root -> None
-    -->
+    | Nonroot {up; left = Some left; right = None} ->
+        move_right_or_up {in_focus = Node(left, z.in_focus); context = up}
+-->
+

-    &nbsp;
+1.  Your first assignment is to complete the definitions of move_botleft and move_right_or_up.
+
+    Having completed that, we can define a function that enumerates a tree's fringe, step by step, until it's exhausted:

let make_fringe_enumerator (t: 'a tree) : 'b * 'a zipper option =
-          (* create a zipper targetting the botleft of t *)
+          (* create a zipper focusing the botleft of t *)
let zbotleft = move_botleft (new_zipper t) in
(* create initial state, pointing to zbotleft *)
let initial_state = Some zbotleft in
(* construct the next_leaf function *)
let next_leaf : 'a zipper option -> ('a * 'a zipper option) option = function
| Some z -> (
-              (* extract label of currently-targetted leaf *)
-              let Leaf current = z.filler in
+              (* extract label of currently-focused leaf *)
+              let Leaf current = z.in_focus in
(* create next_state pointing to next leaf, if there is one *)
let next_state : 'a zipper option = match move_right_or_up z with
| None -> None
@@ -56,6 +140,33 @@
(* return the next_leaf function and initial state *)
next_leaf, initial_state

+    Here's an example of make_fringe_enumerator in action:
+
+        # let tree1 = Leaf 1;;
+	val tree1 : int tree = Leaf 1
+	# let next1, state1 = make_fringe_enumerator tree1;;
+	val next1 : unit -> int option = <fun>
+	# let res1, state1' = next1 state1;;
+	- : int option = Some 1
+	# next1 state1';;
+	- : int option = None
+	# let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
+	val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
+	# let next2, state2 = make_fringe_enumerator tree2;;
+	val next2 : unit -> int option = <fun>
+	# let res2, state2' = next2 state2;;
+	- : int option = Some 1
+	# let res2, state2'' = next2 state2';;
+	- : int option = Some 2
+	# let res2, state2''' = next2 state2'';;
+	- : int option = Some 3
+	# let res2, state2'''' = next2 state2''';;
+	- : int option = None
+
+    You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will return None. The subprogram's memory of where it is and what steps to perform next are stored in the state variables that are part of its input and output.
+
+    Using these fringe enumerators, we can write our same_fringe function like this:
+
let same_fringe (t1 : 'a tree) (t2 : 'a tree) : bool =
let next1, initial_state1 = make_fringe_enumerator t1 in
let next2, initial_state2 = make_fringe_enumerator t2 in

(Diff truncated)
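
The state-threading protocol in the diff above — next takes the current state and returns Some (label, next_state), or None when the enumeration is exhausted — can be sketched independently of zippers. The following is an illustrative reduction of the pattern to plain lists; make_enumerator and same_elements are invented names, and the real exercise would carry a zipper option as the state instead of a list tail:

```ocaml
(* State-threading enumerator: the caller keeps the state and passes it
   back in to take the next step, exactly as in the exercise's
   make_fringe_enumerator interface. *)
let make_enumerator (xs : 'a list) : ('a list -> ('a * 'a list) option) * 'a list =
  let next = function
    | [] -> None
    | x :: rest -> Some (x, rest)
  in
  (next, xs)

(* The same loop shape as same_fringe: succeed only if both enumerations
   run out at the same time without ever disagreeing. *)
let same_elements xs ys =
  let next1, s1 = make_enumerator xs in
  let next2, s2 = make_enumerator ys in
  let rec loop s1 s2 =
    match next1 s1, next2 s2 with
    | None, None -> true
    | Some (x, s1'), Some (y, s2') -> x = y && loop s1' s2'
    | _ -> false
  in
  loop s1 s2

let () =
  assert (same_elements [1; 2; 3] [1; 2; 3]);
  assert (not (same_elements [1; 2; 3] [1; 2]))
```

Because the state is an ordinary value, saving an old state and calling next with it again really does rewind the enumeration, just as the prose above describes; there is no hidden mutation.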

diff --git a/exercises/assignment4.mdwn b/exercises/assignment4.mdwn
index 3786a61..9517111 100644
--- a/exercises/assignment4.mdwn
+++ b/exercises/assignment4.mdwn
@@ -114,7 +114,7 @@ For instance, fact 0 ~~> 1, fact 1 ~~> 1, fact 2 ~~> 2, fact 3 ~~>
Your assignment is to write a Lambda Calculus function that expects a tree, encoded in the way just described, as an argument, and returns the sum of its leaves as a result. So for all of the trees listed above, it should return 1 + 2 + 3, namely 6. You can use any Lambda Calculus implementation of lists you like.

-
+<a id=fringe></a>
8.    The **fringe** of a leaf-labeled tree is the list of values at its leaves, ordered from left-to-right. For example, the fringe of all three trees displayed above is the same list, [1, 2, 3]. We are going to return to the question of how to tell whether trees have the same fringe several times this course. We'll discover more interesting and more efficient ways to do it as our conceptual toolboxes get fuller. For now, we're going to explore the straightforward strategy. Write a function that expects a tree as an argument, and returns the list which is its fringe. Next write a function that expects two trees as arguments, converts each of them into their fringes, and then determines whether the two lists so produced are equal. (Convert your list_equal? function from last week's homework into the Lambda Calculus for this last step.)


cut content
diff --git a/topics/_coroutines_and_aborts.mdwn b/topics/_coroutines_and_aborts.mdwn
index 4b2b5da..ce525b3 100644
--- a/topics/_coroutines_and_aborts.mdwn
+++ b/topics/_coroutines_and_aborts.mdwn
@@ -1,189 +1,5 @@
[[!toc]]

-##Same-fringe using a zipper-based coroutine##
-
-Recall back in [[Assignment4]], we asked you to enumerate the "fringe" of a leaf-labeled tree. Both of these trees (here I *am* drawing the labels in the diagram):
-
-	    .                .
-	   / \              / \
-	  .   3            1   .
-	 / \                  / \
-	1   2                2   3
-
-have the same fringe: [1; 2; 3]. We also asked you to write a function that determined when two trees have the same fringe. The way you approached that back then was to enumerate each tree's fringe, and then compare the two lists for equality. Today, and then again in a later class, we'll encounter new ways to approach the problem of determining when two trees have the same fringe.
-
-
-Supposing you did work out an implementation of the tree zipper, then one way to determine whether two trees have the same fringe would be: go downwards (and leftwards) in each tree as far as possible. Compare the targetted leaves. If they're different, stop because the trees have different fringes. If they're the same, then for each tree, move rightward if possible; if it's not (because you're at the rightmost position in a sibling list), move upwards then try again to move rightwards. Repeat until you are able to move rightwards. Once you do move rightwards, go downwards (and leftwards) as far as possible. Then you'll be targetted on the next leaf in the tree's fringe. The operations it takes to get to "the next leaf" may be different for the two trees. For example, in these trees:
-
-	    .                .
-	   / \              / \
-	  .   3            1   .
-	 / \                  / \
-	1   2                2   3
-
-you won't move upwards at the same steps. Keep comparing "the next leaves" until they are different, or you exhaust the leaves of only one of the trees (then again the trees have different fringes), or you exhaust the leaves of both trees at the same time, without having found leaves with different labels. In this last case, the trees have the same fringe.
-
-If your trees are very big---say, millions of leaves---you can imagine how this would be quicker and more memory-efficient than traversing each tree to construct a list of its fringe, and then comparing the two lists so built to see if they're equal. For one thing, the zipper method can abort early if the fringes diverge early, without needing to traverse or build a list containing the rest of each tree's fringe.
-
-Let's sketch the implementation of this. We won't provide all the details for an implementation of the tree zipper, but we will sketch an interface for it.
-
-First, we define a type for leaf-labeled, binary trees:
-
-	type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)
-
-Next, the interface for our tree zippers. We'll help ourselves to OCaml's **record types**. These are nothing more than tuples with a pretty interface. Instead of saying:
-
-	# type blah = Blah of (int * int * (char -> bool));;
-
-and then having to remember which element in the triple was which:
-
-	# let b1 = Blah (1, (fun c -> c = 'M'), 2);;
-	Error: This expression has type int * (char -> bool) * int
-       but an expression was expected of type int * int * (char -> bool)
-	# (* damnit *)
-	# let b1 = Blah (1, 2, (fun c -> c = 'M'));;
-	val b1 : blah = Blah (1, 2, <fun>)
-
-records let you attach descriptive labels to the components of the tuple:
-
-	# type blah_record = { height : int; weight : int; char_tester : char -> bool };;
-	# let b2 = { height = 1; weight = 2; char_tester = (fun c -> c = 'M') };;
-	val b2 : blah_record = {height = 1; weight = 2; char_tester = <fun>}
-	# let b3 = { height = 1; char_tester = (fun c -> c = 'K'); weight = 3 };; (* also works *)
-	val b3 : blah_record = {height = 1; weight = 3; char_tester = <fun>}
-
-These were the strategies to extract the components of an unlabeled tuple:
-
-	let h = fst some_pair;; (* accessor functions fst and snd are only predefined for pairs *)
-
-	let (h, w, test) = b1;; (* works for arbitrary tuples *)
-
-	match b1 with
-	| (h, w, test) -> ...;; (* same as preceding *)
-
-Here is how you can extract the components of a labeled record:
-
-	let h = b2.height;; (* handy! *)
-
-	let {height = h; weight = w; char_tester = test} = b2
-	in (* go on to use h, w, and test ... *)
-
-	match b2 with
-	| {height = h; weight = w; char_tester = test} ->
-	  (* same as preceding *)
-
-Anyway, using record types, we might define the tree zipper interface like so:
-
-	type 'a starred_level = Root | Starring_Left of 'a starred_nonroot | Starring_Right of 'a starred_nonroot
-	and 'a starred_nonroot = { parent : 'a starred_level; sibling: 'a tree };;
-
-	type 'a zipper = { level : 'a starred_level; filler: 'a tree };;
-
-	let rec move_botleft (z : 'a zipper) : 'a zipper =
-	    (* returns z if the targetted node in z has no children *)
-	    (* else returns move_botleft (zipper which results from moving down and left in z) *)
-
-<!--
-	    let {level; filler} = z
-	    in match filler with
-	    | Leaf _ -> z
-	    | Node(left, right) ->
-	        let zdown = {level = Starring_Left {parent = level; sibling = right}; filler = left}
-	        in move_botleft zdown
-	    ;;
--->
-
-	let rec move_right_or_up (z : 'a zipper) : 'a zipper option =
-	    (* if it's possible to move right in z, returns Some (the result of doing so) *)
-	    (* else if it's not possible to move any further up in z, returns None *)
-	    (* else returns move_right_or_up (result of moving up in z) *)
-
-<!--
-	    let {level; filler} = z
-	    in match level with
-	    | Starring_Left {parent; sibling = right} -> Some {level = Starring_Right {parent; sibling = filler}; filler = right}
-	    | Root -> None
-	    | Starring_Right {parent; sibling = left} ->
-	        let z' = {level = parent; filler = Node(left, filler)}
-	        in move_right_or_up z'
-	    ;;
--->
-
-The following function takes an 'a tree and returns an 'a zipper focused on its root:
-
-	let new_zipper (t : 'a tree) : 'a zipper =
-	    {level = Root; filler = t}
-	    ;;
-
-Finally, we can use a mutable reference cell to define a function that enumerates a tree's fringe until it's exhausted:
-
-	let make_fringe_enumerator (t: 'a tree) =
-	    (* create a zipper targetting the botleft of t *)
-	    let zbotleft = move_botleft (new_zipper t)
-	    (* create a refcell initially pointing to zbotleft *)
-	    in let zcell = ref (Some zbotleft)
-	    (* construct the next_leaf function *)
-	    in let next_leaf () : 'a option =
-	        match !zcell with
-	        | Some z -> (
-	            (* extract label of currently-targetted leaf *)
-	            let Leaf current = z.filler
-	            (* update zcell to point to next leaf, if there is one *)
-	            in let () = zcell := match move_right_or_up z with
-	                | None -> None
-	                | Some z' -> Some (move_botleft z')
-	            (* return saved label *)
-	            in Some current
-	            )
-	        | None -> (* we've finished enumerating the fringe *)
-	            None
-	    (* return the next_leaf function *)
-	    in next_leaf
-	    ;;
-
-Here's an example of make_fringe_enumerator in action:
-
-	# let tree1 = Leaf 1;;
-	val tree1 : int tree = Leaf 1
-	# let next1 = make_fringe_enumerator tree1;;
-	val next1 : unit -> int option = <fun>
-	# next1 ();;
-	- : int option = Some 1
-	# next1 ();;
-	- : int option = None
-	# next1 ();;
-	- : int option = None
-	# let tree2 = Node (Node (Leaf 1, Leaf 2), Leaf 3);;
-	val tree2 : int tree = Node (Node (Leaf 1, Leaf 2), Leaf 3)
-	# let next2 = make_fringe_enumerator tree2;;
-	val next2 : unit -> int option = <fun>
-	# next2 ();;
-	- : int option = Some 1
-	# next2 ();;
-	- : int option = Some 2
-	# next2 ();;
-	- : int option = Some 3
-	# next2 ();;
-	- : int option = None
-	# next2 ();;
-	- : int option = None
-
-You might think of it like this: make_fringe_enumerator returns a little subprogram that will keep returning the next leaf in a tree's fringe, in the form Some ..., until it gets to the end of the fringe. After that, it will keep returning None.
-
-Using these fringe enumerators, we can write our same_fringe function like this:
-
-	let same_fringe (t1 : 'a tree) (t2 : 'a tree) : bool =
-	    let next1 = make_fringe_enumerator t1
-	    in let next2 = make_fringe_enumerator t2
-	    in let rec loop () : bool =
-	        match next1 (), next2 () with
-	        | Some a, Some b when a = b -> loop ()
-	        | None, None -> true
-	        | _ -> false
-	    in loop ()
-	    ;;
-
-The auxiliary loop function will keep calling itself recursively until a difference in the fringes has manifested itself---either because one fringe is exhausted before the other, or because the next leaves in the two fringes have different labels. If we get to the end of both fringes at the same time (next1 (), next2 () matches the pattern None, None) then we've established that the trees do have the same fringe.

The technique illustrated here with our fringe enumerators is a powerful and important one. It's an example of what's sometimes called **cooperative threading**. A "thread" is a subprogram that the main computation spawns off. Threads are called "cooperative" when the code of the main computation and the thread fixes when control passes back and forth between them. (When the code doesn't control this---for example, it's determined by the operating system or the hardware in ways that the programmer can't predict---that's called "preemptive threading.") Cooperative threads are also sometimes called *coroutines* or *generators*.
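As a toy illustration of cooperative control transfer (our own example, not from the notes), even a closure over a reference cell behaves like a minimal generator: the "thread" runs exactly when the main computation calls it, yields one value, and suspends until called again.

```ocaml
(* A minimal cooperative "thread": each call resumes it, it yields the
   next element of its list, and control returns to the caller. *)
let make_generator (items : 'a list) : unit -> 'a option =
  let remaining = ref items in
  fun () ->
    match !remaining with
    | [] -> None
    | x :: rest -> remaining := rest; Some x

let () =
  let next = make_generator [10; 20; 30] in
  assert (next () = Some 10);   (* control enters the generator, then returns *)
  assert (next () = Some 20);
  assert (next () = Some 30);
  assert (next () = None)       (* exhausted, like a finished fringe *)
```

The main computation and the generator take turns by construction: no scheduler or operating system ever preempts either side.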


tweaks
diff --git a/topics/week12_list_and_tree_zippers.mdwn b/topics/week12_list_and_tree_zippers.mdwn
index b9e030c..c59a87d 100644
--- a/topics/week12_list_and_tree_zippers.mdwn
+++ b/topics/week12_list_and_tree_zippers.mdwn
@@ -36,7 +36,7 @@ Here's an idea. What if we had some way of representing a list as "broken" at a

[10; 20; 30; 40; 50; 60; 70; 80; 90]

-we might imagine the list "broken" at position 3 like this (positions are numbered starting from 0):
+we might imagine the list "broken" at position 3 like this (we follow the dominant convention of counting list positions from the left starting at 0):

40;
30;     50;
@@ -64,7 +64,7 @@ The kind of data structure we're looking for here is called a **list zipper**. T
80;
90]

-would be represented as ([30; 20; 10], [40; 50; 60; 70; 80; 90]). To move forward in the base list, we pop the head element 40 off of the head element of the second list in the zipper, and push it onto the first list, getting ([40; 30; 20; 10], [50; 60; 70; 80; 90]). To move backwards again, we pop off of the first list, and push it onto the second. To reconstruct the base list, we just "move backwards" until the first list is empty. (This is supposed to evoke the image of zipping up a zipper; hence the data structure's name.)
+would be represented as ([30; 20; 10], [40; 50; 60; 70; 80; 90]). To move forward in the base list, we pop the head element 40 off of the second list in the zipper, and push it onto the first list, getting ([40; 30; 20; 10], [50; 60; 70; 80; 90]). To move backwards again, we pop off of the first list, and push it onto the second. To reconstruct the base list, we just "move backwards" until the first list is empty. (This is supposed to evoke the image of zipping up a zipper; hence the data structure's name.)

Last time we gave the class, we had some discussion of what's the right way to apply the "zipper" metaphor. I suggest it's best to think of the tab of the zipper being here:

@@ -86,18 +86,18 @@ However you understand the "zipper" metaphor, this is a very handy data structur

[10; 20; 30; *; 50; 60; 70; 80; 90], * filled by 40

-would represent a list zipper where the break is at position 3 (we follow the dominant convention of counting list positions from the left starting at 0), and the element occupying that position is 40. For a list zipper, this might be implemented using the pairs-of-lists structure described above.
+would represent a list zipper where the break is at position 3, and the element occupying that position is 40. For a list zipper, this could be implemented using the pairs-of-lists structure described above.

Alternatively, we could present it in a form more like we used in the seminar for tree zippers:

-    in_focus = 40, context = ([30; 20; 10], [50; 60; 70; 80; 90])
+    in_focus = 40, context = (before = [30; 20; 10], after = [50; 60; 70; 80; 90])
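The pairs-of-lists representation described above is straightforward to implement. Here is a sketch (the function names are ours, chosen for illustration):

```ocaml
(* List zipper as (before, after): before holds the already-passed
   elements in reverse order, after holds the rest of the base list. *)
type 'a list_zipper = 'a list * 'a list

let move_forward = function
  | (before, x :: after) -> Some (x :: before, after)
  | (_, []) -> None

let move_backward = function
  | (x :: before, after) -> Some (before, x :: after)
  | ([], _) -> None

(* reconstruct the base list by "moving backwards" until before is empty *)
let rec rebuild = function
  | ([], after) -> after
  | z -> (match move_backward z with
          | Some z' -> rebuild z'
          | None -> assert false)  (* unreachable: before was nonempty *)
```

For the break at position 3 in the example list, move_forward ([30; 20; 10], [40; 50; 60; 70; 80; 90]) yields Some ([40; 30; 20; 10], [50; 60; 70; 80; 90]), and rebuild recovers the base list [10; 20; 30; 40; 50; 60; 70; 80; 90].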

##Tree Zippers##

Now how could we translate a zipper-like structure over to trees? What we're aiming for is a way to keep track of where we are in a tree, in the same way that the "broken" lists let us keep track of where we are in the base list.

-It's important to set some ground rules for what will follow. If you don't understand these ground rules you will get confused. First off, for many uses of trees one wants some of the nodes or leaves in the tree to be *labeled* with additional information. It's important not to conflate the label with the node itself. Numerically one and the same piece of information---for example, the same int---could label two nodes of the tree without those nodes thereby being identical, as here:
+It's important to set some ground rules for what will follow. If you don't understand these ground rules you will get confused. First off, for many uses of trees one wants some of the nodes or leaves in the tree to be *labeled* with additional information. It's important not to conflate the label with the node itself. Numerically one and the same piece of information --- for example, the same int --- could label two nodes of the tree without those nodes thereby being identical, as here:

root
/ \
@@ -117,7 +117,7 @@ The leftmost leaf and the rightmost leaf have the same label; but they are diffe

Here I haven't drawn what the labels are. The leftmost leaf, the node tagged "3" in this diagram, doesn't have the label 3. It has the label 10, as we said before. I just haven't put that into the diagram. The node tagged "2" doesn't have the label 2. It doesn't have any label. The tree in this example only has information labeling its leaves, not any of its inner nodes. The identity of its inner nodes is exhausted by their position in the tree.

-That is a second thing to note. In what follows, we'll only be working with *leaf-labeled* trees. In some uses of trees, one also wants labels on inner nodes. But we won't be discussing any such trees now. Our trees only have labels on their leaves. The diagrams below will tag all of the nodes, as in the second diagram above, and won't display what the leaves' labels are.
+That is a second thing to note. In what follows, we'll only be working with *leaf-labeled* trees. In some uses of trees, one also (or sometimes, only) wants labels on inner nodes. But we won't be discussing any such trees now. Our trees only have labels on their leaves. The diagrams below will tag all of the nodes, as in the second diagram above, and won't display what the leaves' labels are.

Final introductory comment: in particular applications, you may only need to work with binary trees---trees where internal nodes always have exactly two subtrees. That is what we'll work with in the homework, for example. But to get the guiding idea of how tree zippers work, it's helpful first to think about trees that permit nodes to have many subtrees. So that's how we'll start.

@@ -161,7 +161,7 @@ And the parent of that context should intuitively be a context centered on node

Root

-Fully spelled out, then, our tree focused on node 50 would look like this. (For brevity, I'll write siblings = [foo | bar] instead of left_siblings = [foo]; right_siblings = [bar].)
+Fully spelled out, then, our tree focused on node 50 would look like this:

in_focus = subtree 50,
context = (up = (up = Root,
@@ -195,7 +195,7 @@ How would we move upward in a tree? Well, to move up from the focused tree just
/        |      \
leaf 1  leaf 2  leaf 3

-Call the unfocused tree just specified subtree 20'. The result of moving upward from our previous focused tree, focused on leaf 1, would be a tree focused on the subtree just described, with the context being the outermost up element of the previous focused tree (what's written above as (up = ..., siblings = [*; subtree 50; subtree 80]). That is:
+Call the unfocused tree just specified subtree 20'. (It's the same as subtree 20 was before. We just give it a different name because subtree 20 wasn't a component we could extract from the previous zipper. We had to rebuild it from the information the previous zipper encoded.) The result of moving upward from our previous focused tree, focused on leaf 1, would be a tree focused on the subtree just described, with the context being the outermost up element of the previous focused tree (what's written above as (up = ..., siblings = [*; subtree 50; subtree 80]). That is:

up = ...,
siblings = [*; subtree 50; subtree 80],
@@ -214,12 +214,12 @@ Moving upwards yet again would get us:
context = (up = Root,
siblings = [*; subtree 920; subtree 950])

-where subtree 500' refers to a tree built from a root node whose children are given by the list [*; subtree 50; subtree 80], with subtree 20' inserted into the * position. Moving upwards yet again would get us:
+where subtree 500' refers to a subtree built from a root node whose children are given by the list [*; subtree 50; subtree 80], with subtree 20' inserted into the * position. Moving upwards yet again would get us:

in_focus = subtree 9200',
context = Root

-where the focused element is the root of our base tree. Like the "moving backward" operation for the list zipper, this "moving upward" operation is supposed to be reminiscent of closing a zipper, and that's why these data structures are called zippers.
+where the focused node is exactly the root of our complete tree. Like the "moving backward" operation for the list zipper, this "moving upward" operation is supposed to be reminiscent of closing a zipper, and that's why these data structures are called zippers.

We haven't given you an executable implementation of the tree zipper, but only a suggestive notation. We have however told you enough that you should be able to implement it yourself. Or if you're lazy, you can read:
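To give a feel for what such an implementation might look like, here is a sketch under our own simplifications: binary, leaf-labeled trees, and tuples in place of the richer notation used above.

```ocaml
type 'a tree = Leaf of 'a | Node of ('a tree * 'a tree)

(* The context records what surrounds the focused subtree: either we are
   at the root, or we are the left/right child of some parent context,
   with the unfocused sibling stored alongside. *)
type 'a context =
  | Root
  | Left_of of 'a context * 'a tree    (* focus is left child; tree is right sibling *)
  | Right_of of 'a context * 'a tree   (* focus is right child; tree is left sibling *)

type 'a zipper = { context : 'a context; in_focus : 'a tree }

let new_zipper t = { context = Root; in_focus = t }

let move_down z = match z.in_focus with
  | Leaf _ -> None
  | Node (l, r) -> Some { context = Left_of (z.context, r); in_focus = l }

let move_up z = match z.context with
  | Root -> None
  | Left_of (parent, right) ->
      Some { context = parent; in_focus = Node (z.in_focus, right) }
  | Right_of (parent, left) ->
      Some { context = parent; in_focus = Node (left, z.in_focus) }
```

Moving down and then back up restores the original focused tree, which is exactly the "zipping up" behavior described above.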


typo
diff --git a/index.mdwn b/index.mdwn
index e1eac10..b1f5599 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -191,7 +191,7 @@ We've posted a [[State Monad Tutorial]].

(**Week 12**) Thursday April 23

-> Topics: Mutation and hyper-synonymy (no notes); [[Abortable list traversals|/topics/week12_abortable_traversals]]; [[List and tree zippers|/topics/week12_list_and_tree_traversals]]; Homework <!-- [[Homework|exercises/assignment12]] -->
+> Topics: Mutation and hyper-synonymy (no notes); [[Abortable list traversals|/topics/week12_abortable_traversals]]; [[List and tree zippers|/topics/week12_list_and_tree_zippers]]; Homework <!-- [[Homework|exercises/assignment12]] -->

> For amusement/tangential edification: [xkcd on code quality](https://xkcd.com/1513/); [turning a sphere inside out](https://www.youtube.com/watch?v=-6g3ZcmjJ7k)
