From: Jim Date: Mon, 23 Mar 2015 15:48:44 +0000 (-0400) Subject: Merge branch 'working' X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=commitdiff_plain;h=d2cd63fe26f3777ee6fb7a6005bf28e30c0381f0;hp=fd72a947c60f0178c7464b68b922b72ffe530bf7 Merge branch 'working' * working: untyped eval: V[12]-->V[AB] --- diff --git a/content.mdwn b/content.mdwn index 5d8dbb11..84d34856 100644 --- a/content.mdwn +++ b/content.mdwn @@ -21,15 +21,14 @@ week in which they were introduced. * Types in OCaml and Haskell (will be posted someday) * Practical advice for working with OCaml and/or Haskell (will be posted someday) * [[Kaplan on Plexy|topics/week6_plexy]] and the Maybe type - * Lambda evaluator - * [[Introducing Monads|topics/week7_introducing_monads]] + * Untyped lambda evaluator ([[in browser|code/lambda_evaluator]]) (for home) * Order, "static versus dynamic" * [[Order in programming languages and natural language|topics/week1 order]] * [[Reduction Strategies and Normal Forms in the Lambda Calculus|topics/week3_evaluation_order]] * [[Unit and its usefulness|topics/week3 unit]] - * [[Combinatory evaluator|topics/week7_combinatory_evaluator]] + * Combinatory evaluator ([[for home|topics/week7_combinatory_evaluator]]) * The Untyped Lambda Calculus @@ -46,19 +45,24 @@ week in which they were introduced. 
* [[Reduction Strategies and Normal Forms|topics/week3_evaluation_order]] * [[Fixed point combinators|topics/week4_fixed_point_combinators]] * [[More about fixed point combinators|topics/week4_more_about_fixed_point_combinators]] - * Interpreter for Lambda terms + * Untyped lambda evaluator ([[in browser|code/lambda_evaluator]]) (for home) * Combinatory logic * [[Introduction|topics/week3 combinatory logic]] - * [[Combinatory evaluator|topics/week7_combinatory_evaluator]] + * Combinatory evaluator ([[for home|topics/week7_combinatory_evaluator]]) * Typed Lambda Calculi * [[Simply-typed lambda calculus|topics/week5 simply typed]] (will be updated) * [[System F|topics/week5 system F]] (will be updated) * Types in OCaml and Haskell (will be posted someday) + * see also Monads links, below + +* Monads * [[Introducing Monads|topics/week7_introducing_monads]] + * [[Safe division with monads|topics/week8_safe_division_with_monads]] + ## Topics by week ## @@ -117,6 +121,11 @@ Week 6: Week 7: * [[Combinatory evaluator|topics/week7_combinatory_evaluator]] -* Lambda evaluator (will be posted soon) -* [[Introducing Monads|topics/week7_introducing_monads]] (updated Fri 20 Mar) +* Untyped lambda evaluator (will be posted soon) +* [[Introducing Monads|topics/week7_introducing_monads]] (updated Mon 23 Mar) * [[Homework for week 7|exercises/assignment7]] + +Week 8: + +* [[Safe division with monads|topics/week8_safe_division_with_monads]] + diff --git a/exercises/assignment7.mdwn b/exercises/assignment7.mdwn index e9cf8237..029a7798 100644 --- a/exercises/assignment7.mdwn +++ b/exercises/assignment7.mdwn @@ -78,7 +78,7 @@ boxed types. Here, a "boxed type" is a type function with one unsaturated hole (which may have several occurrences, as in `(α,α) tree`). We can think of the box type as a function from a type to a type. 
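The idea of a box type as a type function with one unsaturated hole can be sketched in OCaml. The `maybe` and `tree` declarations below are our own illustrative definitions (OCaml's built-in `option` plays the same role as `maybe`):

```ocaml
(* A box type is a type expression with one free type variable 'a.
   The hole may occur more than once, as in 'a tree below. *)
type 'a maybe = Nothing | Just of 'a
type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

(* Saturating the hole with int gives concrete boxed types: *)
let b1 : int maybe = Just 3
let b2 : int list = [3; 1]
let b3 : int tree = Node (Leaf 3, Leaf 1)
```

Each declaration takes a type (here `int`) to a type (`int maybe`, `int list`, `int tree`), which is what "a function from a type to a type" means.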
-Recall that a monad requires a singleton function mid : P-> P, and a +Recall that a monad requires a singleton function ⇧ (\* mid \*) : P-> P, and a composition operator like >=> : (P->Q) -> (Q->R) -> (P->R). As we said in the notes, we'll move freely back and forth between using `>=>` and using `<=<` (aka `mcomp`), which @@ -86,19 +86,19 @@ is just `>=>` with its arguments flipped. `<=<` has the virtue that it correspon closely to the ordinary mathematical symbol `○`. But `>=>` has the virtue that its types flow more naturally from left to right. -Anyway, `mid` and (let's say) `<=<` have to obey the Monad Laws: +Anyway, `mid`/`⇧` and (let's say) `<=<` have to obey the Monad Laws: - mid <=< k = k - k <=< mid = k - j <=< (k <=< l) = (j <=< k) <=< l + ⇧ <=< k == k + k <=< ⇧ == k + j <=< (k <=< l) == (j <=< k) <=< l -For example, the Identity monad has the identity function `I` for `mid` +For example, the Identity monad has the identity function `I` for `⇧` and ordinary function composition `○` for `<=<`. It is easy to prove that the laws hold for any terms `j`, `k`, and `l` whose types are -suitable for `mid` and `<=<`: +suitable for `⇧` and `<=<`: - mid <=< k == I ○ k == \p. I (k p) ~~> \p. k p ~~> k - k <=< mid == k ○ I == \p. k (I p) ~~> \p. k p ~~> k + ⇧ <=< k == I ○ k == \p. I (k p) ~~> \p. k p ~~> k + k <=< ⇧ == k ○ I == \p. k (I p) ~~> \p. k p ~~> k (j <=< k) <=< l == (\p. j (k p)) ○ l == \q. (\p. j (k p)) (l q) ~~> \q. j (k (l q)) j <=< (k <=< l) == j ○ (k ○ l) == j ○ (\p. k (l p)) == \q. j ((\p. k (l p)) q) ~~> \q. j (k (l q)) @@ -119,7 +119,7 @@ More specifically, Show that your composition operator obeys the Monad Laws. 2. Do the same with lists. That is, given an arbitrary type -`'a`, let the boxed type be `['a]` or `'a list`, that is, lists of values of type `'a`. The `mid` +`'a`, let the boxed type be `['a]` or `'a list`, that is, lists of values of type `'a`. The `⇧` is the singleton function `\p. 
[p]`, and the composition operator is: let (>=>) (j : 'a -> 'b list) (k : 'b -> 'c list) : 'a -> 'c list = @@ -129,4 +129,9 @@ is the singleton function `\p. [p]`, and the composition operator is: let j a = [a; a+1];; let k b = [b*b; b+b];; - (j >=> k) 7 (* ==> [49; 14; 64; 16] *) + (j >=> k) 7 + (* which OCaml evaluates to: + - : int list = [49; 14; 64; 16] + *) + + Show that these obey the Monad Laws. diff --git a/index.mdwn b/index.mdwn index e771ecbd..0c32f1f9 100644 --- a/index.mdwn +++ b/index.mdwn @@ -155,14 +155,13 @@ Practical advice for working with OCaml and/or Haskell (will be posted someday); (**Week 7**) Thursday March 12 -> Topics: [[Combinatory evaluator|topics/week7_combinatory_evaluator]]; Lambda evaluator; [[Introducing Monads|topics/week7_introducing_monads]] (updated Fri 20 Mar); [[Homework|exercises/assignment7]] +> Topics: [[Combinatory evaluator|topics/week7_combinatory_evaluator]]; Lambda evaluator; [[Introducing Monads|topics/week7_introducing_monads]] (updated Mon 23 Mar); [[Homework|exercises/assignment7]] > We posted answers to [[Week 4's homework|exercises/assignment4_answers]] and [[Week 5-6's homework|exercises/assignment5_answers]]. - +> Topics: [[Safe division with monads|topics/week8_safe_division_with_monads]] - -* [Philip Wadler. Monads for Functional Programming](http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf): -in M. Broy, editor, *Marktoberdorf Summer School on Program Design -Calculi*, Springer Verlag, NATO ASI Series F: Computer and systems -sciences, Volume 118, August 1992. Also in J. Jeuring and E. Meijer, -editors, *Advanced Functional Programming*, Springer Verlag, -LNCS 925, 1995. Some errata fixed August 2001. - - - -There's a long list of monad tutorials on the [[Offsite Reading]] page. (Skimming the titles is somewhat amusing.) If you are confused by monads, make use of these resources. Read around until you find a tutorial pitched at a level that's helpful for you. 
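To accompany that exercise, here is a spot-check of the Monad Laws for the list composition operator in OCaml. Since the diff elides the body of `>=>`, the definition below is one natural choice (the "crossing" implementation); checking sample values like this is a sanity check, not a proof:

```ocaml
(* One natural definition of Kleisli composition for lists. *)
let (>=>) (j : 'a -> 'b list) (k : 'b -> 'c list) : 'a -> 'c list =
  fun a -> List.concat (List.map k (j a))

(* The singleton function \p. [p]. *)
let mid p = [p]

let j a = [a; a + 1]
let k b = [b * b; b + b]

let () =
  assert ((j >=> k) 7 = [49; 14; 64; 16]);
  (* left and right identity *)
  assert ((mid >=> k) 7 = k 7);
  assert ((k >=> mid) 7 = k 7);
  (* associativity *)
  assert (((j >=> k) >=> j) 7 = (j >=> (k >=> j)) 7)
```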
- -In the presentation we gave above---which follows the functional programming conventions---we took `unit`/return and `bind` as the primitive operations. From these a number of other general monad operations can be derived. It's also possible to take some of the others as primitive. The [Monads in Category -Theory](/advanced_topics/monads_in_category_theory) notes do so, for example. - -Here are some of the other general monad operations. You don't have to master these; they're collected here for your reference. - -You may sometimes see: - - u >> v - -That just means: - - u >>= fun _ -> v - -that is: - - bind u (fun _ -> v) - -You could also do `bind u (fun x -> v)`; we use the `_` for the function argument to be explicit that that argument is never going to be used. - -The `lift` operation we asked you to define for last week's homework is a common operation. The second argument to `bind` converts `'a` values into `'b m` values---that is, into instances of the monadic type. What if we instead had a function that merely converts `'a` values into `'b` values, and we want to use it with our monadic type? Then we "lift" that function into an operation on the monad. For example: - - # let even x = (x mod 2 = 0);; - val even : int -> bool = <fun> - -`even` has the type `int -> bool`. Now what if we want to convert it into an operation on the Option/Maybe monad? - - # let lift g = fun u -> bind u (fun x -> Some (g x));; - val lift : ('a -> 'b) -> 'a option -> 'b option = <fun> - -`lift even` will now be a function from `int option`s to `bool option`s. We can -also define a lift operation for binary functions: - - # let lift2 g = fun u v -> bind u (fun x -> bind v (fun y -> Some (g x y)));; - val lift2 : ('a -> 'b -> 'c) -> 'a option -> 'b option -> 'c option = <fun> - -`lift2 (+)` will now be a function from `int option`s and `int option`s to `int option`s. This should look familiar to those who did the homework.
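Here is a self-contained version of that `lift`/`lift2` discussion, with `bind` filled in as the evident bind for the option type (the notes leave `bind` to context, so this is an assumption):

```ocaml
(* The evident bind operation for the Option/Maybe monad. *)
let bind (u : 'a option) (f : 'a -> 'b option) : 'b option =
  match u with
  | None -> None
  | Some x -> f x

let even x = (x mod 2 = 0)

(* Lift unary and binary functions into the monad. *)
let lift g = fun u -> bind u (fun x -> Some (g x))
let lift2 g = fun u v -> bind u (fun x -> bind v (fun y -> Some (g x y)))

let () =
  assert (lift even (Some 3) = Some false);
  assert (lift even None = None);
  assert (lift2 (+) (Some 1) (Some 2) = Some 3);
  assert (lift2 (+) (Some 1) None = None)
```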
- -The `lift` operation (just `lift`, not `lift2`) is sometimes also called the `map` operation. (In Haskell, they say `fmap` or `<$>`.) And indeed when we're working with the List monad, `lift f` is exactly `List.map f`! - -Wherever we have a well-defined monad, we can define a lift/map operation for that monad. The examples above used `Some (g x)` and so on; in the general case we'd use `unit (g x)`, using the specific `unit` operation for the monad we're working with. - -In general, any lift/map operation can be relied on to satisfy these laws: - - * lift id = id - * lift (compose f g) = compose (lift f) (lift g) - -where `id` is `fun x -> x` and `compose f g` is `fun x -> f (g x)`. If you think about the special case of the map operation on lists, this should make sense. `List.map id lst` should give you back `lst` again. And you'd expect these -two computations to give the same result: - - List.map (fun x -> f (g x)) lst - List.map f (List.map g lst) - -Another general monad operation is called `ap` in Haskell---short for "apply." (They also use `<*>`, but who can remember that?) This works like this: - - ap [f] [x; y] = [f x; f y] - ap (Some f) (Some x) = Some (f x) - -and so on. Here are the laws that any `ap` operation can be relied on to satisfy: - - ap (unit id) u = u - ap (ap (ap (unit compose) u) v) w = ap u (ap v w) - ap (unit f) (unit x) = unit (f x) - ap u (unit x) = ap (unit (fun f -> f x)) u - -Another general monad operation is called `join`. This is the operation that takes you from an iterated monad to a single monad. Remember when we were explaining the `bind` operation for the List monad, there was a step where -we went from: - - [[1]; [1;2]; [1;3]; [1;2;4]] - -to: - - [1; 1; 2; 1; 3; 1; 2; 4] - -That is the `join` operation. - -All of these operations can be defined in terms of `bind` and `unit`; or alternatively, some of them can be taken as primitive and `bind` can be defined in terms of them. 
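That claim can be made concrete for the List monad in OCaml, defining `ap` and `join` in terms of `bind` and `unit` (a sketch using the standard list-monad choices for `unit` and `bind`):

```ocaml
(* unit and bind for the List monad. *)
let unit x = [x]
let bind u f = List.concat (List.map f u)

(* ap and join defined from bind and unit. *)
let ap fs xs = bind fs (fun f -> bind xs (fun x -> unit (f x)))
let join m2 = bind m2 (fun m -> m)

let () =
  (* ap applies each boxed function to each boxed argument *)
  assert (ap [(fun x -> x * x); (fun x -> x + x)] [7; 5] = [49; 25; 14; 10]);
  (* join flattens an iterated box, as in the example above *)
  assert (join [[1]; [1; 2]; [1; 3]; [1; 2; 4]] = [1; 1; 2; 1; 3; 1; 2; 4])
```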
Here are various interdefinitions: - - lift f u = u >>= compose unit f - lift f u = ap (unit f) u - lift2 f u v = u >>= (fun x -> v >>= (fun y -> unit (f x y))) - lift2 f u v = ap (lift f u) v = ap (ap (unit f) u) v - ap u v = u >>= (fun f -> lift f v) - ap u v = lift2 id u v - join m2 = m2 >>= id - u >>= f = join (lift f u) - u >> v = u >>= (fun _ -> v) - u >> v = lift2 (fun _ -> id) u v - - - -Monad outlook -------------- - -We're going to be using monads for a number of different things in the -weeks to come. One major application will be the State monad, -which will enable us to model mutation: variables whose values appear -to change as the computation progresses. Later, we will study the -Continuation monad. - -But first, we'll look at several linguistic applications for monads, based -on what's called the *Reader monad*. - -##[[Reader Monad for Variable Binding]]## - -##[[Reader Monad for Intensionality]]## - diff --git a/topics/week7_introducing_monads.mdwn b/topics/week7_introducing_monads.mdwn index 0b82d25c..c778bb7e 100644 --- a/topics/week7_introducing_monads.mdwn +++ b/topics/week7_introducing_monads.mdwn @@ -1,8 +1,5 @@ - - + -Introducing Monads -================== The [[tradition in the functional programming literature|https://wiki.haskell.org/Monad_tutorials_timeline]] is to @@ -22,11 +19,10 @@ any case, our emphasis will be on starting with the abstract structure of monads, followed in coming weeks by instances of monads from the philosophical and linguistics literature. -> After you've read this once and are coming back to re-read it to try to digest the details further, the "endofunctors" that slogan is talking about are a combination of our boxes and their associated maps. Their "monoidal" character is captured in the Monad Laws, where a "monoid"---don't confuse with a mon*ad*---is a simpler algebraic notion, meaning a universe with some associative operation that has an identity. 
For advanced study, here are some further links on the relation between monads as we're working with them and monads as they appear in Category Theory: -[1](http://en.wikipedia.org/wiki/Outline_of_category_theory) -[2](http://lambda1.jimpryor.net/advanced_topics/monads_in_category_theory/) -[3](http://en.wikibooks.org/wiki/Haskell/Category_theory) -[4](https://wiki.haskell.org/Category_theory), where you should follow the further links discussing Functors, Natural Transformations, and Monads. +> After you've read this once and are coming back to re-read it to try to digest the details further, the "endofunctors" that slogan is talking about are a combination of our boxes and their associated `map`s. Their "monoidal" character is captured in the Monad Laws, for which see below. + +[[!toc levels=2]] + ## Box types: type expressions with one free type variable ## @@ -101,7 +97,7 @@ Here are some examples of values of these Kleisli arrow types, where the box typ \x. prime_factors_of x \x. [0, 0, 0] -As semanticists, you are no doubt familiar with the debates between those who insist that propositions are sets of worlds and those who insist they are context change potentials. We hope to show you, in coming weeks, that propositions are (certain sorts of) Kleisli arrows. But this doesn't really compete with the other proposals; it is a generalization of them. Both of the other proposed structures can be construed as specific Kleisli arrow types. +As semanticists, you are no doubt familiar with the debates between those who insist that propositions are sets of worlds and those who insist they are context change potentials. We hope to show you, in coming weeks, that propositions are (certain sorts of) Kleisli arrows. But this [doesn't really compete](/images/faye_dunaway.jpg) with the other proposals; it is a generalization of them. Both of the other proposed structures can be construed as specific Kleisli arrow types. 
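Those list-boxed Kleisli arrows can be written as ordinary OCaml functions. The body of `prime_factors_of` below is our own naive trial-division sketch, supplied only so the example runs; the notes merely assume some such function exists:

```ocaml
(* Kleisli arrows for the box type 'a list: functions from plain
   values to boxed values. *)
let singleton : int -> int list = fun x -> [x]

(* Naive trial division, for illustration only. *)
let prime_factors_of : int -> int list = fun n ->
  let rec go n d =
    if n <= 1 then []
    else if n mod d = 0 then d :: go (n / d) d
    else go n (d + 1)
  in
  go n 2

let constant_arrow : int -> int list = fun _ -> [0; 0; 0]
```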
## A family of functions for each box type ## @@ -118,13 +114,17 @@ Here are the types of our crucial functions, together with our pronunciation, an > In Haskell, this is called `Control.Applicative.liftA2` and `Control.Monad.liftM2`. -mid (/εmaidεnt@tI/): P -> P +⇧ or mid (/εmaidεnt@tI/): P -> P + +> This notion is exemplified by `Just` for the box type `Maybe α` and by the singleton function for the box type `List α`. It will be a way of boxing values with your box type that plays a distinguished role in the various Laws and interdefinitions we present below. + +> In Haskell, this is called `Control.Monad.return` and `Control.Applicative.pure`. In other theoretical contexts it is sometimes called `unit` or `η`. All of these names are somewhat unfortunate. First, it has little to do with `η`-reduction in the Lambda Calculus. Second, it has little to do with the `() : unit` value we discussed in earlier classes. Third, it has little to do with the `return` keyword in C and other languages; that's more closely related to continuations, which we'll discuss in later weeks. Finally, this doesn't perfectly align with other uses of "pure" in the literature. `⇧`'d values _will_ generally be "pure" in the other senses, but other boxed values can be too. -> In Haskell, this is called `Control.Monad.return` and `Control.Applicative.pure`. In other theoretical contexts it is sometimes called `unit` or `η`. In the class presentation Jim called it `𝟭`; but now we've decided that `mid` is better. (Think of it as "m" plus "identity", not as the start of "midway".) This notion is exemplified by `Just` for the box type `Maybe α` and by the singleton function for the box type `List α`. +> For all these reasons, we're thinking it will be clearer in our discussion to use a different name. In the class presentation Jim called it `𝟭`; and in an earlier draft of this page we (only) called it `mid` ("m" plus "identity"); but now we're trying out `⇧` as a symbolic alternative. 
But in the end, we might switch to just using `η`. -m$ or mapply (/εm@plai/): P -> Q -> P -> Q +¢ or mapply (/εm@plai/): P -> Q -> P -> Q -> We'll use `m$` as a left-associative infix operator, reminiscent of (the right-associative) `$` which is just ordinary function application (also expressed by mere left-associative juxtaposition). In the class presentation Jim called `m$` `●`. In Haskell, it's called `Control.Monad.ap` or `Control.Applicative.<*>`. +> We'll use `¢` as a left-associative infix operator, reminiscent of (the right-associative) `$` which is just ordinary function application (also expressed by mere left-associative juxtaposition). In the class presentation Jim called `¢` `⚫`; and in an earlier draft of this page we called it `m$`. In Haskell, it's called `Control.Monad.ap` or `Control.Applicative.<*>`. <=< or mcomp : (Q -> R) -> (P -> Q) -> (P -> R) @@ -132,27 +132,25 @@ Here are the types of our crucial functions, together with our pronunciation, an >=> or flip mcomp : (P -> Q) -> (Q -> R) -> (P -> R) -> In Haskell, this is `Control.Monad.>=>`. In the class handout, we gave the types for `>=>` twice, and once was correct but the other was a typo. The above is the correct typing. +> In Haskell, this is `Control.Monad.>=>`. We will move freely back and forth between using `<=<` (aka `mcomp`) and using `>=>`, which +is just `<=<` with its arguments flipped. `<=<` has the virtue that it corresponds more +closely to the ordinary mathematical symbol `○`. But `>=>` has the virtue +that its types flow more naturally from left to right. + +> In the class handout, we gave the types for `>=>` twice, and once was correct but the other was a typo. The above is the correct typing. >>= or mbind : (Q) -> (Q -> R) -> (R) +> Haskell uses the symbol `>>=` but calls it "bind". This is not well chosen from the perspective of formal semantics, since it's only loosely connected with what we mean by "binding." But the name is too deeply entrenched to change. 
We've at least prepended an "m" to the front of "bind". In some presentations this operation is called `★`. + =<< or flip mbind : (Q -> R) -> (Q) -> (R) join: P -> P > In Haskell, this is `Control.Monad.join`. In other theoretical contexts it is sometimes called `μ`. -Haskell uses the symbol `>>=` but calls it "bind". This is not well chosen from the perspective of formal semantics, but it's too deeply entrenched to change. We've at least prepended an "m" to the front of "bind". - -Haskell's names "return" and "pure" for `mid` are even less well chosen, and we think it will be clearer in our discussion to use a different name. (Also, in other theoretical contexts this notion goes by other names, anyway, like `unit` or `η` --- having nothing to do with `η`-reduction in the Lambda Calculus.) - The menagerie isn't quite as bewildering as you might suppose. Many of these will be interdefinable. For example, here is how `mcomp` and `mbind` are related: k <=< j ≡ \a. (j a >>= k). We'll state some other interdefinitions below. -We will move freely back and forth between using `>=>` and using `<=<` (aka `mcomp`), which -is just `>=>` with its arguments flipped. `<=<` has the virtue that it corresponds more -closely to the ordinary mathematical symbol `○`. But `>=>` has the virtue -that its types flow more naturally from left to right. - These functions come together in several systems, and have to be defined in a way that coheres with the other functions in the system: * ***Mappable*** (in Haskelese, "Functors") At the most general level, box types are *Mappable* if there is a `map` function defined for that box type with the type given above. This @@ -168,67 +166,74 @@ has to obey the following Map Laws: * ***MapNable*** (in Haskelese, "Applicatives") A Mappable box type is *MapNable* - if there are in addition `map2`, `mid`, and `mapply`. (Given either + if there are in addition `map2`, `⇧`, and `mapply`. (Given either of `map2` and `mapply`, you can define the other, and also `map`. Moreover, with `map2` in hand, `map3`, `map4`, ...
`mapN` are easily definable.) These have to obey the following MapN Laws: - 1. mid (id : P->P) : P -> P is a left identity for `m$`, that is: `(mid id) m$ xs = xs` - 2. `mid (f a) = (mid f) m$ (mid a)` - 3. The `map2`ing of composition onto boxes `fs` and `gs` of functions, when `m$`'d to a box `xs` of arguments == the `m$`ing of `fs` to the `m$`ing of `gs` to xs: `(mid (○) m$ fs m$ gs) m$ xs = fs m$ (gs m$ xs)`. - 4. When the arguments (the right-hand operand of `m$`) are an `mid`'d value, the order of `m$`ing doesn't matter: `fs m$ (mid x) = mid ($x) m$ fs`. (Though note that it's `mid ($x)`, or `mid (\f. f x)` that gets `m$`d onto `fs`, not the original `mid x`.) Here's an example where the order *does* matter: `[succ,pred] m$ [1,2] == [2,3,0,1]`, but `[($1),($2)] m$ [succ,pred] == [2,0,3,1]`. This Law states a class of cases where the order is guaranteed not to matter. - 5. A consequence of the laws already stated is that when the _left_-hand operand of `m$` is a `mid`'d value, the order of `m$`ing doesn't matter either: `mid f m$ xs == map (flip ($)) xs m$ mid f`. + 1. ⇧(id : P->P) : P -> P is a left identity for `¢`, that is: `(⇧id) ¢ xs = xs` + 2. `⇧(f a) = (⇧f) ¢ (⇧a)` + 3. The `map2`ing of composition onto boxes `fs` and `gs` of functions, when `¢`'d to a box `xs` of arguments == the `¢`ing of `fs` to the `¢`ing of `gs` to xs: `(⇧(○) ¢ fs ¢ gs) ¢ xs = fs ¢ (gs ¢ xs)`. + 4. When the arguments (the right-hand operand of `¢`) are an `⇧`'d value, the order of `¢`ing doesn't matter: `fs ¢ (⇧x) = ⇧($x) ¢ fs`. (Though note that it's `⇧($x)`, or `⇧(\f. f x)` that gets `¢`d onto `fs`, not the original `⇧x`.) Here's an example where the order *does* matter: `[succ,pred] ¢ [1,2] == [2,3,0,1]`, but `[($1),($2)] ¢ [succ,pred] == [2,0,3,1]`. This Law states a class of cases where the order is guaranteed not to matter. + 5. 
A consequence of the laws already stated is that when the _left_-hand operand of `¢` is a `⇧`'d value, the order of `¢`ing doesn't matter either: `⇧f ¢ xs == map (flip ($)) xs ¢ ⇧f`. * ***Monad*** (or "Composables") A MapNable box type is a *Monad* if there - is in addition an associative `mcomp` having `mid` as its left and + is in addition an associative `mcomp` having `⇧` as its left and right identity. That is, the following Monad Laws must hold: mcomp (mcomp j k) l (that is, (j <=< k) <=< l) == mcomp j (mcomp k l) - mcomp mid k (that is, mid <=< k) == k - mcomp k mid (that is, k <=< mid) == k + mcomp mid k (that is, ⇧ <=< k) == k + mcomp k mid (that is, k <=< ⇧) == k You could just as well express the Monad laws using `>=>`: l >=> (k >=> j) == (l >=> k) >=> j - k >=> mid == k - mid >=> k == k + k >=> ⇧ == k + ⇧ >=> k == k - If you have any of `mcomp`, `mpmoc`, `mbind`, or `join`, you can use them to define the others. Also, with these functions you can define `m$` and `map2` from *MapNables*. So with Monads, all you really need to get the whole system of functions are a definition of `mid`, on the one hand, and one of `mcomp`, `mbind`, or `join`, on the other. + If you studied algebra, you'll remember that a mon*oid* is a universe with some associative operation that has an identity. For example, the natural numbers form a monoid with multiplication as the operation and `1` as the identity, or with addition as the operation and `0` as the identity. Strings form a monoid with concatenation as the operation and the empty string as the identity. (This example shows that the operation need not be commutative.) Monads are a kind of generalization of this notion, and that's why they're named as they are. The key difference is that for monads, the values being operated on need not be of the same type. They *can* be, if they're all Kleisli arrows of a single type P -> P. But they needn't be. 
Their types only need to "cohere" in the sense that the output type of the one arrow is a boxing of the input type of the next. - In practice, you will often work with `>>=`. In the Haskell manuals, they express the Monad Laws using `>>=` instead of the composition operators. This looks similar, but doesn't have the same symmetry: + In the Haskell manuals, they express the Monad Laws using `>>=` instead of the composition operators `>=>` or `<=<`. This looks similar, but doesn't have the same symmetry: u >>= (\a -> k a >>= j) == (u >>= k) >>= j - u >>= mid == u - mid a >>= k == k a + u >>= ⇧ == u + ⇧a >>= k == k a + + (Also, Haskell calls `⇧` `return` or `pure`, but we've stuck to our terminology in this context.) Some authors try to make the first of those Laws look more symmetrical by writing it as: + + (A >>= \a -> B) >>= \b -> C == A >>= (\a -> B >>= \b -> C) + + If you have any of `mcomp`, `mpmoc`, `mbind`, or `join`, you can use them to define the others. Also, with these functions you can define `¢` and `map2` from *MapNables*. So with Monads, all you really need to get the whole system of functions are a definition of `⇧`, on the one hand, and one of `mcomp`, `mbind`, or `join`, on the other. - Also, Haskell calls `mid` `return` or `pure`, but we've stuck to our terminology in this context. - > In Category Theory discussion, the Monad Laws are instead expressed in terms of `join` (which they call `μ`) and `mid` (which they call `η`). These are assumed to be "natural transformations" for their box type, which means that they satisfy these equations with that box type's `map`: - >
map f ○ mid == mid ○ f
map f ○ join == join ○ map (map f)
+ > In Category Theory discussion, the Monad Laws are instead expressed in terms of `join` (which they call `μ`) and `⇧` (which they call `η`). These are assumed to be "natural transformations" for their box type, which means that they satisfy these equations with that box type's `map`: + >
map f ○ ⇧ == ⇧ ○ f
map f ○ join == join ○ map (map f)
> The Monad Laws then take the form: - >
join ○ (map join) == join ○ join
join ○ mid == id == join ○ map mid
- > The first of these says that if you have a triply-boxed type, and you first merge the inner two boxes (with `map join`), and then merge the resulting box with the outermost box, that's the same as if you had first merged the outer two boxes, and then merged the resulting box with the innermost box. The second law says that if you take a box type and wrap a second box around it (with `mid`) and then merge them, that's the same as if you had done nothing, or if you had instead wrapped a second box around each element of the original (with `map mid`, leaving the original box on the outside), and then merged them.

+ >

join ○ (map join) == join ○ join
join ○ ⇧ == id == join ○ map ⇧
+ > The first of these says that if you have a triply-boxed type, and you first merge the inner two boxes (with `map join`), and then merge the resulting box with the outermost box, that's the same as if you had first merged the outer two boxes, and then merged the resulting box with the innermost box. The second law says that if you take a box type and wrap a second box around it (with `⇧`) and then merge them, that's the same as if you had done nothing, or if you had instead wrapped a second box around each element of the original (with `map ⇧`, leaving the original box on the outside), and then merged them.

> The Category Theorist would state these Laws like this, where `M` is the endofunctor that takes us from type `α` to type α: - >

μ ○ M(μ) == μ ○ μ
μ ○ η == id == μ ○ M(η)
+ >
μ ○ M(μ) == μ ○ μ
μ ○ η == id == μ ○ M(η)
> A word of advice: if you're doing any work in this conceptual neighborhood and need a Greek letter, don't use μ. In addition to the preceding usage, there's also a use in recursion theory (for the minimization operator), in type theory (as a fixed point operator for types), and in the λμ-calculus, which is a formal system that deals with _continuations_, which we will focus on later in the course. So μ already exhibits more ambiguity than it can handle. + > We link to further reading about the Category Theory origins of Monads below.
+There isn't any single `⇧` function, or single `mbind` function, and so on. For each new box type, this has to be worked out in a useful way. And as we hinted, in many cases the choice of box *type* still leaves some latitude about how they should be defined. We commonly talk about "the List Monad" to mean a combination of the choice of `α list` for the box type and particular definitions for the various functions listed above. There's also "the ZipList MapNable/Applicative" which combines that same box type with other choices for (some of) the functions. Many of these packages also define special-purpose operations that only make sense for that system, but not for other Monads or Mappables. As hinted in last week's homework and explained in class, the operations available in a Mappable system exactly preserve the "structure" of the boxed type they're operating on, and moreover are only sensitive to what content is in the corresponding original position. If you say `map f [1,2,3]`, then what ends up in the first position of the result depends only on how `f` and `1` combine. @@ -239,7 +244,7 @@ With `map`, you can supply an `f` such that `map f [3,2,0,1] == [[3,3,3],[2,2],[ For Monads (Composables), on the other hand, you can perform more radical transformations of that sort. For example, `join (map (\x. dup x x) [3,2,0,1])` would give us `[3,3,3,2,2,1]` (for a suitable definition of `dup`). @@ -266,22 +271,22 @@ j >=> k ≡= \a. (j a >>= k) u >>= k == (id >=> k) u; or ((\(). u) >=> k) () u >>= k == join (map k u) join w == w >>= id -map2 f xs ys == xs >>= (\x. ys >>= (\y. mid (f x y))) -map2 f xs ys == (map f xs) m$ ys, using m$ as an infix operator -fs m$ xs == fs >>= (\f. map f xs) -m$ == map2 id -map f xs == mid f m$ xs -map f u == u >>= mid ○ f +map2 f xs ys == xs >>= (\x. ys >>= (\y. ⇧(f x y))) +map2 f xs ys == (map f xs) ¢ ys, using ¢ as an infix operator +fs ¢ xs == fs >>= (\f. 
map f xs) +¢ == map2 id +map f xs == ⇧f ¢ xs +map f u == u >>= ⇧ ○ f Here are some other monadic notions that you may sometimes encounter: -* mzero is a value of type α that is exemplified by `Nothing` for the box type `Maybe α` and by `[]` for the box type `List α`. It has the behavior that `anything m$ mzero == mzero == mzero m$ anything == mzero >>= anything`. In Haskell, this notion is called `Control.Applicative.empty` or `Control.Monad.mzero`. +* mzero is a value of type α that is exemplified by `Nothing` for the box type `Maybe α` and by `[]` for the box type `List α`. It has the behavior that `anything ¢ mzero == mzero == mzero ¢ anything == mzero >>= anything`. In Haskell, this notion is called `Control.Applicative.empty` or `Control.Monad.mzero`. -* Haskell has a notion `>>` definable as `\u v. map (const id) u m$ v`, or as `\u v. u >>= const v`. This is often useful, and `u >> v` won't in general be identical to just `v`. For example, using the box type `List α`, `[1,2,3] >> [4,5] == [4,5,4,5,4,5]`. But in the special case of `mzero`, it is a consequence of what we said above that `anything >> mzero == mzero`. Haskell also calls `>>` `Control.Applicative.*>`. +* Haskell has a notion `>>` definable as `\u v. map (const id) u ¢ v`, or as `\u v. u >>= const v`. This is often useful, and `u >> v` won't in general be identical to just `v`. For example, using the box type `List α`, `[1,2,3] >> [4,5] == [4,5,4,5,4,5]`. But in the special case of `mzero`, it is a consequence of what we said above that `anything >> mzero == mzero`. Haskell also calls `>>` `Control.Applicative.*>`. -* Haskell has a correlative notion `Control.Applicative.<*`, definable as `\u v. map const u m$ v`. For example, `[1,2,3] <* [4,5] == [1,1,2,2,3,3]`. You might expect Haskell to call `<*` `<<`, but they don't. They used to use `<<` for `flip (>>)` instead, but now they seem not to use `<<` anymore. +* Haskell has a correlative notion `Control.Applicative.<*`, definable as `\u v. 
map const u ¢ v`. For example, `[1,2,3] <* [4,5] == [1,1,2,2,3,3]`. * mapconst is definable as `map ○ const`. For example `mapconst 4 [1,2,3] == [4,4,4]`. Haskell calls `mapconst` `<$` in `Data.Functor` and `Control.Applicative`. They also use `$>` for `flip mapconst`, and `Control.Monad.void` for `mapconst ()`. @@ -295,8 +300,8 @@ then a boxed `α` is ... a `bool`. That is, α == α. In terms of the box analogy, the Identity box type is a completely invisible box. With the following definitions: - mid ≡ \p. p, that is, our familiar combinator I - mcomp ≡ \f g x. f (g x), that is, ordinary function composition (○) (aka the B combinator) + mid (* or ⇧ *) ≡ \p. p, that is, our familiar combinator I + mcomp (* or <=< *) ≡ \f g x. f (g x), that is, ordinary function composition (○) (aka the B combinator) Identity is a monad. Here is a demonstration that the laws hold: @@ -321,6 +326,10 @@ Identity is a monad. Here is a demonstration that the laws hold: The Identity monad is favored by mimes. + + + + To take a slightly less trivial (and even more useful) example, consider the box type `α list`, with the following operations: @@ -346,32 +355,115 @@ For example: `j 7` produced `[49, 14]`, which after being fed through `k` gave us `[49, 50, 14, 15]`. -Contrast that to `m$` (`mapply`), which operates not on two *box-producing functions*, but instead on two *boxed type values*, one containing functions to be applied to the values in the other box, via some predefined scheme. Thus: +Contrast that to `¢` (`mapply`), which operates not on two *box-producing functions*, but instead on two *boxed type values*, one containing functions to be applied to the values in the other box, via some predefined scheme. Thus: let js = [(\a->a*a),(\a->a+a)] in let xs = [7, 5] in mapply js xs ==> [49, 25, 14, 10] +These implementations of `<=<` and `¢` for lists use the "crossing" strategy for pairing up multiple lists, as opposed to the "zipping" strategy. 
Nothing forces that choice; you could also define `¢` using the "zipping" strategy instead. (But then you wouldn't be able to build a corresponding Monad; see below.) Haskell talks of the List Monad in the first case, and the ZipList Applicative in the second case. + +Sticking with the "crossing" strategy, here's how to motivate our implementation of `<=<`. Recall that, on the one hand, we have an operation `filter` for lists, which applies a predicate to each element of the list and returns a list containing only those elements that satisfied the predicate. But the elements it does retain, it retains unaltered. On the other hand, we have the operation `map` for lists, which is capable of _changing_ the list elements in the result. But it doesn't have the power to throw list elements away; elements in the source map one-to-one to elements in the result. In many cases, we want something in between `filter` and `map`. We want to be able to ignore or discard some list elements, and possibly also to change the list elements we keep. One way of doing this is to have a function `optmap`, defined like this: + + let rec optmap (f : α -> β option) (xs : α list) : β list = + match xs with + | [] -> [] + | x' :: xs' -> + (match f x' with + | None -> optmap f xs' + | Some b -> b :: optmap f xs') -The question came up in class of when box types might fail to be Mappable, or Mappables might fail to be MapNables, or MapNables might fail to be Monads. +Then we retain only those `α`s for which `f` returns `Some b`; when `f` returns `None`, we just leave out any corresponding element in the result. + +That can be helpful, but it only enables us to have _zero or one_ elements in the result corresponding to a given element in the source list. What if we sometimes want more? 
Well, here is a more general approach: + + let rec catmap (k : α -> β list) (xs : α list) : β list = + match xs with + | [] -> [] + | x' :: xs' -> List.append (k x') (catmap k xs') + +Now we can have as many elements in the result for a given `α` as `k` cares to return. Another way to write `catmap k xs` is as (Haskell) `concat (map k xs)` or (OCaml) `List.flatten (List.map k xs)`. And this is just the definition of `mbind` or `>>=` for the List Monad. The definition of `mcomp` or `<=<` that we gave above differs only in that it's the way to compose two functions `j` and `k` that you'd want to `catmap`, rather than the way to `catmap` one of those functions over a value that's already a list. + +This example is a good intuitive basis for thinking about the notions of `mbind` and `mcomp` more generally. Thus `mbind` for the option/Maybe type takes an option value, applies `k` to its element (if there is one), and returns the resulting option value. `mbind` for a tree with `α`-labeled leaves would apply `k` to each of the leaves, and return a tree containing arbitrarily large subtrees in place of all its former leaves, depending on what `k` returned. + +
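To make this concrete, here is a runnable OCaml rendering of `optmap` and `catmap`, writing `'a`, `'b` for the α, β above. The helper `dup` is one suitable definition (an assumption; the notes leave it open), chosen so that `dup x n` builds `n` copies of `x`, matching the earlier `[3,2,0,1]` example:

```ocaml
(* A sketch: runnable versions of optmap and catmap, with one suitable dup *)
let rec optmap (f : 'a -> 'b option) (xs : 'a list) : 'b list =
  match xs with
  | [] -> []
  | x' :: xs' ->
      (match f x' with
       | None -> optmap f xs'          (* drop this element *)
       | Some b -> b :: optmap f xs')  (* keep it, transformed *)

let rec catmap (k : 'a -> 'b list) (xs : 'a list) : 'b list =
  match xs with
  | [] -> []
  | x' :: xs' -> List.append (k x') (catmap k xs')

(* a suitable dup: n copies of x (hypothetical; not fixed by the notes) *)
let dup x n = List.init n (fun _ -> x)

let () =
  (* keep only the evens, squaring them *)
  assert (optmap (fun x -> if x mod 2 = 0 then Some (x * x) else None)
            [1; 2; 3; 4]
          = [4; 16]);
  (* the earlier example: each element a contributes a copies of itself *)
  assert (catmap (fun x -> dup x x) [3; 2; 0; 1] = [3; 3; 3; 2; 2; 1]);
  (* catmap k xs agrees with List.flatten (List.map k xs) *)
  assert (catmap (fun x -> dup x x) [3; 2; 0; 1]
          = List.flatten (List.map (fun x -> dup x x) [3; 2; 0; 1]))
```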
+[3, 2, 0, 1]  >>=α list    (\a -> dup a a)  ==>  [3, 3, 3, 2, 2, 1]
+
+      Some a  >>=α option  (\a -> Some 0) ==> Some 0
+      None    >>=α option  (\a -> Some 0) ==> None
+      Some a  >>=α option  (\a -> None  ) ==> None
+
+                                                         .
+                                                        / \
+    .                                                  /   \
+   / \                                  .             .     \
+  .   3       >>=(α,unit) tree  (\a ->  / \  )  ==>   / \     .
+ / \                                  a   a         /   \   / \
+1   2                                              .     . 3   3
+                                                  / \   / \
+                                                 1   1 2   2
+
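The tree case diagrammed above can likewise be replayed as code. The `tree` datatype and `tree_bind` below are a hypothetical rendering of the notes' leaf-labeled trees, not definitions the notes themselves give:

```ocaml
(* A sketch of mbind for leaf-labeled trees (hypothetical rendering) *)
type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

let rec tree_bind (u : 'a tree) (k : 'a -> 'b tree) : 'b tree =
  match u with
  | Leaf a -> k a                                    (* replace each leaf by k a *)
  | Node (l, r) -> Node (tree_bind l k, tree_bind r k)

let () =
  (* the tree diagrammed above, with leaves 1, 2, 3 *)
  let u = Node (Node (Leaf 1, Leaf 2), Leaf 3) in
  (* binding each leaf a to the two-leaf tree (a, a) doubles every leaf *)
  assert (tree_bind u (fun a -> Node (Leaf a, Leaf a))
          = Node (Node (Node (Leaf 1, Leaf 1), Node (Leaf 2, Leaf 2)),
                  Node (Leaf 3, Leaf 3)))
```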
+ + +Though as we warned before, only some of the Monads we'll be working with are naturally thought of as "containers"; so in other cases the similarity of their `mbind` operations to what we have here will be more abstract. + + +The question came up in class of **when box types might fail to be Mappable, or Mappables might fail to be MapNables, or MapNables might fail to be Monads**. For the first failure, we noted that it's easy to define a `map` operation for the box type `R -> α`, for a fixed type `R`. You `map` a function of type `P -> Q` over a value of the boxed type P, that is a value of type `R -> P`, by just returning a function that takes some `R` as input, first supplies it to your `R -> P` value, and then supplies the result to your `map`ped function of type `P -> Q`. (We will be working with this Mappable extensively; in fact it's not just a Mappable but more specifically a Monad.) But if, on the other hand, your box type is `α -> R`, you'll find that there is no way to define a `map` operation that takes arbitrary functions of type `P -> Q` and values of the boxed type P, that is `P -> R`, and returns values of the boxed type Q. -For the second failure, that is cases of Mappables that are not MapNables, we cited box types like `(R, α)`, for arbitrary fixed types `R`. 
The `map` operation for these is defined by `map f (r,a) = (r, f a)`. For certain choices of `R` these can be MapNables too. The easiest case is when `R` is the type of `()`. But when we look at the MapNable Laws, we'll see that they impose constraints we cannot satisfy for *every* choice of the fixed type `R`. Here's why. We'll need to define `⇧a = (r0, a)` for some specific `r0` of type `R`. The choice can't depend on the value of `a`, because `⇧` needs to work for `a`s of _any_ type. Then the MapNable Laws will entail: - 1. (r0,id) m$ (r,x) == (r,x) - 2. (r0,f x) == (r0,f) m$ (r0,x) - 3. (r0,(○)) m$ (r'',f) m$ (r',g) m$ (r,x) == (r'',f) m$ ((r',g) m$ (r,x)) - 4. (r'',f) m$ (r0,x) == (r0,($x)) m$ (r'',f) - 5. (r0,f) m$ (r,x) == (r,($x)) m$ (r0,f) + 1. (r0,id) ¢ (r,x) == (r,x) + 2. (r0,f x) == (r0,f) ¢ (r0,x) + 3. (r0,(○)) ¢ (r'',f) ¢ (r',g) ¢ (r,x) == (r'',f) ¢ ((r',g) ¢ (r,x)) + 4. (r'',f) ¢ (r0,x) == (r0,($x)) ¢ (r'',f) + 5. (r0,f) ¢ (r,x) == (r,($x)) ¢ (r0,f) -Now we are not going to be able to write a `m$` function that inspects the second element of its left-hand operand to check if it's the `id` function; the identity of functions is not decidable. So the only way to satisfy Law 1 will be to have the first element of the result (`r`) be taken from the first element of the right-hand operand in _all_ the cases when the first element of the left-hand operand is `r0`. But then that means that the result of the lhs of Law 5 will also have a first element of `r`; so, turning now to the rhs of Law 5, we see that `m$` must use the first element of its _left_-hand operand (here again `r`) at least in those cases when the first element of its right-hand operand is `r0`. If our `R` type has a natural *monoid* structure, we could just let `r0` be the monoid's identity, and have `m$` combine other `R`s using the monoid's operation. 
Alternatively, if the `R` type is one that we can safely apply the predicate `(r0==)` to, then we could define `m$` something like this: +Now we are not going to be able to write a `¢` function that inspects the second element of its left-hand operand to check if it's the `id` function; the identity of functions is not decidable. So the only way to satisfy Law 1 will be to have the first element of the result (`r`) be taken from the first element of the right-hand operand in _all_ the cases when the first element of the left-hand operand is `r0`. But then that means that the result of the lhs of Law 5 will also have a first element of `r`; so, turning now to the rhs of Law 5, we see that `¢` must use the first element of its _left_-hand operand (here again `r`) at least in those cases when the first element of its right-hand operand is `r0`. If our `R` type has a natural *monoid* structure, we could just let `r0` be the monoid's identity, and have `¢` combine other `R`s using the monoid's operation. Alternatively, if the `R` type is one that we can safely apply the predicate `(r0==)` to, then we could define `¢` something like this: - let (m$) (r1,f) (r2,x) = ((if r0==r1 then r2 else if r0==r2 then r1 else ...), ...) + let (¢) (r1,f) (r2,x) = ((if r0==r1 then r2 else if r0==r2 then r1 else ...), ...) -But for some types neither of these will be the case. For function types, as we already mentioned, `==` is not decidable. If the functions have suitable types, they do form a monoid with `○` as the operation and `id` as the identity; but many function types won't be such that arbitrary functions of that type are composable. So when `R` is the type of functions from `int`s to `bool`s, for example, we won't have any way to write a `m$` that satisfies the constraints stated above. +But for some types neither of these will be the case. For function types, as we already mentioned, `==` is not decidable. 
If the functions have suitable types, they do form a monoid with `○` as the operation and `id` as the identity; but many function types won't be such that arbitrary functions of that type are composable. So when `R` is the type of functions from `int`s to `bool`s, for example, we won't have any way to write a `¢` that satisfies the constraints stated above. For the third failure, that is examples of MapNables that aren't Monads, we'll just state that lists where the `map2` operation is taken to be zipping rather than taking the Cartesian product (what in Haskell are called `ZipList`s) are claimed to exemplify that failure. But we aren't now in a position to demonstrate that to you. + +## Further Reading ## + +As we mentioned above, the notions of Monads have their origin in Category Theory, where they are mostly specified in terms of (what we call) `⇧` and `join`. For advanced study, here are some further links on the relation between monads as we're working with them and monads as they appear in Category Theory: +[1](http://en.wikipedia.org/wiki/Outline_of_category_theory) +[2](http://lambda1.jimpryor.net/advanced_topics/monads_in_category_theory/) +[3](http://en.wikibooks.org/wiki/Haskell/Category_theory) +[4](https://wiki.haskell.org/Category_theory) (where you should follow the further links discussing Functors, Natural Transformations, and Monads) +[5](http://www.stephendiehl.com/posts/monads.html) + + +Here are some papers that introduced Monads into functional programming: + +* Eugenio Moggi, Notions of Computation and Monads: Information and Computation 93 (1) 1991. This paper is available online, but would be very difficult reading for members of this seminar, so we won't link to it. However, the next two papers should be accessible. + +* [Philip Wadler. 
The essence of functional programming](http://homepages.inf.ed.ac.uk/wadler/papers/essence/essence.ps): +invited talk, *19'th Symposium on Principles of Programming Languages*, ACM Press, Albuquerque, January 1992. + + +* [Philip Wadler. Monads for Functional Programming](http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf): +in M. Broy, editor, *Marktoberdorf Summer School on Program Design +Calculi*, Springer Verlag, NATO ASI Series F: Computer and systems +sciences, Volume 118, August 1992. Also in J. Jeuring and E. Meijer, +editors, *Advanced Functional Programming*, Springer Verlag, +LNCS 925, 1995. Some errata fixed August 2001. + + +Here is some other reading: + +* [Yet Another Haskell Tutorial on Monad Laws](http://en.wikibooks.org/wiki/Haskell/YAHT/Monads#Definition) +* [Haskell wikibook on Understanding Monads](http://en.wikibooks.org/wiki/Haskell/Understanding_monads) +* [Haskell wikibook on Advanced Monads](http://en.wikibooks.org/wiki/Haskell/Advanced_monads) +* [Haskell wiki on Monad Laws](http://www.haskell.org/haskellwiki/Monad_laws) + +There's a long list of monad tutorials linked at the [[Haskell wiki|https://wiki.haskell.org/Monad_tutorials_timeline]] (we linked to this at the top of the page), and on our own [[Offsite Reading|/readings]] page. (Skimming the titles is somewhat amusing.) If you are confused by monads, make use of these resources. Read around until you find a tutorial pitched at a level that's helpful for you. diff --git a/topics/_week8_using_monads.mdwn b/topics/week8_safe_division_with_monads.mdwn similarity index 62% rename from topics/_week8_using_monads.mdwn rename to topics/week8_safe_division_with_monads.mdwn index 561ce7cd..933ded83 100644 --- a/topics/_week8_using_monads.mdwn +++ b/topics/week8_safe_division_with_monads.mdwn @@ -1,14 +1,15 @@ -Some applications of monadic machinery... 
+As we discussed in class, there are clear patterns shared between lists and option types and trees, so perhaps you can see why people want to figure out the general structures. But it probably isn't obvious yet why it would be useful to do so. To a large extent, this will only emerge over the next few classes. But we'll begin to demonstrate the usefulness of these patterns by talking through a simple example that uses the monadic functions of the Option/Maybe box type. -## Safe division ## +OCaml's `/` operator expresses integer division, which throws away any remainder, thus: -As we discussed in class, there are clear patterns shared between lists and option types and trees, so perhaps you can see why people want to figure out the general structures. But it probably isn't obvious yet why it would be useful to do so. To a large extent, this will only emerge over the next few classes. But we'll begin to demonstrate the usefulness of these patterns by talking through a simple example, that uses the monadic functions of the Option/Maybe box type. + # 11/3;; + - : int = 3 Integer division presupposes that its second argument (the divisor) is not zero, upon pain of presupposition failure. Here's what my OCaml interpreter says: - # 12/0;; + # 11/0;; Exception: Division_by_zero. Say we want to explicitly allow for the possibility that @@ -25,12 +26,12 @@ So if a division is normal, we return some number, but if the divisor is zero, we return `None`. As a mnemonic aid, we'll prepend a `safe_` to the start of our new divide function.
-let safe_div (x:int) (y:int) =
+let safe_div (x : int) (y : int) =
   match y with
     | 0 -> None
     | _ -> Some (x / y);;
 
-(*
+(* an OCaml session could continue with OCaml's response:
 val safe_div : int -> int -> int option = <fun>
 # safe_div 12 2;;
 - : int option = Some 6
@@ -49,15 +50,16 @@ the output of the safe-division function as input for further division
 operations. So we have to jack up the types of the inputs:
 
 
-let safe_div2 (u:int option) (v:int option) =
+let safe_div2 (u : int option) (v : int option) =
   match u with
   | None -> None
   | Some x ->
       (match v with
+      | None -> None
       | Some 0 -> None
       | Some y -> Some (x / y));;
 
-(*
+(* an OCaml session could continue with OCaml's response:
 val safe_div2 : int option -> int option -> int option = <fun>
 # safe_div2 (Some 12) (Some 2);;
 - : int option = Some 6
@@ -75,7 +77,7 @@ I prefer to line up the `match` alternatives by using OCaml's
 built-in tuple type:
 
 
-let safe_div2 (u:int option) (v:int option) =
+let safe_div2 (u : int option) (v : int option) =
   match (u, v) with
   | (None, _) -> None
   | (_, None) -> None
@@ -95,7 +97,7 @@ let safe_add (u:int option) (v:int option) =
     | (_, None) -> None
     | (Some x, Some y) -> Some (x + y);;
 
-(*
+(* an OCaml session could continue with OCaml's response:
 val safe_add : int option -> int option -> int option = <fun>
 # safe_add (Some 12) (Some 4);;
 - : int option = Some 16
@@ -104,17 +106,23 @@ val safe_add : int option -> int option -> int option = 
 *)
 
+So now, wherever before our operations expected an `int`, we'll instead +have them accept an `int option`. A `None` input signals that +something has gone wrong upstream. + This works, but is somewhat disappointing: the `safe_add` operation doesn't trigger any presupposition of its own, so it is a shame that it needs to be adjusted because someone else might make trouble. -But we can automate the adjustment, using the monadic machinery we introduced above. +But we can automate the adjustment, using the monadic machinery we introduced before. As we said, there needs to be different `>>=`, `map2` and so on operations for each -monad or box type we're working with. +Monad or box type we're working with. Haskell finesses this by "overloading" the single symbol `>>=`; you can just input that symbol and it will calculate from the context of the surrounding type constraints what -monad you must have meant. In OCaml, the monadic operators are not pre-defined, but we will -give you a library that has definitions for all the standard monads, as in Haskell. +Monad you must have meant. In OCaml, the monadic operators are not pre-defined, but we will +give you a library that has definitions for all the standard Monads, as in Haskell. But you +will need to explicitly specify which Monad you mean to be deploying. + For now, though, we will define our `>>=` and `map2` operations by hand:
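The by-hand definitions themselves are elided from this hunk. As a sketch of what such hand-defined Option/Maybe operations standardly look like (the notes' own definitions may differ in detail), with `safe_div3` written in the same shape as its Haskell do-notation rendering:

```ocaml
(* A sketch of hand-defined Option/Maybe monadic operations; the notes' own
   (elided) definitions may differ in detail. *)
let ( >>= ) (u : 'a option) (k : 'a -> 'b option) : 'b option =
  match u with
  | None -> None
  | Some x -> k x

let map2 (g : 'a -> 'b -> 'c) (u : 'a option) (v : 'b option) : 'c option =
  u >>= fun x -> v >>= fun y -> Some (g x y)

(* safe_div3 only has to mention the one presupposition it itself introduces *)
let safe_div3 (u : int option) (v : int option) : int option =
  u >>= fun x ->
  v >>= fun y ->
  if y = 0 then None else Some (x / y)

(* an ordinary operation lifted into the option world *)
let safe_add3 = map2 ( + )

let () =
  assert (safe_div3 (safe_div3 (Some 12) (Some 2)) (Some 3) = Some 2);
  assert (safe_div3 (safe_div3 (Some 12) (Some 0)) (Some 3) = None);
  assert (safe_add3 (Some 12) (Some 4) = Some 16)
```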
@@ -139,10 +147,16 @@ Haskell has an even more user-friendly notation for defining `safe_div3`, namely
                         y <- v;
                         if 0 == y then Nothing else Just (x `div` y)}
 
+You can read more about that here:
+
+*	[Haskell wikibook on do-notation](http://en.wikibooks.org/wiki/Haskell/do_Notation)
+*	[Yet Another Haskell Tutorial on do-notation](http://en.wikibooks.org/wiki/Haskell/YAHT/Monads#Do_Notation)
+
+
 Let's see our new functions in action:
 
 
-(*
+(* an OCaml session could continue with OCaml's response:
 # safe_div3 (safe_div3 (Some 12) (Some 2)) (Some 3);;
 - : int option = Some 2
 #  safe_div3 (safe_div3 (Some 12) (Some 0)) (Some 3);;
@@ -152,11 +166,11 @@ Let's see our new functions in action:
 *)
 
-Compare the new definitions of `safe_add3` and `safe_div3` closely: the definition -for `safe_add3` shows what it looks like to equip an ordinary operation to -survive in dangerous presupposition-filled world. Note that the new +Our definition for `safe_add3` shows what it looks like to equip an ordinary operation to +survive in a dangerous presupposition-filled world. We just need to `mapN` it "into" the +Maybe monad, where `N` is the function's adicity. Note that the new definition of `safe_add3` does not need to test whether its arguments are -`None` values or real numbers---those details are hidden inside of the +`None` values or genuine numbers---those details are hidden inside of the `bind` function. Note also that our definition of `safe_div3` recovers some of the simplicity of @@ -165,7 +179,17 @@ add exactly what extra is needed to track the no-division-by-zero presupposition need to keep track of what other presuppositions may have already failed for whatever reason on our inputs. -(Linguistics note: Dividing by zero is supposed to feel like a kind of +So what the monadic machinery gives us here is a way to _separate_ thinking +about error conditions (such as trying to divide by zero) from thinking about normal +arithmetic computations. When we're adding or multiplying, we don't have to worry about generating +any new errors, so we would rather not force these operations to explicitly +track the difference between `int`s and `int option`s. A linguistics analogy we'll +look at soon is that when we're giving the lexical entry for an ordinary +extensional verb, we'd rather not be forced to talk about possible worlds. In each case, +we instead just have a standard way of "lifting" (`mapN`ing) the relevant notions into +the fancier type environment we ultimately want to work in. + +Dividing by zero is supposed to feel like a kind of +presupposition failure. 
If we wanted to adapt this approach to building a simple account of presupposition projection, we would have to do several things. First, we would have to make use of the @@ -178,5 +202,36 @@ theory of accommodation, and a theory of the situations in which material within the sentence can satisfy presuppositions for other material that otherwise would trigger a presupposition violation; but, not surprisingly, these refinements will require some more -sophisticated techniques than the super-simple Option/Maybe monad.) - +sophisticated techniques than the super-simple Option/Maybe Monad. + +To illustrate some of the polymorphism, here's how we could `map1` the `is_even` function: + + # let is_even x = (x mod 2 = 0);; + val is_even : int -> bool = <fun> + # let map (g : 'a -> 'b) (u : 'a option) = u >>= fun x -> Some (g x);; + val map : ('a -> 'b) -> 'a option -> 'b option = <fun> + # map (is_even);; + - : int option -> bool option = <fun> + +Wherever we have a well-defined monad, we can define the `mapN` operations for it in terms +of its `>>=` and `⇧`/`mid`. The general pattern is: + + mapN (g : 'a1 -> ... 'an -> 'result) (u1 : 'a1 option) ... (un : 'an option) : 'result option = + u1 >>= (fun x1 -> ... un >>= (fun xn -> ⇧(g x1 ... xn)) ...) + +Our above definitions of `map` and `map2` were of this form, except we just +explicitly supplied the definition of `⇧` for the Option/Maybe monad (namely, in OCamlese, the constructor `Some`). +If you substitute in the definition of `>>=`, you can see these are equivalent to: + + map (g : 'a -> 'b) (u : 'a option) = + match u with + | None -> None + | Some x -> Some (g x) + + map2 (g : 'a -> 'b -> 'c) (u : 'a option) (v : 'b option) = + match u with + | None -> None + | Some x -> + (match v with + | None -> None + | Some y -> Some (g x y));;
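As a closing check, the bind-based definitions and the expanded `match` versions agree. Here is a runnable sketch (the operator definitions repeat the standard Option patterns above):

```ocaml
(* A sketch verifying that the bind-based map/map2 agree with the
   expanded match version. *)
let ( >>= ) (u : 'a option) (k : 'a -> 'b option) : 'b option =
  match u with
  | None -> None
  | Some x -> k x

let map (g : 'a -> 'b) (u : 'a option) : 'b option =
  u >>= fun x -> Some (g x)

let map2 (g : 'a -> 'b -> 'c) (u : 'a option) (v : 'b option) : 'c option =
  u >>= fun x -> v >>= fun y -> Some (g x y)

(* the expanded version of map, written out with match *)
let map_expanded (g : 'a -> 'b) (u : 'a option) : 'b option =
  match u with
  | None -> None
  | Some x -> Some (g x)

let is_even x = x mod 2 = 0

let () =
  assert (map is_even (Some 4) = Some true);
  assert (map is_even None = map_expanded is_even None);
  assert (map is_even (Some 7) = map_expanded is_even (Some 7));
  assert (map2 ( + ) (Some 12) (Some 4) = Some 16);
  assert (map2 ( + ) None (Some 4) = None)
```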