Introducing Monads
==================

The [[tradition in the functional programming literature|https://wiki.haskell.org/Monad_tutorials_timeline]] is to introduce monads using a metaphor: monads are spacesuits, monads are monsters, monads are burritos. These metaphors can be helpful, and they can be unhelpful. There has been a backlash against the metaphors, telling people to instead just look at the formal definition. We'll give that to you below, but it's sometimes sloganized as [A monad is just a monoid in the category of endofunctors, what's the problem?](http://stackoverflow.com/questions/3870088). Without some intuitive guidance, this can also be unhelpful. We'll try to find a good balance.

The closest we will come to metaphorical talk is to suggest that monadic types place values inside of *boxes*, and that monads wrap and unwrap boxes to expose or enclose the values inside of them. In any case, our emphasis will be on starting with the abstract structure of monads, followed by instances of monads from the philosophical and linguistics literature.

> After you've read this once and are coming back to re-read it to try to digest the details further, the "endofunctors" that slogan is talking about are a combination of our boxes and their associated maps. Their "monoidal" character is captured in the Monad Laws, where a "monoid"---don't confuse it with a mon*ad*---is a simpler algebraic notion, meaning a universe with some associative operation that has an identity.

> For advanced study, here are some further links on the relation between monads as we're working with them and monads as they appear in category theory: [1](http://en.wikipedia.org/wiki/Outline_of_category_theory) [2](http://lambda1.jimpryor.net/advanced_topics/monads_in_category_theory/) [3](http://en.wikibooks.org/wiki/Haskell/Category_theory) [4](https://wiki.haskell.org/Category_theory), where you should follow the further links discussing Functors, Natural Transformations, and Monads.

## Box types: type expressions with one free type variable ##

Recall that we've been using lower-case Greek letters α, β, γ, ... as type variables. We'll use `P`, `Q`, `R`, and `S` as schematic metavariables over type expressions, which may or may not contain unbound type variables. For instance, we might have

    P_1 ≡ int
    P_2 ≡ α -> α
    P_3 ≡ ∀α. α -> α
    P_4 ≡ ∀α. α -> β

and so on.

A *box type* will be a type expression that contains exactly one free type variable. (You could extend this to expressions with more free variables; then you'd have to specify which one of them the box is capturing. But let's keep it simple.) Some examples (using OCaml's type conventions):

    α option
    α list
    (α, R) tree    (assuming R contains no free type variables)
    (α, α) tree

The idea is that whatever type the free type variable `α` might be instantiated to, we will have a "type box" of a certain sort that "contains" values of type `α`. For instance, if `α list` is our box type, and `α` is the type `int`, then in this context, `int list` is the type of a boxed integer.

Warning: although our initial motivating examples are readily thought of as "containers" (lists, trees, and so on, with `α`s as their "elements"), with later examples we discuss it will be less natural to describe the boxed types that way. For example, where `R` is some fixed type, `R -> α` is a box type.

Also, for clarity: the *box type* is the type `α list` (or as we might just say, the `list` type operator); the *boxed type* is some specific instantiation of the free type variable `α`.
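For concreteness, here is a minimal OCaml sketch of the distinction; the type names `wrapped` and `reader` are our own illustrations, not anything from a library:

    (* Each of these type definitions has exactly one free type variable 'a,
       so each is a box type in the above sense. *)
    type 'a wrapped = Wrap of 'a                  (* a trivial one-element container *)
    type 'a reader = int -> 'a                    (* the "R -> α" box type, with R fixed to int *)

    (* Instantiating the free type variable gives a boxed type: *)
    let boxed_int : int option = Some 3           (* an int, boxed by the option box type *)
    let boxed_bool : bool wrapped = Wrap true     (* a bool, boxed by the wrapped box type *)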
We'll often write boxed types as a box containing what the free type variable instantiates to; since we can't draw literal boxes in this text, we'll mark a boxed type with a prefixed `▢`. So if our box type is `α list`, and `α` instantiates to the specific type `int`, we would write:

    ▢int

for the type of a boxed `int`.

## Kleisli arrows ##

A lot of what we'll be doing concerns types that are called *Kleisli arrows*. Don't worry about why they're called that, or if you like go read some Category Theory. All we need to know is that these are functions whose type has the form:

    P -> ▢Q

That is, they are functions from values of one type `P` to a boxed type `▢Q`, for some choice of type expressions `P` and `Q`. For instance, the following are Kleisli arrow types:

    int -> ▢bool
    ▢int -> ▢int

In the first, `P` has become `int` and `Q` has become `bool` (so the boxed type `▢Q` is `▢bool`). Note that the left-hand schema `P` is permitted to itself be a boxed type, as in the second example. That is, if `α list` is our box type, we can write the second type out as:

    int list -> int list

Here are some examples of values of these Kleisli arrow types, where the box type is `α list`, and the Kleisli arrow types are `int -> ▢int` (that is, `int -> int list`) or `int -> ▢bool`:
    \x. [x]
    \x. [odd? x, odd? x]
    \x. prime_factors_of x
    \x. [0, 0, 0]
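Here is a rough OCaml rendering of some of those arrows for the list box type (the function names are our own; we omit `prime_factors_of`, which isn't a built-in):

    (* Each of these has a Kleisli arrow type for the list box type:
       it maps an int to a boxed (here: listed) result. *)
    let wrap      : int -> int list  = fun x -> [x]
    let odd_twice : int -> bool list = fun x -> [x mod 2 <> 0; x mod 2 <> 0]
    let zeros     : int -> int list  = fun _ -> [0; 0; 0]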
As semanticists, you are no doubt familiar with the debates between those who insist that propositions are sets of worlds and those who insist they are context change potentials. We hope to show you, in coming weeks, that propositions are (certain sorts of) Kleisli arrows. But this doesn't really compete with the other proposals; it is a generalization of them. Both of the other proposed structures can be construed as specific Kleisli arrow types.

## A family of functions for each box type ##

We'll need a family of functions to help us work with box types. As will become clear, these have to be defined differently for each box type. Here are the types of our crucial functions, together with our pronunciation, and some other names the functions go by. (Usually the type doesn't fix their behavior, which will be discussed below.)

    map (/mæp/) : (P -> Q) -> ▢P -> ▢Q

> In Haskell, this is the function `fmap` from the `Prelude` and `Data.Functor`; also called `<$>` in `Data.Functor` and `Control.Applicative`, and also called `Control.Applicative.liftA` and `Control.Monad.liftM`.

    map2 (/mæptu/) : (P -> Q -> R) -> ▢P -> ▢Q -> ▢R

> In Haskell, this is called `Control.Applicative.liftA2` and `Control.Monad.liftM2`.

    mid (/ɛmaɪdɛntɪti/) : P -> ▢P

> In Haskell, this is called `Control.Monad.return` and `Control.Applicative.pure`. In other theoretical contexts it is sometimes called `unit` or `η`. In the class presentation Jim called it `𝟭`. This notion is exemplified by `Just` for the box type `Maybe α` and by the singleton function for the box type `List α`.

    m$ or mapply (/ɛməplaɪ/) : ▢(P -> Q) -> ▢P -> ▢Q

> In Haskell, this is called `Control.Monad.ap` or `Control.Applicative.<*>`. In the class presentation Jim called it `●`.

    <=< or mcomp : (Q -> ▢R) -> (P -> ▢Q) -> (P -> ▢R)

> In Haskell, this is `Control.Monad.<=<`.

    >=> (flip mcomp, should we call it mpmoc?) : (P -> ▢Q) -> (Q -> ▢R) -> (P -> ▢R)

> In Haskell, this is `Control.Monad.>=>`.

    >>= or mbind : ▢Q -> (Q -> ▢R) -> ▢R

    =<< (flip mbind, should we call it mdnib?) : (Q -> ▢R) -> ▢Q -> ▢R

    join : ▢▢P -> ▢P

> In Haskell, this is `Control.Monad.join`. In other theoretical contexts it is sometimes called `μ`.

In the class handout, we gave the types for `>=>` twice, and one of them was correct but the other was a typo. The above is the correct typing.

Haskell's name "bind" for `>>=` is not well chosen from our perspective, but it is too deeply entrenched by now. We've at least prepended an `m` to the front of it. Haskell's names "return" and "pure" for `mid` are even less well chosen, and we think it will be clearer in our discussion to use a different name. (Also, in other theoretical contexts this notion goes by other names anyway, like `unit` or `η` --- having nothing to do with `η`-reduction in the Lambda Calculus.) In the handout we called `mid` `𝟭`. But now we've decided that `mid` is better. (Think of it as "m" plus "identity", not as the start of "midway".)

The menagerie isn't quite as bewildering as you might suppose. Many of these will be interdefinable. For example, here is how `mcomp` and `mbind` are related:

    k <=< j ≡ \a. (j a >>= k)

We will move freely back and forth between using `>=>` and using `<=<` (aka `mcomp`), which is just `>=>` with its arguments flipped. `<=<` has the virtue that it corresponds more closely to the ordinary mathematical symbol `○`. But `>=>` has the virtue that its types flow more naturally from left to right.
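To make those types concrete, here is one way they look in OCaml when the box type is `α option`. This module signature is just our own sketch, not a standard library interface:

    module type OPTION_MONAD_SKETCH = sig
      val map    : ('a -> 'b) -> 'a option -> 'b option
      val map2   : ('a -> 'b -> 'c) -> 'a option -> 'b option -> 'c option
      val mid    : 'a -> 'a option
      val mapply : ('a -> 'b) option -> 'a option -> 'b option
      val mcomp  : ('b -> 'c option) -> ('a -> 'b option) -> 'a -> 'c option
      val mbind  : 'a option -> ('a -> 'b option) -> 'b option
      val join   : 'a option option -> 'a option
    end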
These functions come together in several systems, and have to be defined in a way that coheres with the other functions in the system:

*   ***Mappable*** (in Haskelese, "Functors") At the most general level, box types are *Mappable* if there is a `map` function defined for that box type with the type given above. This has to obey the following Map Laws:

        map (id : α -> α) == (id : ▢α -> ▢α)
        map (g ○ f) == (map g) ○ (map f)

    Essentially these say that `map` is a homomorphism from the algebra of `(universe α -> β, operation ○, element id)` to that of `(▢α -> ▢β, ○', id')`, where `○'` and `id'` are `○` and `id` restricted to boxed types. That might be hard to digest because it's so abstract. Think of the following concrete example: if you take an `α list` (that's our `▢α`), and apply `id` to each of its elements, that's the same as applying `id` to the list itself. That's the first law. And if you apply the composition of functions `g ○ f` to each of the list's elements, that's the same as first applying `f` to each of the elements, and then going through the elements of the resulting list and applying `g` to each of those elements. That's the second law. These laws obviously hold for our familiar notion of `map` in relation to lists.

    > As mentioned at the top of the page, in Category Theory presentations of monads they usually talk about "endofunctors", which are mappings from a Category to itself. In the uses they make of this notion, the endofunctors combine the role of a box type `▢` and of the `map` that goes together with it.

*   ***MapNable*** (in Haskelese, "Applicatives") A Mappable box type is *MapNable* if there are in addition `map2`, `mid`, and `mapply`. (Given either of `map2` and `mapply`, you can define the other, and also `map`. Moreover, with `map2` in hand, `map3`, `map4`, ... `mapN` are easily definable.) These have to obey the following MapN Laws (these are just the standard Applicative laws, stated in our notation):

        mid (id : P -> P) m$ xs == xs
        mid f m$ mid a == mid (f a)
        mid (○) m$ fs m$ gs m$ xs == fs m$ (gs m$ xs)
        fs m$ mid a == mid (\f. f a) m$ fs

*   ***Monad*** (or "Composables") A MapNable box type is a *Monad* if there is in addition an associative `mcomp` having `mid` as its left and right identity. That is, the following Monad Laws must hold:

        mcomp (mcomp j k) l (that is, (j <=< k) <=< l) == mcomp j (mcomp k l)
        mcomp mid k (that is, mid <=< k) == k
        mcomp k mid (that is, k <=< mid) == k

    You could just as well express the Monad Laws using `>=>`:

        l >=> (k >=> j) == (l >=> k) >=> j
        k >=> mid == k
        mid >=> k == k

    If you have any of `mcomp`, `mpmoc`, `mbind`, or `join`, you can use them to define the others. Also, with these functions you can define `m$` and `map2` from *MapNables*. So with Monads, all you really need to get the whole system of functions are a definition of `mid`, on the one hand, and one of `mcomp`, `mbind`, or `join`, on the other.

    In practice, you will often work with `>>=`. In the Haskell manuals, they express the Monad Laws using `>>=` instead of the composition operators. This looks similar, but doesn't have the same symmetry:

        u >>= (\a -> k a >>= j) == (u >>= k) >>= j
        u >>= mid == u
        mid a >>= k == k a

    Also, Haskell calls `mid` `return` or `pure`, but we've stuck to our terminology in this context. (A concrete check of these laws, for the list box type, is sketched below, after the following digression.)

> In Category Theory discussion, the Monad Laws are instead expressed in terms of `join` (which they call `μ`) and `mid` (which they call `η`). These are assumed to be "natural transformations" for their box type, which means that they satisfy these equations with that box type's `map`:
>
>     map f ○ mid == mid ○ f
>     map f ○ join == join ○ map (map f)
>
> The Monad Laws then take the form:
>
>     join ○ (map join) == join ○ join
>     join ○ mid == id == join ○ map mid
> The first of these says that if you have a triply-boxed type, and you first merge the inner two boxes (with `map join`), and then merge the resulting box with the outermost box, that's the same as if you had first merged the outer two boxes, and then merged the resulting box with the innermost box. The second law says that if you take a box type and wrap a second box around it (with `mid`) and then merge them, that's the same as if you had instead mapped a second box around the elements of the original (with `map mid`, leaving the original box on the outside), and then merged them.

> The Category Theorist would state these Laws like this, where `M` is the endofunctor that takes us from type `α` to type `▢α`:
>
>     μ ○ M(μ) == μ ○ μ
>     μ ○ η == id == μ ○ M(η)
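Here is a quick sketch (our own illustration, in OCaml, using lists as the box type) that checks the `join`/`mid` formulation of these laws on specific values:

    (* mid, join, and map for the list box type *)
    let mid a = [a]
    let join xs = List.concat xs
    let map f xs = List.map f xs

    let () =
      (* the associativity law, checked on a triply-boxed value *)
      let w = [[[1; 2]; [3]]; [[4]]] in
      assert (join (map join w) = join (join w));
      (* the identity laws, checked on a singly-boxed value *)
      let u = [10; 20] in
      assert (join (mid u) = u);
      assert (join (map mid u) = u)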
## Interdefinitions and Subsidiary notions ##

We said above that various of these box type operations can be defined in terms of others. Here is a list of various ways in which they're related. We try to stick to the following consistent typing conventions:
    f : α -> β;  g and h have types of the same format (note that α and β are permitted to be, but needn't be, boxed types)
    j : α -> ▢β;  k and l have types of the same format
    u : ▢α;  v and xs and ys have types of the same format
    w : ▢▢α
But we may sometimes slip. Here are some ways the different notions are related (a couple of them are worked out as runnable OCaml after the list):
    j >=> k == \a. (j a >>= k)
    u >>= k == (id >=> k) u;  or ((\(). u) >=> k) ()
    u >>= k == join (map k u)
    join w == w >>= id
    map2 f xs ys == xs >>= (\x. ys >>= (\y. mid (f x y)))
    map2 f xs ys == (map f xs) m$ ys, using m$ as an infix operator
    fs m$ xs == fs >>= (\f. map f xs)
    m$ == map2 id
    map f xs == mid f m$ xs
    map f u == u >>= mid ○ f
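As a sanity check, here is a small OCaml sketch (our own, with the list box type) of two of these interdefinitions, verified on concrete inputs:

    let mid a = [a]
    let (>>=) xs k = List.concat (List.map k xs)

    (* map2 defined from >>= and mid, as in the list above *)
    let map2 f xs ys = xs >>= (fun x -> ys >>= (fun y -> mid (f x y)))

    (* m$ (spelled mapply here) defined from >>= and map *)
    let mapply fs xs = fs >>= (fun f -> List.map f xs)

    let () =
      assert (map2 (+) [1; 2] [10; 20] = [11; 21; 12; 22]);
      (* m$ == map2 id, checked on an example *)
      assert (mapply [(fun x -> x * x); (fun x -> x + x)] [7; 5]
              = map2 (fun f -> f) [(fun x -> x * x); (fun x -> x + x)] [7; 5])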
Here are some other monadic notions that you may sometimes encounter:

*   `mzero` is a value of type `▢α` that is exemplified by `Nothing` for the box type `Maybe α` and by `[]` for the box type `List α`. It has the behavior that `anything m$ mzero == mzero == mzero m$ anything == mzero >>= anything`. In Haskell, this notion is called `Control.Applicative.empty` or `Control.Monad.mzero`.

*   Haskell has a notion `>>` definable as `\u v. mid (const id) m$ u m$ v`. It works like this: `u >> v == u >>= const v`. This is often useful, and won't in general be identical to just `v`. For example, using the box type `List α`, `[1,2,3] >> [4,5] == [4,5,4,5,4,5]`. But in the special case of `mzero`, it is a consequence of what we said above that `anything >> mzero == mzero`. Haskell also calls `>>` `Control.Applicative.*>`.

*   Haskell has a correlative notion `Control.Applicative.<*`, definable as `\u v. mid const m$ u m$ v`. For example, `[1,2,3] <* [4,5] == [1,1,2,2,3,3]`. You might expect Haskell to call `<*` `<<`, but they don't. They used to use `<<` for `flip (>>)` instead, but now they seem not to use it anymore. Maybe in the future they'll call `<*` `<<`.

*   `mapconst` is definable as `map ○ const`. For example, `mapconst 4 [1,2,3] == [4,4,4]`. Haskell calls `mapconst` `<$` in `Data.Functor` and `Control.Applicative`. They also use `$>` for `flip mapconst`, and `Control.Monad.void` for `mapconst ()`.

## Examples ##

To take a trivial (but, as we will see, still useful) example, consider the Identity box type: `α`. So if `α` is type `bool`, then a boxed `α` is ... a `bool`. That is, `▢α == α`. In terms of the box analogy, the Identity box type is a completely invisible box. With the following definitions:

    mid ≡ \p. p
    mcomp ≡ \f g x. f (g x)

Identity is a monad. Here is a demonstration that the laws hold:

    mcomp mid k ≡ (\fgx.f(gx)) (\p.p) k ~~> \x.(\p.p)(kx) ~~> \x.kx ~~> k
    mcomp k mid ≡ (\fgx.f(gx)) k (\p.p) ~~> \x.k((\p.p)x) ~~> \x.kx ~~> k
    mcomp (mcomp j k) l ≡ mcomp ((\fgx.f(gx)) j k) l ~~> mcomp (\x.j(kx)) l ≡ (\fgx.f(gx)) (\x.j(kx)) l ~~> \x.(\x.j(kx))(lx) ~~> \x.j(k(lx))
    mcomp j (mcomp k l) ≡ mcomp j ((\fgx.f(gx)) k l) ~~> mcomp j (\x.k(lx)) ≡ (\fgx.f(gx)) j (\x.k(lx)) ~~> \x.j((\x.k(lx)) x) ~~> \x.j(k(lx))

The Identity monad is favored by mimes.

To take a slightly less trivial (and even more useful) example, consider the box type `α list`, with the following operations:

    mid : α -> [α]
    mid a = [a]

    mcomp : (β -> [γ]) -> (α -> [β]) -> (α -> [γ])
    mcomp k j a = concat (map k (j a))
                = List.flatten (List.map k (j a))

                = foldr (\b ks -> (k b) ++ ks) [] (j a)
                = List.fold_right (fun b ks -> List.append (k b) ks) [] (j a)

                = [c | b <- j a, c <- k b]

In the first two definitions of `mcomp`, we give the definition first in Haskell and then in the equivalent OCaml. The three different definitions of `mcomp` (one per pair of lines, plus the final list comprehension) are all equivalent, and it is easy to show that they obey the Monad Laws. (You will do this in the homework.)

In words, `mcomp k j a` feeds the `a` (which has type `α`) to `j`, which returns a list of `β`s; each `β` in that list is fed to `k`, which returns a list of `γ`s. The final result is the concatenation of those lists of `γ`s. For example:

    let j a = [a*a, a+a] in
    let k b = [b, b+1] in
    mcomp k j 7 ==> [49, 50, 14, 15]

`j 7` produced `[49, 14]`, which after being fed through `k` gave us `[49, 50, 14, 15]`.
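Here is that same example as a runnable OCaml sketch, using the `concat`-and-`map` definition of `mcomp`:

    let mcomp k j a = List.concat (List.map k (j a))
    let j a = [a * a; a + a]
    let k b = [b; b + 1]
    let () = assert (mcomp k j 7 = [49; 50; 14; 15])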
Contrast that to `m$` (`mapply`), which operates not on two *box-producing functions*, but instead on two *values of a boxed type*, one containing functions to be applied to the values in the other box, via some predefined scheme. Thus:

    let js = [(\a->a*a),(\a->a+a)] in
    let xs = [7, 5] in
    mapply js xs ==> [49, 25, 14, 10]

As we illustrated in class, there are clear patterns shared between lists and option types and trees, so perhaps you can see why people want to figure out the general structures. But it probably isn't obvious yet why it would be useful to do so. To a large extent, this will only emerge over the next few classes. But we'll begin to demonstrate the usefulness of these patterns by talking through a simple example that uses the monadic functions of the Option/Maybe box type.

## Safe division ##

Integer division presupposes that its second argument (the divisor) is not zero, upon pain of presupposition failure. Here's what my OCaml interpreter says:

    # 12/0;;
    Exception: Division_by_zero.

Say we want to explicitly allow for the possibility that division will return something other than a number. To do that, we'll use OCaml's `option` type, which works like this:

    # type 'a option = None | Some of 'a;;
    # None;;
    - : 'a option = None
    # Some 3;;
    - : int option = Some 3

So if a division is normal, we return some number, but if the divisor is zero, we return `None`. As a mnemonic aid, we'll prepend `safe_` to the name of our new divide function.
    let safe_div (x:int) (y:int) =
      match y with
        | 0 -> None
        | _ -> Some (x / y);;

    (*
    val safe_div : int -> int -> int option = <fun>
    # safe_div 12 2;;
    - : int option = Some 6
    # safe_div 12 0;;
    - : int option = None
    # safe_div (safe_div 12 2) 3;;
                ~~~~~~~~~~~~~
    Error: This expression has type int option
           but an expression was expected of type int
    *)
This starts off well: dividing `12` by `2`, no problem; dividing `12` by `0`, just the behavior we were hoping for. But we want to be able to use the output of the safe-division function as input for further division operations. So we have to jack up the types of the inputs:
    let safe_div2 (u:int option) (v:int option) =
      match u with
      | None -> None
      | Some x ->
          (match v with
          | None -> None
          | Some 0 -> None
          | Some y -> Some (x / y));;

    (*
    val safe_div2 : int option -> int option -> int option = <fun>
    # safe_div2 (Some 12) (Some 2);;
    - : int option = Some 6
    # safe_div2 (Some 12) (Some 0);;
    - : int option = None
    # safe_div2 (safe_div2 (Some 12) (Some 0)) (Some 3);;
    - : int option = None
    *)
Calling the function now involves some extra verbosity, but it gives us what we need: now we can try to divide by anything we want, without fear that we're going to trigger system errors. I prefer to line up the `match` alternatives by using OCaml's built-in tuple type:
    let safe_div2 (u:int option) (v:int option) =
      match (u, v) with
      | (None, _) -> None
      | (_, None) -> None
      | (_, Some 0) -> None
      | (Some x, Some y) -> Some (x / y);;
So far so good. But what if we want to combine division with other arithmetic operations? We need to make those other operations aware of the possibility that one of their arguments has already triggered a presupposition failure:
    let safe_add (u:int option) (v:int option) =
      match (u, v) with
        | (None, _) -> None
        | (_, None) -> None
        | (Some x, Some y) -> Some (x + y);;

    (*
    val safe_add : int option -> int option -> int option = <fun>
    # safe_add (Some 12) (Some 4);;
    - : int option = Some 16
    # safe_add (safe_div2 (Some 12) (Some 0)) (Some 4);;
    - : int option = None
    *)
This works, but is somewhat disappointing: the `safe_add` operation doesn't trigger any presupposition of its own, so it is a shame that it needs to be adjusted because someone else might make trouble. But we can automate the adjustment, using the monadic machinery we introduced above. As we said, there need to be different `>>=`, `map2`, and so on operations for each monad or box type we're working with. Haskell finesses this by "overloading" the single symbol `>>=`; you can just input that symbol, and Haskell will calculate from the context of the surrounding type constraints what monad you must have meant. In OCaml, the monadic operators are not pre-defined, but we will give you a library that has definitions for all the standard monads, as in Haskell. For now, though, we will define our `>>=` and `map2` operations by hand:
    let (>>=) (u : 'a option) (j : 'a -> 'b option) : 'b option =
      match u with
        | None -> None
        | Some x -> j x;;

    let map2 (f : 'a -> 'b -> 'c) (u : 'a option) (v : 'b option) : 'c option =
      u >>= (fun x -> v >>= (fun y -> Some (f x y)));;

    let safe_add3 = map2 (+);;    (* that was easy *)

    let safe_div3 (u: int option) (v: int option) =
      u >>= (fun x -> v >>= (fun y -> if 0 = y then None else Some (x / y)));;
Haskell has an even more user-friendly notation for defining `safe_div3`, namely:

    safe_div3 :: Maybe Int -> Maybe Int -> Maybe Int
    safe_div3 u v = do {x <- u; y <- v; if 0 == y then Nothing else Just (x `div` y)}

Let's see our new functions in action:
    (*
    # safe_div3 (safe_div3 (Some 12) (Some 2)) (Some 3);;
    - : int option = Some 2
    # safe_div3 (safe_div3 (Some 12) (Some 0)) (Some 3);;
    - : int option = None
    # safe_add3 (safe_div3 (Some 12) (Some 0)) (Some 3);;
    - : int option = None
    *)
Compare the new definitions of `safe_add3` and `safe_div3` closely: the definition for `safe_add3` shows what it looks like to equip an ordinary operation to survive in a dangerous, presupposition-filled world. Note that the new definition of `safe_add3` does not need to test whether its arguments are `None` values or real numbers---those details are hidden inside of the `mbind` (`>>=`) function.

Note also that our definition of `safe_div3` recovers some of the simplicity of the original `safe_div`, without the complexity introduced by `safe_div2`. We now add exactly the extra apparatus needed to track the no-division-by-zero presupposition. Here, too, we don't need to keep track of what other presuppositions may have already failed, for whatever reason, on our inputs.

(Linguistics note: Dividing by zero is supposed to feel like a kind of presupposition failure. If we wanted to adapt this approach to building a simple account of presupposition projection, we would have to do several things. First, we would have to make use of the polymorphism of the `option` type. In the arithmetic example, we only made use of `int option`s, but when we're composing natural language expression meanings, we'll need to use types like `N option`, `Det option`, `VP option`, and so on. But that works automatically, because we can use any type for the `'a` in `'a option`. Ultimately, we'd want to have a theory of accommodation, and a theory of the situations in which material within the sentence can satisfy presuppositions for other material that otherwise would trigger a presupposition violation; but, not surprisingly, these refinements will require some more sophisticated techniques than the super-simple Option/Maybe monad.)
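To illustrate just the "works automatically" point, here is a toy OCaml sketch of our own; the types `noun` and `vp` and the `combine` operation are hypothetical stand-ins, and `map2` is equivalent to the option `map2` defined above:

    type noun = Noun of string
    type vp = VP of string

    (* a stand-in for composing two meanings *)
    let combine (Noun n) (VP v) = n ^ " " ^ v

    (* equivalent to the option map2 defined above, repeated for self-containedness *)
    let map2 f u v =
      match u, v with
      | Some x, Some y -> Some (f x y)
      | _ -> None

    let () =
      assert (map2 combine (Some (Noun "king")) (Some (VP "left")) = Some "king left");
      (* a presupposition failure in either argument projects to the whole *)
      assert (map2 combine None (Some (VP "left")) = None)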