Introducing Monads
==================

The [[tradition in the functional programming literature|https://wiki.haskell.org/Monad_tutorials_timeline]] is to introduce monads using a metaphor: monads are spacesuits, monads are monsters, monads are burritos. These metaphors can be helpful, and they can be unhelpful. There's been a backlash against the metaphors that tells people to instead just look at the formal definition. We'll give that to you below, but it's sometimes sloganized as [A monad is just a monoid in the category of endofunctors, what's the problem?](http://stackoverflow.com/questions/3870088). Without some intuitive guidance, this can also be unhelpful. We'll try to find a good balance.

The closest we will come to metaphorical talk is to suggest that monadic types place values inside of *boxes*, and that monads wrap and unwrap boxes to expose or enclose the values inside of them. In any case, our emphasis will be on starting with the abstract structure of monads, followed in coming weeks by instances of monads from the philosophical and linguistics literature.

> After you've read this once and are coming back to re-read it to try to digest the details further, the "endofunctors" that slogan is talking about are a combination of our boxes and their associated maps. Their "monoidal" character is captured in the Monad Laws, where a "monoid"---don't confuse with a mon*ad*---is a simpler algebraic notion, meaning a universe with some associative operation that has an identity.

For advanced study, here are some further links on the relation between monads as we're working with them and monads as they appear in Category Theory:
[1](http://en.wikipedia.org/wiki/Outline_of_category_theory)
[2](http://lambda1.jimpryor.net/advanced_topics/monads_in_category_theory/)
[3](http://en.wikibooks.org/wiki/Haskell/Category_theory)
[4](https://wiki.haskell.org/Category_theory), where you should follow the further links discussing Functors, Natural Transformations, and Monads.

## Box types: type expressions with one free type variable ##

Recall that we've been using lower-case Greek letters α, β, γ, ... as type variables. We'll use `P`, `Q`, `R`, and `S` as schematic metavariables over type expressions, that may or may not contain unbound type variables. For instance, we might have

    P_1 ≡ int
    P_2 ≡ α -> α
    P_3 ≡ ∀α. α -> α
    P_4 ≡ ∀α. α -> β

and so on.

A *box type* will be a type expression that contains exactly one free type variable. (You could extend this to expressions with more free variables; then you'd have to specify which one of them the box is capturing. But let's keep it simple.) Some examples (using OCaml's type conventions):

    α option
    α list
    (α, R) tree     (assuming R contains no free type variables)
    (α, α) tree

The idea is that whatever type the free type variable `α` might be instantiated to, we will have a "type box" of a certain sort that "contains" values of type `α`. For instance, if `α list` is our box type, and `α` instantiates to the type `int`, then in this context, `int list` is the type of a boxed integer.

Warning: although our initial motivating examples are readily thought of as "containers" (lists, trees, and so on, with `α`s as their "elements"), with later examples we discuss it will be less natural to describe the boxed types that way. For example, where `R` is some fixed type, `R -> α` will be one box type we work extensively with.
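
In Haskell terms, a box type corresponds to a type constructor that takes one type parameter. Here is a minimal sketch (ours, not part of the original discussion) of the examples above; the names `Box1` through `Box4`, and the choice of `Bool` for the fixed type `R`, are just illustrative:

    -- A sketch of box types as Haskell type constructors with one type parameter.
    type Box1 a = Maybe a       -- like OCaml's α option
    type Box2 a = [a]           -- like α list
    type Box3 a = Bool -> a     -- the R -> α box, with R here fixed to Bool
    type Box4 a = (a, a)        -- in the spirit of (α, α) tree: both slots instantiated alike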
Also, for clarity: the *box type* is the type `α list` (or as we might just say, the `list` type operator); the *boxed type* is some specific instantiation of the free type variable `α`. We'll often write boxed types as a box containing what the free type variable instantiates to; here we'll render the box as a `□` prefix. So if our box type is `α list`, and `α` instantiates to the specific type `int`, we write:

    □int

for the type of a boxed `int`.

## Kleisli arrows ##

A lot of what we'll be doing concerns types that are called *Kleisli arrows*. Don't worry about why they're called that, or if you like go read some Category Theory. All we need to know is that these are functions whose type has the form:

    P -> □Q

That is, they are functions from values of one type `P` to a boxed type `□Q`, for some choice of box and of type expressions `P` and `Q`. For instance, the following are Kleisli arrow types:

    int -> □bool
    int list -> □(int list)

In the first, `P` has become `int` and `Q` has become `bool`. (The boxed type `□Q` is `□bool`.)

Note that either of the schemas `P` or `Q` are permitted to themselves be boxed types. That is, if `α list` is our box type, we can write the second type as:

    □int -> □(int list)

And since what the rhs there is a boxing of is itself a boxed type (with the same kind of box), we can write it as:

    □int -> □□int

We have to be careful though not to unthinkingly equivocate between different kinds of boxes.

Here are some examples of values of these Kleisli arrow types, where the box type is `α list`, and the Kleisli arrow types are `int -> □int` (that is, `int -> int list`) or `int -> □bool`:
    \x. [x]
    \x. [odd? x, odd? x]
    \x. prime_factors_of x
    \x. [0, 0, 0]
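
For concreteness, here is a Haskell transcription of these Kleisli arrows (a sketch of ours; the text leaves `prime_factors_of` unspecified, so the `primeFactorsOf` below is just one illustrative definition):

    -- The four Kleisli arrows above, rendered in Haskell with the list box.
    k1 :: Int -> [Int]
    k1 x = [x]

    k2 :: Int -> [Bool]
    k2 x = [odd x, odd x]

    k3 :: Int -> [Int]
    k3 x = primeFactorsOf x

    k4 :: Int -> [Int]
    k4 _ = [0, 0, 0]

    -- an illustrative helper, by trial division
    primeFactorsOf :: Int -> [Int]
    primeFactorsOf n = go n 2
      where
        go 1 _ = []
        go m d | d * d > m      = [m]
               | m `mod` d == 0 = d : go (m `div` d) d
               | otherwise      = go m (d + 1)

    main :: IO ()
    main = print (k3 12)   -- [2,2,3]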
As semanticists, you are no doubt familiar with the debates between those who insist that propositions are sets of worlds and those who insist they are context change potentials. We hope to show you, in coming weeks, that propositions are (certain sorts of) Kleisli arrows. But this doesn't really compete with the other proposals; it is a generalization of them. Both of the other proposed structures can be construed as specific Kleisli arrow types.

## A family of functions for each box type ##

We'll need a family of functions to help us work with box types. As will become clear, these have to be defined differently for each box type. Here are the types of our crucial functions, together with our pronunciation, and some other names the functions go by. (Usually the type doesn't fix their behavior, which will be discussed below.)

    map (/mæp/) : (P -> Q) -> □P -> □Q

> In Haskell, this is the function `fmap` from the `Prelude` and `Data.Functor`; also called `<$>` in `Data.Functor` and `Control.Applicative`, and also called `Control.Applicative.liftA` and `Control.Monad.liftM`.

    map2 (/mæptu/) : (P -> Q -> R) -> □P -> □Q -> □R

> In Haskell, this is called `Control.Applicative.liftA2` and `Control.Monad.liftM2`.

    mid (/εmaidεnt@tI/) : P -> □P

> In Haskell, this is called `Control.Monad.return` and `Control.Applicative.pure`. In other theoretical contexts it is sometimes called `unit` or `η`. In the class presentation Jim called it `𝟭`; but now we've decided that `mid` is better. (Think of it as "m" plus "identity", not as the start of "midway".) This notion is exemplified by `Just` for the box type `Maybe α` and by the singleton function for the box type `List α`.

    m$ or mapply (/εm@plai/) : □(P -> Q) -> □P -> □Q

> We'll use `m$` as a left-associative infix operator, reminiscent of (the right-associative) `$` which is just ordinary function application (also expressed by mere left-associative juxtaposition). In the class presentation Jim called `m$` `●`. In Haskell, it's called `Control.Monad.ap` or `Control.Applicative.<*>`.

    <=< or mcomp : (Q -> □R) -> (P -> □Q) -> (P -> □R)

> In Haskell, this is `Control.Monad.<=<`.

    >=> or flip mcomp : (P -> □Q) -> (Q -> □R) -> (P -> □R)

> In Haskell, this is `Control.Monad.>=>`. In the class handout, we gave the types for `>=>` twice, and once was correct but the other was a typo. The above is the correct typing.

    >>= or mbind : (□Q) -> (Q -> □R) -> (□R)
    =<< or flip mbind : (Q -> □R) -> (□Q) -> (□R)

    join : □□P -> □P

> In Haskell, this is `Control.Monad.join`. In other theoretical contexts it is sometimes called `μ`.

Haskell uses the symbol `>>=` but calls it "bind". This is not well chosen from the perspective of formal semantics, but it's too deeply entrenched to change. We've at least prepended an "m" to the front of "bind". Haskell's names "return" and "pure" for `mid` are even less well chosen, and we think it will be clearer in our discussion to use a different name. (Also, in other theoretical contexts this notion goes by other names, anyway, like `unit` or `η` --- having nothing to do with `η`-reduction in the Lambda Calculus.)

The menagerie isn't quite as bewildering as you might suppose. Many of these will be interdefinable. For example, here is how `mcomp` and `mbind` are related:

    k <=< j  ≡  \a. (j a >>= k)

We'll state some other interdefinitions below. We will move freely back and forth between using `>=>` and using `<=<` (aka `mcomp`), which is just `>=>` with its arguments flipped. `<=<` has the virtue that it corresponds more closely to the ordinary mathematical symbol `○`.
But `>=>` has the virtue that its types flow more naturally from left to right.

These functions come together in several systems, and have to be defined in a way that coheres with the other functions in the system:

*   ***Mappable*** (in Haskelese, "Functors") At the most general level, box types are *Mappable* if there is a `map` function defined for that box type with the type given above. This has to obey the following Map Laws:

        map (id : α -> α) == (id : □α -> □α)
        map (g ○ f) == (map g) ○ (map f)

    Essentially these say that `map` is a homomorphism from the algebra of `(universe α -> β, operation ○, element id)` to that of `(□α -> □β, ○', id')`, where `○'` and `id'` are `○` and `id` restricted to arguments of type `□_`. That might be hard to digest because it's so abstract. Think of the following concrete example: if you take a `α list` (that's our `□α`), and apply `id` to each of its elements, that's the same as applying `id` to the list itself. That's the first law. And if you apply the composition of functions `g ○ f` to each of the list's elements, that's the same as first applying `f` to each of the elements, and then going through the elements of the resulting list and applying `g` to each of those elements. That's the second law. These laws obviously hold for our familiar notion of `map` in relation to lists.

    > As mentioned at the top of the page, in Category Theory presentations of monads they usually talk about "endofunctors", which are mappings from a Category to itself. In the uses they make of this notion, the endofunctors combine the role of a box type `□_` and of the `map` that goes together with it.

*   ***MapNable*** (in Haskelese, "Applicatives") A Mappable box type is *MapNable* if there are in addition `map2`, `mid`, and `mapply`. (Given either of `map2` and `mapply`, you can define the other, and also `map`. Moreover, with `map2` in hand, `map3`, `map4`, ... `mapN` are easily definable.) These have to obey the following MapN Laws:

    1.  `mid (id : P->P) : □(P->P)` is a left identity for `m$`, that is: `(mid id) m$ xs = xs`
    2.  `mid (f a) = (mid f) m$ (mid a)`
    3.  The `map2`ing of composition onto boxes `fs` and `gs` of functions, when `m$`'d to a box `xs` of arguments == the `m$`ing of `fs` to the `m$`ing of `gs` to `xs`: `(mid (○) m$ fs m$ gs) m$ xs = fs m$ (gs m$ xs)`.
    4.  When the arguments (the right-hand operand of `m$`) are an `mid`'d value, the order of `m$`ing doesn't matter: `fs m$ (mid x) = mid ($x) m$ fs`. (Though note that it's `mid ($x)`, or `mid (\f. f x)`, that gets `m$`d onto `fs`, not the original `mid x`.) Here's an example where the order *does* matter: `[succ,pred] m$ [1,2] == [2,3,0,1]`, but `[($1),($2)] m$ [succ,pred] == [2,0,3,1]`. This Law states a class of cases where the order is guaranteed not to matter.
    5.  A consequence of the laws already stated is that when the _left_-hand operand of `m$` is a `mid`'d value, the order of `m$`ing doesn't matter either: `mid f m$ xs == map (flip ($)) xs m$ mid f`.

*   ***Monad*** (or "Composables") A MapNable box type is a *Monad* if there is in addition an associative `mcomp` having `mid` as its left and right identity. That is, the following Monad Laws must hold:

        mcomp (mcomp j k) l (that is, (j <=< k) <=< l) == mcomp j (mcomp k l)
        mcomp mid k (that is, mid <=< k) == k
        mcomp k mid (that is, k <=< mid) == k

    You could just as well express the Monad laws using `>=>`:

        l >=> (k >=> j) == (l >=> k) >=> j
        k >=> mid == k
        mid >=> k == k

    If you have any of `mcomp`, `mpmoc` (that is, flip `mcomp`, our `>=>`), `mbind`, or `join`, you can use them to define the others.
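
As a quick illustration of that last point, here is a minimal Haskell sketch (ours, specialized to the list box rather than stated for an arbitrary Monad) of how `join` and `mbind` can each be obtained from the other:

    -- interdefining join and mbind, specialized to the list box
    joinL :: [[a]] -> [a]
    joinL = concat                    -- join, defined directly

    mbindL :: [a] -> (a -> [b]) -> [b]
    mbindL u k = joinL (map k u)      -- mbind from join and map

    joinL' :: [[a]] -> [a]
    joinL' w = mbindL w id            -- join recovered from mbind

    main :: IO ()
    main = print (mbindL [7] (\a -> [a * a, a + a]))   -- [49,14]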
Also, with these functions you can define `m$` and `map2` from *MapNables*. So with Monads, all you really need to get the whole system of functions are a definition of `mid`, on the one hand, and one of `mcomp`, `mbind`, or `join`, on the other.

In practice, you will often work with `>>=`. In the Haskell manuals, they express the Monad Laws using `>>=` instead of the composition operators. This looks similar, but doesn't have the same symmetry:

    u >>= (\a -> k a >>= j) == (u >>= k) >>= j
    u >>= mid == u
    mid a >>= k == k a

Also, Haskell calls `mid` `return` or `pure`, but we've stuck to our terminology in this context.

> In Category Theory discussion, the Monad Laws are instead expressed in terms of `join` (which they call `μ`) and `mid` (which they call `η`). These are assumed to be "natural transformations" for their box type, which means that they satisfy these equations with that box type's `map`:
>
>     map f ○ mid == mid ○ f
>     map f ○ join == join ○ map (map f)
>
> The Monad Laws then take the form:
>
>     join ○ (map join) == join ○ join
>     join ○ mid == id == join ○ map mid
>
> The first of these says that if you have a triply-boxed type, and you first merge the inner two boxes (with `map join`), and then merge the resulting box with the outermost box, that's the same as if you had first merged the outer two boxes, and then merged the resulting box with the innermost box. The second law says that if you take a box type and wrap a second box around it (with `mid`) and then merge them, that's the same as if you had done nothing, or if you had instead wrapped a second box around each element of the original (with `map mid`, leaving the original box on the outside), and then merged them.

> The Category Theorist would state these Laws like this, where `M` is the endofunctor that takes us from type `α` to type `□α`:

>     μ ○ M(μ) == μ ○ μ
>     μ ○ η == id == μ ○ M(η)
>
> A word of advice: if you're doing any work in this conceptual neighborhood and need a Greek letter, don't use μ. In addition to the preceding usage, there's also a use in recursion theory (for the minimization operator), in type theory (as a fixed point operator for types), and in the λμ-calculus, which is a formal system that deals with _continuations_, which we will focus on later in the course. So μ already exhibits more ambiguity than it can handle.

As hinted in last week's homework and explained in class, the operations available in a Mappable system exactly preserve the "structure" of the boxed type they're operating on, and moreover are only sensitive to what content is in the corresponding original position. If you say `map f [1,2,3]`, then what ends up in the first position of the result depends only on how `f` and `1` combine.

For MapNable operations, on the other hand, the structure of the result may instead be a complex function of the structure of the original arguments. But only of their structure, not of their contents. And if you say `map2 f [10,20] [1,2,3]`, what ends up in the first position of the result depends only on how `f` and `10` and `1` combine.

With `map`, you can supply an `f` such that `map f [3,2,0,1] == [[3,3,3],[2,2],[],[1]]`. But you can't transform `[3,2,0,1]` to `[3,3,3,2,2,1]`, and you can't do that with MapNable operations, either. That would involve the structure of the result (here, the length of the list) being sensitive to the content, and not merely the structure, of the original.

For Monads (Composables), on the other hand, you can perform more radical transformations of that sort. For example, `join (map (\x. dup x x) [3,2,0,1])` would give us `[3,3,3,2,2,1]` (for a suitable definition of `dup`).

## Interdefinitions and Subsidiary notions ##

We said above that various of these box type operations can be defined in terms of others. Here is a list of various ways in which they're related. We try to stick to the consistent typing conventions that:
    f : α -> β;  g and h have types of the same form
                 also sometimes these will have types of the form α -> β -> γ
                 note that α and β are permitted to be, but needn't be, boxed types
    j : α -> □β; k and l have types of the same form
    u : □α;      v and xs and ys have types of the same form
    w : □□α
But we may sometimes slip. Here are some ways the different notions are related:
    j >=> k == \a. (j a >>= k)
    u >>= k == (id >=> k) u;   or  ((\(). u) >=> k) ()
    u >>= k == join (map k u)
    join w == w >>= id
    map2 f xs ys == xs >>= (\x. ys >>= (\y. mid (f x y)))
    map2 f xs ys == (map f xs) m$ ys, using m$ as an infix operator
    fs m$ xs == fs >>= (\f. map f xs)
    m$ == map2 id
    map f xs == mid f m$ xs
    map f u == u >>= mid ○ f
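
To make a couple of these identities concrete, here is a small Haskell sketch (ours), checking them for the list box, with `mid` written as the singleton function and `m$` written as a plain function `mapplyL`:

    -- checking a few of the interdefinitions above, for the list box
    import Control.Applicative (liftA2)

    midL :: a -> [a]
    midL a = [a]

    mapplyL :: [a -> b] -> [a] -> [b]
    mapplyL fs xs = fs >>= (\f -> map f xs)            -- fs m$ xs == fs >>= (\f. map f xs)

    map2L :: (a -> b -> c) -> [a] -> [b] -> [c]
    map2L f xs ys = xs >>= (\x -> ys >>= (\y -> midL (f x y)))

    main :: IO ()
    main = do
      print (mapplyL [succ, pred] [1, 2 :: Int])       -- [2,3,0,1]
      print (map2L (+) [10, 20] [1, 2, 3 :: Int])      -- [11,12,13,21,22,23]
      print (liftA2 (+) [10, 20] [1, 2, 3 :: Int])     -- the library's map2 agrees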
Here are some other monadic notions that you may sometimes encounter:

*   `mzero` is a value of type `□α` that is exemplified by `Nothing` for the box type `Maybe α` and by `[]` for the box type `List α`. It has the behavior that `anything m$ mzero == mzero == mzero m$ anything == mzero >>= anything`. In Haskell, this notion is called `Control.Applicative.empty` or `Control.Monad.mzero`.

*   Haskell has a notion `>>` definable as `\u v. map (const id) u m$ v`, or as `\u v. u >>= const v`. This is often useful, and `u >> v` won't in general be identical to just `v`. For example, using the box type `List α`, `[1,2,3] >> [4,5] == [4,5,4,5,4,5]`. But in the special case of `mzero`, it is a consequence of what we said above that `anything >> mzero == mzero`. Haskell also calls `>>` `Control.Applicative.*>`.

*   Haskell has a correlative notion `Control.Applicative.<*`, definable as `\u v. map const u m$ v`. For example, `[1,2,3] <* [4,5] == [1,1,2,2,3,3]`. You might expect Haskell to call `<*` `<<`, but they don't. They used to use `<<` for `flip (>>)` instead, but now they seem not to use `<<` anymore.

*   `mapconst` is definable as `map ○ const`. For example `mapconst 4 [1,2,3] == [4,4,4]`. Haskell calls `mapconst` `<$` in `Data.Functor` and `Control.Applicative`. They also use `$>` for `flip mapconst`, and `Control.Monad.void` for `mapconst ()`.

## Examples ##

To take a trivial (but, as we will see, still useful) example, consider the Identity box type: `α`. So if `α` is type `bool`, then a boxed `α` is ... a `bool`. That is, `□α == α`. In terms of the box analogy, the Identity box type is a completely invisible box. With the following definitions:

    mid   ≡ \p. p, that is, our familiar combinator I
    mcomp ≡ \f g x. f (g x), that is, ordinary function composition (○) (aka the B combinator)

Identity is a monad. Here is a demonstration that the laws hold:

    mcomp mid k ≡ (\fgx.f(gx)) (\p.p) k ~~> \x.(\p.p)(kx) ~~> \x.kx ~~> k
    mcomp k mid ≡ (\fgx.f(gx)) k (\p.p) ~~> \x.k((\p.p)x) ~~> \x.kx ~~> k
    mcomp (mcomp j k) l ≡ mcomp ((\fgx.f(gx)) j k) l ~~> mcomp (\x.j(kx)) l
                        ≡ (\fgx.f(gx)) (\x.j(kx)) l ~~> \x.(\x.j(kx))(lx) ~~> \x.j(k(lx))
    mcomp j (mcomp k l) ≡ mcomp j ((\fgx.f(gx)) k l) ~~> mcomp j (\x.k(lx))
                        ≡ (\fgx.f(gx)) j (\x.k(lx)) ~~> \x.j((\x.k(lx)) x) ~~> \x.j(k(lx))

The Identity monad is favored by mimes.

To take a slightly less trivial (and even more useful) example, consider the box type `α list`, with the following operations:

    mid : α -> [α]
    mid a = [a]

    mcomp : (β -> [γ]) -> (α -> [β]) -> (α -> [γ])
    mcomp k j a = concat (map k (j a))                   =  List.flatten (List.map k (j a))
                = foldr (\b ks -> (k b) ++ ks) [] (j a)  =  List.fold_right (fun b ks -> List.append (k b) ks) [] (j a)
                = [c | b <- j a, c <- k b]

In the first two definitions of `mcomp`, we give the definition first in Haskell and then in the equivalent OCaml. The three different definitions of `mcomp` (one for each line) are all equivalent, and it is easy to show that they obey the Monad Laws. (You will do this in the homework.)

In words, `mcomp k j a` feeds the `a` (which has type `α`) to `j`, which returns a list of `β`s; each `β` in that list is fed to `k`, which returns a list of `γ`s. The final result is the concatenation of those lists of `γ`s. For example:

    let j a = [a*a, a+a] in
    let k b = [b, b+1] in
    mcomp k j 7 ==> [49, 50, 14, 15]

`j 7` produced `[49, 14]`, which after being fed through `k` gave us `[49, 50, 14, 15]`.
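
Here is the same example as a runnable Haskell sketch (ours), with the list-box `mcomp` written as a plain function so as not to collide with the library's `<=<`:

    -- the list-box mcomp and the j/k example above, in Haskell
    mcompL :: (b -> [c]) -> (a -> [b]) -> (a -> [c])
    mcompL k j a = concat (map k (j a))

    j :: Int -> [Int]
    j a = [a * a, a + a]

    k :: Int -> [Int]
    k b = [b, b + 1]

    main :: IO ()
    main = print (mcompL k j 7)   -- [49,50,14,15]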
Contrast that to `m$` (`mapply`), which operates not on two *box-producing functions*, but instead on two *boxed type values*, one containing functions to be applied to the values in the other box, via some predefined scheme. Thus:

    let js = [(\a->a*a),(\a->a+a)] in
    let xs = [7, 5] in
    mapply js xs ==> [49, 25, 14, 10]

The question came up in class of when box types might fail to be Mappable, or Mappables might fail to be MapNables, or MapNables might fail to be Monads.

For the first failure, we noted that it's easy to define a `map` operation for the box type `R -> α`, for a fixed type `R`. You `map` a function of type `P -> Q` over a value of the boxed type `□P`, that is a value of type `R -> P`, by just returning a function that takes some `R` as input, first supplies it to your `R -> P` value, and then supplies the result to your `map`ped function of type `P -> Q`. (We will be working with this Mappable extensively; in fact it's not just a Mappable but more specifically a Monad.) But if, on the other hand, your box type is `α -> R`, you'll find that there is no way to define a `map` operation that takes arbitrary functions of type `P -> Q` and values of the boxed type `□P`, that is `P -> R`, and returns values of the boxed type `□Q`.

For the second failure, that is cases of Mappables that are not MapNables, we cited box types like `(R, α)`, for arbitrary fixed types `R`. The `map` operation for these is defined by `map f (r,a) = (r, f a)`. For certain choices of `R` these can be MapNables too. The easiest case is when `R` is the type of `()`. But when we look at the MapNable Laws, we'll see that they impose constraints we cannot satisfy for *every* choice of the fixed type `R`. Here's why. We'll need to define `mid a = (r0, a)` for some specific `r0` of type `R`. The choice can't depend on the value of `a`, because `mid` needs to work for `a`s of _any_ type. Then the MapNable Laws will entail:

1.  `(r0,id) m$ (r,x) == (r,x)`
2.  `(r0,f x) == (r0,f) m$ (r0,x)`
3.  `(r0,(○)) m$ (r'',f) m$ (r',g) m$ (r,x) == (r'',f) m$ ((r',g) m$ (r,x))`
4.  `(r'',f) m$ (r0,x) == (r0,($x)) m$ (r'',f)`
5.  `(r0,f) m$ (r,x) == (r,($x)) m$ (r0,f)`

Now we are not going to be able to write a `m$` function that inspects the second element of its left-hand operand to check if it's the `id` function; the identity of functions is not decidable. So the only way to satisfy Law 1 will be to have the first element of the result (`r`) be taken from the first element of the right-hand operand in _all_ the cases when the first element of the left-hand operand is `r0`. But then that means that the result of the lhs of Law 5 will also have a first element of `r`; so, turning now to the rhs of Law 5, we see that `m$` must use the first element of its _left_-hand operand (here again `r`) at least in those cases when the first element of its right-hand operand is `r0`.

If our `R` type has a natural *monoid* structure, we could just let `r0` be the monoid's identity, and have `m$` combine other `R`s using the monoid's operation. Alternatively, if the `R` type is one that we can safely apply the predicate `(r0==)` to, then we could define `m$` something like this:

    let (m$) (r1,f) (r2,x) = ((if r0==r1 then r2 else if r0==r2 then r1 else ...), ...)

But for some types neither of these will be the case. For function types, as we already mentioned, `==` is not decidable.
If the functions have suitable types, they do form a monoid with `○` as the operation and `id` as the identity; but many function types won't be such that arbitrary functions of that type are composable. So when `R` is the type of functions from `int`s to `bool`s, for example, we won't have any way to write a `m$` that satisfies the constraints stated above.

For the third failure, that is examples of MapNables that aren't Monads, we'll just state that lists where the `map2` operation is taken to be zipping rather than taking the Cartesian product (what in Haskell are called `ZipList`s) are claimed to exemplify that failure. But we aren't now in a position to demonstrate that to you.
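
To give a feel for the contrast, without proving anything about the Monad Laws, here is a short Haskell sketch of ours putting the "zippy" `map2` next to the ordinary list monad's Cartesian-product `map2`:

    -- zippy map2 versus the list monad's Cartesian-product map2 (illustration only)
    import Control.Applicative (ZipList(..), liftA2)

    map2Zip :: (a -> b -> c) -> [a] -> [b] -> [c]
    map2Zip = zipWith                   -- the ZipList-style map2

    map2Cart :: (a -> b -> c) -> [a] -> [b] -> [c]
    map2Cart = liftA2                   -- the ordinary list-box map2

    main :: IO ()
    main = do
      print (map2Zip  (+) [10, 20] [1, 2, 3 :: Int])   -- [11,22]
      print (map2Cart (+) [10, 20] [1, 2, 3 :: Int])   -- [11,12,13,21,22,23]
      print (getZipList (liftA2 (+) (ZipList [10, 20]) (ZipList [1, 2, 3 :: Int])))   -- [11,22]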