The closest we will come to metaphorical talk is to suggest that monadic types place values inside of boxes, and that monads wrap and unwrap boxes to expose or enclose the values inside of them. In any case, our emphasis will be on starting with the abstract structure of monads, followed in coming weeks by instances of monads from the philosophical and linguistics literature.

After you've read this once and are coming back to re-read it to digest the details further: the "endofunctors" that the slogan talks about are a combination of our boxes and their associated maps. Their "monoidal" character is captured in the Monad Laws, for which see below.

## Box types: type expressions with one free type variable

Recall that we've been using lower-case Greek letters α, β, γ, ... as type variables. We'll use P, Q, R, and S as schematic metavariables over type expressions, that may or may not contain unbound type variables. For instance, we might have

P_1 ≡ int
P_2 ≡ α -> α
P_3 ≡ ∀α. α -> α
P_4 ≡ ∀α. α -> β


and so on.

A box type will be a type expression that contains exactly one free type variable. (You could extend this to expressions with more free variables; then you'd have to specify which one of them the box is capturing. But let's keep it simple.) Some examples (using OCaml's type conventions):

α option
α list
(α, R) tree    (assuming R contains no free type variables)
(α, α) tree


The idea is that whatever type the free type variable α might be instantiated to, we will have a "type box" of a certain sort that "contains" values of type α. For instance, if α list is our box type, and α instantiates to the type int, then in this context, int list is the type of a boxed integer.

Warning: although our initial motivating examples are readily thought of as "containers" (lists, trees, and so on, with αs as their "elements"), with later examples we discuss it will be less natural to describe the boxed types that way. For example, where R is some fixed type, R -> α will be one box type we work extensively with.

Also, for clarity: the box type is the type α list (or as we might just say, the list type operator); the boxed type is some specific instantiation of the free type variable α. We'll often write boxed types as a box containing what the free type variable instantiates to; here we'll render the box as the prefix ▢. So if our box type is α list, and α instantiates to the specific type int, we write:

▢int

for the type of a boxed int.

## Kleisli arrows

A lot of what we'll be doing concerns types that are called Kleisli arrows. Don't worry about why they're called that, or if you like go read some Category Theory. All we need to know is that these are functions whose type has the form:

P -> ▢Q

That is, they are functions from values of one type P to a boxed type ▢Q, for some choice of box and of type expressions P and Q. For instance, the following are Kleisli arrow types:

int -> ▢bool

int list -> ▢(int list)

In the first, P has become int and Q has become bool. (The boxed type ▢Q is ▢bool.)

Note that either of the schemas P or Q are permitted to themselves be boxed types. That is, if α list is our box type, then int list is ▢int, and we can write the second type as:

▢int -> ▢(int list)

And what the rhs there is a boxing of is itself a boxed type (with the same kind of box), so we can also write it as:

▢int -> ▢▢int

We have to be careful, though, not to unthinkingly equivocate between different kinds of boxes.

Here are some examples of values of these Kleisli arrow types, where the box type is α list, and the Kleisli arrow types are int -> ▢int (that is, int -> int list) or int -> ▢bool (that is, int -> bool list):

\x. [x]
\x. [odd? x, odd? x]
\x. prime_factors_of x
\x. [0, 0, 0]
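These can be rendered as actual OCaml values. The text's prime_factors_of isn't a library function, so the implementation below is our own sketch of what it presumably computes:

```ocaml
(* The Kleisli arrows above, as OCaml values of type int -> int list or
   int -> bool list (box type: α list). *)
let box : int -> int list = fun x -> [x]
let odd_twice : int -> bool list = fun x -> [x mod 2 <> 0; x mod 2 <> 0]
(* Our own sketch of prime_factors_of: trial division, factors in
   nondecreasing order. *)
let prime_factors_of : int -> int list = fun n ->
  let rec go d n =
    if n <= 1 then []
    else if n mod d = 0 then d :: go d (n / d)
    else go (d + 1) n
  in
  go 2 n
let zeros : int -> int list = fun _ -> [0; 0; 0]
```

Each of these takes a plain int and returns a boxed (here: listed) result.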

As semanticists, you are no doubt familiar with the debates between those who insist that propositions are sets of worlds and those who insist they are context change potentials. We hope to show you, in coming weeks, that propositions are (certain sorts of) Kleisli arrows. But this doesn't really compete with the other proposals; it is a generalization of them. Both of the other proposed structures can be construed as specific Kleisli arrow types.

## A family of functions for each box type

We'll need a family of functions to help us work with box types. As will become clear, these have to be defined differently for each box type.

Here are the types of our crucial functions, together with our pronunciation, and some other names the functions go by. (Usually the type doesn't fix their behavior, which will be discussed below.)

map (/mæp/): (P -> Q) -> ▢P -> ▢Q

(We write ▢P for a boxing of the type P.) In Haskell, this is the function fmap from the Prelude and Data.Functor; it's also called <$> in Data.Functor and Control.Applicative, and also called Control.Applicative.liftA and Control.Monad.liftM.

map2 (/mæptu/): (P -> Q -> R) -> ▢P -> ▢Q -> ▢R

In Haskell, this is called Control.Applicative.liftA2 and Control.Monad.liftM2.

⇧ or mid (/εmaidεnt@tI/): P -> ▢P

This notion is exemplified by Just for the box type Maybe α and by the singleton function for the box type List α. It will be a way of boxing values with your box type that plays a distinguished role in the various Laws and interdefinitions we present below. In Haskell, this is called Control.Monad.return and Control.Applicative.pure. In other theoretical contexts it is sometimes called unit or η. All of these names are somewhat unfortunate. First, it has little to do with η-reduction in the Lambda Calculus. Second, it has little to do with the () : unit value we discussed in earlier classes. Third, it has little to do with the return keyword in C and other languages; that's more closely related to continuations, which we'll discuss in later weeks. Finally, this doesn't perfectly align with other uses of "pure" in the literature. ⇧'d values will generally be "pure" in the other senses, but other boxed values can be too. For all these reasons, we're thinking it will be clearer in our discussion to use a different name. In the class presentation Jim called it 𝟭; and in an earlier draft of this page we (only) called it mid ("m" plus "identity"); but now we're trying out ⇧ as a symbolic alternative. But in the end, we might switch to just using η.

¢ or mapply (/εm@plai/): ▢(P -> Q) -> ▢P -> ▢Q

We'll use ¢ as a left-associative infix operator, reminiscent of (the right-associative) $, which is just ordinary function application (also expressed by mere left-associative juxtaposition). In the class presentation Jim called ¢ ⚫; and in an earlier draft of this page we called it m$. In Haskell, it's called Control.Monad.ap or Control.Applicative.<*>.

<=< or mcomp: (Q -> ▢R) -> (P -> ▢Q) -> (P -> ▢R)

In Haskell, this is Control.Monad.<=<.

>=> or flip mcomp: (P -> ▢Q) -> (Q -> ▢R) -> (P -> ▢R)

In Haskell, this is Control.Monad.>=>. We will move freely back and forth between using <=< (aka mcomp) and using >=>, which is just <=< with its arguments flipped. <=< has the virtue that it corresponds more closely to the ordinary mathematical symbol ○. But >=> has the virtue that its types flow more naturally from left to right. In the class handout, we gave the types for >=> twice, and once was correct but the other was a typo. The above is the correct typing.

>>= or mbind: ▢Q -> (Q -> ▢R) -> ▢R

Haskell uses the symbol >>= but calls it "bind". This is not well chosen from the perspective of formal semantics, since it's only loosely connected with what we mean by "binding." But the name is too deeply entrenched to change. We've at least prepended an "m" to the front of "bind". In some presentations this operation is called ★.

=<< or flip mbind: (Q -> ▢R) -> ▢Q -> ▢R

join: ▢▢P -> ▢P

In Haskell, this is Control.Monad.join. In other theoretical contexts it is sometimes called μ.

The menagerie isn't quite as bewildering as you might suppose. Many of these will be interdefinable. For example, here is how mcomp and mbind are related:

k <=< j ≡ \a. (j a >>= k)

We'll state some other interdefinitions below.

These functions come together in several systems, and have to be defined in a way that coheres with the other functions in the system:

• Mappable (in Haskelese, "Functors"). At the most general level, box types are Mappable if there is a map function defined for that box type with the type given above. This has to obey the following Map Laws:

map (id : α -> α) == (id : ▢α -> ▢α)
map (g ○ f) == (map g) ○ (map f)

Essentially these say that map is a homomorphism from the algebra of (universe α -> β, operation ○, element id) to that of (▢α -> ▢β, ○', id'), where ○' and id' are ○ and id restricted to boxed-type arguments. That might be hard to digest because it's so abstract. Think of the following concrete example: if you take an α list (that's our ▢α), and apply id to each of its elements, that's the same as applying id to the list itself. That's the first law. And if you apply the composition of functions g ○ f to each of the list's elements, that's the same as first applying f to each of the elements, and then going through the elements of the resulting list and applying g to each of those elements. That's the second law. These laws obviously hold for our familiar notion of map in relation to lists.

As mentioned at the top of the page, in Category Theory presentations of monads they usually talk about "endofunctors", which are mappings from a Category to itself. In the uses they make of this notion, the endofunctors combine the role of a box type ▢ and of the map that goes together with it.

• MapNable (in Haskelese, "Applicatives"). A Mappable box type is MapNable if there are in addition map2, ⇧, and mapply. (Given either of map2 and mapply, you can define the other, and also map. Moreover, with map2 in hand, map3, map4, ... mapN are easily definable.) These have to obey the following MapN Laws:

1. ⇧(id : P -> P) : ▢(P -> P) is a left identity for ¢, that is: (⇧id) ¢ xs = xs
2. ⇧(f a) = (⇧f) ¢ (⇧a)
3. The map2ing of composition onto boxes fs and gs of functions, when ¢'d to a box xs of arguments, == the ¢ing of fs to the ¢ing of gs to xs: (⇧(○) ¢ fs ¢ gs) ¢ xs = fs ¢ (gs ¢ xs).
4. When the arguments (the right-hand operand of ¢) are an ⇧'d value, the order of ¢ing doesn't matter: fs ¢ (⇧x) = ⇧($x) ¢ fs. (Though note that it's ⇧($x), or ⇧(\f. f x), that gets ¢d onto fs, not the original ⇧x.) Here's an example where the order does matter: [succ,pred] ¢ [1,2] == [2,3,0,1], but [($1),($2)] ¢ [succ,pred] == [2,0,3,1]. This Law states a class of cases where the order is guaranteed not to matter.
5. A consequence of the laws already stated is that when the left-hand operand of ¢ is an ⇧'d value, the order of ¢ing doesn't matter either: ⇧f ¢ xs == map (flip ($)) xs ¢ ⇧f.
• Monad (or "Composables") A MapNable box type is a Monad if there is in addition an associative mcomp having ⇧ as its left and right identity. That is, the following Monad Laws must hold:

mcomp (mcomp j k) l (that is, (j <=< k) <=< l) == mcomp j (mcomp k l)
mcomp mid k (that is, ⇧ <=< k) == k
mcomp k mid (that is, k <=< ⇧) == k


You could just as well express the Monad laws using >=>:

l >=> (k >=> j) == (l >=> k) >=> j
k >=> ⇧ == k
⇧ >=> k == k


If you studied algebra, you'll remember that a monoid is a universe with some associative operation that has an identity. For example, the natural numbers form a monoid with multiplication as the operation and 1 as the identity, or with addition as the operation and 0 as the identity. Strings form a monoid with concatenation as the operation and the empty string as the identity. (This example shows that the operation need not be commutative.) Monads are a kind of generalization of this notion, and that's why they're named as they are. The key difference is that for monads, the values being operated on need not be of the same type. They can be, if they're all Kleisli arrows of a single type P -> ▢P. But they needn't be. Their types only need to "cohere" in the sense that the output type of the one arrow is a boxing of the input type of the next.

In the Haskell manuals, they express the Monad Laws using >>= instead of the composition operators >=> or <=<. This looks similar, but doesn't have the same symmetry:

u >>= (\a -> k a >>= j) == (u >>= k) >>= j
u >>= ⇧ == u
⇧a >>= k == k a


(Also, Haskell calls ⇧ return or pure, but we've stuck to our terminology in this context.) Some authors try to make the first of those Laws look more symmetrical by writing it as:

(A >>= \a -> B) >>= \b -> C == A >>= (\a -> B >>= \b -> C)


If you have any of mcomp, its flip >=>, mbind, or join, you can use them to define the others. Also, with these functions you can define ¢ and map2 from MapNables. So with Monads, all you really need to get the whole system of functions are a definition of ⇧, on the one hand, and one of mcomp, mbind, or join, on the other.

In Category Theory discussion, the Monad Laws are instead expressed in terms of join (which they call μ) and ⇧ (which they call η). These are assumed to be "natural transformations" for their box type, which means that they satisfy these equations with that box type's map:

map f ○ ⇧ == ⇧ ○ f
map f ○ join == join ○ map (map f)
The Monad Laws then take the form:
join ○ (map join) == join ○ join
join ○ ⇧ == id == join ○ map ⇧
The first of these says that if you have a triply-boxed type, and you first merge the inner two boxes (with map join), and then merge the resulting box with the outermost box, that's the same as if you had first merged the outer two boxes, and then merged the resulting box with the innermost box. The second law says that if you take a box type and wrap a second box around it (with ⇧) and then merge them, that's the same as if you had done nothing, or if you had instead wrapped a second box around each element of the original (with map ⇧, leaving the original box on the outside), and then merged them.
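For the list box, where join is concatenation and ⇧ is the singleton function, both laws can be spot-checked directly (a sketch; map, join, and mid here are just OCaml's List.map, List.concat, and our singleton):

```ocaml
(* For the list box: join = List.concat, map = List.map, mid a = [a]. *)
let join = List.concat
let map = List.map
let mid a = [a]

(* First law, on a triply-boxed value. *)
let w = [[[1]; [2; 3]]; [[]; [4]]]
let lhs = join (map join w)   (* merge the inner two boxes, then the outer *)
let rhs = join (join w)       (* merge the outer two boxes, then the inner *)

(* Second law, on a singly-boxed value. *)
let v = [1; 2; 3]
let a = join (mid v)          (* wrap a box around the outside, then merge *)
let b = join (map mid v)      (* wrap a box around each element, then merge *)
```

Here lhs and rhs both come out as [1; 2; 3; 4], and a and b both come out as v itself.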

The Category Theorist would state these Laws like this, where M is the endofunctor that takes us from type α to type ▢α:

μ ○ M(μ) == μ ○ μ
μ ○ η == id == μ ○ M(η)
A word of advice: if you're doing any work in this conceptual neighborhood and need a Greek letter, don't use μ. In addition to the preceding usage, there's also a use in recursion theory (for the minimization operator), in type theory (as a fixed point operator for types), and in the λμ-calculus, which is a formal system that deals with continuations, which we will focus on later in the course. So μ already exhibits more ambiguity than it can handle. We link to further reading about the Category Theory origins of Monads below.

There isn't any single ⇧ function, or single mbind function, and so on. For each new box type, this has to be worked out in a useful way. And as we hinted, in many cases the choice of box type still leaves some latitude about how they should be defined. We commonly talk about "the List Monad" to mean a combination of the choice of α list for the box type and particular definitions for the various functions listed above. There's also "the ZipList MapNable/Applicative" which combines that same box type with other choices for (some of) the functions. Many of these packages also define special-purpose operations that only make sense for that system, but not for other Monads or Mappables.

As hinted in last week's homework and explained in class, the operations available in a Mappable system exactly preserve the "structure" of the boxed type they're operating on, and moreover are only sensitive to what content is in the corresponding original position. If you say map f [1,2,3], then what ends up in the first position of the result depends only on how f and 1 combine.

For MapNable operations, on the other hand, the structure of the result may instead be a complex function of the structure of the original arguments. But only of their structure, not of their contents. And if you say map2 f [10,20] [1,2,3], what ends up in the first position of the result depends only on how f and 10 and 1 combine.

With map, you can supply an f such that map f [3,2,0,1] == [[3,3,3],[2,2],[],[1]]. But you can't transform [3,2,0,1] to [3,3,3,2,2,1] with map, and you can't do that with MapNable operations, either. That would involve the structure of the result (here, the length of the list) being sensitive to the content, and not merely the structure, of the original.

For Monads (Composables), on the other hand, you can perform more radical transformations of that sort. For example, join (map (\x. dup x x) [3,2,0,1]) would give us [3,3,3,2,2,1] (for a suitable definition of dup).
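The text leaves dup undefined; we assume dup n x builds a list of n copies of x. Under that assumption, the transformation runs like this:

```ocaml
(* dup n x builds n copies of x (our assumption about the intended dup). *)
let dup n x = List.init n (fun _ -> x)

(* join (map (\x. dup x x) [3,2,0,1]) from the text: *)
let result = List.concat (List.map (fun x -> dup x x) [3; 2; 0; 1])
```

The 3 contributes three copies of itself, the 0 contributes none, so the length of the result depends on the contents of the original.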

## Interdefinitions and Subsidiary notions

We said above that various of these box type operations can be defined in terms of others. Here is a list of various ways in which they're related. We try to stick to the consistent typing conventions that:

f : α -> β;   g and h have types of the same form
              (also sometimes these will have types of the form α -> β -> γ)
note that α and β are permitted to be, but needn't be, boxed types
j : α -> ▢β;  k and l have types of the same form
u : ▢α;       v and xs and ys have types of the same form
w : ▢▢α


But we may sometimes slip.

Here are some ways the different notions are related:

j >=> k == \a. (j a >>= k)
u >>= k == (id >=> k) u; or ((\(). u) >=> k) ()
u >>= k == join (map k u)
join w == w >>= id
map2 f xs ys == xs >>= (\x. ys >>= (\y. ⇧(f x y)))
map2 f xs ys == (map f xs) ¢ ys, using ¢ as an infix operator
fs ¢ xs == fs >>= (\f. map f xs)
¢ == map2 id
map f xs == ⇧f ¢ xs
map f u == u >>= ⇧ ○ f
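Several of these interdefinitions can be spot-checked in the List Monad, taking ⇧ to be the singleton function and >>= to be concat-map (a sketch):

```ocaml
(* List Monad: mid = singleton, mbind = concat-map. *)
let mid a = [a]
let mbind u k = List.concat_map k u

(* join w == w >>= id *)
let join w = mbind w (fun x -> x)

(* map2 f xs ys == xs >>= (\x. ys >>= (\y. mid (f x y))) *)
let map2 f xs ys = mbind xs (fun x -> mbind ys (fun y -> mid (f x y)))

(* fs ¢ xs == fs >>= (\f. map f xs) *)
let mapply fs xs = mbind fs (fun f -> List.map f xs)

(* map f u == u >>= mid ○ f *)
let map f u = mbind u (fun x -> mid (f x))
```

With these definitions, map2 f xs ys and (map f xs) ¢ ys compute the same list, as the second interdefinition above promises.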


Here are some other monadic notions that you may sometimes encounter:

• mzero is a value of type ▢α that is exemplified by Nothing for the box type Maybe α and by [] for the box type List α. It has the behavior that anything ¢ mzero == mzero == mzero ¢ anything == mzero >>= anything. In Haskell, this notion is called Control.Applicative.empty or Control.Monad.mzero.

• Haskell has a notion >> definable as \u v. map (const id) u ¢ v, or as \u v. u >>= const v. This is often useful, and u >> v won't in general be identical to just v. For example, using the box type List α, [1,2,3] >> [4,5] == [4,5,4,5,4,5]. But in the special case of mzero, it is a consequence of what we said above that anything >> mzero == mzero. Haskell also calls >> Control.Applicative.*>.

• Haskell has a correlative notion Control.Applicative.<*, definable as \u v. map const u ¢ v. For example, [1,2,3] <* [4,5] == [1,1,2,2,3,3].

• mapconst is definable as map ○ const. For example mapconst 4 [1,2,3] == [4,4,4]. Haskell calls mapconst <$ in Data.Functor and Control.Applicative. They also use $> for flip mapconst, and Control.Monad.void for mapconst ().
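For the list box, these derived operations can be sketched and checked as follows (¢ here uses the "crossing" strategy):

```ocaml
(* ¢ with the crossing strategy, then >>, <*, and mapconst derived from it. *)
let mapply fs xs = List.concat_map (fun f -> List.map f xs) fs
let const x _ = x

(* u >> v == map (const id) u ¢ v *)
let then_right u v = mapply (List.map (const (fun y -> y)) u) v
(* u <* v == map const u ¢ v *)
let then_left u v = mapply (List.map const u) v
(* mapconst == map ○ const *)
let mapconst c xs = List.map (const c) xs
```

These reproduce the examples above: [1,2,3] >> [4,5] gives [4,5,4,5,4,5], [1,2,3] <* [4,5] gives [1,1,2,2,3,3], and mapconst 4 [1,2,3] gives [4,4,4].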

## Examples

To take a trivial (but, as we will see, still useful) example, consider the Identity box type, where a boxed α is just an α. So if α is type bool, then a boxed α is ... a bool. That is, ▢α == α. In terms of the box analogy, the Identity box type is a completely invisible box. With the following definitions:

mid (* or ⇧ *) ≡ \p. p, that is, our familiar combinator I
mcomp (* or <=< *) ≡ \f g x. f (g x), that is, ordinary function composition (○) (aka the B combinator)


Identity is a monad. Here is a demonstration that the laws hold:

mcomp mid k ≡ (\fgx.f(gx)) (\p.p) k
~~> \x.(\p.p)(kx)
~~> \x.kx
~~> k
mcomp k mid ≡ (\fgx.f(gx)) k (\p.p)
~~> \x.k((\p.p)x)
~~> \x.kx
~~> k
mcomp (mcomp j k) l ≡ mcomp ((\fgx.f(gx)) j k) l
~~> mcomp (\x.j(kx)) l
≡ (\fgx.f(gx)) (\x.j(kx)) l
~~> \x.(\x.j(kx))(lx)
~~> \x.j(k(lx))
mcomp j (mcomp k l) ≡ mcomp j ((\fgx.f(gx)) k l)
~~> mcomp j (\x.k(lx))
≡ (\fgx.f(gx)) j (\x.k(lx))
~~> \x.j((\x.k(lx)) x)
~~> \x.j(k(lx))


The Identity monad is favored by mimes.

To take a slightly less trivial (and even more useful) example, consider the box type α list, with the following operations:

mid : α -> [α]
mid a = [a]

mcomp : (β -> [γ]) -> (α -> [β]) -> (α -> [γ])
mcomp k j a = concat (map k (j a)) = List.flatten (List.map k (j a))
= foldr (\b ks -> (k b) ++ ks) [] (j a) = List.fold_right (fun b ks -> List.append (k b) ks) [] (j a)
= [c | b <- j a, c <- k b]


In the first two definitions of mcomp, we give the definition first in Haskell and then in the equivalent OCaml. The three different definitions of mcomp (one for each line) are all equivalent, and it is easy to show that they obey the Monad Laws. (You will do this in the homework.)

In words, mcomp k j a feeds the a (which has type α) to j, which returns a list of βs; each β in that list is fed to k, which returns a list of γs. The final result is the concatenation of those lists of γs.

For example:

let j a = [a*a, a+a] in
let k b = [b, b+1] in
mcomp k j 7 ==> [49, 50, 14, 15]


j 7 produced [49, 14], which after being fed through k gave us [49, 50, 14, 15].
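That computation can be run directly, using the concat-map definition of mcomp from above:

```ocaml
(* mcomp for the list box, and the worked example from the text. *)
let mcomp k j a = List.concat (List.map k (j a))
let j a = [a * a; a + a]
let k b = [b; b + 1]
let result = mcomp k j 7
```

j 7 evaluates to [49; 14], and feeding each element through k yields [49; 50; 14; 15].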

Contrast that to ¢ (mapply), which operates not on two box-producing functions, but instead on two boxed type values, one containing functions to be applied to the values in the other box, via some predefined scheme. Thus:

let js = [(\a->a*a),(\a->a+a)] in
let xs = [7, 5] in
mapply js xs ==> [49, 25, 14, 10]


These implementations of <=< and ¢ for lists use the "crossing" strategy for pairing up multiple lists, as opposed to the "zipping" strategy. Nothing forces that choice; you could also define ¢ using the "zipping" strategy instead. (But then you wouldn't be able to build a corresponding Monad; see below.) Haskell talks of the List Monad in the first case, and the ZipList Applicative in the second case.
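Here is a sketch of both strategies side by side. The "crossing" ¢ applies every function to every argument; the "zipping" ¢ pairs the boxes off positionally (we assume it truncates at the shorter list, as Haskell's ZipList does):

```ocaml
(* ¢ with the crossing strategy vs. the zipping strategy. *)
let cross fs xs = List.concat (List.map (fun f -> List.map f xs) fs)
let rec zip fs xs =
  match fs, xs with
  | f :: fs', x :: xs' -> f x :: zip fs' xs'
  | _ -> []

let js = [(fun a -> a * a); (fun a -> a + a)]

(* The order-sensitivity example from MapN Law 4's discussion: *)
let l = cross [succ; pred] [1; 2]
let r = cross [(fun f -> f 1); (fun f -> f 2)] [succ; pred]
```

cross js [7; 5] gives [49; 25; 14; 10] as in the mapply example, while zip js [7; 5] gives [49; 10]; and l and r come out as [2; 3; 0; 1] and [2; 0; 3; 1] respectively.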

Sticking with the "crossing" strategy, here's how to motivate our implementation of <=<. Recall that we have on the one hand, an operation filter for lists, that applies a predicate to each element of the list, and returns a list containing only those elements which satisfied the predicate. But the elements it does retain, it retains unaltered. On the other hand, we have the operation map for lists, that is capable of changing the list elements in the result. But it doesn't have the power to throw list elements away; elements in the source map one-to-one to elements in the result. In many cases, we want something in between filter and map. We want to be able to ignore or discard some list elements, and possibly also to change the list elements we keep. One way of doing this is to have a function optmap, defined like this:

let rec optmap (f : α -> β option) (xs : α list) : β list =
  match xs with
  | [] -> []
  | x' :: xs' ->
    (match f x' with
    | None -> optmap f xs'
    | Some b -> b :: optmap f xs')


Then we retain only those αs for which f returns Some b; when f returns None, we just leave out any corresponding element in the result.

That can be helpful, but it only enables us to have zero or one elements in the result corresponding to a given element in the source list. What if we sometimes want more? Well, here is a more general approach:

let rec catmap (k : α -> β list) (xs : α list) : β list =
  match xs with
  | [] -> []
  | x' :: xs' -> List.append (k x') (catmap k xs')


Now we can have as many elements in the result for a given α as k cares to return. Another way to write catmap k xs is as (Haskell) concat (map k xs) or (OCaml) List.flatten (List.map k xs). And this is just the definition of mbind or >>= for the List Monad. The definition of mcomp or <=<, that we gave above, differs only in that it's the way to compose two functions j and k, that you'd want to catmap, rather than the way to catmap one of those functions over a value that's already a list.

This example is a good intuitive basis for thinking about the notions of mbind and mcomp more generally. Thus mbind for the option/Maybe type takes an option value, applies k to its element (if there is one), and returns the resulting option value. mbind for a tree with α-labeled leaves would apply k to each of the leaves, and return a tree containing arbitrarily large subtrees in place of all its former leaves, depending on what k returned.
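A sketch of mbind for the option box, matching the description just given:

```ocaml
(* mbind for the option box: apply k to the boxed element, if there is one. *)
let mbind_option (u : 'a option) (k : 'a -> 'b option) : 'b option =
  match u with
  | None -> None
  | Some a -> k a
```

This reproduces the option examples displayed below: a Some passes its element to k, while a None short-circuits.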

[3, 2, 0, 1]  >>=_(α list)    (\a -> dup a a)  ==>  [3, 3, 3, 2, 2, 1]

Some a  >>=_(α option)  (\a -> Some 0) ==> Some 0
None    >>=_(α option)  (\a -> Some 0) ==> None
Some a  >>=_(α option)  (\a -> None  ) ==> None

      .                                                      .
     / \                                                   /   \
    .   3    >>=_((α,unit) tree)  (\a ->  .  )   ==>      .     .
   / \                                   / \             / \   / \
  1   2                                 a   a           .   . 3   3
                                                       / \ / \
                                                      1  1 2  2


Though as we warned before, only some of the Monads we'll be working with are naturally thought of as "containers"; so in other cases the similarity of their mbind operations to what we have here will be more abstract.

The question came up in class of when box types might fail to be Mappable, or Mappables might fail to be MapNables, or MapNables might fail to be Monads.

For the first failure, we noted that it's easy to define a map operation for the box type R -> α, for a fixed type R. You map a function of type P -> Q over a value of the boxed type ▢P, that is a value of type R -> P, by just returning a function that takes some R as input, first supplies it to your R -> P value, and then supplies the result to your mapped function of type P -> Q. (We will be working with this Mappable extensively; in fact it's not just a Mappable but more specifically a Monad.)
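That recipe for map over R -> α is just post-composition; a sketch:

```ocaml
(* map for the box type R -> α: feed the R input through the boxed value,
   then apply the mapped function to the result. *)
let map_reader (f : 'p -> 'q) (g : 'r -> 'p) : 'r -> 'q =
  fun r -> f (g r)
```

For example, mapping string_of_int over the boxed value (fun r -> r + 1) yields a function that turns 41 into "42".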

But if on the other hand, your box type is α -> R, you'll find that there is no way to define a map operation that takes arbitrary functions of type P -> Q and values of the boxed type ▢P, that is P -> R, and returns values of the boxed type ▢Q.

For the second failure, that is cases of Mappables that are not MapNables, we cited box types like (R, α), for arbitrary fixed types R. The map operation for these is defined by map f (r,a) = (r, f a). For certain choices of R these can be MapNables too. The easiest case is when R is the type of (). But when we look at the MapNable Laws, we'll see that they impose constraints we cannot satisfy for every choice of the fixed type R. Here's why. We'll need to define ⇧a = (r0, a) for some specific r0 of type R. The choice can't depend on the value of a, because ⇧ needs to work for a's of any type. Then the MapNable Laws will entail:

1. (r0,id) ¢ (r,x) == (r,x)
2. (r0,f x) == (r0,f) ¢ (r0,x)
3. (r0,(○)) ¢ (r'',f) ¢ (r',g) ¢ (r,x) == (r'',f) ¢ ((r',g) ¢ (r,x))
4. (r'',f) ¢ (r0,x) == (r0,($x)) ¢ (r'',f)
5. (r0,f) ¢ (r,x) == (r,($x)) ¢ (r0,f)


Now we are not going to be able to write a ¢ function that inspects the second element of its left-hand operand to check if it's the id function; the identity of functions is not decidable. So the only way to satisfy Law 1 will be to have the first element of the result (r) be taken from the first element of the right-hand operand in all the cases when the first element of the left-hand operand is r0. But then that means that the result of the lhs of Law 5 will also have a first element of r; so, turning now to the rhs of Law 5, we see that ¢ must use the first element of its left-hand operand (here again r) at least in those cases when the first element of its right-hand operand is r0. If our R type has a natural monoid structure, we could just let r0 be the monoid's identity, and have ¢ combine other Rs using the monoid's operation. Alternatively, if the R type is one that we can safely apply the predicate (r0==) to, then we could define ¢ something like this:

let (¢) (r1,f) (r2,x) = ((if r0==r1 then r2 else if r0==r2 then r1 else ...), ...)


But for some types neither of these will be the case. For function types, as we already mentioned, == is not decidable. If the functions have suitable types, they do form a monoid with ○ as the operation and id as the identity; but many function types won't be such that arbitrary functions of that type are composable. So when R is the type of functions from ints to bools, for example, we won't have any way to write a ¢ that satisfies the constraints stated above.
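When R does have a monoid structure, on the other hand, the recipe works. Taking R = string, with the empty string as the monoid identity and concatenation as its operation (our own illustrative choice; this is essentially a Writer-style Applicative), a sketch:

```ocaml
(* MapNable operations for the (string, α) box, using the string monoid:
   r0 = "", and ¢ combines the R components by concatenation. *)
let mid a = ("", a)
let mapply (r1, f) (r2, x) = (r1 ^ r2, f x)
let map f (r, a) = (r, f a)
```

Law 1 holds because "" ^ r == r, so ¢ing with an ⇧'d function leaves the right-hand operand's R component intact.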

For the third failure, that is examples of MapNables that aren't Monads, we'll just state that lists where the map2 operation is taken to be zipping rather than taking the Cartesian product (what in Haskell are called ZipLists) are claimed to exemplify that failure. But we aren't now in a position to demonstrate that to you.

As we mentioned above, the notions of Monads have their origin in Category Theory, where they are mostly specified in terms of (what we call) ⇧ and join. For advanced study, here are some further links on the relation between monads as we're working with them and monads as they appear in Category Theory: 1 2 3 4 (where you should follow the further links discussing Functors, Natural Transformations, and Monads) 5

Here are some papers that introduced Monads into functional programming:

• Eugenio Moggi, Notions of Computation and Monads: Information and Computation 93 (1) 1991. This paper is available online, but would be very difficult reading for members of this seminar, so we won't link to it. However, the next two papers should be accessible.

• Philip Wadler. The essence of functional programming: invited talk, 19'th Symposium on Principles of Programming Languages, ACM Press, Albuquerque, January 1992.

• Philip Wadler. Monads for Functional Programming: in M. Broy, editor, Marktoberdorf Summer School on Program Design Calculi, Springer Verlag, NATO ASI Series F: Computer and systems sciences, Volume 118, August 1992. Also in J. Jeuring and E. Meijer, editors, Advanced Functional Programming, Springer Verlag, LNCS 925, 1995. Some errata fixed August 2001.