Earlier we gave a special name to certain unary functions that had the same domain and codomain: when these functions are bijective, we call them **permutations**.

Now we're going to look at another class of functions or relations, all of whose arguments and results come from a single set. These don't necessarily need to be bijective functions or relations.

A **binary operation** on a set \(\Alpha\), or just an **operation** on \(\Alpha\) for short, will be a total function on pairs of \(\Alpha\)s into the codomain \(\Alpha\). There are three aspects of this worth emphasizing: (i) an operation on \(\Alpha\) must be a *total* function on \(\Alpha\times\Alpha\), that is, it must have a defined result for every pair of \(\Alpha\)s. (ii) The operation must be a *function*, that is, there must be a *single* result for applying the operation to any pair of \(\Alpha\)s. And (iii) the operation must be *into* \(\Alpha\), that is, its result for any pair of \(\Alpha\)s must be another \(\Alpha\). Sometimes this is put by saying that the set \(\Alpha\) is **closed** under the operation in question.

By saying the operation is into \(\Alpha\), I don't exclude its being onto \(\Alpha\). It may or may not be.

I'll use \(\star\) and \(\bullet\) to stand for arbitrary operations. Instead of writing \(\star(x,y)\), as we did when using \(f\) as a symbol for functions, we'll instead use "infix" notation and write \(x\star y\).

Now in what follows we're going to need to keep careful track of what set of objects we're working with at each moment. Though I said this was important in the preceding notes, I was sometimes (deliberately) lax about it in the homework exercises. But we've got to start being more careful. One way to proceed would be to assume that whatever operation we're working with has the relevant set "baked" into it as its codomain (and one-half of its domain). So then we could just talk about that operation, and the relevant set would always be recoverable from it. There's nothing intrinsically wrong with going that way. However, things will be easier in some ways if we instead think of the operations as possibly being defined on larger sets than just the set we're working with. For instance, suppose \(+\) is the familiar operation of addition, understood to be defined on all the natural numbers \(\N = \set{0, 1, 2, 3, 4, \dots}\). We might for some purposes want to work with not that set, but instead the set \(\mathbb{E} = \set{0, 2, 4, \dots}\) of just the *even* natural numbers. We can take \(+\) to be an operation on this set too, because it is defined on every pair of \(\mathbb{E}\)s, and the result is always itself a unique member of \(\mathbb{E}\).
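If you like to experiment, here is a small Python sketch of the three requirements just described. The helper name `closed_under` is my own, and of course a program can only spot-check a finite sample of pairs, not verify the whole infinite set:

```python
from itertools import product

def closed_under(op, sample, is_member):
    """Spot-check closure: for every pair drawn from the sample, applying
    op must yield a result that again belongs to the set in question."""
    return all(is_member(op(x, y)) for x, y in product(sample, repeat=2))

is_even_natural = lambda n: n >= 0 and n % 2 == 0

# + on the even naturals: every sum of two evens is again an even natural.
closed_under(lambda x, y: x + y, range(0, 20, 2), is_even_natural)  # True

# By contrast, - is not an operation on the naturals: 0 - 1 falls outside.
closed_under(lambda x, y: x - y, range(5), lambda n: n >= 0)        # False
```

The subtraction example shows how closure can fail even when the function is total and single-valued: the result just isn't always a member of the set.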

HOMEWORK EXERCISE:

- Is \(+\) an operation on the set of *odd* natural numbers? Justify your answer.

So if we permit operations on a set to also be defined over larger sets, we'll need to keep more explicit track of which set we're working with at any moment.

**Algebra** is an umbrella term for a mathematical "package" including objects and operations with certain structures. Part of the package will be the set of objects, which I'll call the algebra's "universe." (Sometimes this is called the algebra's "underlying set.") In some contexts mathematicians allow several universes, but we will stick to algebras with a single universe. Sometimes one or more members of the universe are designated as having special roles; we will see some examples of this shortly. The algebra will also come with one or more binary operations. Sometimes in generalized discussions mathematicians will work with functions that aren't binary, or don't count as operations, or will work with relations as well as or instead of what I'm calling operations. But we will stick to algebras where we've just got the universe, zero or more objects from the universe playing special roles, and one or more operations on the universe.

As shorthand, I will talk about the "elements of the algebra." This means: objects that are members of the set that constitutes the algebra's universe.

The simplest kind of algebra involves just this set and a single operation on that set. Algebras of this kind are called **magmas** or **groupoids**; but this isn't yet a very interesting kind of algebra, so I don't expect you to remember its name. (Since all the algebras we're talking about *do* involve a set and at least one operation on the set, they will *be* groupoids, but they will also have more interesting kinds of algebraic structure as well.)

More interesting kinds of algebras are defined by placing certain constraints on the relevant operation, perhaps positing the existence of some elements of the algebra that satisfy further constraints, perhaps positing additional operations that satisfy yet further constraints.

These constraints are presented in the form of *equalities*. For instance, we may want to talk about algebras \(\tuple{\Alpha,\star}\) where for all elements \(x,y\in\Alpha\):

\(x \star y = y \star x\)

It's conventional to leave the quantifiers off of these equations, but they are always to be understood as having all their free variables universally bound (restricted to whatever universe \(\Alpha\) we're working with).

Operations that satisfy the constraint just described (wrt a set \(\Alpha\)) are called **commutative** (wrt \(\Alpha\)). (An operation might be commutative wrt some sets but fail to be commutative wrt other sets on which it's defined.)

Later we'll be talking about some algebras that do require their operations to be commutative. But I introduced this just as an example. For the time being, we don't necessarily require this.
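To make the constraint concrete, here's a minimal Python spot-check (the helper name `commutative_on` is my own; again, a sample can only suggest commutativity, not prove it, though a single counterexample does refute it):

```python
from itertools import product

def commutative_on(op, sample):
    """Spot-check whether x op y == y op x across a finite sample."""
    return all(op(x, y) == op(y, x) for x, y in product(sample, repeat=2))

commutative_on(max, range(10))                  # True: max(x,y) == max(y,x)
commutative_on(lambda x, y: x - y, range(10))   # False: 0 - 1 != 1 - 0
```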

HOMEWORK EXERCISES:

- Is addition commutative on the natural numbers? Is multiplication?

- Recall our function \(\curl\) from the first class, which took two strings and returned their concatenation. Given a set of strings built out of the letters \(\mathcal{A}\) and \(\mathcal{B}\)---for instance, this set will include the empty string \(\varepsilon\), the string \(\varepsilon\smalltriright\mathcal{A}\), the string \(\varepsilon\smalltriright\mathcal{B}\), and perhaps some more strings---what will this set have to look like for \(\curl\) to count as an operation on it? Would \(\curl\) be a commutative operation on the set? Why or why not?

For some later exercises, I will call the minimal set meeting the conditions described here \(\mathfrak{S}\): that is, \(\mathfrak{S}\) is the set meeting the conditions described above and such that no proper subset of it also does so. So every member of \(\mathfrak{S}\) will be a string that's either empty, or built out of the letters \(\mathcal{A}\) and \(\mathcal{B}\), or is a concatenation of the preceding (and so is also a string built out of the letters \(\mathcal{A}\) and \(\mathcal{B}\)). Nothing else belongs to the set.

- Let the "length" of the empty string be \(0\), the length of the string \(\varepsilon\smalltriright\mathcal{A}\) be \(1\), and in general, the length of a string \(S \smalltriright L\), for any string \(S\) and letter \(L\), be the length of \(S\) plus \(1\). Questions: (a) Is there any upper bound to the lengths of the strings in \(\mathfrak{S}\)? that is, any \(n\in\N\) such that all strings in \(\mathfrak{S}\) have a length \(\le n\)? (b) Prove that every string in \(\mathfrak{S}\) has a finite length, that is, that there is an \(n\in\N\) such that the length of that string is \(\le n\). (Hint: use an inductive argument, where the base clause mirrors the base clause for the definition of strings, and the inductive step mirrors the inductive step in the definition of strings, and we rely on the fact that there are no strings not captured by the clauses you've just established.)

Sometimes when we're working with a mathematical operation, we'll want to talk about multiple applications of it in turn, and it can be cumbersome to explicitly parenthesize each of the applications to keep track of which turn they come in. So conventions develop that the operation is to be understood as "associating to the right" or as "associating to the left," in the absence of explicit parentheses. For example, we understand exponentiation as associating to the right, that is:

\(2^{3^4} = 2^{(3^4)}\), rather than \((2^3)^4\) (which would just be \(2^{3\cdot 4}\))

On the other hand, we understand subtraction as associating to the left, that is:

\(4-3-2 = (4-3)-2\), rather than \(4-(3-2)\)
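Python's own operators happen to follow the same two conventions, so you can check these claims directly:

```python
# Python's ** operator follows the right-association convention:
assert 2 ** 3 ** 4 == 2 ** (3 ** 4)
assert 2 ** 3 ** 4 != (2 ** 3) ** 4
assert (2 ** 3) ** 4 == 2 ** (3 * 4)    # (2^3)^4 is just 2^(3*4)

# Subtraction follows the left-association convention:
assert 4 - 3 - 2 == (4 - 3) - 2 == -1
assert 4 - (3 - 2) == 3
```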

In some happy circumstances, though, we don't need to make this choice, because whether you understood the operator as associating to the left, or to the right, you'd always get the same result. That is, some operators \(\star\) are such that (wrt certain sets), for every \(x,y,z\) (in that set):

\((x \star y) \star z = x \star (y \star z)\)

These operators are called **associative** (wrt the set in question). (An operation might be associative wrt some sets but fail to be associative wrt other sets on which it's defined.) With such operators, it's common to just omit the parentheses altogether and write: \(x \star y \star z\).

Algebras \(\tuple{\Alpha,\star}\) where the operation \(\star\) is associative wrt \(\Alpha\) are called **semigroups**. This is a somewhat more interesting class of algebras than the class of all groupoids, but again it's not yet too interesting, so I don't expect you to remember its name. All of the algebras we go on to discuss will be more specialized versions of this: that is, they will all be algebras with associative operations.

HOMEWORK EXERCISES:

- Prove that \(\curl\) is associative on the set of strings \(\mathfrak{S}\) described in exercise 27.

- Let \(\Beta\) be a set and consider its power set \(2^\Beta\). Is \(\union\) an operation on \(2^\Beta\)? Is it commutative? Is it associative? Justify your answers. What about \(\intersect\)? What about \(-\) (set difference)?

- Again, let \(\Beta\) be a set, but now consider the set \(\Psi\) of permutations on \(\Beta\). Consider the operator \(\circ\) that takes two functions as arguments and delivers their composition as result. Is \(\circ\) an operation on \(\Psi\)? Is it commutative? Is it associative? Justify your answers.

Another term we can introduce at this point is **idempotence**. This term can be applied to unary functions and also to functions of higher arity, like binary operations. A unary function \(f\) is called idempotent when, for all \(x\) in \(f\)'s domain:

\(f(f(x)) = f(x)\)

That is, applying \(f\) multiple times is the same as applying \(f\) once. (But applying \(f\) once may have made a difference.) For example, the absolute value function on the integers is idempotent.

A binary function is idempotent when the result of applying it to a pair of an argument and itself always returns that same argument. For example, the function \(\max\) on the integers is idempotent, because for all \(z\in\Z\):

\(\max(z,z) = z\)

These are related but somewhat different ideas.
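Both ideas are easy to spot-check in Python over a finite sample:

```python
# Unary idempotence: applying abs twice is the same as applying it once.
assert all(abs(abs(x)) == abs(x) for x in range(-5, 6))
# But one application can make a difference:
assert abs(-3) != -3

# Binary idempotence: max applied to a pair of the same argument returns it.
assert all(max(z, z) == z for z in range(-5, 6))
```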

HOMEWORK EXERCISE:

- Is \(\union\) an idempotent operation (on the set \(2^\Beta\) from exercise 30)? Is \(\intersect\)? Is \(-\)?

**Monoids** are an interesting class of algebras whose name and definition you should pay closer attention to. Monoids are semigroups with an **identity element**. We'll explain what an identity element is momentarily. But let's give a complete definition for monoids, rolling in the definition of simpler algebras like semigroups and groupoids that it's a specialization of.

A monoid is defined as an algebra \(\tuple{\Alpha,\star}\) where \(\star\) is an associative operation on \(\Alpha\), and where \(\Alpha\) contains a special member \(a_0\) such that for all \(x\in\Alpha\):

\(x \star a_0 = x = a_0 \star x\)

Note that it's not required in general that \(x \star y = y \star x\). It's permitted but not required. But for the special element \(a_0\), \(x \star a_0\) always does equal \(a_0 \star x\), and these both equal just \(x\) itself. This special element is what we call the identity element of \(\Alpha\) for \(\star\).

In some contexts, mathematicians work with \(a_0\)s that satisfy \(a_0 \star x = x\) but not necessarily \(x \star a_0 = x\); these are called "left identities." Analogously for "right identities." But we are only concerned with objects that are identities both as left arguments to \(\star\) and as right arguments.

One paradigm example of a monoid is the algebra whose universe is the set of natural numbers \(\N\) and whose operation is \(+\). The identity element for this operation is \(0\). Another example of a monoid is the algebra whose universe is the set of positive integers \(\Z^+\) and whose operation is \(*\) (multiplication). The identity element for this operation is \(1\). Generally, when we're talking about a monoid, we'll list the identity element, like this: \(\tuple{\Z^+,*,1}\).
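Both monoid axioms (associativity plus a two-sided identity) can be spot-checked on finite samples of these universes. The helper name `is_monoid_on` is my own:

```python
from itertools import product

def is_monoid_on(op, identity, sample):
    """Spot-check the monoid axioms on a finite sample: op must be
    associative, and `identity` must be a two-sided identity."""
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x, y, z in product(sample, repeat=3))
    ident = all(op(x, identity) == x == op(identity, x) for x in sample)
    return assoc and ident

is_monoid_on(lambda x, y: x + y, 0, range(6))     # True, as in (N, +, 0)
is_monoid_on(lambda x, y: x * y, 1, range(1, 7))  # True, as in (Z+, *, 1)
```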

HOMEWORK EXERCISES:

- Does the algebra described in exercises 27 and 29 have an identity element? What is it?

- (a) Does the algebra described in exercise 30, with the operation being \(\union\), have an identity element? What is it? (b) What if the operation is instead \(\intersect\)?

- Does the algebra described in exercise 31 have an identity element? What is it?

Notice the difference in how we're now talking about "identities": in previous notes, we talked about identity functions and relations. Here it's not the function or operation \(\star\) that's the identity. Instead, it's one of the operation's *arguments* that is the identity. (Although the results of exercise 35 might tell us something about the connections between these different usages.)

There is a general convention in mathematical discussions to symbolize monoid operations using \(+\) and \(0\) in the way I used \(\star\) and \(a_0\). That is, they're not necessarily talking about addition and the number zero, but they use those symbols to talk about *whatever* associative operation they're talking about, and its identity. There is another general convention to use multiplicative symbols instead of additive ones: that is, to use \(*\) and \(1\) in the way I used \(\star\) and \(a_0\). Again, they're not necessarily talking about multiplication and the number one, but they use those symbols to talk about *whatever* associative operation they're talking about, and its identity. Or instead of \(x*y\) they might just concatenate the variables \(x\) and \(y\), in the same way we often use concatenation to express multiplication.

I won't adopt those conventions myself, but you need to be aware of them because you may encounter them. I think it's especially annoying that concatenation is sometimes used to express genuine multiplication, sometimes used to express string concatenation (our \(\curl\)), sometimes used to express function composition, and sometimes used to express other associative operations. And of course, sometimes used to express not function composition but function application. So you might see \(g \, f \, (x,y)\) where that means to take \(g \circ f\) (rather than \(g(f)\), if that's even defined), and to apply (rather than compose) the resulting function to the pair \((x,y)\). Usually you can figure out from context what the authors mean. But it can be a non-trivial exercise.

Can a monoid have several distinct identities? Well, let's suppose that \(a_0\) and \(a_1\) are an arbitrary pair of identity elements for a monoid \(\tuple{\Alpha,\star}\). Then by the definition of an identity element, we have that for all \(x\in\Alpha\):

- (i) \(a_0 \star x = x = x \star a_0\)
- (ii) \(a_1 \star x = x = x \star a_1\)

Now consider \(a_0 \star a_1\). By virtue of the lhs ("left-hand side") of (i), we know that this \(= a_1\). But by virtue of the rhs of (ii), we know that it \(= a_0\). Hence we've proved that \(a_0 = a_1\). So identity elements for a given operation and universe, when they exist, will be unique.

Another kind of special element an algebra might have will be an element \(a_z\) satisfying the constraint that for every element \(x\):

\(a_z \star x = a_z = x \star a_z\)

When such an element exists, it's called a **zero** of the algebra (wrt the operation \(\star\)). Note that \(0\) is not a zero of the monoid \(\tuple{\N,+,0}\), though it is an identity. Two of the algebras I've introduced so far have zeros, in this sense: they were both introduced in exercise 30. What is the zero for the operation \(\union\) on the set \(2^\Beta\)? What is the zero for the operation \(\intersect\) on that set?
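A neutral example, which doesn't give away those two questions: the *number* \(0\) is a zero in this algebraic sense for multiplication on the integers, while for addition it's the identity instead. A quick Python check over a finite sample:

```python
sample = range(-5, 6)

# The number 0 is a zero (in the algebraic sense) for multiplication:
assert all(0 * x == 0 == x * 0 for x in sample)

# But 0 is not a zero for addition; there it is the identity instead:
assert all(0 + x == x == x + 0 for x in sample)
assert not all(0 + x == 0 for x in sample)
```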

Nothing in our definition of a monoid required its operation to be commutative. But for some monoids, their operation will be commutative.

HOMEWORK EXERCISE:

- Which of: \(\tuple{\N,+}, \tuple{\Z^+,*}, \tuple{\mathfrak{S}, \curl}, \tuple{\Psi, \circ}, \tuple{2^\Beta,\union}, \tuple{2^\Beta, \intersect}\) (with the sets as described in previous exercises) are commutative monoids?

Suppose we have a monoid \(\tuple{\Alpha,\star, a_0}\)---I'm listing the monoid's identity element (wrt \(\star\)) explicitly, because of its special role. Given an element \(a\) from \(\Alpha\), we say that another element \(b\in\Alpha\) is an **inverse** of \(a\) (wrt the operation \(\star\)) iff:

\(b \star a = a_0 = a \star b\)

When such a \(b\) exists, and stands in this relation to \(a\), \(b\) is sometimes written as \(-a\) and other times as \(a^{-1}\). I will adopt a third convention: I will write it as \(\bar{a}\). You can use any of these conventions you like.

Note that the question of an inverse can be raised for *each* element of the algebra. The identity element, on the other hand, will be unique for the whole algebra (if it exists).

HOMEWORK EXERCISE:

- (a) Do any elements of the monoid \(\tuple{\mathfrak{S}, \curl,\latin{ this monoid's identity element}}\) have inverses? (b) Do any elements of the monoid \(\tuple{\N, +, 0}\)? (c) What about the monoid \(\tuple{\Z, +, 0}\)?

In exercise 37, perhaps you noticed that the identity of a monoid counts as its own inverse (wrt the relevant operation---I won't keep saying this when it's clear from context). One question we might ask is whether any other element of a monoid can be its own inverse. And the answer is yes, sometimes this is true of other elements as well. But in general, when an element has an inverse, that inverse cannot be relied on to be the very same object.

When we have a monoid where *every* element in its universe has an inverse, that monoid is called a **group**.

Here are some facts about groups, that can be proven from the things we've already said:

- Each element has a unique inverse.

- If \(w\), \(x\), and \(y\) are elements of a group (with operation \(\star\)), then whenever \(x \star w = y \star w\), it follows that \(x = y\).

- Similarly, whenever \(w\star x = w\star y\), it follows that \(x = y\).

- Since \(\bar{a}\) is defined so as to yield \(a_0\) when \(\star\)'d with \(a\) from *both* sides, the question may seem open whether there might be some *other* element \(d\), distinct from \(\bar{a}\), such that \(a \star d = a_0\), but only when \(\star\)'d from the *right* side (or, only when \(\star\)'d from the left side). The answer is no, this is not possible: whenever \(a \star d = a_0\), \(a\) and \(d\) must be each other's inverses.

- The inverse of \(a\)'s inverse is always \(a\) itself.

- There will always be a unique \(x\) satisfying \(x \star a = b\), and it will be \(b \star \bar{a}\). Similarly, there will always be a unique \(y\) satisfying \(a \star y = b\), and it will be \(\bar{a} \star b\).

I'll illustrate how to prove the first of the above claims. Suppose that \(b_1\) and \(b_2\) are two arbitrary inverses of \(a\). Then consider the equation:

\((b_1 \star a) \star b_2 = b_1 \star (a \star b_2)\)

We know this to be true because the group's operation \(\star\) has to be associative. But then since \(b_1\) is an inverse of \(a\), the parenthesized expression on the lhs has to equal the group's identity element \(a_0\). Similarly, since \(b_2\) is an inverse of \(a\), the parenthesized expression on the rhs also has to equal \(a_0\). So we get:

\((a_0) \star b_2 = b_1 \star (a_0)\)

But then since \(a_0\) is an identity for the operation \(\star\), this implies:

\(b_2 = b_1\).

So the "two" inverses \(b_1\) and \(b_2\) cannot be distinct after all.

You should try to prove the other claims for yourself; none of the proofs is difficult. But (at the moment), I'm inclined not to make this an explicit homework. The last of the claims is proved on p. 261 of the Partee reading.

HOMEWORK EXERCISE:

- We said earlier that a function \(f\) has an inverse function \(f^{-1}\) iff \(f\) is bijective. If \(f\) is a permutation from the set \(\Psi\) described in exercise 31, what is the result of composing \(f^{-1}\circ f\)? What is the result of composing \(f \circ f^{-1}\)? What does this tell us about the algebra whose set is \(\Psi\) and whose operation is \(\circ\)?

As with monoids, nothing in our definition of a group requires its operation to be commutative. But for some groups, their operation will be commutative. These are known as **commutative** or **Abelian** groups.

In exercise 37, we saw that the algebra \(\tuple{\Z, +, 0}\) is a group. More specifically, it's a commutative group. Another commutative group is \(\tuple{\Q^+, *, 1}\), where \(\Q^+\) is the set of positive rational numbers. (We have to exclude \(0\), because it has no inverse wrt \(*\): there is no rational \(q\) such that \(0 * q = 1\).)

Consider the algebra whose universe set is the natural numbers less than \(6\): \(\set{0,1,2,3,4,5}\), and whose operation is addition mod \(6\). (\(4+1=5\pmod 6, 4+2=0\pmod 6, 4+3=1\pmod 6, \dots\)) We'll help ourselves to the observations that this operation is associative and commutative, and has \(0\) as an identity. A little more inspection will persuade us that every element of this algebra has an inverse: \(0\) is its own inverse (as identities always are), \(1\) and \(5\) are inverses of each other, \(2\) and \(4\) are inverses of each other, and \(3\) is its own inverse. So this algebra is a commutative group. It's called the group \(\Z_6\). (Similarly for \(\Z_n\) for other numbers \(n\).)
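The "little more inspection" can be delegated to a few lines of Python (the name `add_mod_n` is my own):

```python
n = 6
Z6 = range(n)
add_mod_n = lambda x, y: (x + y) % n

# 0 is a two-sided identity:
assert all(add_mod_n(x, 0) == x == add_mod_n(0, x) for x in Z6)

# Every element has an inverse; pair each element with the first one found:
inverses = {x: next(y for y in Z6 if add_mod_n(x, y) == 0) for x in Z6}
# inverses == {0: 0, 1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
```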

HOMEWORK EXERCISES:

- Recall the notion of symmetric difference, defined back in exercise 12. Consider the algebra defined on the universe from exercise 30 (that is, the power set \(2^\Beta\) of some set \(\Beta\)), and whose operation is symmetric difference. Observe that this operation is associative and commutative. (a) It also has an identity element: what is it? (b) Every element of this algebra, that is, every \(\Theta \se \Beta\), has an inverse wrt the operation of symmetric difference: what is it?

- Consider the "checkerboard game" defined as follows. There's a checkerboard with four squares, which we'll name NW, NE, SE, and SW. A single checker is located on one of the squares. It can move either vertically (from NW to SW, or from SW to NW, and so on), or horizontally (from NW to NE, or from NE to NW, and so on), or diagonally (from NW to SE, or from SE to NW, and so on). Or it can stay in place. Consider the set consisting of these four moves (\(\set{\mathrm{vertical}, \mathrm{horizontal}, \mathrm{diagonal}, \mathrm{stayinplace}}\)). Consider the operation of performing two of these moves in sequence: \(\mathrm{vertical} \star \mathrm{horizontal}\) means first move horizontally, then move vertically. (We order the moves in the same way we order functions when describing their composition.) Prove that this set of moves taken together with that operation form a group.

One of you asked: how should we demonstrate that the operation in exercise 40 is associative? Do we have to verify each possible combination? No, I don't expect you to do that. But you should be able to justify the claim that the operation is associative, on the grounds that a move can be understood as a f___ from s___s to s___s, and f___ c___ is associative, a claim you justified in an earlier exercise whose number rhymes with "bedouin". (At least so says the Great Google.) Fill in the blanks.

Groups are a very interesting kind of algebraic structure. You can find lots of math books and courses on "group theory"; I don't think you can find any on "monoid theory." Though a number of notions that are especially interesting for philosophers are monoids. (Also, a generalization of monoids called "monads"---no relation to Leibniz---has come to play a central role in recent linguistics and philosophy of language. Most linguists and philosophers aren't aware of the background mathematics, and so don't realize they are in fact working with "monads.")

There are many more interesting algebraic structures. An algebra textbook I'm looking at estimates that more than 200 of them have been seriously studied. We are obviously not going to attempt to familiarize ourselves with all of them.

However, there is another more complex algebra that it will help to take a brief look at. This is the notion of a **ring**.

Like monoids and groups, rings have a universe set. But they have *two* operations, instead of a single one. Each of the operations has to satisfy certain constraints, and they also have to interact with each other in a specific way. There is some variation in how much is built into the definition of a ring, but we will use this definition:

A ring is defined as an algebra \(\tuple{\Alpha, \bullet, \star, \mathbf{zero}}\) where: (i) \(\bullet\) is an associative operation on \(\Alpha\); (ii) \(\star\) is an associative operation on \(\Alpha\) with identity \(\mathbf{zero}\); (iii) every element \(x\in\Alpha\) has an \(\star\)-inverse \(\bar{x}\); (iv) \(\star\) is commutative; and (v) for all \(x,y,z\in\Alpha\):

\(x \bullet (y \star z) = (x \bullet y) \star (x \bullet z)\)

\((y \star z) \bullet x = (y \bullet x) \star (z \bullet x)\)

Constraints (ii)--(iv) tell us that we have a commutative group wrt \(\star\), and (v) tells us how the two operators interact: it's summarized by saying that \(\bullet\) **distributes over** \(\star\).
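In the familiar ring of integers (discussed below), \(\bullet\) is multiplication and \(\star\) is addition, and both distribution equations can be spot-checked directly:

```python
from itertools import product

sample = range(-4, 5)
for x, y, z in product(sample, repeat=3):
    assert x * (y + z) == (x * y) + (x * z)   # left distributivity
    assert (y + z) * x == (y * x) + (z * x)   # right distributivity
```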

It can be proven that in any ring, the \(\star\)-identity will be a "zero" wrt the \(\bullet\) operation, in the sense we defined before, namely that for every element \(x\):

\(\mathbf{zero}\bullet x = \mathbf{zero}= x \bullet \mathbf{zero}\)

This is what justifies calling it "\(\mathbf{zero}\)". This *need not* be the same thing as the number \(0\), though of course in some salient examples it will be.
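Here is a sketch of how the left half of that claim can be proved (this is my reconstruction; the proof only uses facts already established). Since \(\mathbf{zero}\) is the \(\star\)-identity, \(\mathbf{zero} = \mathbf{zero}\star\mathbf{zero}\). Then by the second distribution equation:

\(\mathbf{zero}\bullet x = (\mathbf{zero}\star\mathbf{zero})\bullet x = (\mathbf{zero}\bullet x)\star(\mathbf{zero}\bullet x)\)

But also \(\mathbf{zero}\bullet x = (\mathbf{zero}\bullet x)\star\mathbf{zero}\), again because \(\mathbf{zero}\) is the \(\star\)-identity. So \((\mathbf{zero}\bullet x)\star\mathbf{zero} = (\mathbf{zero}\bullet x)\star(\mathbf{zero}\bullet x)\), and cancelling \(\mathbf{zero}\bullet x\) on the left (using the cancellation fact about groups listed earlier) gives \(\mathbf{zero} = \mathbf{zero}\bullet x\). A parallel argument using the first distribution equation handles \(x\bullet\mathbf{zero}\).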

The most familiar example of a ring is the algebra \(\tuple{\Z, *, +, 0}\), where the universe is the set of integers, the operator required to distribute is multiplication, and the operator required to commute is addition with identity \(0\). However, it can be misleading to take that as a paradigm of a ring, since that particular algebra also has other special properties that other rings don't share. For instance, multiplication is also commutative, but the definition of a ring does not require this. Also, multiplication itself has an identity in that algebra (the number \(1\)), but again the definition of a ring does not require that. Rings with those further properties form interesting more specific algebras.

Another thing that can be proven about rings is that, when \(\bullet\) has an identity, it is equal to the \(\star\)-identity iff every element in the algebra's universe is equal to the \(\star\)-identity. (That is, the algebra's "1" = its \(\mathbf{zero}\) iff all of its elements = its \(\mathbf{zero}\).) Rings with only that small universe are called "trivial."

In rings which have an identity for \(\bullet\) (a "1"), sometimes elements in the universe have \(\bullet\)-inverses. There are only two of these in the ring \(\tuple{\Z, *, +, 0}\) (namely, \(1\) and \(-1\)), but there are many more in the ring \(\tuple{\Q, *, +, 0}\). (Again, \(\Q\) is the set of rational numbers.) The \(\mathbf{zero}\) never has a \(\bullet\)-inverse, unless the ring is trivial and \(\mathbf{zero}\) is the only element. But some interesting rings have \(\bullet\)-inverses for every element *other than* their \(\mathbf{zero}\). When these rings also have commutative \(\bullet\), they are called **fields**. The ring I just mentioned, whose universe is \(\Q\), is a field. There are also some fields with only finitely many elements.

We earlier listed \(\Z_6\), and more generally \(\Z_n\), as an example of a group. It is also an example of a ring, where the \(\bullet\) operation is multiplication mod \(n\). The specific cases where \(n\) is a *prime* number turn out to be fields.
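You can see the difference between prime and composite \(n\) by computing which elements of \(\Z_n\) have \(\bullet\)-inverses (the helper name `units` is my own, though "unit" is also the standard term for an element with a multiplicative inverse):

```python
def units(n):
    """The elements of Z_n that have an inverse wrt multiplication mod n."""
    return {x for x in range(n)
            if any((x * y) % n == 1 for y in range(n))}

units(5)   # {1, 2, 3, 4}: every nonzero element, so Z_5 is a field
units(6)   # {1, 5}: 2, 3, and 4 have no inverses, so Z_6 is not a field
```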

An **integral domain** is a kind of algebra in between rings and fields. (That is, every field is an integral domain, and every integral domain is a ring, but neither of the converse entailments holds.) This class of algebras is interesting because it comes closer to capturing the specific structure of the integers. It doesn't yet quite capture just the integers: some interestingly different structures also count as integral domains. But when we add some extra constraints about how it's possible to "order" the elements of an integral domain, then it turns out that only the integers satisfy these conditions---or, more precisely, that any algebra satisfying them is "isomorphic to" the integers. We will explain what "isomorphic" means on the next web page, and we will talk about "ordering" things in a few weeks.

I've moved some more detailed comments about integral domains to a separate web page.

HOMEWORK EXERCISES:

- In exercise 39, we settled that \(\tuple{2^\Beta, +,\latin{ some }b_0}\) was a commutative group. (\(+\) here is the operation of symmetric difference between sets; you had to figure out what the identity element \(b_0\) was.) Consider the algebra where we take symmetric difference to be the \(\star\) operation, and \(\intersect\) to be the \(\bullet\). (a) Prove that that algebra is a ring. (b) Does the \(\bullet\) operation in that ring commute? (c) Does the \(\bullet\) operation have an identity?

- What if we replaced the operation of symmetric difference with the operation \(\union\)? Would that be a ring?

A **Boolean ring** is a ring where the \(\bullet\) operation is idempotent. That is, for all \(x\) in the algebra's universe, \(x \bullet x = x\). The ring described in exercise 41 illustrates this. It's demonstrable that in any such ring, every element is its own \(\star\)-inverse. See if you can prove this. (Not a homework exercise---though if you do prove it, you could use it to answer exercise 42.)

Boolean rings are intertranslatable with what are called "Boolean algebras"; though the latter are introduced using other apparatus that we'll come to later.