Phil 455: Lattices (and More on Orders)

Talking about Partial Orders and their Domains

If some binary relation ⊑ on a domain Α is a partial order, we need to be able to talk both about the domain and the relation (“the domain’s ordering”).

Ordering a Set in Multiple Ways

The elements of ℕ have the familiar (total/linear) ordering:

Example 1
0 < 1 < 2 < 3 < 4 < 5 < ...

But we can also order them in other ways. The examples we’ll consider here are all also total/linear orders; later we’ll consider more examples of orderings that are merely partial. Here’s another linear ordering of ℕ:

Example 2
0 ⊏ 2 ⊏ 4 ⊏ ... ⊏ 1 ⊏ 3 ⊏ 5 ⊏ ...

There all the odds are ordered higher than all the evens. Or the elements of ℕ could be ordered like this:

Example 3
0 ⊏ 2 ⊏ 3 ⊏ 4 ⊏ 5 ⊏ ... ⊏ 1

Here just one number is ordered higher than all the rest.
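Orderings like these can be realized concretely as sort keys. Here is a minimal sketch in Python (the encoding of Example 2 as a pair-valued key is my own illustration, not part of the notes):

```python
# Example 2's ordering, 0 ⊏ 2 ⊏ 4 ⊏ ... ⊏ 1 ⊏ 3 ⊏ 5 ⊏ ...,
# realized as a sort key: tuples compare componentwise, so all evens
# (first component 0) come before all odds (first component 1).
def example2_key(n):
    return (n % 2, n)

print(sorted(range(8), key=example2_key))  # [0, 2, 4, 6, 1, 3, 5, 7]
```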

Sometimes we’ll talk about orderings “with the same structure.” We’ll make that notion more precise a few classes/webpages from now, when we talk about the notion of an “isomorphism.” But the intuitive idea should be clear. When we’re talking about an ordering “structure,” we don’t care about the identities of the particular elements in the ordering. So this would be a different order relation on ℕ than the one in Example 3, but it has the same structure:

Example 4
0 ⊏ 1 ⊏ 3 ⊏ 4 ⊏ 5 ⊏ ... ⊏ 2

This ordering of the set of strings {empty,"z","a","aa","aaa","aaaa",...} would also have that same structure:

Example 5
empty ⊏ "a" ⊏ "aa" ⊏ "aaa" ⊏ "aaaa" ⊏ ... ⊏ "z"

On the other hand, this ordering with two numbers higher than all the others would have a different structure:

Example 6
0 ⊏ 3 ⊏ 4 ⊏ 5 ⊏ ... ⊏ 1 ⊏ 2

Here are three finite orderings, all with the same structure:

Examples 7
0 ⊏ 1 ⊏ 2 ⊏ 3
6 ⊏ 4 ⊏ 2 ⊏ 0
"a" ⊏ "b" ⊏ "c" ⊏ "d"

Here are three more orderings to consider (each with different structures than the other examples):

Example 8
... ⊏ 6 ⊏ 4 ⊏ 2 ⊏ 0 ⊏ 1 ⊏ 3 ⊏ 5 ⊏ ...

In Example 8, all the evens are ordered first, with numbers that are smaller by the familiar ordering < ordered higher, then still higher in the ordering are all the odds, with numbers that are larger by the familiar ordering < ordered higher.

Example 9
0 ⊏ ... ⊏ 6 ⊏ 4 ⊏ 2 ⊏ ... ⊏ 5 ⊏ 3 ⊏ 1

Example 9 is like Example 2, in that all the odds are ordered higher than all the evens. But now all the positive numbers are ordered with numbers that are smaller by the familiar ordering < ordered higher.

Example 10
2 ⊏ 4 ⊏ 8 ⊏ 16 ⊏ ... ⊏ 3 ⊏ 9 ⊏ 27 ⊏ ... ⊏ 5 ⊏ 25 ⊏ 125 ⊏ ... ⊏ 7 ⊏ 49 ⊏ ...

Example 10 is an ordering of a proper subset of ℕ: those numbers that can be represented as pᵏ for some prime p and some positive k ∈ ℕ. In this ordering, all the numbers in the domain divisible without remainder by 2 are ordered lowest, then the numbers divisible without remainder by 3, and then those divisible by 5, then by 7, and so on for all primes p.
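Example 10’s ordering can likewise be realized as a sort key: order the prime powers first by their base p, then by their exponent k. A sketch in Python (the helper names are my own):

```python
# Each number in Example 10's domain is p**k for a prime p and k >= 1.
# Sorting by the pair (p, k) yields 2 ⊏ 4 ⊏ 8 ⊏ ... ⊏ 3 ⊏ 9 ⊏ ... ⊏ 5 ⊏ ...
def prime_base(n):
    p = 2
    while n % p:          # smallest divisor > 1; for a prime power, its base
        p += 1
    return p

def example10_key(n):
    p, k = prime_base(n), 0
    while n > 1:
        n //= p
        k += 1
    return (p, k)

nums = [2, 3, 4, 5, 8, 9, 16, 25, 27, 49]
print(sorted(nums, key=example10_key))  # [2, 4, 8, 16, 3, 9, 27, 5, 25, 49]
```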

When authors talk of order-types like ω, or ω+1, or ω², and so on, they are referring to some of these ordering structures. The three finite orderings in Examples 7 have an ordering structure called “order type 4.” The familiar ordering structure in Example 1 is called “order type ω.” The structure in Examples 3, 4, and 5 is called “order type ω+1.” The structure in Example 6 is called “order type ω+2.” The structure in Example 2 is called “order type ω+ω.” The structure in Example 10 is called “order type ω².” There are many more of these.

Some of our examples (8 and 9) don’t have names in this scheme, because their structure differs in an important way from the rest. Notice how in those orderings, there are regions where you can “infinitely descend” in the ordering, in the sense of keep finding a next lower element, without getting to an element that has no next lower element. (By “descending” and “lower” here we mean with respect to the order in question, not with respect to the familiar ordering <.) Whereas with, say, Example 2, if you descend from any odd, you have to eventually get to 1, which has no next lower element. In other words, for any x ⊏ 1, there is a y such that x ⊏ y ⊏ 1. If you descend from any even, you have to eventually get to 0, which has no lower element at all. We can put this by saying that whereas Examples 8 and 9 have infinitely descending chains, the others do not. (Most of them do have infinitely ascending chains.) We’ll learn some vocabulary and apparatus for talking about this difference below.

The naming scheme for order types that we were considering is used only for the cases without infinitely descending chains. (So not Examples 8 and 9.)

In all of these examples, I illustrated the order types using (subsets of) ℕ, or using strings from a finite alphabet. There are some order types that those sets cannot exemplify with any order relation, because they don’t have enough elements. We won’t be talking about those order types in this class; you may do so if you get into serious set theory. In a few classes/webpages, we will be considering how to think about an infinite set “not having enough elements.”

Some authors won’t just use ω and the like as names for order types, but will talk about them as being specific ordinal numbers and/or sets. When they do that, they are working with some reduction or definition of numbers in terms of pure sets, and the set they call, for example, ω+1, will have elements that, using their definition of "<", have what we’re calling the order type ω+1. They will count it as a number, but it won’t be an element of ℕ (so for them, "<" is defined on a superset of ℕ).

We won’t need to talk about reducing/defining numbers in terms of pure sets, or ordinal numbers in the sense just described. You will do so if you get into serious set theory.

Minimal and Least Elements of Subsets

Suppose we have a poset (Α, ⊑). We’re going to start talking about what’s true about the relation for elements of various subsets Β ⊆ Α.

  1. One interesting notion is when a poset (Α, ⊑) is such that every non-empty subset of Α (it needn’t be a finite subset) has at least one minimal element with respect to ⊑. This is called a well-founded ordering. (The notion where you require maximal elements instead is called “converse or upwards well-founded,” or “Noetherian.” It’s much less often discussed.) One example of this is the natural numbers ℕ, ordered by the familiar <. Indeed, all of Examples 1 through 7 given above, and also Example 10, have this property.

    Here are some examples that are merely partially ordered:

    Example 11
    The positive integers, ordered by the relation “divides without remainder,” that is the relation whose graph is {(n,x) | x mod n = 0}. You met a version of this relation (restricted to a smaller domain) in Homework 3 problem 6. Note that the ordering specified here differs from the one in Example 10, not just because here we are ordering numbers like 1 and 6 that Example 10 does not, but also because in Example 10 it was stipulated that 2 ⊏ 3. But in the current example 2 and 3 are incomparable.
    Example 12
    The set of all strings formed from letters "a" and "b", ordered by the relation of being a substring of. (This is the relation that in Homework 1 problem 2 we called anyWhere.)
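Both relations are easy to experiment with. A small sketch in Python (the function name is mine; Python’s `in` operator implements the substring test):

```python
# Example 11: n ⊑ x iff x mod n == 0 ("n divides x without remainder").
def divides(n, x):
    return x % n == 0

# 2 and 3 are incomparable here, unlike in Example 10's ordering:
print(divides(2, 3), divides(3, 2))  # False False
print(divides(2, 6))                 # True

# Example 12: the substring relation on strings of "a"s and "b"s
# (what Homework 1 problem 2 called anyWhere); `in` tests exactly this.
print("ab" in "bab", "ba" in "aab")  # True False
```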

    Some posets that aren’t well-founded orderings are Examples 8 and 9 above. Also:

    Examples 13
    Posets containing all the negative integers, ordered by the familiar <, because we can form subsets of these which have no minimal element.
    Example 14
    The set of positive rationals has no minimal element with respect to <, and even if you also included 0 in the domain, there would still be subsets that have no minimal element.

    When you have a well-founded ordering that’s also a total/linear order (as in Examples 1 through 7 and 10, but not in Examples 11 and 12), it’s said that the relation ⊑ well-orders its domain. These are the ordering structures where the names “order type ω” and so on are applied.

    Here’s what makes these notions interesting.

    • We learned how to do inductions in cases where we had a domain ordered like we do in Example 1. First, we prove that the base case 0 has some property, and also we prove that whenever some arbitrary element k has the property, the next element (or elements) in the order have the property too. We initially did this with natural numbers, but at some point we did it with orderings of other domains too, including cases like Example 12 (there we could have several “next elements”). What if we had a domain where the natural ordering for proving things was more like Example 2, repeated here:

      Example 2
      0 ⊏ 2 ⊏ 4 ⊏ ... ⊏ 1 ⊏ 3 ⊏ 5 ⊏ ...

      There’s a difficulty here, in that 1 is not a “next element in the order” for any element. (For any x ⊏ 1, there is a y such that x ⊏ y ⊏ 1.) We could use the same proof strategy as before, if we were able to include 1 as an additional base case and directly prove that it has the property of interest, when we prove that 0 does. But sometimes we might not be able to easily give such a direct proof for 1. There’s another strategy we could use in these cases. Instead of proving that whenever some element k has the property, the next element(s) in the ordering has it too, we prove something of this form (for all k in the domain):

      If ∀j ⊏ k (element j has the property of interest), then element k has the property of interest.

      If we proved that, we wouldn’t even need to give direct proofs that the minimal elements in the ordering like 0 have the property of interest, because they satisfy the antecedent of this conditional trivially. So proving the conditional would be enough to establish that they have the property. It should be intuitive that if the conditional is true for the property and type of ordering we’re considering, then element 1 will have the property, and so too every element in the ordering. (We won’t prove this rigorously, but that can be done.)

      The strategy of proving a conditional of that form, where ⊑ is a well-founded ordering on the domain, is called well-founded or strong or transfinite induction.
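Well-founded induction has a computational mirror: well-founded recursion, where the clause for k may rely on the result for any j ⊏ k, not just an immediate predecessor. A sketch in Python, using the familiar < on the positive integers (the example is mine, not from the notes):

```python
# To factor k we may assume factorizations are available for every
# j < k; the recursive call is always on a <-smaller number, and < is
# well-founded on the positive integers, so the recursion terminates.
def factorization(k):
    if k == 1:
        return []             # the minimal case
    d = 2
    while k % d:              # find the smallest divisor > 1
        d += 1
    return [d] + factorization(k // d)

print(factorization(60))  # [2, 2, 3, 5]
```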

    • Cases where the ordering is not merely well-founded but also total/linear (and so a well-ordering of the domain) have a further interesting property, that any subset with upper bounds will have a least upper bound (these notions to be explained below). An application of this is to give a recipe that always specifies a unique “next element” in the ordering (except for the greatest element, if there is such). The recipe is this. You start with some (non-greatest) element k of the domain. Take the set of all elements ⊐ k in the ordering. Since the ordering is total/linear and k is not greatest, this set will be non-empty. Since the order is well-founded, this set must have minimal elements. Since the order is total/linear (and anti-symmetric), there will be just one minimal element and it will be least. This is the “next element” in the ordering.

      Some orderings that aren’t well-founded and total/linear are also ones where we can always find a “next element,” such as Example 8 (not well-founded), or Example 2 if we make the evens and odds incomparable (no longer total/linear). But this general recipe for specifying the next element can’t be used in such cases.
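The “next element” recipe can be written out directly for a finite domain. A sketch in Python (the function and encoding are my own, assuming `leq` is a well-ordering of `domain`):

```python
# Take everything strictly above k; totality plus well-foundedness
# guarantee that set has a unique least element, the "next element."
def next_element(domain, leq, k):
    above = [x for x in domain if leq(k, x) and x != k]
    for x in above:
        if all(leq(x, y) for y in above):
            return x
    return None  # k was the greatest element

# The second ordering of Examples 7: 6 ⊏ 4 ⊏ 2 ⊏ 0
domain = [0, 2, 4, 6]
leq = lambda x, y: x >= y     # ⊑ here is the reverse of the usual <=
print(next_element(domain, leq, 4))  # 2
```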

  2. Another interesting notion is when a poset (Α, ⊑) is such that some Β ⊆ Α has upper bounds. These will be elements u ∈ Α (they don’t necessarily have to be ∈ Β) such that ∀b ∈ Β (b ⊑ u). So this is like the notion of a greatest element of Β, except that we are no longer requiring that it be an element of Β. It just needs to be an element of the (perhaps larger) domain that ⊑ is defined on.

    The notion of an upper bound makes sense for pre/quasiorders, too, not just partial orders. In that case, the set of upper bounds may contain more than one element that is least with respect to the order, and this would complicate our discussion. Henceforth, we’ll focus just on partial orders, where any set can have at most one least element.

    For any such Β, Α, and ⊑, there are four possibilities:

    (a) Β may have no upper bounds
    (b) Β may have some upper bounds, but none of them are in Β and none is least with respect to the order
    (c) Β has a least upper bound but it’s not an element of Β
    (d) Β has a least upper bound that is an element of Β

    Whenever Β has an upper bound that is an element of Β, it will be the greatest element of Β and be Β’s least upper bound. Conversely, whenever Β has a greatest element, it will be a least upper bound for Β. But there are also cases (a), (b), and (c), where there are no upper bounds that are elements of Β.

    An example of case (a) is where our poset is (ℕ, ≤), and the subset Β is the evens. There is no element of ℕ such that every even number is ≤ it.

    An example of case (b) is where our poset is {"a","b","ab","ba"} ordered by the relation of being a substring, and the subset Β is {"a","b"}. This has two upper bounds — "ab" and "ba" — but they are not comparable by our ordering, so neither is least.

    Another example of case (b) we discussed in class used a total/linear ordering. This was the poset (the rationals, ≤), and the subset Β was {x | x² < 2}. The subset has no greatest element: for any rational less than √2, there’s another larger one that’s still less than √2, and so still ∈ Β. But there are plenty of rationals larger than any element of Β, and so which are upper bounds on Β. For example, 3/2. But there is no least element of these upper bounds. For any rational greater than √2, there’s another smaller one that’s still greater than √2.

    An example of case (c) could use the same poset as in the previous example, but this time let the subset Β be {x | x < 1}. This Β does have a least upper bound in our poset, namely 1, but 1 ∉ Β.

    The least upper bound of a subset Β is also called Β’s join or supremum. It can be symbolized as lub(Β) or sup(Β) or ⋁Β. When Β consists of two elements {b₁,b₂}, it can also be written as a binary operator: b₁ ∨ b₂. In all of these notations it will be left implicit what the order relation ⊑ and superset domain Α are. Keep in mind that depending on the ordering and subset chosen, a join/lub may not exist.

    The notions of a lower bound, and of a greatest lower bound, are defined analogously. The greatest lower bound of a subset Β is also called Β’s meet or infimum, and can be symbolized as glb(Β) or inf(Β) or ⋀Β. When Β consists of two elements {b₁,b₂}, it can also be written as a binary operator: b₁ ∧ b₂.
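For finite posets, the cases above can be checked mechanically. A sketch in Python (the function names are mine), reproducing the case-(b) substring example:

```python
# u is an upper bound of B iff every b in B is ⊑ u; the lub, if any,
# is the upper bound that is ⊑ every other upper bound.
def upper_bounds(A, leq, B):
    return [u for u in A if all(leq(b, u) for b in B)]

def lub(A, leq, B):
    ubs = upper_bounds(A, leq, B)
    for u in ubs:
        if all(leq(u, v) for v in ubs):
            return u
    return None  # no least upper bound

A = ["a", "b", "ab", "ba"]
substr = lambda x, y: x in y
print(upper_bounds(A, substr, ["a", "b"]))  # ['ab', 'ba']
print(lub(A, substr, ["a", "b"]))           # None: neither bound is least
```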

Lattices

We define a join semilattice to be any poset that has a join/lub for any non-empty finite subset. For such orderings, and any pair of elements a,b from their domain, a ∨ b will be defined. Equivalently, you could state the definition as requiring that any pair of elements has a join/lub. Any single element will be its own join/lub; so this requirement ensures that any non-empty finite subset has a join/lub. We’ll discuss infinite and empty subsets later.

The notion of a meet semilattice is defined analogously, requiring any non-empty finite subset to have a meet/glb.

A lattice can be defined as a poset that is both a join and a meet semilattice.
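A standard concrete example (my addition, not from the notes): the positive integers under the divisibility ordering of Example 11 form a lattice, with meet = gcd and join = lcm. A sketch:

```python
import math

# In the divisibility order, glb{a,b} is gcd(a,b): it divides both a
# and b, and every common divisor divides it. Dually, lub{a,b} is lcm.
def meet(a, b):
    return math.gcd(a, b)

def join(a, b):
    return a * b // math.gcd(a, b)  # lcm

print(meet(12, 18), join(12, 18))  # 6 36
```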

Considering join and meet as operations on pairs of elements, they will be commutative, associative and idempotent operators. (As we saw in Homework 2 problem 14, the last means that applying the operation to the same argument twice results in that same argument.) They also satisfy this property, for all a,a′,b,b′ in the ordering’s domain:

(a ⊑ a′ and b ⊑ b′) ⊃ (a ∨ b) ⊑ (a′ ∨ b′)

The same holds with ∧ substituted for ∨.

It follows from our definitions that join and meet will interact with each other as follows (these are called “the absorption laws”):

a ∨ (a ∧ b) = a
a ∧ (a ∨ b) = a

(Compare Homework 2 problems 7a and 7b.)

It follows also that they’ll interact with the order relation like this:

a ⊑ b iff b = a ∨ b iff a = a ∧ b

(Compare Homework 2 problems 7c and 7d.) We’ll return to these equivalences below.
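These equivalences can be spot-checked in the divisibility lattice (meet = gcd, join = lcm, ⊑ = divides); the check below is my own illustration:

```python
import math

def divides(a, b):
    return b % a == 0

# a ⊑ b  iff  b = a ∨ b  iff  a = a ∧ b, checked on a finite range.
for a in range(1, 30):
    for b in range(1, 30):
        lcm = a * b // math.gcd(a, b)
        assert divides(a, b) == (b == lcm) == (a == math.gcd(a, b))
print("equivalences hold on 1..29")
```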

Top and Bottom

If a lattice has a non-empty finite domain, by definition the whole domain must have a join/lub, and this will be the greatest element of the domain. It’s called top, also written as ⊤. Similarly, the whole domain must have a meet/glb, and this will be the least element of the domain. It’s called bottom, also written as ⊥. (As we said on the More on relations webnotes, these terms are sometimes used for partial orders more generally, even if they’re not lattices. The symbols ⊤ and ⊥ also have other uses.)

If the lattice has an infinite domain, greatest and/or least elements are not required to exist, but they may. When they do exist, we say the lattice is bounded. (Sometimes this term is also used for partial orders more generally, even if they’re not lattices.)

When a lattice has a top/greatest element (the join of its whole domain), we say that the meet of the empty subset, ⋀∅, is also defined to be top. When a lattice has a bottom/least element (the meet of its whole domain), we say that the join ⋁∅ is also defined to be bottom. These definitions follow from the fact that upper bounds and lower bounds are defined in such a way that every element of the domain is both an upper and a lower bound for the subset ∅. So any least element of the domain is a least upper bound for ∅, and similarly for greatest lower bound.

Here’s a way to think of the identifications ⋁∅ = ⊥ and ⋀∅ = ⊤ that may help make them more memorable or natural. There’s an intuitive correspondence between the join operation and disjunction in propositional logic, between the meet operation and conjunction in propositional logic, between top and true, and between bottom and false. (Indeed, this correspondence is more than merely intuitive, but we don’t need to make it more rigorous at present.) If you’re taking the disjunction of three propositions p, q, and r, you could understand that as (p or q) or r, or as p or (q or r), or as “some of {p,q,r} are true.” Similarly, you could understand their conjunction as “all of {p,q,r} are true.” Then it makes sense to think of the disjunction of no propositions as false, and the conjunction of no propositions as true. Correspondingly, the join ⋁∅ is defined to be bottom, and the meet ⋀∅ is defined to be top.

It follows from our definitions that when top and/or bottom elements exist, they will satisfy these laws:

a ∧ ⊤ = a
a ∨ ⊥ = a
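In the lattice of all subsets of a set S, ordered by ⊆ (with join = union, meet = intersection, ⊤ = S, ⊥ = ∅), these laws are easy to see. A sketch (my own illustration):

```python
S = frozenset({1, 2, 3})
top, bottom = S, frozenset()   # ⊤ = S, ⊥ = ∅
a = frozenset({1, 3})

print(a & top == a)     # a ∧ ⊤ = a  (intersection with S)
print(a | bottom == a)  # a ∨ ⊥ = a  (union with ∅)
```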

Infinite Subsets

The definitions of (semilattices and so) lattices only require the existence of joins and meets for non-empty finite subsets. We just saw that when a lattice has top and/or bottom elements, meets and joins of empty subsets (and of the whole domain, which may be infinite) will also be defined. What about arbitrary infinite subsets? They are not required by the definition to have meets and joins, but depending on the ordering they may. When they always do, the lattice is called complete.

Being complete implies that joins and meets of the whole domain exist, and these will be a top and bottom for the lattice. So any complete lattice will be bounded.

Other special kinds of lattices

Lattices are not required to satisfy these laws:

a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)

But some lattices do, and they are called distributive.

For a bounded lattice, where top and bottom elements exist, two elements a,z of the domain count as complements when these two identities hold:

a ∨ z = ⊤
a ∧ z = ⊥

In general, an element of a bounded lattice may have no complements, or more than one. For a distributive lattice, the complement of a when it exists is unique; and also “De Morgan”-like laws hold. (The complement of a ∧ b will be the join of the complement of a and the complement of b; and so on.)

When a bounded lattice is such that every element does have a complement, and the lattice is also distributive, it’s called a Boolean algebra.
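The subsets of a set S under ⊆ are the standard example of a Boolean algebra: every subset has a (unique) complement, and the De Morgan-like laws hold. A spot-check (my own illustration):

```python
S = frozenset({1, 2, 3, 4})
a, b = frozenset({1, 2}), frozenset({2, 3})
comp = lambda x: S - x   # the set-theoretic complement within S

print(a | comp(a) == S)                  # a ∨ z = ⊤
print(a & comp(a) == frozenset())        # a ∧ z = ⊥
print(comp(a & b) == comp(a) | comp(b))  # De Morgan: complement of a ∧ b
```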

When the poset (Α, ⊑) is any kind of lattice, a sublattice is defined to be a subset Μ of Α such that for all a,b ∈ Μ, a ∨ b ∈ Μ and a ∧ b ∈ Μ. In other words, it’s a subset of the domain that is closed under the lattice’s join and meet operations.

The Algebraic Approach to Lattices

We’ve been approaching lattices from the perspective of their order relations. We defined a join as the least element (wrt the order in question) of a subset’s upper bounds (wrt the order in question). Similarly for meet. These definitions allowed us to consider the cases where join and meet were applied to sets of two elements, and to treat them like associative, commutative, idempotent binary operators on the poset’s domain.

Another approach to lattice structures is to start with an associative, commutative, idempotent binary operator ⊛ on some domain. We can then “treat” that operator “as a meet,” and use it to define a relation ⊑ as follows:

a ⊑ b =def a = a ⊛ b

You can then prove that the ⊑ so defined will be a partial order, and moreover one that satisfies the requirements to be a meet semilattice.

Alternatively, you can treat ⊛ “as a join,” and use it to define a relation ⊑:

a ⊑ b =def b = a ⊛ b

You can then prove that the ⊑ so defined will be a partial order, and moreover one that satisfies the requirements to be a join semilattice.

In either case, if the defined order also satisfies the requirements to be the other kind of semilattice, you’ll have a lattice and can understand the dual operator (that is, join if you’re treating ⊛ as a meet) in terms of the lub (or glb) properties of the defined order ⊑.
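As an illustration (mine, not from the notes): min on the integers is associative, commutative, and idempotent. Treating it as a meet, the defined ⊑ turns out to be the familiar ≤, and the partial-order axioms can be checked on a finite range:

```python
# Treat min as a meet and define a ⊑ b as a = a ⊛ b, i.e. a = min(a, b).
op = min
leq = lambda a, b: a == op(a, b)

xs = range(-5, 6)
assert all(leq(a, a) for a in xs)                          # reflexive
assert all(not (leq(a, b) and leq(b, a)) or a == b
           for a in xs for b in xs)                        # anti-symmetric
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a in xs for b in xs for c in xs)            # transitive
assert all(leq(a, b) == (a <= b) for a in xs for b in xs)  # it's just <=
print("min-induced order is the usual <=")
```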

Alternatively, suppose you have two associative, commutative, idempotent operators ⊛ and ⊙ on the same domain, and they interact as follows:

a ⊛ (a ⊙ b) = a
a ⊙ (a ⊛ b) = a

(Compare the “absorption laws” for join and meet mentioned above.) In that case, you can choose either of the operators to be the meet, and the other to be the join, and you’d get a lattice using the order relations defined above. (The two definitions of ⊑ would be equivalent.)
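gcd and lcm are a concrete pair of such operators (my example): each is associative, commutative, and idempotent, they satisfy the absorption laws, and the two induced definitions of ⊑ agree, both giving the “divides” order. A spot-check:

```python
import math

lcm = lambda a, b: a * b // math.gcd(a, b)

for a in range(1, 25):
    for b in range(1, 25):
        assert lcm(a, math.gcd(a, b)) == a       # absorption: a ⊙ (a ⊛ b) = a
        assert math.gcd(a, lcm(a, b)) == a       # absorption: a ⊛ (a ⊙ b) = a
        meet_order = (a == math.gcd(a, b))       # treat gcd as meet
        join_order = (b == lcm(a, b))            # treat lcm as join
        assert meet_order == join_order == (b % a == 0)
print("absorption and agreement verified on 1..24")
```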