% Collections, with and without order
\newcommand\emptystring{\varepsilon}
\newcommand\snoc{\smalltriright}
### Strings ###
We began by talking about a kind of collection I called **strings**. These are also sometimes called "ordered sequences" or "lists," or several other things. But I'm calling them strings because paradigm cases of these collections are linguistic expressions, like a word or phrase or sentence or novel. (Or gibberish. We're making no assumptions at this point about whether our strings are syntactically well-formed or have any semantic properties.)
One kind of string is the empty string. This could be written as "" or as $\emptystring$.
Other kinds of strings are built out of an initial string and a final "letter". We should decide in advance what our "letters" are going to be. If we're thinking of the strings as words, we'll naturally take English letters to be our "letters." If on the other hand we're thinking of the strings as sentences, it might be more natural to take English words as the "letters." From the point of view of the string, all of its letters are atoms. The string is blind to any additional structure that its letters may have.
\newcommand\lettera{\mathcal{A}}
\newcommand\letterb{\mathcal{B}}
\newcommand\letterc{\mathcal{C}}
In class, I proposed we use as letters the following: $\lettera, \letterb, \letterc, \textbf{:-)}, \textbf{:-(}$. I will symbolize the operation by which a letter is appended to the end of an initial string with a small right-pointing triangle, like this:
> initial_string $\snoc$ new_letter
Hence, all of the following are strings of length 1:
> $\emptystring \snoc \lettera$
> $\emptystring \snoc \letterb$
> $\emptystring \snoc \textbf{:-)}$
And the following are strings of length 2:
> $(\emptystring \snoc \lettera) \snoc \letterb$
> $(\emptystring \snoc \lettera) \snoc \lettera$
> $(\emptystring \snoc \textbf{:-)}) \snoc \lettera$
> $(\emptystring \snoc \textbf{:-)}) \snoc \textbf{:-(}$
We'll adopt the convention that this symbol is to be understood as "associating to the left", so that:
> $\emptystring \snoc \lettera \snoc \letterb$
means the same as:
> $(\emptystring \snoc \lettera) \snoc \letterb$
Now we're in a position to give a **recursive** or **inductive definition** of a string. We do that like this:
1. $\emptystring$ is a string.
2. If $S$ is a string and $L$ is a letter, then $S \snoc L$ is a string.
3. Nothing else is a string.
The second clause relies on the very notion we're defining, but that's okay because the definition has to bottom out at some point in the base clause. Alternatively, you can think of this definition as "generating" the class of strings, by first including the items picked out by any base clauses (here, $\emptystring$), and then giving a rule that says how the class gets extended stage-by-stage. We don't place any limit on how many "stages" it might take to include in all the items we want to count as strings; but we do have to explicitly rule out that anything else is a string.
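If it helps to see how a definition like this behaves, here is a minimal sketch in Haskell (the names `Letter`, `Str`, `Empty`, and `Snoc` are my own, standing in for the official notation). The two clauses become the two ways of building a value, and the "nothing else" clause is enforced by the language, since nothing has type `Str` except what the constructors build:

```haskell
-- A hypothetical enumeration standing in for the five letters from class.
data Letter = LetterA | LetterB | LetterC | Smile | Frown
  deriving (Show, Eq)

-- Clause 1: Empty is a string. Clause 2: Snoc s l is a string whenever
-- s is a string and l is a letter. Clause 3 is automatic.
data Str = Empty
         | Snoc Str Letter
  deriving (Show, Eq)

-- The length-2 string (epsilon snoc A) snoc B:
exampleStr :: Str
exampleStr = Snoc (Snoc Empty LetterA) LetterB
```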
There's a technique for converting any recursive definition of this sort into an explicit, non-recursive definition. We'll talk about this later in the course.
Here's another recursive definition:
1. $0$ is a natural number.
2. If $n$ is a natural number, then the successor of $n$ is also a natural number.
3. Nothing else is a natural number.
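For comparison, here's the same Haskell idiom applied to this definition (again with hypothetical names):

```haskell
-- Clause 1: Zero is a natural number. Clause 2: Succ n is a natural
-- number whenever n is. Nothing else has type Nat.
data Nat = Zero | Succ Nat
  deriving (Show, Eq)

-- The number 3, generated in three stages from the base clause:
three :: Nat
three = Succ (Succ (Succ Zero))
```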
One thing you might think about is why we had to put these definitions in the 1-2-3 form we did. Why couldn't we instead say:
> $n$ is a natural number iff $n$ is $0$ or $n$ is the successor of a natural number.
Hint: does this inadequate definition exclude the possibility that $\pi$ is a natural number? If you think it does, try to explain why $\pi$ is excluded.
In some cases, the "nothing else" clause is guaranteed by the other clauses, or by other assumptions you're making, and doesn't need to be explicitly added; we'll see an example shortly. But in these definitions it is needed.
Okay, we've discussed the operation by which you take a string and a letter and make a new string. A different sort of operation would be one where you take two strings and concatenate them, possibly getting a new string. I say "possibly" because it may be that one of your starting strings was $\emptystring$, and when you concatenate that to another string, the result is just the other string. But for instance:
> the concatenation of $\emptystring \snoc \lettera \snoc \letterb$ with $\emptystring \snoc \letterb \snoc \letterc$ will be the string $\emptystring \snoc \lettera \snoc \letterb \snoc \letterb \snoc \letterc$
\newcommand\concat{\curl}
I will use the symbol $\concat$ to represent concatenation.
In class, we figured out how to define concatenation in terms of operations we already knew. We came up with another recursive definition for this. We assume that $S$ and $T$ are strings. Then there are just two cases, depending on the structure of $T$:
1. If $T$ is $\emptystring$, then $S \concat T$ is $S$.
2. Else $T$ has the form $T_\latin{start} \snoc L$, where $T_\latin{start}$ is a string and $L$ is a letter. In this case, $S \concat T$ is $(S \concat T_\latin{start})\snoc L$.
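In the hypothetical Haskell sketch from earlier, these two cases translate directly into two equations, recursing on the structure of the second argument:

```haskell
-- Concatenation, following the two cases above.
cat :: Str -> Str -> Str
cat s Empty      = s                  -- case 1: S concat epsilon is S
cat s (Snoc t l) = Snoc (cat s t) l   -- case 2: S concat (T snoc L) is (S concat T) snoc L

-- Reproduces the earlier example:
-- cat (Snoc (Snoc Empty LetterA) LetterB) (Snoc (Snoc Empty LetterB) LetterC)
--   == Snoc (Snoc (Snoc (Snoc Empty LetterA) LetterB) LetterB) LetterC
```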
Strings illustrate a few important ideas. For one thing, they gave us an opportunity to familiarize ourselves with recursive definitions. For another thing, they give us one example of a **collection**: a kind of thing that (may) contain other things. There is a third important idea they illustrate, which we'll come to a bit further down the road.
Some authors talk about strings in such a way that the individual letters are identified with what I would call the length-1 strings that contain them. In certain contexts, this won't get you into trouble. But in other contexts it will, so I've tried to keep these two notions conceptually distinct. Consider: I haven't placed any constraints on what can count as letters. What if we permit strings to themselves be letters? Consider the letter which is the length-2 string $\emptystring \snoc \lettera \snoc \letterb$. Make a string out of that one letter: what is its length? 1 or 2? If you append the length-2 string $\emptystring \snoc \letterc \snoc \lettera$ to the end of the string you just made, do you get a length-2 string, each of whose letters is itself a length-2 string---as I'd naturally expect? Or do you get a length-1 string, whose single letter is a string of length 4? Or a length-3 string? Or something else? How should the $\snoc$ operation tell where the last letter of its left-hand operand is?
If we distinguish between strings and the letters they're made of, even when the string is of length 1, then this all becomes unproblematic.
The first HOMEWORK EXERCISE I assigned was: (1a) give a recursive definition of the factorial function. And (1b) prove in a rigorous way (presumably with a mathematically inductive argument) that the factorial of every $n \ge 2$ is even.
In this class, it will be common to give mathematically inductive arguments.
* If we want to prove things about every formula in a language, then we'll "induce on the structure" of the formulas. That is, our base clauses will talk about the atomic formulas, and our inductive steps will concern all the different ways in which compound formulas can be built up out of things already counted as formulas.
* If we want to prove things about all provable formulas, then our base clauses will concern axioms, and our inductive steps will concern all the different ways in which formulas can be proven from other formulas already counted as provable.
### Other collections ###
There are a variety of collections that a mathematician or logician might work with. Strings pay attention to the order of their elements; they also pay attention to the multiplicity of their elements. The string $\emptystring \snoc \lettera$ is different from the string $\emptystring \snoc \lettera \snoc \lettera$. Other collections make other choices. **Sets**, for example, don't pay attention to either the order or the multiplicity of their elements. The set $\set{a, a, b}$ is the same as the set $\set{b, a}$. We will be working with sets a lot this term.
Another collection, less often discussed, is a **multiset**. This pays attention to the multiplicity of its elements but not their order. So a multiset containing the elements $a, a,\latin{ and }b$ will be the same as a multiset containing the elements $a, b,\latin{ and }a$, but different from a multiset containing only $b$ and $a$ (once).
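If you want something concrete to play with, one common way to model a multiset (a sketch, not the only choice) is as a finite map from elements to positive counts: order is forgotten, multiplicity is kept.

```haskell
import qualified Data.Map as Map

-- A multiset modeled as a map from elements to their counts.
type Multiset a = Map.Map a Int

fromListMS :: Ord a => [a] -> Multiset a
fromListMS = foldr (\x m -> Map.insertWith (+) x 1 m) Map.empty

-- fromListMS "aab" == fromListMS "aba"  -- order is forgotten
-- fromListMS "aab" /= fromListMS "ab"   -- but multiplicity is not
```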
A bit later, we'll discuss graphs and trees, which are collections having more complex structures.
### Sets ###
I'll try to stick to the convention of naming sets with capital Greek letters, like $\Delta$ and $\Gamma$. But I may sometimes slip; and there is no uniform practice. (And sometimes ☺ I will use capital Greek letters like $\Alpha, \Beta, \dots$)
As we said, sets are collections that may have elements or members. Their members may, but need not, also be sets. When the members aren't sets, they're called **urelements** or **individuals** or **atoms**. When $d$ is a member of set $\Delta$, we write $d \in \Delta$. The empty set, written as $\emptyset$ or $\set{}$, has no members.
For a given kind of thing, call it a $\Delta$, there is exactly one empty set of $\Delta$s: the one that contains no $\Delta$s. Where $\Delta$s and $\Gamma$s are kinds of things that are fundamentally different in certain ways, it's less clear and more contentious whether the empty set of $\Delta$s and the empty set of $\Gamma$s are the same entity. But we will just ignore this debate.
Sets can have lots of members, and they can contain sets which have lots of members, but on the standard picture, no set can have itself as a member. Nor can there be an infinite descending chain of sets, each of which is a member of the next outermost set. That is, no sets like this:
> $\set{0, \set{1, \set{2, \set{3, \dots}}}}$
These claims aren't indisputable, and alternative set theories have been developed which don't respect them. But those kinds of controversies aren't going to be our topic here. We're just going to assume we're working with well-behaved sets, we're not going to worry about Russell's paradox, and so on.
Sometimes mathematicians or logicians will specify a set by listing all its members:
> $\set{0, 16, 128}$
Other times, they'll specify the set using **set-builder notation**, which can look like this:
> $\set{y \where y\in\N \latin{ and }\exists x\in \N ( y = x^2 ) }$
> $\set{y\in\N \where \exists x\in \N ( y = x^2 )}$
> $\set{x^2 \where x\in\N}$
All three of these expressions specify the same set. Sometimes this is instead written like this: $\set{x^2 : x\in\N}$.
In the informal metalanguage you're reading now, I will always specify what domain I'm drawing from when I use a quantifier. Thus I write $\exists x\in \N$ and not just $\exists x$. In the *formal* languages we'll begin talking about in a few weeks, we'll at first only have the second, "unrestricted" quantifier. But we'll talk later about introducing the first, "restricted" kind of quantifier into the languages we're studying.
I'll assume you're familiar with basic set-theoretic relations and operations:
> $\Gamma \union \Delta$
> $\Gamma \intersect \Delta$
> If $\Gamma_0, \Gamma_1, \dots, \Gamma_n$ are sets, then another way to write $\Gamma_0 \union \Gamma_1 \union \dots \union \Gamma_n$ is $\Union_{i=0}^n \Gamma_i$. Similarly for $\Intersect_{i=0}^n \Gamma_i$.
> If $\Delta$ is a set of sets, then $\Union\Delta$ is the union of all the members of $\Delta$. Or you could write $\Union_{d\in\Delta} d$. Similarly for $\Intersect\Delta$.
> The "difference" between $\Gamma$ and $\Delta$---that is, the set of all members of $\Gamma$ that aren't also members of $\Delta$---is sometimes written $\Gamma \setminus \Delta$, but I prefer to write it just as $\Gamma - \Delta$
> Sometimes people write $\overline{\Delta}$ or $\overline{\Delta \union \Gamma}$. Here they are assuming some specific larger set $\Omega$ that $\Delta$ and $\Gamma$ are subsets of, and these are just alternative ways of specifying $\Omega - \Delta$ and $\Omega - (\Delta \union \Gamma)$.
$\Delta$ is a subset of $\Omega$ when everything that's a member of $\Delta$ is also a member of $\Omega$. We write this relation as $\Delta \se \Omega$. $\Omega$ may have additional members besides the things that are also in $\Delta$. Or it may not. If it doesn't, then $\Omega$ has exactly the same members as $\Delta$. And as we understand sets, that means that they are exactly *the same* set. So among the sets that $\Delta$ is a subset of is itself.
If you want to say that $\Delta$ is a subset of *and not identical to* some other set, then you say that $\Delta$ is a *proper subset* of the other set, and you write that like this: $\Delta \subset \Gamma$. For the most part this term, we'll be working with the $\se$ relation and rarely if ever with the $\subset$ relation.
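If you'd like to experiment with these relations and operations, Haskell's Data.Set library provides them under predictable names:

```haskell
import qualified Data.Set as Set

gamma, delta :: Set.Set Int
gamma = Set.fromList [1, 2, 3]
delta = Set.fromList [3, 4]

-- Set.union gamma delta        == Set.fromList [1,2,3,4]
-- Set.intersection gamma delta == Set.fromList [3]
-- Set.difference gamma delta   == Set.fromList [1,2]  -- what I write as gamma - delta
-- Set.isSubsetOf gamma gamma   == True                -- every set is a subset of itself
```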
More HOMEWORK EXERCISES: (2) Let $\Gamma = \set{1}$ and $\Delta = \set{1,\set{1}}$. Then: (a) Why is $\Gamma \in \Delta$? (b) Why is $\Gamma \se \Delta$? (c) What is $\Gamma \intersect \Delta$? (d) Is it always the case when $\Alpha \se \Beta$ that $\Alpha \intersect \Beta = \Beta$? (This last question contains a mistake. Fix the mistake and then answer the corrected question.)
As we said, sets are the same when they have exactly the same members. For collections like multisets, which care about the multiplicity of their elements, the collections will be the same when they have all the same elements *to the same multiplicity*. For collections like strings, which care about the order of their elements, the collections will be the same when they have all the same elements *in the same order*. For collections with other sorts of structure, the rules will be analogous. There's a general idea here, that the identity of a collection depends upon the identity of its elements and possibly on the structure by which the collection organizes them, and on nothing more. You can't have two collections with all the same elements organized (in the way that collection cares about) in the same way, yet those collections be numerically distinct. This general idea is called **extensionality**. (The root term "extensional" also has other meanings you will be familiar with, and that we'll come back to later in the term. These meanings are conceptually related, but don't expect that you could derive specifically the one meaning from the other. We'll encounter this kind of situation fairly often in our studies.)
When mathematicians or logicians work with a collection, they'll generally be understanding the collection to be one that's extensional in the way I described. But in computer science contexts, this can't be taken for granted. There it can be useful to introduce notions of structures that aren't extensional, where we *can* have two numerically distinct but indiscernible collections, with the same elements organized in the same way. We're not going to get into those contexts here. I'm just flagging that though the idea of an extensional collection is very natural and useful to work with, it's not conceptually mandatory, and in some formal settings we can make good sense of, and it's useful to work with, collections that aren't extensional.
The **power set** of a set $\Omega$ is the set of all subsets of $\Omega$. This includes $\Omega$ itself, and it includes the empty set $\emptyset$. That's because all of the empty set's members (all none of them) are also members of $\Omega$. Sometimes the power set of $\Omega$ is written $\varpow{\Omega}$, but I will write it as $\pow{\Omega}$.
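Computed over lists (temporarily ignoring order and multiplicity), the power set falls out of a short recursion: the subsets of a set either contain a given element or they don't. A sketch:

```haskell
-- The subsets of (x:xs) are the subsets of xs, plus those same subsets
-- with x added in. (Recent versions of Data.Set also ship a powerSet.)
powerset :: [a] -> [[a]]
powerset []     = [[]]
powerset (x:xs) = rest ++ map (x:) rest
  where rest = powerset xs

-- powerset [1,2] == [[],[2],[1],[1,2]]
```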
When two sets have members in common, we say that the sets **overlap**. When the sets fail to have any members in common, we say that they're **disjoint**. (The empty set is an edge case: should we define "disjointness" in a way to count it as disjoint from other sets, because their intersection is empty? Or should we count it as non-disjoint, since after all, it's a subset of those other sets?) When two sets fail to be numerically identical, then---regardless of whether they overlap or are disjoint---we say that the sets are **distinct**.
In **mereology**, the formal theory meant to model our intuitive understanding of part-whole relations, they also use the term "overlapping." But there instead of "disjoint," they talk about things with no parts in common as being wholly "discrete." Don't confuse these vocabularies.
(There's also another use of "discrete" that comes from topology; this is the sense in which the integers count as discrete relative to the reals.)
### Partitions and Covers ###
A **partition** of a set is a division of the set into one or more (non-empty) "cells", where everybody from the original set gets to be in one of the cells, and none of the cells overlap. So, for example, here is one partition of the set $\set{1,2,3}$:
> $\set{1} \qquad \set{2,3}$
The partition is itself understood to be a set---the set containing the two sets just listed. But I'll write the partition as I did above, to make the intuitive idea more obvious. Another partition of the set $\set{1,2,3}$ is:
> $\set{1,3} \qquad \set{2}$
Another HOMEWORK EXERCISE: (3) How many different partitions of the set $\set{1,2,3}$ are there?
A **cover** is similar to a partition, except that now we're allowed to have some overlap. It's allowed (but not required) that some of the members of the original set get to be in multiple cells. So both of the above partitions of $\set{1,2,3}$ also count as covers of that set. But so too does this:
> $\set{1,3} \qquad \set{2,3}$
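Both notions are easy to check mechanically. Here's a sketch using Data.Set, representing the family of cells as a list (the function names are my own):

```haskell
import qualified Data.Set as Set

-- A cover: non-empty cells, each a subset of omega, jointly exhausting omega.
isCover :: Ord a => Set.Set a -> [Set.Set a] -> Bool
isCover omega cells =
     all (\c -> not (Set.null c) && c `Set.isSubsetOf` omega) cells
  && Set.unions cells == omega

-- A partition: a cover whose cells are pairwise disjoint.
isPartition :: Ord a => Set.Set a -> [Set.Set a] -> Bool
isPartition omega cells = isCover omega cells && pairwiseDisjoint cells
  where
    pairwiseDisjoint []     = True
    pairwiseDisjoint (c:cs) =
      all (\d -> Set.null (Set.intersection c d)) cs && pairwiseDisjoint cs

-- isPartition (Set.fromList [1,2,3]) [Set.fromList [1,3], Set.fromList [2]]   == True
-- isCover     (Set.fromList [1,2,3]) [Set.fromList [1,3], Set.fromList [2,3]] == True
-- isPartition (Set.fromList [1,2,3]) [Set.fromList [1,3], Set.fromList [2,3]] == False
```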
Covers come up sometimes in linguistics. I don't think we'll need to work with them in anything we'll be doing.
### Tuples (Ordered pairs, triples, $\dots$) ###
Sometimes we want to talk about several items taken from several sets. We do this using the notion of an ordered pair (or triple, or $\dots$). An ordered pair $(a, b)$ has a "first" element $a$ and a "second" element $b$. Two ordered pairs count as the same just in case they have the same first elements and the same second elements. The two elements might both be the same kind of thing (they might both be members of the set of natural numbers $\N$, for example). Or they might be different kinds of things (maybe $a$ is a number but $b$ is a person). When $\Alpha$ and $\Beta$ are two (not necessarily distinct) sets, we describe the set of all ordered pairs whose first element comes from $\Alpha$ and whose second element comes from $\Beta$ as $\Alpha \times \Beta$. We can write this as:
> $\Alpha \times \Beta = \set{(a,b) \where a\in\Alpha \latin{ and } b\in\Beta}$
This is called the **Cartesian product** of $\Alpha$ and $\Beta$. The set of ordered triples from $\Alpha$, $\Beta$, and $\Gamma$ we write as:
> $\Alpha \times \Beta \times \Gamma = \set{(a,b,c) \where a\in\Alpha \latin{ and } b\in\Beta \latin{ and } c\in\Gamma}$
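For finite collections, the Cartesian product is a list comprehension away. A sketch over lists (which also lets the two element types differ, echoing the number/person example above):

```haskell
-- One pair for each way of choosing a first element and a second element.
cartesian :: [a] -> [b] -> [(a, b)]
cartesian as bs = [ (a, b) | a <- as, b <- bs ]

-- cartesian [1,2] "xy" == [(1,'x'),(1,'y'),(2,'x'),(2,'y')]
```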
You might wonder: is there something like a notion of "adding" two sets, in the way that this is a notion of taking their "product"? There is. Whereas with the product of two sets, we get an element from the first set *and* an element from the second set, with the sum of the sets we'll get an element from the first set *or* an element from the second set. This may remind you of the operation of taking the **union** of the two sets.
However, there's a wrinkle. What if we're taking the product or the sum of two overlapping sets, or of the very same set?
With $(a_1, a_2)$, a member of the product $\Alpha \times \Alpha$, we have an element from $\Alpha$ in its role as the left-hand operand of the product and an element from $\Alpha$ in its role as the right-hand operand. And we can *recover* which role each element came from $\Alpha$ as playing: that is, we can tell the difference between $(a_1, a_2)$ and $(a_2, a_1)$. Similarly, a member of the *sum* of $\Alpha$ and $\Alpha$ will *either* come from $\Alpha$ playing the role of the left-hand operand of the sum, or from it playing the role of the right-hand operand. But how can we tell which? To get the closest parallel to the notion of a Cartesian product, we should be able to keep track of this.
The way this is usually done is by taking not the union of the summed sets, themselves, but rather the union of their-members-with-a-tag-attached indicating whether the element belongs to the sum by virtue of coming from the left-hand or from the right-hand operand. Sums understood in this way are called **tagged** or **disjoint unions**. If we model the tagging in terms of an ordered pair, and help ourselves to two primitive objects $\mathsf{Left}$ and $\mathsf{Right}$, then we can define the tagged union of $\Alpha$ and $\Beta$---with $\Beta$ allowed to be identical to, or a subset of, or to overlap, or to be completely disjoint from $\Alpha$---as: $\set{(\mathsf{Left},a) \where a\in\Alpha} \union \set{(\mathsf{Right},b) \where b\in\Beta}$.
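Haskell speakers will recognize this construction: the Prelude's Either type is exactly a tagged union, with the constructors Left and Right playing the role of the tags $\mathsf{Left}$ and $\mathsf{Right}$:

```haskell
-- Defined in the Prelude:  data Either a b = Left a | Right b
-- Even when both operands are the same set, the tag records which side
-- each element came from:
sumExample :: [Either Int Int]
sumExample = [Left 1, Right 1]  -- two distinct members of the sum of Int with itself
```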
This summing operation is neither more nor less "correct" than the more familiar operation $\union$. For some purposes (the ones that will dominate our attention this semester) the latter is more appropriate; for other purposes, the former is.
I've seen each of the following notations used for disjoint union: $\uplus$, $\udot$, $\sqcup$, $\amalg$, and $+$. Since we won't be employing this notion, we don't need to pick a notation for it.
The notion of an ordered pair's "order" can be misleading. Really the important thing is just that we keep track of which element comes from which set, or which element comes from a single set playing the role of one operand rather than another. Instead of the "first" element and the "second" element, we could instead talk about the "west" element and the "east" element. And for ordered quadruples, about the "north", "south", "east" and "west" elements. If someone then took it in mind to ask whether the west element comes before or after the east element, this question wouldn't have any established sense.
Similarly, if you take the ordered quadruple $(a,b,c,d)$, we might call it "increasing" iff $a \le b \le c \le d$. But I could just as easily define another notion, call it "ascending", which holds iff $a \le d \le b \le c$. And I submit there's no sense in which one of these two notions is more intrinsically natural or less gruesome than the other.
With strings, on the other hand, the collection does have a more natural intrinsic ordering. It's genuinely more natural to count the letter $\letterb$ as coming "between" the letters $\lettera$ and $\letterc$ in $\emptystring \snoc \lettera \snoc \letterb \snoc \letterc$ than it is to count $\letterc$ as coming "between" $\lettera$ and $\letterb$, because of the way the string is built up.
Now in many mathematical and logical texts, you will see authors taking a very casual attitude towards these differences. It's also common to see authors propose reductions of some of these notions to the others, in ways that trample on the distinctions. For example, it's common to see the ordered triple $(a,b,c)$ defined as the pair $(a,(b,c))$ whose second element is itself a pair. But there might be settings where I need to differentiate between pairs that contain pairs and triples. In many computer science settings, for example, where we pay close attention to the **types** of things we're working with, conflating these can be disastrous.
So I'm going to downplay these kinds of reductions in this class. It's a philosophically substantial question whether any of these identifications or reductions are correct, and usually there are even good mathematical reasons to resist them, albeit reasons that come from taking a look at broader applications than the author who proposed the reduction was considering. Even if in the end we decide to accept some such reductions, I don't think it's right to be offering them in the guise of *introductory definitions* of the notions in question.
Whether these reductions are to be accepted or resisted is not central to the material we're aiming to learn this semester. It makes no fundamental difference to the mathematics. So I won't spend much time on it. I will myself tend to refrain from reductions like
$(a,b,c) = (a,(b,c))$ that you see other authors going in for. The other authors may say that's what a triple *is*, or how it's *defined*. I'll tend to say instead that the author is *mathematically modelling* or *representing* some notion (a notion we have some intuitive, somewhat pre-theoretical grasp on) in terms of $(a,(b,c))$. In some settings it's worth keeping track of the difference. That's why I'm flagging these things for you. I also think you'll have better conceptual hygiene if you're more aware of the choices to be made here. But for the material we'll be focusing on, it won't matter in any direct way.
As I mentioned in class, some of the important respects in which strings and tuples differ are that:
* strings are length-insensitive; that is, if an operation is defined to work on strings of one length, typically you should expect the operation to make intuitive sense for, and be defined on, shorter and longer strings too
* similarly, strings are homogeneous; that is, their letters will always come from a single set or type
* so we should think of a length-2 string of numbers as itself being the same type of thing as a length-3 string of numbers. And we should only expect to find strings whose letters are a mix of numbers and people if we think of the string as generated from a larger letter set, that contains both numbers and people. And if we're going to allow such strings, we should be equally willing to work with strings that contain people and numbers in different orders.
On the other hand:
* tuples are length-sensitive. An operation that's defined to work on ordered pairs might in *some* cases *be sensibly extensible* to an operation that works on ordered triples, but there should be no presumption that it will be. If you're expecting pairs as arguments, there's no reason you should be able to deal with triples as well.
* tuples may be homogeneous, but may also be heterogeneous. We'll very often be working with things like pairs of numbers and people (with no expectation that the operations we're defining will also work on pairs of people and numbers, or pairs of people and people).
These are just rough gestures; but I hope they're enough to give you some intuitive sense of why these notions might be worth keeping separate.
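For what it's worth, Haskell's type system wears this distinction on its sleeve: list types are homogeneous and length-insensitive, while tuple types are length-sensitive and possibly heterogeneous:

```haskell
-- One type covers number-strings of every length:
numbers :: [Int]
numbers = [1, 2, 3]

-- A pair of a number and a word: exactly two slots, of different types.
pair :: (Int, String)
pair = (1, "alice")
```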
Instead of $(a,b)$, some authors will write $\tuple{a,b}$. I think this notation tends even more strongly to be correlated with ignoring the differences between tuples and strings.
Another sort of reduction you will often encounter in math and logic texts is one that identifies ordered pairs with certain kinds of sets. There are different ways to do this. The most widespread proposal is that:
> $(a,b) = \set{\set{a},\set{a,b}}$
As with the preceding reductions, I take this in the spirit of "Okay, let's suspend judgment about whether these different kinds of abstract collections really do essentially coincide in this way, or whether the nature of sets is really more fundamental than that of a pairing. I'll just assume that for present purposes the author proposes to model or represent pairs in terms of sets in this way. Let's go along with that for the time being." And in fact, it almost never makes any mathematical difference whether pairs can be reduced to sets in this way. Perhaps it makes a difference to the philosophy of mathematics; but that is not our topic here.
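If you'd like to see the reduction at work, here's a sketch of the Kuratowski representation using Data.Set. Notice one cost that a typed setting makes visible immediately: both components are forced to come from the same type, whereas a genuine pair can be heterogeneous.

```haskell
import qualified Data.Set as Set

-- (a, b) modeled as {{a}, {a,b}}.
kpair :: Ord a => a -> a -> Set.Set (Set.Set a)
kpair a b = Set.fromList [Set.fromList [a], Set.fromList [a, b]]

-- The property that matters: kpair a b == kpair c d iff a == c and b == d.
-- kpair 1 2 /= kpair 2 1
-- kpair 1 1 == Set.fromList [Set.fromList [1]]   -- the inner sets collapse
--                                                -- when a == b, but the pair
--                                                -- is still recoverable
```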
Sometimes ordered pairs, triples, and so on---the general class of things I will call n-tuples or just **tuples**---are referred to as "ordered sets." Avoid this usage; it will be too confusing when we look at a different notion of ordered set, a bit later in the class.
When you're working with triples from the set $\Alpha \times \Alpha \times \Alpha$, it's customary to also write this as $\Alpha^3$. But then that raises the question whether:
> $\Alpha \times (\Alpha \times \Alpha)$, that is, $\Alpha \times \Alpha^2$, should $= \Alpha^3$ as well?
Those who identify $(a,b,c)$ with $(a,(b,c))$ will say sure, because that's just the same thing as $\Alpha \times \Alpha \times \Alpha$. But then should it also be the case that:
> $(\Alpha \times \Alpha) \times \Alpha = \Alpha^3$
and so that $((a,b),c)$ always $= (a,(b,c))$? This would be disastrous. The $\Alpha^3$ notation papers over this issue. Nonetheless, I will sometimes go along with the notation because it's very prevalent.
Another question posed by that notation: what is the relation between $\Alpha^1$ and $\Alpha$? Should we take these to be the same? Many authors will just uncritically say yes, but in fact there are many interesting complexities to this. There's a strong case for acknowledging two kinds of tuple, one of which verifies this identification, and the other of which falsifies it. Many computing languages primarily work with the latter notion.
In some theoretical settings, again ones closer to computer science than the material we'll be working with, it's useful to also introduce a notion $\Alpha^0$, that is, the ordered tuple that has zero elements. There is only one of these (at least, up to isomorphism). Sometimes the set of these tuples (all one of them) is called $1$. That should remind you of my practice of sometimes using the symbol $2$ to refer to the set of truth-values.
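Haskell again offers a concrete instance: its unit type `()` has exactly one value (also written `()`), and it's standardly glossed as the tuple with zero elements:

```haskell
-- The one and only zero-tuple:
zeroTuple :: ()
zeroTuple = ()
```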
Don't confuse $\Alpha^2$ and $\pow{\Alpha}$!
Another HOMEWORK EXERCISE: (4) What is the difference between $\Alpha^2$ and $\pow{\Alpha}$? Give examples of some things that belong to one but not the other.
### More HOMEWORK EXERCISES ###
(5) For each of the following, true or false?
a. If $a \in \Alpha$ and $\Alpha \se \Beta$ then $a \in \Beta$.
b. If $\Alpha \se \Beta$ and $\Beta \se \Gamma$ then $\Alpha \se \Gamma$.
c. $a \in \Alpha$ iff $\set{a} \se \Alpha$.
d. $a \in \Alpha$ and $b \in \Alpha$ iff $\set{a,b} \se \Alpha$.
e. If $\Alpha \se \Beta$ and $\Beta \se \Alpha$ then $\Alpha = \Beta$.
(6) Are the following true or false for arbitrary sets $\Gamma$?
a. $\Gamma \in \set{\Gamma}$
b. $\set{\Gamma} \in \set{\Gamma}$
c. $\set{\Gamma} \se \set{\Gamma}$
(7) What is the set whose only member is $\set{\Gamma}$?
(8) Here is an example proof that $\Alpha \union \Beta = \Beta \union \Alpha$. Suppose that $a \in \Alpha \union \Beta$. By the definition of union, this means that $a \in \Alpha$ or $a \in \Beta$. But then of course it follows that $a \in \Beta$ or $a \in \Alpha$ (the commutativity of disjunction in our metalanguage is not questioned). So $a \in \Beta \union \Alpha$. So we've proved that every member of $\Alpha \union \Beta$ is also a member of $\Beta \union \Alpha$. The proof of the converse goes analogously.
Prove the following:
a. If $\Alpha \se \Beta$ and $\Beta \se \Gamma$ then $\Alpha \se \Gamma$.
b. $\Alpha \se \Alpha \union \Beta$
c. $\Alpha \intersect \Beta \se \Alpha$
d. $\Alpha \union \Alpha = \Alpha$
e. $\Alpha \intersect \Alpha = \Alpha$
f. $\Alpha \union \emptyset = \Alpha$
g. $\Alpha \intersect \emptyset = \emptyset$
h. $\Alpha \union (\Alpha \intersect \Beta) = \Alpha$
i. $\Alpha \intersect (\Alpha \union \Beta) = \Alpha$
j. $\Alpha \se \Beta$ iff $\Alpha \union \Beta = \Beta$
k. $\Alpha \se \Beta$ iff $\Alpha \intersect \Beta = \Alpha$
l. $\Alpha - (\Alpha-\Beta) = \Alpha \intersect \Beta$
m. $(\Alpha-\Beta) \intersect \Beta = \emptyset$
(9) Prove the following:
a. $\Alpha \times (\Beta \intersect \Gamma) = (\Alpha \times \Beta) \intersect (\Alpha \times \Gamma)$
b. $\Alpha \times (\Beta \union \Gamma) = (\Alpha \times \Beta) \union (\Alpha \times \Gamma)$
c. $\Alpha \times (\Beta-\Gamma) = (\Alpha \times \Beta) - (\Alpha \times \Gamma)$
(10) Let $T1$ be a set, and let:
> $T2 = \set{T1}$
> $T3 = \set{\set{T1}}$
> $T4 = \set{T1, \set{T1}}$
Questions:
a. Which of $T1$ -- $T4$ are members of $T4$?
b. Which of $T1$ -- $T4$ are subsets of $T4$?
(11) Let $\Alpha = \set{a,b,c}$, $\Gamma = \set{c,d}$, and $\Delta = \set{d,e,f}$. Then:
a. List $\pow{\Alpha}$.
b. What is $\Alpha \union \Gamma$?
c. What is $\Gamma \union \emptyset$?
d. What is $\Alpha \intersect \Gamma$?
e. What is $\Alpha - \Gamma$?
f. What is $\Alpha \union (\Gamma \intersect \Delta)$?
g. What is $\Alpha \intersect (\Gamma \intersect \Delta)$?
(12) Understand the **symmetric difference** of sets $\Alpha$ and $\Beta$ to be the set $(\Alpha \union \Beta) - (\Alpha \intersect \Beta)$. I've seen this operation expressed using each of the following notations: $\triup \quad \ominus \quad \oplus$ and $+$ (note that $+$ is also sometimes used instead to express disjoint union, explained above). For the purposes of this question, let's express symmetric difference using $+$. Prove that $\Alpha + \Beta = (\Alpha-\Beta) \union (\Beta-\Alpha)$.