CMSC 27100 — Lecture 5a

The notes for this course began from a series originally written by Tim Ng, with extensions by David Cash and Robert Rand. I have modified them to follow our course.

This lecture gives some more definitions from set theory and then covers the basics of relations and functions. This provides some examples of the modern mathematical idea that everything can be viewed as a set. Everything. Functions? Sets. Numbers? Sets. Problems? Sets. Sets? Sets. Redefining common objects within set theory might feel like a bit of a game (akin to reimplementing programs in an esoteric programming language), but historically and practically it's an important and powerful idea that eliminates (essentially) all ambiguity in mathematical theorems.

Looking forward in this course, the next few lectures will cover combinatorics and probability, and the ideas in both of these subjects are very naturally expressed using set theory.

The Biconditional Connective

Before we get into the set theory, here's a term we should have defined much earlier. Previously we defined the conditional connective $a \to b$, which reads as "$a$ implies $b$" or "if $a$ then $b$". A related and often useful connective is the biconditional: $a \leftrightarrow b$.

The proposition $p \leftrightarrow q$ is called a biconditional or logical equivalence and is pronounced "$p$ if and only if $q$". It is defined to be true when $p$ and $q$ have the same truth values (i.e. both false or both true), and false otherwise.

The truth table for this one is

$$\begin{array}{cc|c} p & q & p \leftrightarrow q \\ \hline T & T & T \\ T & F & F \\ F & T & F \\ F & F & T \end{array}$$
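Since this is a computer science course, it may help to see the connective computationally. In Python (an informal aside, not part of the formal development), the biconditional is just equality of truth values:

```python
# The biconditional p <-> q holds exactly when p and q have equal truth values.
def iff(p: bool, q: bool) -> bool:
    return p == q

# Reproduce the truth table above.
for p in (True, False):
    for q in (True, False):
        print(p, q, iff(p, q))
```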

Again let $p =$ "It is Wednesday." and $q =$ "I will buy a milkshake." Then $p\rightarrow q$ is "If it is Wednesday, then I will buy a milkshake", and $p\leftrightarrow q$ is "It is Wednesday, if and only if I will buy a milkshake".

Now examine $p \rightarrow q$ in the case that $p$ is false - say it's Monday, not Wednesday. In that case, we declare $p \rightarrow q$ to be true, regardless of whether or not we buy a milkshake. This follows the usual convention that a conditional with a false hypothesis is vacuously true.

The biconditional $p\leftrightarrow q$ can be reworded as "I always buy a milkshake on Wednesdays, and if I am buying a milkshake, then it must be Wednesday", which is closer to normal English. Intuitively it says that buying a milkshake and the day being Wednesday are equivalent in some sense.

More Set Theory - Subsets

We've already used sets a lot, but let's recall the definition here.

A set is an unordered collection of objects. If $S$ is a set and $a$ is a member of $S$, then we write $a \in S$. If $a$ is not a member of $S$, then we write $a \not \in S$.

Reminder that we originally introduced sets in Lecture 2, if you'd like a refresher. Sets are determined by nothing more than what they contain. That is, there are no two distinct sets that contain only the number $2$; any such set is $\{2\}$. The next definition states this more formally.

Two sets $A$ and $B$ are equal if and only if they have the same elements. That is, $$A = B \iff (\forall x, x \in A \iff x \in B).$$

In particular, there is a unique set containing nothing:

The set containing no elements is the empty set, denoted $\emptyset = \{\}$.

We will need the following notion of a subset, which describes when one set's elements are entirely contained in another set.

A set $S$ is a subset of $T$, written $S \subseteq T$, when every element of $S$ belongs to $T$. A set $S$ is a proper subset of a set $T$, written $S \subsetneq T$, when $S$ is a subset of $T$ and there exists an element of $T$ which does not belong to $S$.

Since we have a remaining logical connective, $\to$, it's worth pointing out how it relates to subsets: $$S \subseteq T \Leftrightarrow (\forall x, x \in S \to x \in T).$$

You may notice that sometimes $\subset$ is used for proper subset. This works quite nicely with $\subseteq$ meaning subset. However, we'll avoid this particular notation because many other mathematicians use $\subset$ to mean a (not necessarily proper) subset. Instead, we will use $\subseteq$ and $\subsetneq$ to keep things clear.

From the above, we note that by definition, we have $S \subseteq S$, and if $S = T$, we have $S \subseteq T$ and $T \subseteq S$. The converse holds as well, which gives an alternate characterization of set equality: $S = T$ if and only if $S \subseteq T$ and $T \subseteq S$.
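As an aside, Python's built-in sets mirror these definitions closely, which makes for a quick sanity check (the operators below are Python's, not mathematical notation):

```python
S = {1, 2}
T = {1, 2, 4, 8}

print(S <= T)   # S ⊆ T (subset): True
print(S < T)    # S ⊊ T (proper subset): True
print(S <= S)   # every set is a subset of itself: True
print(S < S)    # ... but never a proper subset of itself: False

# Equality as mutual inclusion:
A, B = {1, 2, 4, 8}, {8, 4, 2, 1}
print((A <= B and B <= A) == (A == B))  # True
```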

The cardinality of a set $S$ is the number of elements in $S$ and is denoted $|S|$. If $S$ is finite, then this will be a natural number. So, the size of the set $\{1,2,4,8\}$ would be $|\{1,2,4,8\}| = 4$.

If $S$ is an infinite set, then things are a bit trickier. The cardinality of the natural numbers is defined to be $|\mathbb N| = \aleph_0$, while the cardinality of the real numbers is $|\mathbb R| = 2^{\aleph_0}$. Here, we reach into the Hebrew alphabet for $\aleph$ (aleph). Anyhow, the cardinalities of $\mathbb N$ and $\mathbb R$ are not the same, a fact famously proved by Cantor in 1891. There are many infinite sets that have cardinality $\aleph_0$, such as the set of even natural numbers $\{n \in \mathbb N \mid \exists m \in \mathbb N, n = 2\cdot m\}$ or the set of rational numbers $\mathbb Q = \{\frac m n \mid m, n \in \mathbb Z, n \neq 0\}$. There are also many sets with cardinality $2^{\aleph_0}$. This brings us to a famous problem called the Continuum Hypothesis: Are there any infinite sets that have cardinality strictly between $|\mathbb N| = \aleph_0$ and $|\mathbb R| = 2^{\aleph_0}$?

The power set of a set $A$ is the set containing all of the subsets of $A$, $$\mathcal P(A) = \{S \mid S \subseteq A\}.$$

If $A$ is a finite set, then $|\mathcal P(A)| = 2^{|A|}$.

We will prove this later, but you can do a proof by induction now if you like. Inspired by this fact, you'll sometimes see the power set of a set $A$ denoted by $2^A$. This might also give you some hint about how one could justify that $|\mathbb R| = 2^{\aleph_0}$.
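If you'd like to experiment before we prove it, here is one way to compute power sets of small finite sets in Python (a sketch; the helper name power_set is ours):

```python
from itertools import chain, combinations

def power_set(A):
    """Return the set of all subsets of A (as frozensets, so that
    subsets can themselves be elements of a set)."""
    elems = list(A)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))}

A = {1, 2, 3}
print(len(power_set(A)) == 2 ** len(A))  # True: |P(A)| = 2^|A|
```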

Operations on Sets

The next definition defines ways for combining sets to form new sets.

The union of two sets $S$ and $T$, denoted $S \cup T$ is the set of all elements in $S$ or $T$: $$S \cup T = \{x \mid x \in S \vee x \in T\}.$$

The intersection of two sets $S$ and $T$, denoted $S \cap T$, is the set of all elements in $S$ and $T$: $$S \cap T = \{x \mid x \in S \wedge x \in T\}.$$

The set difference of two sets $S$ and $T$, denoted $S \setminus T$, is defined $$S \setminus T = \{x \mid x \in S \wedge x \not \in T\}.$$

The complement of a set $S$ (with respect to another set $U$, called the "universe"), written $\overline S$, is the set of all elements from $U$ not in $S$, that is, $$\overline S = \{x \in U \mid x \not \in S\}.$$

By definition, we get $\overline S = U \setminus S$. Set complements will be especially useful when we get to probability theory, where they will give us an intuitive way to rigorously model an event not happening.
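All four operations have direct Python analogues, with the complement computed relative to an explicit universe (a small illustrative sketch):

```python
U = set(range(10))   # a small explicit universe
S = {1, 2, 3, 4}
T = {3, 4, 5, 6}

print(S | T)   # union S ∪ T
print(S & T)   # intersection S ∩ T
print(S - T)   # set difference S \ T
print(U - S)   # complement of S with respect to U
```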

The following definition is frequently useful.

We say two sets $S$ and $T$ are disjoint if $S \cap T = \emptyset$.

Set operations have a close intuitive and formal connection to logical connectives. Here's an example of an identity that has both a logical and set theory version.

For two sets $A$ and $B$, \begin{align} \overline{A \cup B} &= \overline A \cap \overline B, \text{and} \\ \overline{A \cap B} &= \overline A \cup \overline B. \end{align}

Let's prove the first statement and leave the second as an exercise. Recall from above that $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$; equivalently, $\forall x, x \in A \Leftrightarrow x \in B$. Unpacking the definitions of $\cup$, $\cap$ and $\overline{S}$, we want to show that $$\neg (x \in A \vee x \in B) \Leftrightarrow \neg (x \in A) \wedge \neg (x \in B)$$

Note that this is exactly De Morgan's law for logic, where $x \in A$ is $P$ and $x \in B$ is $Q$.

\begin{align*} \overline{A \cup B} &= \{x \mid x \notin A \cup B\} &\text{definition of complement} \\ &= \{x \mid \neg(x \in A \cup B)\} &\text{definition of $\not\in$} \\ &= \{x \mid \neg(x \in A \vee x \in B)\} &\text{definition of union} \\ &= \{x \mid \neg(x \in A) \wedge \neg(x \in B)\} &\text{De Morgan's laws} \\ &= \{x \mid x \not\in A \wedge x \not\in B\} &\text{definition of $\not\in$} \\ &= \{x \mid x \in \overline A \wedge x \in \overline B\} &\text{definition of complement} \\ &= \{x \mid x \in \overline A \cap \overline B\} &\text{definition of intersection} \\ &= \overline A \cap \overline B &\text{set-builder notation} \end{align*} $$\tag*{$\Box$}$$
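For finite sets you can also check both identities exhaustively. Here is a brute-force sketch over all pairs of subsets of a small universe (no substitute for the proof, but a good sanity check):

```python
from itertools import chain, combinations

U = set(range(5))
subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

# Verify both De Morgan identities for every pair of subsets A, B of U.
print(all(U - (A | B) == (U - A) & (U - B) and
          U - (A & B) == (U - A) | (U - B)
          for A in subsets for B in subsets))  # True
```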

Relations

We'll now develop a theory of how functions can be viewed as sets. In order to carry this out, we'll need a notion of an ordered collection (rather than an unordered collection, which is just a set).

An $n$-tuple $(a_1, a_2, \dots, a_n)$ is an ordered collection that has $a_1$ as its first element, $a_2$ as its second element, $\dots$, and $a_n$ as its $n$th element. An ordered pair is a 2-tuple.

Observe that since tuples are ordered, we have $(a_1, \dots, a_n) = (b_1, \dots, b_n)$ if and only if $a_i = b_i$ for $i = 1, \dots, n$.

As a side note, we claimed at the start of class that everything in mathematics is just a set, so how would we define tuples using sets? For the case of pairs, we can define $(x,y)$ to be the set $\{\{x\}, \{x,y\}\}$. In this way, we can distinguish $x$ and $y$ and we have a way to determine the order in which they appear. This definition is due to Kuratowski in 1921. We can then generalize this definition for arity $n \gt 2$.
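To make Kuratowski's encoding concrete, here is a sketch in Python using frozenset (ordinary Python sets aren't hashable, so they can't be elements of other sets; the name kpair is ours):

```python
def kpair(x, y):
    """Kuratowski's encoding of the ordered pair (x, y) as a pure set."""
    return frozenset({frozenset({x}), frozenset({x, y})})

print(kpair(1, 2) == kpair(1, 2))  # True: same pair, same set
print(kpair(1, 2) == kpair(2, 1))  # False: the order is recoverable
```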

The Cartesian product of two sets $A$ and $B$ is $$A \times B = \{(a,b) \mid a \in A, b \in B\}.$$

We generalize this to products of $n$ sets.

The Cartesian product of $n$ sets $A_1, A_2, \dots, A_n$, denoted $A_1 \times A_2 \times \cdots \times A_n$ is defined $$A_1 \times A_2 \times \cdots \times A_n = \{(a_1, a_2, \dots, a_n) \mid a_i \in A_i, i = 1, 2, \dots, n\}.$$

For any set $A$, we will sometimes write $A^2$ for $A\times A$. More generally, for any $n\in \mathbb{N}$, we will write $A^n$ for the Cartesian product of $A$ with itself $n$ times.
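Python's itertools.product computes exactly these Cartesian products; a quick sketch:

```python
from itertools import product

A = {1, 2}
B = {'x', 'y'}

print(set(product(A, B)))  # A × B as a set of ordered pairs
print(set(product(A, repeat=3)) == set(product(A, A, A)))  # A^3: True
```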

We now use the Cartesian product to define relations.

A relation $R$ with domain $X$ and co-domain $Y$ is a subset of $X \times Y$.

We can see from the definition that a relation really is just a subset of the Cartesian product of some sets. In other words, it's a set of tuples. This also resembles the set-theoretic definition of predicates, and it's not entirely a coincidence that we think of $k$-ary predicates as relations.
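Concretely, here is the "less than" relation on a small domain, built in Python exactly as the definition suggests (a throwaway sketch):

```python
X = {1, 2, 3}

# The "less than" relation on X, as a subset of X × X.
R = {(x, y) for x in X for y in X if x < y}
print(R)                          # {(1, 2), (1, 3), (2, 3)}
print((1, 2) in R, (2, 1) in R)   # True False
```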

Equivalence relations are a special type of relation that are frequently useful. They abstractly behave like "equalities", in the sense that $(a,b)\in R$ can be intuitively mapped to $a=b$.

We say that $R \subseteq A \times A$ is an equivalence relation if it satisfies the following:

  1. $\forall a\in A, (a,a)\in R$    ("$R$ is reflexive")
  2. $\forall a,b\in A, (a,b)\in R \rightarrow (b,a)\in R$    ("$R$ is symmetric")
  3. $\forall a,b,c\in A, (a,b)\in R \wedge (b,c)\in R \rightarrow (a,c)\in R$    ("$R$ is transitive")

Here are two examples:

  1. Equality (i.e. $R=\{(a,a) \mid a\in A\}$) is an equivalence relation.
  2. Fix some $m>1$, and define $R \subseteq \mathbb{Z}\times\mathbb{Z}$ as $\{(a,b) \mid a\equiv_m b\}$. Then $R$ is an equivalence relation; see the sketch below for a brute-force check of the three properties.
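For a finite set, all three properties can be checked mechanically. Here is a sketch for congruence mod $m$ restricted to a finite slice of $\mathbb Z$ (the helper is_equivalence is ours, and of course checking a finite slice is not a proof for all of $\mathbb Z$):

```python
def is_equivalence(A, related):
    """Brute-force check of reflexivity, symmetry, and transitivity on A."""
    reflexive  = all(related(a, a) for a in A)
    symmetric  = all(related(b, a) for a in A for b in A if related(a, b))
    transitive = all(related(a, c) for a in A for b in A for c in A
                     if related(a, b) and related(b, c))
    return reflexive and symmetric and transitive

m = 5
A = range(-20, 21)  # a finite slice of Z
print(is_equivalence(A, lambda a, b: (a - b) % m == 0))  # True
```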

Here is another interesting example of an equivalence relation.

Everyone who learns grade school math gets comfortable with the fact that each number can be written many ways as a fraction. For instance, $$\frac{1}{2} = \frac{2}{4} = \frac{3}{6}= \frac{-1}{-2} = \cdots$$ and so on. You may recall that fractions $\frac{a}{b}$ and $\frac{c}{d}$ represent the same number exactly when $ad=bc$. We can use this to rigorously model fractions, and define an equivalence relation that captures this elementary fact. Namely, we model fractions as the set $\mathbb{Q} = \{ (a,b)\in\mathbb{Z}^2 \mid b \neq 0\}$; the element $(a,b)$ represents $\frac{a}{b}$. Then we can define the relation $R \subseteq \mathbb{Q}\times \mathbb{Q}$ by $$ R = \{ ((a,b), (c,d))\in \mathbb{Q}\times\mathbb{Q} \mid ad=bc \}. $$ We leave as an exercise to prove that $R$ is an equivalence relation.
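The same brute-force check applies here, restricted to a small finite set of pairs; the interesting case is transitivity, which relies on the second coordinates being nonzero. A sketch:

```python
from itertools import product

# Fractions modeled as pairs (a, b) of integers with b != 0, as above.
Q = [(a, b) for a, b in product(range(-4, 5), repeat=2) if b != 0]

def related(p, q):
    (a, b), (c, d) = p, q
    return a * d == b * c

print(all(related(p, p) for p in Q))                              # reflexive
print(all(related(q, p) for p in Q for q in Q if related(p, q)))  # symmetric
print(all(related(p, r) for p in Q for q in Q for r in Q
          if related(p, q) and related(q, r)))                    # transitive
```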