Figure 2
The proof went something like this. Any rational number can be reduced to “lowest terms”, that is, expressed as a quotient of whole numbers that have no factors (other than 1) in common; for example 360/75 = 24/5, and 24 and 5 have no common factor. Therefore, if √2 were a rational number, it would be possible to express √2 as √2 = p/q where p and q are whole numbers with no common factor. Squaring both sides gives 2 = p²/q², and multiplying both sides by q² gives 2q² = p². This means that p² must be even, because it is twice another whole number, q². The Pythagoreans had previously proved that the square of an odd number is odd, so the fact that p² is even implies that p is even also (if p were odd, p² would be odd). So p is even; that is, p is the double of some other whole number; in symbols, p = 2r for some whole number r. Substituting p = 2r into the equation 2q² = p² gives 2q² = (2r)² or 2q² = 4r². Dividing both sides by 2 gives q² = 2r², so q², being twice the whole number r², is even, and for the same reason as before q is even too. But if p and q are both even, as we have shown, then they have a common factor of 2, which contradicts our earlier statement that p and q have no common factor. Hence the assumption that √2 is rational leads to a contradiction and so is logically untenable.
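For readers who like to experiment, here is a small computational companion to the argument (a sketch of my own, not part of Trudeau's text; it assumes Python and its standard math module). It checks the parity fact the Pythagoreans used and then searches, fruitlessly, for whole numbers p and q with no common factor satisfying p² = 2q².

# Illustration only: the parity lemma and a search for a "rational square root of 2".
from math import gcd

# The lemma the Pythagoreans had proved: the square of an odd number is odd.
assert all((n * n) % 2 == 1 for n in range(1, 1000, 2))

# Look for p, q in lowest terms with p*p == 2*q*q (the proof says none exist).
found = [(p, q) for q in range(1, 500) for p in range(1, 1000)
         if gcd(p, q) == 1 and p * p == 2 * q * q]
print(found)  # prints [] -- no fraction p/q in lowest terms squares to 2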
At this point the Pythagoreans were at an impasse, what mathematicians call a “paradox”. They were sure, on intuitive grounds, that √2, being a quotient of two lengths, is a rational number. On the other hand they were equally sure, on grounds of logic and computation, that √2 is not a rational number. Had they decided to accept intuition as more reliable than logic, the future of mathematics would have been quite different; but of course they decided in favor of logic, and mathematicians ever since have been taught to mistrust intuition. (I think this has something to do with the stereotype that mathematicians are “cold” people.)
To say that mathematicians consider intuition unreliable, however, does not mean that they have banished it from mathematics. On the contrary, the basic assumptions from which any branch of pure mathematics proceeds (the axioms) are unsusceptible to proof, and are accepted primarily because of intuitive appeal. And intuition plays a big role in the discovery of theorems as well, or mathematicians would be spending most of their time trying to prove false theorems.
It's just that intuitive evidence is not accepted as conclusive. Definitions and axioms are carefully framed with an eye to hidden assumptions. Time and again in the history of mathematics, paradoxes have arisen because fundamental notions were not divorced from their origins in the physical world. (The Pythagorean paradox is a good example: any carpenter can tell you that given two sticks AB and CD, it is always possible to cut a stick XY that will be a common measure; but mathematical straight lines aren't sticks.) Similarly, theorems are stated as precisely as possible; and though they are discovered by intuition, they are demonstrated by logic alone. This habit of mind is called by mathematicians “rigor”. It is characteristic of pure mathematics.
In recent years pure mathematics has become “rigorous” beyond the Greeks' wildest dreams. One reason for this is an insidious paradox that surfaced around the turn of the century.
Russell's Paradox. During the last half of the nineteenth century several branches of pure mathematics were being simultaneously rebuilt because of paradoxes that had arisen a few decades earlier. This rebuilding was based on the notion of “set”, which was defined as follows:
Old Definition. A set is a collection of distinct objects.
The set concept seemed like a good foundation because it was simple and obviously free of logical difficulty. Mathematicians were confident that with it they could build a logical edifice so solid as to never again be shaken by a paradox.
Then, in 1902, Bertrand Russell presented his famous paradox. He observed that if we define a set to be merely a collection of distinct objects, we have to include self-referent collections like A = {1, 2, 3, A}, in which the set is one of its own elements. Of course, the sets mathematicians would normally have occasion to discuss are not of this type, so he called these new sets “extraordinary”. Standard sets like B = {3, 17, &, #, Massachusetts}, in which the name of the set does not appear on the list of elements, he called “ordinary”. So an extraordinary set is a set that is an element of itself, and an ordinary set is a set that is not an element of itself.
Don't confuse “being an element of” and “being a subset of”. Every set is a subset of itself, but only a strange set like A = {1, 2, 3, A} is an element of itself.
The paradox unfolds by letting S be the collection of all ordinary sets and nothing else. I claim that S is a set. We are using the old definition that says a set is merely a collection of distinct objects, an “object” being anything conceivable. Ordinary sets are certainly conceivable (I am holding the concept in my mind as I write), so they are objects; S is a collection of such objects, so S is a set.
Every set is either ordinary or extraordinary, so S must be either ordinary or extraordinary. The rub is that S is neither. Recall what we know about S: it is a set, it contains every ordinary set, and it contains nothing else. If S itself were an ordinary set, it would be an element of itself and therefore extraordinary. On the other hand, if S were an extraordinary set, it would not be an element of itself (as it contains only ordinary sets), and therefore S would be ordinary. So S can be neither ordinary nor extraordinary, though being a set it must be one or the other. We are caught in a bewildering paradox.
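In modern set-builder notation, which is not used in the text but may be a helpful summary, the whole argument compresses to one line: take S = { x : x is a set and x ∉ x }; then S ∈ S if and only if S ∉ S, which is exactly the contradiction just described.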
One way out would be to abandon the so-called “Law of Excluded Middle”, a rule of logic that says “any meaningful statement is either true or false and there is no other possibility.” We used the Law of Excluded Middle in the above argument when we insisted that every set is either ordinary or not ordinary (i.e., extraordinary). If there were a “middle” possibility, the argument that S is neither ordinary nor extraordinary would lose its devastating effect. You may feel that the Law of Excluded Middle is so basic to reason that rejecting it is unthinkable, but there is a minority school of mathematicians, the Intuitionists, who do precisely that. The majority of mathematicians, however, have confidence in this part of Aristotle's logic, so when the paradox was announced they searched for another way out.
In 1910 Russell resolved his own paradox with what he called the “theory of types”, which essentially says that we can avoid the paradox if we tack an extra clause onto the definition of “set” excluding collections that are elements of themselves. This is what we did in Definition 1. Under this new definition there are no such things as “extraordinary” sets; all sets are ordinary, so S becomes the collection of all sets. Further, S itself is no longer a set, for if S were a set it would be an element of itself. S is in some sense “too big” to be a set. For collections like S we can invent a new name, say “class”, to indicate that they are qualitatively different from sets. So the paradox is gone, leaving the following debris: S is not an extraordinary set because extraordinary sets don't exist; neither is S an ordinary set because S is not a set at all; S is a class.
The subtlety of Russell's paradox can be seen in the fact that mathematicians all over the world had worked with the theory of sets for more than twenty years before anyone noticed the big loophole in the very definition of “set”. Standards of “rigor” have been at an all-time high ever since. Statements of all kinds (definitions, axioms, theorems) are scrutinized for the tiniest hidden assumptions, and proofs are made to adhere to the strictest logic. If it all seems a trifle paranoid, think about this: if so simple a concept as “set” has turned out to contain the seed of a paradox, what of most other mathematical concepts, like “number” and “function”, that are considerably more complex?
Graphs
Definition 5. A graph is an object consisting of two sets called its vertex set and its edge set. The vertex set is a finite nonempty set. The edge set may be empty, but otherwise its elements are two-element subsets of the vertex set.
Examples. 1) The sets {P, Q, R, S, T, U} and {{P, Q}, {P, R}, {Q, R}, {S, U}} constitute a graph, which we can call “G” for short.
2) {1, 2, 3, 4, 5} and ø constitute a second graph, “H”.
3) “J” is a graph having vertex set {1, 2, 3, 4, 5} and edge set {{1, 2}, {1, 3}, {1, 4}, {1, 5}, {2, 3}, {2, 4}, {2, 5}, {3, 4}, {3, 5}, {4, 5}}.
I realize that G, H, and J don't look much like the things in Figure 1, but they will when we start to draw diagrams of graphs.
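Because a graph is nothing more than a pair of sets, it can also be written down directly in a programming language. The following sketch (in Python; the variable names are mine, not the book's) builds G, H, and J exactly as defined above, using frozensets for the two-element edges.

# A graph as a pair (vertex set, edge set); each edge is a two-element subset.
from itertools import combinations

G_vertices = {"P", "Q", "R", "S", "T", "U"}
G_edges = {frozenset({"P", "Q"}), frozenset({"P", "R"}),
           frozenset({"Q", "R"}), frozenset({"S", "U"})}

H_vertices = {1, 2, 3, 4, 5}
H_edges = set()  # the empty edge set, as in the graph H

J_vertices = {1, 2, 3, 4, 5}
J_edges = {frozenset(pair) for pair in combinations(J_vertices, 2)}  # all 10 pairs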
Definition 6. The elements of the vertex set of a graph are called vertices (singular: vertex) and the elements of the edge set are called edges. We shall denote the number of vertices by “v” and the number of edges by “e”.
Examples. 1) The vertices of the above graph G are P, Q, R, S, T, and U. Its edges are {P, Q}, {P, R}, {Q, R}, and {S, U}. G has v = 6 and e = 4.
2) The vertices of H are 1, 2, 3, 4, and 5. It has no edges. For H, v = 5 and e = 0.
3) J has v = 5 and e = 10.
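Continuing the earlier Python sketch, v and e are simply the sizes of the two sets, so the counts in these examples can be checked mechanically (again an illustration of mine, not part of the text):

# v = number of vertices, e = number of edges
assert (len(G_vertices), len(G_edges)) == (6, 4)
assert (len(H_vertices), len(H_edges)) == (5, 0)
assert (len(J_vertices), len(J_edges)) == (5, 10)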
Definition 7. If {X, Y} is an edge of a graph, we say that {X, Y} joins or connects the vertices X and Y, and that X and Y are adjacent to one another. The edge {X, Y} is incident to each of X and Y, and each of X and Y is incident to {X, Y}. Two edges incident to the same vertex are called adjacent edges. A vertex incident to no edges at all is isolated.
Examples. In the graph G of previous examples, {P, Q} joins P and Q; P and Q are adjacent; P and S are not adjacent; {P, Q} is incident to Q; {P, Q} is not incident to R; P is incident to {P, R}; P is not incident to {Q, R}; {P, R} and {Q, R} are adjacent; {P, R} and {S, U} are not adjacent; and T is isolated.
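Definition 7 translates just as directly into the sketch begun above. The helper functions below (my own names, not the book's) test the statements in these examples against the sets that make up G:

def adjacent(edges, x, y):
    # Vertices x and y are adjacent if {x, y} is an edge.
    return frozenset({x, y}) in edges

def incident(edges, vertex, edge):
    # An edge is incident to each of the two vertices it contains.
    return edge in edges and vertex in edge

def isolated(edges, vertex):
    # A vertex is isolated if it lies on no edge.
    return all(vertex not in edge for edge in edges)

assert adjacent(G_edges, "P", "Q") and not adjacent(G_edges, "P", "S")
assert incident(G_edges, "Q", frozenset({"P", "Q"}))
assert not incident(G_edges, "R", frozenset({"P", "Q"}))
assert isolated(G_edges, "T")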
Graph Diagrams
It is customary to represent graphs by drawings, much as it is customary for geometers to represent geometric objects by drawings. For example, Figure 3 is a representation of the graph G we discussed in the last section. The vertices have been represented by heavy dots, and vertex adjacency has been represented by connecting the corresponding dots. Representing graphs by such diagrams makes their structure easier to grasp, but you must be careful not to read too much into the diagram. Diagrams have properties over and above those consequent to their being graph representations. For example, the edges in the picture below each have a certain length, and a specific shape, and they form measurable angles where they meet; and the vertices have specific positions on the page relative to one another.
But surely these properties were not derived from the abstract definition of G, which is merely “G is the graph having vertex set {P, Q, R, S, T, U} and edge set {{P, Q}, {P, R}, {Q, R}, {S, U}}.” Rather they are incidental features of the diagram, in fact unavoidable features, but irrelevant insofar as the diagram represents a graph. By many standards Figure 4 is different from Figure 3, but viewed as a graph representation, it is just another picture of G. It has the same vertices and the same connections.
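The point that many different pictures can represent the same graph is easy to see by machine as well. The sketch below (assuming the third-party networkx and matplotlib libraries are installed; neither is mentioned in the book) draws G twice with two different layout algorithms, producing two pictures of the one graph, much as Figures 3 and 4 do.

import matplotlib.pyplot as plt
import networkx as nx

# Build the same graph G, including the isolated vertex T.
G = nx.Graph()
G.add_nodes_from(["P", "Q", "R", "S", "T", "U"])
G.add_edges_from([("P", "Q"), ("P", "R"), ("Q", "R"), ("S", "U")])

# Two different drawings of the one graph.
for i, layout in enumerate([nx.circular_layout, nx.spring_layout], start=1):
    plt.subplot(1, 2, i)
    nx.draw(G, pos=layout(G), with_labels=True)
plt.show()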