Outer Limits of Reason
Noson S. Yanofsky

In section 5.1 I present several examples of relatively easy problems. In encountering these easy problems, you'll become familiar with the language and notation used in the next two chapters. Section 5.2 covers five examples of hard problems. I demonstrate that although the problems are easy to state, they are not easy to solve. These problems will be shown to be related to each other in section 5.3. You'll also see why they seem to have no simple solutions. In section 5.4 I indicate how to cope with such hard problems. They cannot always be solved, but sometimes we can approximate their solutions. I close with section 5.5, in which I discuss even harder problems. In the next chapter, my focus will be on certain easily stated computer problems that simply cannot be solved, regardless of how much time or how many resources are used.

5.1  Some Easy Problems

Anyone who has worked with computers over the past ten years knows that they are getting faster and faster. What once took hours can now be done in seconds. What once took seconds is now done almost instantaneously. We have grown blasé about the processing speeds of personal computers. There are those who are not very tidy with their e-mail inboxes and have 10,000 messages awaiting deletion. By simply clicking a button, a modern-day pack rat can sort those 10,000 messages in seconds. This occurs so fast that we are not even aware that the computer is working. In this chapter, we will look at how much work is needed to sort and to perform other simple tasks. We will also see that some problems demand so much work that they simply cannot be performed in a feasible amount of time. First let us look more closely at some easy problems.

Addition and Multiplication

By the fourth grade, children learn the basic rules of how to add and multiply numbers that are several digits long. Students are taught the standard procedure that they must follow in order to obtain the correct answer. The term for a standard procedure or sequence of instructions is algorithm. The word is derived from the name of Muhammad ibn Mūsā al-Khwārizmī (780–850), who wrote one of the first books that taught the Western world how to do algebra. We will describe and analyze various algorithms that solve different problems.

Let us start with something simple. To add two 7-digit numbers, one must add seven pairs of single digits (and keep track of numbers to carry) as in figure 5.1.

Figure 5.1: Adding two 7-digit numbers

Whenever we meet an algorithm, we must ask how many basic operations the algorithm requires. The number of operations needed depends on the size of the problem. For the two 7-digit numbers, seven operations are required. For two 42-digit numbers, 42 operations are required. In general, to add two n-digit numbers, one must do n additions of single digits.
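To make this concrete, here is a small Python sketch of the grade-school addition algorithm. The function name and the string representation of numbers are my own illustrative choices, not something from the text; the point is that the loop performs exactly one single-digit addition per column.

```python
def add_digits(a: str, b: str) -> str:
    """Add two equal-length decimal numbers digit by digit,
    carrying as in the grade-school algorithm: n single-digit
    additions for two n-digit numbers."""
    result = []
    carry = 0
    # Walk the columns right to left, one single-digit addition each.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))
        carry = total // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))
```

For two 7-digit inputs the loop body runs exactly seven times, matching the count in the text.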

In contrast to addition, consider multiplication. To multiply two 7-digit numbers, one must multiply every digit of the second number by every digit of the first number. Once this is done and everything is put in its correct column, the numbers must be summed as in figure 5.2.

Figure 5.2: Multiplying two 7-digit numbers

Multiplication requires many more operations than addition. For each of the seven digits in the second number, we must perform seven multiplications, hence a total of 7 × 7 = 7² = 49 multiplications need to be done (ignoring the number of additions that also need to be done). In general, for two n-digit numbers, we need to multiply every one of the second number's n digits by every one of the first number's n digits. This gives us a total of n² multiplications. Usually n² is larger than n, but, as we will see, it is still not very large.
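A Python sketch of the grade-school multiplication algorithm makes the n² count visible. The operation counter is my own addition for illustration; it is incremented once per single-digit multiplication.

```python
def multiply_digits(a: str, b: str):
    """Grade-school multiplication: every digit of b times every
    digit of a.  Returns the product (as a string) and the number
    of single-digit multiplications performed, which is n * n."""
    ops = 0
    total = 0
    for i, db in enumerate(reversed(b)):
        row = 0
        for j, da in enumerate(reversed(a)):
            row += int(da) * int(db) * 10 ** j  # one single-digit multiplication
            ops += 1
        total += row * 10 ** i  # shift the partial product into its column
    return str(total), ops
```

For two 7-digit inputs the counter ends at 49, in agreement with the 7² count above.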

How much time does it take to perform these tasks? The answer depends on the speed of your computer. The faster the computer, the less time will be needed. However, we will see that the speed of the computer is really not that important. Let's do some calculations. Say we want to multiply two 100-digit numbers. That will demand 100² = 10,000 operations. Whether our computer can perform 100,000 or 1,000,000 operations per second, our task will be completed in less than half a second. It will become apparent that rather than looking at the speed of a computer, the important thing to look at is how many basic operations are needed.

We will see later that although multiplication is relatively easy, the opposite of multiplying two numbers—namely, factoring a single number into two numbers—is much harder.

Searching

Imagine that you have a cabinet full of files where each file contains information about a musical band. Assume that these files are not in any kind of order. Now, let us suppose that you are interested in finding the file for a particular band. This is similar to looking for a needle in a haystack. The only way to find the desired file is to examine them all, one at a time. Such a method of searching is called a brute-force search algorithm. If there are n files in the cabinet, how many files will you have to search? You might find the desired file after a few tries. However, in the worst-case scenario, you have to look through all n files to find the one you want, or to find that the desired file is not in the cabinet at all. So, a brute-force search of an unordered collection of n elements might demand n operations.
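The brute-force search can be sketched in a few lines of Python. Returning -1 for a missing file is my own convention for illustration; the essential point is that the loop may examine all n elements.

```python
def brute_force_search(files, target):
    """Scan an unordered list front to back, one comparison per file.
    Worst case: all n files are examined."""
    for i, name in enumerate(files):
        if name == target:
            return i  # found it after i + 1 comparisons
    return -1  # looked at all n files; the target is not in the cabinet
```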

Consider a nicely alphabetized cabinet containing files relating to various musical bands. You might perform the same brute-force search on this ordered cabinet as you did on the unordered cabinet. If you are looking for the file on the group ABBA, then a brute-force search would work out well. However, if you are looking for the file on ZZTop, you would be wasting a lot of time. We are interested in an algorithm that performs the least amount of work in all cases. Let us try another algorithm called the binary search algorithm, which involves splitting groups of files into two. When searching for a file, rather than starting at the first file and then going forward, try the middle file and see if the desired one is before or after the middle file. Imagine that we are looking for Pink Floyd. We will look at the middle file, say Madonna, and since Pink Floyd is alphabetically after Madonna, we will ignore all the files from the beginning through the middle, and concentrate on the part of the cabinet that starts with Mariah Carey and ends with ZZTop. We might describe this search with the first column of table 5.1. We place a * next to the splitting file, that is, the file being compared.

Table 5.1: A binary search through an ordered list

Our next guess will be the item that is halfway through this portion of the cabinet: this will be Prince. Since Pink Floyd comes before Prince, we will ignore all the files from Rod Stewart to ZZTop and concentrate on the files from Mariah Carey until Prince. We continue this method of splitting what remains until we find the file of our beloved Pink Floyd on the fourth guess.

How many comparisons do we have to perform? If we start with n files, after one check we will discard approximately n/2 of the files and focus on the other n/2 files. After another check, we will be concerned with only n/4 of the files. We continue in this manner until only one file remains and we see if it is the desired file, or we see that the file we want is not in our cabinet. For a value n, the number of times that n can be repeatedly split in half is described by the logarithm function that we write as log₂ n. So the binary search algorithm works with log₂ n checks.
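A minimal Python sketch of the binary search just described follows. Counting and returning the number of checks is my own addition for illustration; the list must be alphabetized, just like the ordered cabinet.

```python
def binary_search(files, target):
    """Repeatedly split an alphabetized list in half.
    Returns (index, checks) if found, (-1, checks) otherwise;
    the number of checks is about log2(n)."""
    lo, hi = 0, len(files) - 1
    checks = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        checks += 1  # one comparison against the splitting file
        if files[mid] == target:
            return mid, checks
        if target > files[mid]:
            lo = mid + 1  # ignore everything up through the middle
        else:
            hi = mid - 1  # ignore everything from the middle onward
    return -1, checks
```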

Let's look at a few examples of logarithms. For n = 256, we can make the following splits:

128, 64, 32, 16, 8, 4, 2, 1.

Since there are eight splits, we say that log₂ 256 = 8. For n = 1,024, we have

512, 256, 128, 64, 32, 16, 8, 4, 2, 1

and hence log₂ 1,024 = 10. And finally, for n = 65,536 we have

32768, 16384, 8192, 4096, 2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1.

Since there are sixteen splits, we say that log₂ 65,536 = 16.
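The splitting lists above can be generated mechanically. Here is a small Python sketch of my own that counts how many times a number can be halved before reaching 1; for powers of two this count is exactly log₂ n.

```python
def halving_count(n: int) -> int:
    """Count how many times n can be split in half before one
    element remains; equals log2(n) when n is a power of two."""
    count = 0
    while n > 1:
        n //= 2  # keep half of what remains
        count += 1
    return count
```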

For n = 1,000, the brute-force search will perform, in the worst case, 1,000 checks while the binary search will do log₂ 1,000 checks, that is, about ten checks. This is a vast improvement.

For a particular problem, there might be several different algorithms that can be used to provide solutions. Sometimes certain algorithms should be used for certain types of inputs while other algorithms should be used for other types of inputs. We have seen that there is a difference if the cabinet is ordered or unordered. A brute-force search algorithm should be used for an unordered cabinet, while a binary search algorithm should be used on an ordered cabinet. For each problem, we will analyze different algorithms and try to find the one that is most efficient. The problem will be characterized by the most efficient algorithm that exists to solve it in the worst-case scenario. So, we say that searching an unordered list demands n operations and searching an ordered list demands log₂ n operations.

Sorting

In the previous example we saw that it pays to keep things tidy and sorted. How does one go about sorting a list of n elements? There are many different algorithms to perform the task at hand, but I present one of the simplest ones. The selection-sort algorithm works as follows:

1. Search through the elements and find the minimum element.

2. Swap that minimum element with the first element in the list.

3. Search the rest of the list to find its minimum.

4. Swap that element with the second element of the list.

5. Continue in this manner until all the elements are in their correct position.

We might envision this sort with the sequence of lists of three-letter words shown in figure 5.3.

Figure 5.3: The steps of the selection sort
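The numbered steps above translate directly into a short Python sketch. The function name and the decision to sort a copy are my own illustrative choices.

```python
def selection_sort(items):
    """Selection sort, following the numbered steps in the text:
    find the minimum of the unsorted part, swap it into place, repeat."""
    items = list(items)  # work on a copy
    for i in range(len(items)):
        # Steps 1 and 3: search the remaining elements for the minimum.
        min_idx = i
        for j in range(i + 1, len(items)):
            if items[j] < items[min_idx]:
                min_idx = j
        # Steps 2 and 4: swap the minimum into position i.
        items[i], items[min_idx] = items[min_idx], items[i]
    return items
```

Sorting a list of three-letter words, as in figure 5.3, works the same way as sorting numbers: each pass moves the alphabetically smallest remaining word into place.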
