Adder
Memory and register chips represent integer numbers by n-bit patterns, n being 16, 32, 64, and so forth, depending on the computer platform. The chip whose job is to add such numbers is called a multi-bit adder, or simply adder. Figure 2.4 presents a 16-bit adder, noting that the same logic and specifications scale up as is to any n-bit adder.
 
Incrementer
It is convenient to have a special-purpose chip dedicated to adding the constant 1 to a given number. Here is the specification of a 16-bit incrementer:
Figure 2.3
Full-adder, designed to add 3 bits.
 
Figure 2.4
16-bit adder. Addition of two n-bit binary numbers for any n is “more of the same.”
 
2.2.2 The Arithmetic Logic Unit (ALU)
The specifications of the adder chips presented so far were generic, meaning that they hold for any computer. In contrast, this section describes an ALU that will later become the centerpiece of a specific computer platform called Hack. At the same time, the principles underlying the design of our ALU are rather general. Further, our ALU architecture achieves a great deal of functionality using a minimal set of internal parts. In that respect, it provides a good example of an efficient and elegant logic design.
The Hack ALU computes a fixed set of functions out = f_i(x, y) where x and y are the chip’s two 16-bit inputs, out is the chip’s 16-bit output, and f_i is an arithmetic or logical function selected from a fixed repertoire of eighteen possible functions. We instruct the ALU which function to compute by setting six input bits, called control bits, to selected binary values. The exact input-output specification is given in figure 2.5, using pseudo-code.
Note that each one of the six control bits instructs the ALU to carry out a certain elementary operation. Taken together, the combined effects of these operations cause the ALU to compute a variety of useful functions. Since the overall operation is driven by six control bits, the ALU can potentially compute 2^6 = 64 different functions. Eighteen of these functions are documented in figure 2.6.
We see that programming our ALU to compute a certain function f(x, y) is done by setting the six control bits to the code of the desired function. From this point on, the internal ALU logic specified in figure 2.5 should cause the ALU to output the value f(x, y) specified in figure 2.6. Of course, this does not happen miraculously; it’s the result of careful design.
For example, let us consider the twelfth row of figure 2.6, where the ALU is instructed to compute the function x-1. The zx and nx bits are 0, so the x input is neither zeroed nor negated. The zy and ny bits are 1, so the y input is first zeroed, and then negated bit-wise. Bit-wise negation of zero, (000...00)_two, gives (111...11)_two, the 2’s complement code of -1. Thus the ALU inputs end up being x and -1. Since the f-bit is 1, the selected operation is arithmetic addition, causing the ALU to calculate x + (-1). Finally, since the no bit is 0, the output is not negated but rather left as is. To conclude, the ALU ends up computing x-1, which was our goal.
Figure 2.5
The Arithmetic Logic Unit.
 
Figure 2.6
The ALU truth table. Taken together, the binary operations coded by the first six columns effect the function listed in the right column (we use the symbols !, &, and | to represent the operators Not, And, and Or, respectively, performed bit-wise). The complete ALU truth table consists of sixty-four rows, of which only the eighteen presented here are of interest.
 
Does the ALU logic described in figure 2.6 compute every one of the other seventeen functions listed in the figure’s right column? To verify that this is indeed the case, the reader can pick some other rows in the table and trace their respective ALU operations. We note that some of these computations, beginning with the function f(x, y) = 1, are not trivial. We also note that there are some other useful functions computed by the ALU but not listed in the figure.
It may be instructive to describe the thought process that led to the design of this particular ALU. First, we made a list of all the primitive operations that we wanted our computer to be able to perform (right column in figure 2.6). Next, we used backward reasoning to figure out how x, y, and out can be manipulated in binary fashion in order to carry out the desired operations. These processing requirements, along with our objective to keep the ALU logic as simple as possible, have led to the design decision to use six control bits, each associated with a straightforward binary operation. The resulting ALU is simple and elegant. And in the hardware business, simplicity and elegance imply inexpensive and powerful computer systems.
2.3 Implementation
Our implementation guidelines are intentionally partial, since we want you to discover the actual chip architectures yourself. As usual, each chip can be implemented in more than one way; the simpler the implementation, the better.
 
Half-Adder
An inspection of figure 2.2 reveals that the functions sum(a, b) and carry(a, b) happen to be identical to the standard Xor(a, b) and And(a, b) Boolean functions. Thus, the implementation of this adder is straightforward, using previously built gates.
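Concretely, a half-adder built this way might be wired as follows in HDL. This is a minimal sketch, assuming the Xor and And chips from chapter 1 and the half-adder pin names (a, b, sum, carry) used throughout the book:

// Sketch of a half-adder built from previously constructed gates, following the
// observation above: sum(a, b) = Xor(a, b) and carry(a, b) = And(a, b).
CHIP HalfAdder {
    IN a, b;        // the two bits to add
    OUT sum,        // right bit of a + b
        carry;      // left bit of a + b

    PARTS:
    Xor(a=a, b=b, out=sum);    // sum is 1 when exactly one input is 1
    And(a=a, b=b, out=carry);  // carry is 1 only when both inputs are 1
}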
 
Full-Adder
A full-adder chip can be implemented from two half-adder chips and one additional simple gate. A direct implementation is also possible, without using half-adder chips.
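For the first option, a sketch in HDL might look like this, assuming the half-adder above and the Or gate from chapter 1 (the internal pin names ab, c1, and c2 are arbitrary):

// Sketch: a full-adder from two half-adders and one Or gate.
CHIP FullAdder {
    IN a, b, c;     // the three bits to add
    OUT sum,        // right bit of a + b + c
        carry;      // left bit of a + b + c

    PARTS:
    HalfAdder(a=a, b=b, sum=ab, carry=c1);    // add a and b
    HalfAdder(a=ab, b=c, sum=sum, carry=c2);  // add the incoming bit c
    Or(a=c1, b=c2, out=carry);                // a carry from either stage
}

An Or gate suffices for the carry because at most one of the two half-additions can produce a carry for any given input combination.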
 
Adder
The addition of two signed numbers represented by the 2’s complement method as two n-bit buses can be done bit-wise, from right to left, in n steps. In step 0, the least significant pair of bits is added, and the carry bit is fed into the addition of the next significant pair of bits. The process continues until in step n-1 the most significant pair of bits is added. Note that each step involves the addition of three bits (two input bits and an incoming carry bit). Hence, an n-bit adder can be implemented by creating an array of n full-adder chips and propagating the carry bits up the significance ladder.
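In HDL, a 16-bit version of this ripple-carry array might be sketched as follows. A half-adder handles the rightmost pair, which has no incoming carry; the internal pins c0 through c14 are arbitrary names for the intermediate carries, and the final carry is simply discarded into an unused pin:

// Sketch of a 16-bit ripple-carry adder: an array of adders, with each carry
// fed into the next most significant stage. Overflow (the last carry) is ignored.
CHIP Add16 {
    IN a[16], b[16];
    OUT out[16];

    PARTS:
    HalfAdder(a=a[0],  b=b[0],  sum=out[0],  carry=c0);
    FullAdder(a=a[1],  b=b[1],  c=c0,  sum=out[1],  carry=c1);
    FullAdder(a=a[2],  b=b[2],  c=c1,  sum=out[2],  carry=c2);
    FullAdder(a=a[3],  b=b[3],  c=c2,  sum=out[3],  carry=c3);
    FullAdder(a=a[4],  b=b[4],  c=c3,  sum=out[4],  carry=c4);
    FullAdder(a=a[5],  b=b[5],  c=c4,  sum=out[5],  carry=c5);
    FullAdder(a=a[6],  b=b[6],  c=c5,  sum=out[6],  carry=c6);
    FullAdder(a=a[7],  b=b[7],  c=c6,  sum=out[7],  carry=c7);
    FullAdder(a=a[8],  b=b[8],  c=c7,  sum=out[8],  carry=c8);
    FullAdder(a=a[9],  b=b[9],  c=c8,  sum=out[9],  carry=c9);
    FullAdder(a=a[10], b=b[10], c=c9,  sum=out[10], carry=c10);
    FullAdder(a=a[11], b=b[11], c=c10, sum=out[11], carry=c11);
    FullAdder(a=a[12], b=b[12], c=c11, sum=out[12], carry=c12);
    FullAdder(a=a[13], b=b[13], c=c12, sum=out[13], carry=c13);
    FullAdder(a=a[14], b=b[14], c=c13, sum=out[14], carry=c14);
    FullAdder(a=a[15], b=b[15], c=c14, sum=out[15], carry=overflow);
}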
 
Incrementer
An n-bit incrementer can be implemented trivially from an n-bit adder.
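For example, a 16-bit incrementer might simply feed the constant one into the second input of the 16-bit adder, as in this sketch (it assumes the Add16 chip above and the HDL convention that the constants true and false may be wired to sub-buses):

// Sketch of a 16-bit incrementer: add the constant 1 (binary 0...01) to the input.
CHIP Inc16 {
    IN in[16];
    OUT out[16];

    PARTS:
    // b[0] is tied to 1 and all other b bits to 0, so b represents the constant 1.
    Add16(a=in, b[0]=true, b[1..15]=false, out=out);
}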
 
ALU
Note that our ALU was carefully planned to effect all the desired ALU operations logically, using simple Boolean operations. Therefore, the physical implementation of the ALU is reduced to implementing these simple Boolean operations, following their pseudo-code specifications. Your first step will likely be to create a logic circuit that manipulates a 16-bit input according to the nx and zx control bits (i.e., the circuit should conditionally zero and negate the 16-bit input). This logic can be used to manipulate the x and y inputs, as well as the out output. Chips for bit-wise And-ing and addition have already been built in this and in the previous chapter. Thus, what remains is to build logic that chooses between them according to the f control bit. Finally, you will need to build logic that integrates all the other chips into the overall ALU. (When we say “build logic,” we mean “write HDL code”).
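As a hint toward that first step, here is one way such a conditioning stage might be sketched in HDL. The chip name Condition16 and its pin names z and n are hypothetical, introduced only for illustration; within the ALU, the same pattern would be instantiated twice, driven by zx/nx for the x input and by zy/ny for the y input:

// Hypothetical helper chip (for illustration only): conditionally zeroes and then
// conditionally negates a 16-bit value, mirroring the zx/nx (or zy/ny) semantics.
CHIP Condition16 {
    IN in[16], z, n;   // z: zero the input; n: then negate it bit-wise
    OUT out[16];

    PARTS:
    Mux16(a=in, b=false, sel=z, out=zeroed);       // if z == 1 then 0 else in
    Not16(in=zeroed, out=notZeroed);               // bit-wise negation of the result
    Mux16(a=zeroed, b=notZeroed, sel=n, out=out);  // if n == 1 then !zeroed else zeroed
}

The remaining selection steps described above can be handled in a similar spirit, with additional Mux16 chips driven by the f and no control bits.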
2.4 Perspective
The construction of the multi-bit adder presented in this chapter was standard, although no attention was paid to efficiency. In fact, our suggested adder implementation is rather inefficient, due to the long delays incurred while the carry bit propagates from the least significant bit pair to the most significant bit pair. This problem can be alleviated using logic circuits that effect so-called carry look-ahead techniques. Since addition is one of the most prevalent operations in any given hardware platform, any such low-level improvement can result in dramatic and global performance gains throughout the computer.
In any given computer, the overall functionality of the hardware/software platform is delivered jointly by the ALU and the operating system that runs on top of it. Thus, when designing a new computer system, the question of how much functionality the ALU should deliver is essentially a cost/performance issue. The general rule is that hardware implementations of arithmetic and logical operations are usually more costly, but achieve better performance. The design trade-off that we have chosen in this book is to specify ALU hardware with limited functionality and then implement as many operations as possible in software. For example, our ALU features neither multiplication nor division nor floating point arithmetic. We will implement some of these operations (as well as more mathematical functions) at the operating system level, described in chapter 12.