
15
An Explosion of Subparadigms
I

IN 1962, PURDUE University in West Lafayette, Indiana, in the United States opened a department of computer science with the mandate to offer master's and doctoral degrees in computer science.¹ Two years later, the University of Manchester in England and the University of Toronto in Canada also established departments of computer science.² These were the first universities in America, Britain, and Canada, respectively, to recognize a new academic reality formally—that there was a distinct discipline with a domain that was the computer and the phenomenon of automatic computation.

Thereafter, by the late 1960s—much as universities had sprung up all over Europe during the 12th and 13th centuries after the founding of the University of Bologna (circa 1150) and the University of Paris (circa 1200)—independent departments of computer science sprouted across the academic maps of North America, Britain, and Europe. Not all the departments used computer science in their names; some preferred computing, some computing science, some computation. In Europe, non-English terms such as informatique and informatik were used. But what was recognized was that the time had come to wean the phenomenon of computing away from mathematics and electrical engineering, the two most common academic “parents” of the field; and also from computer centers, which were in the business of offering computing services to university communities. A scientific identity of its very own was thus established. Practitioners of the field could call themselves computer scientists.

This identity was shaped around a paradigm. As we have seen, the epicenter of this paradigm was the concept of the stored-program computer as theorized originally in von Neumann's EDVAC report of 1945 and realized physically in 1949 by the EDSAC and the Manchester Mark I machines (see Chapter 8). We have also seen the directions in which this paradigm radiated out in the next decade. Most prominent among the refinements were the emergence of the historically and utterly original, Janus-faced, liminal artifacts called computer programs, and the languages—themselves abstract artifacts—invented to describe and communicate programs to both computers and other human beings. We have seen that a curious aspect of this new science was its concern for, indeed preoccupation with, procedures. The knowledge that had accrued and that characterized the field by the end of the 1950s was fundamentally procedural knowledge—“know-how” knowledge. Algorithms, numeric methods, heuristic search, linguistic tools, design methods were what this new breed of scientists was concerned with. Methodology was as much a domain of inquiry as the physical artifacts themselves.

There was strangeness to this new paradigm. There was strangeness to this new identity, for here was invented a new intellectual tradition. The mature natural sciences—geology, biology, physics, chemistry, astronomy—and the mature engineering sciences on which the engineering fields rested—mechanics, thermodynamics, fluid mechanics, physical metallurgy, metal physics, process chemistry, and so forth—had not seen the likes of such a science. Even electrical engineering and mathematics, both historically concerned with continuous phenomena, the two disciplines in which the first generation of professors of computer science were mostly trained, stood some distance from computer science.

II

But the computer science that was in place when the first departments of computer science emerged in the groves of academe in 1962 to 1965 was far from stable. If the appearance of a dominant paradigm is supposed to mark a mature science, then the computer science of the early 1960s challenged this view. As a paradigm, computer science was far from mature. It was volatile; it was dynamic. The core of the paradigm was not in any danger of being overthrown. No one suggested an alternative to the concept of the stored-program computer as a rival computational model. That would occur in the mid 1970s with what would be called the data flow computing style³ and the functional programming style.⁴ Yet, no one would call what transpired during the 1960s “normal science” in Thomas Kuhn's unflattering sense (see Chapter 6, Section II).

Rather, within the stored-program computing paradigm as a whole there were regions where subparadigms were being created—local earthquakes that shifted the regions with consequences for neighboring regions, as it were. The outcome was to alter significantly “local landscapes” within the paradigm without destabilizing the paradigm itself. In our story, the end of the birth of computer science was marked not by a whimper but a rather big bang—by an explosion of subparadigms created during the 1960s.

III

Of course, to partition this chapter of our story neatly into a split between calendrical decades is too facile. Some of the subparadigm shifts began during the 1950s and reached maturity during the 1960s. A notable instance was the development of a theory of what came to be called variously and synonymously sequential circuits, sequential machines, finite-state machines, finite automata.

A physical computer comprises, ultimately, switching circuit components. Some of these take (Boolean) values as inputs and, after a certain delay, produce (Boolean) outputs. But these circuits have no “memory” of their past behavior. These circuits came to be called combinational circuits and were the kind Claude Shannon analyzed using Boolean algebra in 1938 (see Chapter 5, Sections III and IV). The circuits in a computer that decode instructions and then encode the decoder's output and issue control signals that cause the instruction to be executed are combinational circuits (see Chapter 12, Section IV).
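A minimal sketch in modern Python (an illustration only; the function and its name are assumptions, not anything from the historical record) makes the idea concrete: a combinational circuit computes a pure Boolean function of its present inputs, with no stored state.

# Illustrative sketch: a combinational circuit is a pure Boolean function
# of its present inputs; it has no memory of past inputs.

def full_adder(a, b, carry_in):
    """One-bit full adder: (sum, carry_out) computed from the present inputs only."""
    total = a ^ b ^ carry_in                          # sum bit
    carry_out = (a and b) or (carry_in and (a ^ b))   # carry bit
    return total, carry_out

# The same inputs always yield the same outputs, regardless of what came before.
print(full_adder(True, True, False))                  # (False, True)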

Other circuits have “memory” of their immediate past states. A counter that counts from 0 to 9 (and then returns to 0) is an example. At each time step, a new pulse causes the state of the device to change, so that if the present state represents the digit n, then its next state would represent the digit n + 1 (for 0 ≤ n ≤ 8), and it would represent 0 if n = 9. Such devices, whose responses to inputs depend on their internal states, are sequential circuits or machines. Registers, accumulators, shift registers, main memory, control units—all components of a computer—are instances of sequential circuits. The most basic sequential circuit element is the “flip-flop,” which can have only one of two states: 1 and 0. Depending on its present state and the input value (1 or 0), a flip-flop either stays in its present state or switches to the other state.
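The decade counter just described can be sketched in a few lines of Python (an illustration under modern notation, not a historical artifact); the point is simply that the response to each pulse depends on an internal state.

# Illustrative sketch: the 0-to-9 counter as a sequential machine.
# Unlike a combinational circuit, its response depends on an internal state.

class DecadeCounter:
    def __init__(self):
        self.state = 0                      # internal state: a digit 0 through 9

    def pulse(self):
        """One clock pulse: move to the next state, wrapping from 9 back to 0."""
        self.state = (self.state + 1) % 10
        return self.state

counter = DecadeCounter()
print([counter.pulse() for _ in range(12)])   # [1, 2, ..., 9, 0, 1, 2]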

The structure of combinational and sequential circuits involves components that are abstractions of physical—electronic, electrical—circuit elements. Their behavior is, correspondingly, an abstraction of such things as currents, voltages, and resistances. It does not matter whether these circuits are built from relays, vacuum tubes, or transistors, which were invented in 1948 by William Shockley (1910–1989), John Bardeen (1908–1991), and Walter Brattain (1902–1987), all of Bell Laboratories, for which they shared the Nobel prize for physics in 1956; or integrated semiconductor “chips” of the kind that appeared during the early 1960s. What was important was that as combinational or sequential circuits, their structure and behavior were described in terms of “logical”/binary/Boolean values and Boolean algebra.

Abstracting physical circuits up to the level of combinational and sequential circuits created a new kind of design activity that came to be called logic design and that provided a bridge between the machine, seen as a complex network of physical devices (diodes, triodes, resistors, capacitors, ferrite cores) obeying the laws of physics, and the machine, seen as a unified, functionally organized digital stored-program computer.

Shannon invented this abstraction level in the realm of combinational circuits in 1938. However, the creation of a theory of sequential circuits/machines, more complex because of the presence of memory, was initiated during the mid 1950s. This theory began with the recognition of the vital concept of an internal state of a digital device, and this concept was recognized independently by a number of people.⁵

In its most general form, a sequential machine has the following characteristics: (i) it consists of a finite set of possible states in which the machine can be; (ii) it can accept (recognize) one of a finite set of possible input values (that is, has a finite input alphabet); (iii) it can produce one of a finite set of output values (that is, has a finite output alphabet); (iv) when an input is received the circuit changes from its present state to a next state, depending on the input and the present state; and (v) the machine produces an output either depending (a) only on the present state or (b) on the present state and the input.
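Read as a definition, characteristics (i) through (v) translate almost directly into code. The following Python sketch (illustrative only; all names are invented here, not drawn from the literature) captures the general form, with the output computed from the present state and the input as in (v(b)).

# Illustrative sketch: a generic sequential (finite-state) machine following
# characteristics (i) through (v). All names are invented for illustration.

class SequentialMachine:
    def __init__(self, states, inputs, outputs, next_state, output_fn, start):
        self.states = states          # (i) finite set of possible states
        self.inputs = inputs          # (ii) finite input alphabet
        self.outputs = outputs        # (iii) finite output alphabet
        self.next_state = next_state  # (iv) next state = f(present state, input)
        self.output_fn = output_fn    # (v) output function
        self.state = start            # the present state

    def step(self, symbol):
        """Accept one input symbol: produce an output, then change state."""
        out = self.output_fn(self.state, symbol)         # (v(b)) form
        self.state = self.next_state(self.state, symbol)
        return out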

The overall structure of a sequential machine is shown in Figure 15.1. Notice that the (mathematical) function that produces the next state S_k (the next-state function) and the function that produces the output O_j (the output function) are realized by combinational logic circuits to which the inputs are the inputs I_j and the present state S_i. The “memory” device is that part of the sequential machine that holds the machine's state at any point of time.

The alternative ways in which the output function may be defined give rise to two alternative models of sequential machines that are identical with respect to (i) through (iv), but differ according to whether the output function follows (v(a)) or (v(b)). The first model came to be called the Moore machine, after Edward F. Moore (1925–2003), its inventor in 1956, who was then on the research staff at Bell Laboratories and, later, professor of computer sciences at the University of Wisconsin, Madison. The second model came to be called the Mealy machine after George H. Mealy (1927–2010), who conceived it in 1955. Mealy was also, at the time, with Bell Laboratories; later, he became a professor of computer science at Harvard University.
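The difference between the two models is easiest to see side by side. In the following Python sketch (again an invented illustration, not either author's formulation), the same two-state machine is given a Moore-style output, a function of the present state alone, and a Mealy-style output, a function of the present state and the input together.

# Illustrative sketch: Moore versus Mealy outputs for one simple two-state
# machine that watches its input for the value 1.

def next_state(state, bit):
    return "SEEN_ONE" if bit == 1 else "IDLE"

def moore_output(state):              # (v(a)): depends only on the present state
    return 1 if state == "SEEN_ONE" else 0

def mealy_output(state, bit):         # (v(b)): depends on present state and input
    return 1 if (state == "IDLE" and bit == 1) else 0

# The Mealy output responds to an input in the same step; the Moore output
# changes only after the state itself has changed.
state = "IDLE"
for bit in [0, 1, 1, 0]:
    print(moore_output(state), mealy_output(state, bit))
    state = next_state(state, bit)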
