In fact, computer architectures are important examples of liminal computational artifacts (see Prologue, Section IV). They lie at the borderline between the purely abstract and the purely material. Computer architectures have an abstract identity of their own; one can describe them, analyze them, and design them without referring (to any large extent) to anything physical. Yet the identity of a computer architecture is made complete only when there is an implementation in the form of a material artifact, a physical computer (again, a rough analogy of the mind and brain is useful here). This explicit distinction, emphasized in the EDVAC report, between logical design and physical design, between architecture and implementation of that architecture, was entirely new.
The second major contribution of the EDVAC report is its most significant one. It concerned the nature and function of a computer's internal memory. For a machine to perform long sequences of operations without human intervention, it must have a large memory.19
For example, the partial or intermediate results of a complex operation such as finding square or cubic roots of a number, or even multiplication or division, must be “remembered.”20
Tables of values of commonly used mathematical functions (such as logarithmic or trigonometric functions) and values of certain analytical expressions, if stored in memory, can expedite computation.21
In the case of differential equations, initial conditions and boundary conditions would also have to be remembered for the duration of a computation.
Information such as this was, of course, long recognized as essential to computation. What made the EDVAC report stand apart was the recognition that in a long and complicated calculation the instructions themselves may need to be remembered, for they may have to be executed repeatedly.22
In present-centered language, a program (that is, the sequence of instructions or orders needed to carry out a computation) requires its own memory. And even though these various memory requirements are distinct functionally or conceptually, it seemed natural and, indeed, tempting to conceive of these different types of memory, for storing input data and results, for holding tables of values, and for remembering instructions, as a single “organ.”23
In these thoughts lies history, for what they expressed were the first significant hints of a paradigm (see Chapter 6, Section II). Why it was a paradigm is discussed later. Eventually, these ideas would be conceptualized collectively by the term stored-program computer, wherein the instructions to be executed are not only stored in an alterable read/write memory, but there is also no distinction between the instruction/program memory and the memory holding all the information that is input to the computation and that results from the computation. There is just one memory organ.
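The single-memory-organ idea can be made concrete with a small sketch. The interpreter below is an invented illustration, not any historical instruction set: its point is simply that instructions and data occupy one and the same read/write memory array, which is what distinguishes a stored-program machine.

```python
# A minimal sketch of the stored-program idea: one memory array holds both
# the instructions and the data they act on. Opcodes, instruction layout,
# and addresses are invented for illustration only.

def run(memory):
    """Interpret `memory` starting at address 0.

    Each instruction occupies 3 cells: [opcode, operand_a, operand_b].
    Because instructions live in the same read/write memory as data,
    a program could in principle even modify its own instructions.
    """
    pc = 0  # program counter
    while True:
        op, a, b = memory[pc:pc + 3]
        if op == 0:            # HALT
            return memory
        elif op == 1:          # ADD: mem[b] <- mem[a] + mem[b]
            memory[b] += memory[a]
        elif op == 2:          # JUMP to address a
            pc = a
            continue
        pc += 3

# Program and data side by side in one "memory organ":
# cells 0-8 hold instructions, cells 9-10 hold data.
mem = [1, 9, 10,   # ADD: mem[10] += mem[9]
       0, 0, 0,    # HALT
       0, 0, 0,    # (unused)
       2, 5]       # data: 2 and 5
run(mem)
print(mem[10])     # -> 7
```

Nothing in `memory` marks a cell as "instruction" or "data"; only the course of execution makes that distinction, which is precisely the unification the EDVAC report proposed.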
These two principles, the concept of the stored-program computer in which the same memory organ holds both data and program, and the principle of separating architecture from implementation, logical design from physical design, constitute two of the fundamental elements of the newly emergent paradigm. With historical hindsight, the Holy Grail seemed within reach.
Most of the American engineers, mathematicians, and physicists who were embroiled in the development of automatic computers in the decade 1936 to 1946, culminating in the production of the ENIAC, may not have heard of Alan Turing, let alone have read his 1936 paper on the Entscheidungsproblem (see Chapter 4). Babbage's name appears sporadically in papers, reports, and memoranda written during that period, but neither Turing's name nor his work has any presence (with the exception, of course, of the scientists and engineers at Bletchley Park [see Chapter 5, Sections XII–XIV]).
One person outside Bletchley Park and engaged with automatic computation who knew Turing well was von Neumann. As we have seen, von Neumann first met Turing in 1935 in Cambridge. They met again when Turing went to Princeton in fall 1936, where he stayed for 2 years and worked toward his PhD under Alonzo Church (see Chapter 4, Section VIII). There is no direct evidence that von Neumann had read Turing's paper on the Entscheidungsproblem and his mechanical concept of computability.24
There is, however, sufficient indirect and circumstantial evidence. First, given that offprints of his paper arrived while Turing was in Princeton,25 given von Neumann's legendary interest in all things mathematical, including formal logic (indeed, his first field of interest was mathematical logic and the foundations of mathematics; in 1925 and 1927, he published two papers in this area),26 and given that von Neumann wrote a letter of support to Cambridge University on Turing's behalf for a fellowship that would allow him to stay a second year at Princeton to complete his thesis,27 it seems inconceivable that von Neumann did not know of Turing's paper on computability. Certainly Goldstine was sanguine that von Neumann was aware of Turing's work.28
More definite evidence was offered by Stanley Frankel (1919–1978), a physicist who participated in the Manhattan Project in Los Alamos that led to the production of the atom bomb. At Los Alamos, Frankel had known von Neumann. He would recall, in 1972, that sometime during 1943 or 1944, von Neumann had drawn his attention to Turing's 1936 paper and urged him to study it (which he did).29
This inevitably raises the question: was the idea of the stored-program concept as enunciated by von Neumann in the EDVAC report, involving a single memory organ for both instructions and data, shaped by von Neumann's knowledge of Turing's universal computing machine of 1936? In that machine, we recall, the tape (the Turing machine memory) held symbols representing the operations to be performed, the data on which those operations are performed, and the results of such operations (see Chapter 4, Section IV). Was there any connection linking von Neumann's knowledge of the Turing paper and the architecture of the EDVAC computer?

According to Frankel's letter to Brian Randell, there was indeed such an influence. Von Neumann apparently told Frankel that the basic idea of the stored-program computer was due to Turing. Frankel went on to write that he believed that von Neumann's “essential role” was to introduce Turing's fundamental concept to the larger world.30
If, in fact, there was such an influence then, of course, Turing's 1936 paper would have great practical significance in the story of the birth of this new paradigm. In any case, even assuming that the stored-program concept was not influenced by von Neumann's knowledge of Turing's work (if, for instance, it was Eckert and Mauchly who had originally conceived the idea), it is quite inconceivable that, after the concept had emerged, von Neumann did not think of the Turing machine, and that he did not immediately realize a rather beautiful relationship between the architecture of a purely abstract artifact such as the Turing machine and the architecture of a practically conceived material artifact such as the EDVAC: they were the theoretical and practical faces of the same computing coin.
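The kinship between the two faces of the coin can be sketched in a few lines. The heart of Turing's universal machine is that a machine's description is itself ordinary data that a general interpreter can read and act on; the encoding below is invented for illustration and is not Turing's 1936 encoding.

```python
# A sketch of the idea behind Turing's universal machine: a machine's
# transition table is treated as ordinary data, consumed by one general
# interpreter. The encoding is invented for illustration only.

def run_universal(program, tape, state="start", pos=0, max_steps=1000):
    """Simulate a machine whose description is itself just data.

    `program` maps (state, symbol) -> (new_state, new_symbol, move),
    with move in {-1, +1}; simulation stops in state "halt".
    """
    tape = dict(enumerate(tape))         # sparse tape; blank cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, tape[pos], move = program[(state, symbol)]
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips 0s and 1s until it reaches a blank cell:
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_universal(flipper, "0110"))    # -> "1001_"
```

The interpreter never cares that `flipper` is a "program"; it reads it exactly as it reads the tape's contents, which is the abstract counterpart of the EDVAC's single memory organ holding both instructions and data.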
Let us recall Thomas Kuhn's concept of a scientific paradigm (see Chapter 7).31 The essence of “paradigmhood” is that a substantial majority of scientists (or certainly the most influential ones) working in a particular discipline should agree on certain core concepts governing their science, notably its core philosophical, theoretical, and methodological components. If, in fact, the community were deeply divided on these issues then, according to Kuhn, the science would be in a “preparadigmatic” state.
The second essence of paradigmhood is that a paradigm is neither complete nor unambiguous, for if a paradigm is complete, then, of course, there is no more to be done in that science! So there are significant clarifications to be made, interpretations to be given, unsolved problems to be solved, and so forth. All such activities Kuhn called “normal science.” A paradigm, then, offers a broad map, a framework, within which scientists carry out their normal research.
We can relate the Kuhnian notion of a paradigm (which is a social concept, involving a community) to the cognitive concept of schemas. The term schema was most famously used in 1932 by the English psychologist Sir Frederic Bartlett (1886–1969) in his studies of the process of remembering.32 Bartlett characterized a schema as an organization of past experiences held in a person's memory that participates in how the person responds to a new situation, and in how and what he remembers of his past experiences.33
Since Bartlett, the original concept of the schema (schemas or schemata in the plural) has been adapted, developed, and applied not only by psychologists, but also by anthropologists, philosophers, art historians, cultural and intellectual historians, creativity scholars, and researchers in artificial intelligence, and refined into a schema theory.34

According to this theory, an individual stores in her long-term memory an assemblage of schemas. A schema is a kind of pattern or template that serves to represent in that person's mind a certain stereotypical experience, concept, or situation. For example, we carry in our minds schemas for what it is like to attend a church wedding, go to a football game, or eat at a restaurant. When that person is confronted with a new situation, she attempts to make sense of it by matching it against some existing schema, which then guides her in responding to the situation. And so, even if we have never eaten at a Japanese restaurant, we “cope” with it by referencing our restaurant schema. Schemas, then, enable expectations about a person's social, cultural, or physical milieu and how to respond to new kinds of encounters within that milieu. Beginning in early childhood and over time (as the Swiss psychologist Jean Piaget [1896–1980] suggested35), humans constantly construct, use, instantiate, and reconstruct their mental stock of schemas.
Schemas, then, provide a kind of cognitive map whereby people negotiate new territories of knowledge and new physical, social, and cultural situations and experiences. But, to be effective, schemas must be flexible; they must be both elastic and plastic, such that one can stretch or reshape them, often in surprising ways. The Austrian-British art historian Sir Ernst Gombrich (1909–2001) wrote famously of making art as a process of starting with an initial schema and gradually correcting or modifying it.36 It is helpful if the initial schema selected by the artist is loose, flexible, even a bit vague, for then it can be extended or modified according to the artist's need. The Indian filmmaker Satyajit Ray (1921–1992) once remarked that, when looking for stories to adapt to film, he preferred novellas to full-scale novels because the former allowed more possibilities for expanding, interpreting, or changing into a film script than the latter. For Ray, novellas afforded schemas more readily than full-length novels, which were more tightly bound, leaving little scope for enlargement. These examples from art and filmmaking suggest that schemas are not only useful, indeed essential, for everyday thinking, remembering, and making sense of our day-to-day experiences, but also at the root of creative thinking.
In the context of science, a paradigm, when internalized by a scientist, becomes her dominant schema for that science.
A paradigm is a social entity; inside an individual's mind it becomes a cognitive entity, a very elaborate schema that governs the scientist's entire approach to, and conception of, that science. What Kuhn called “normal science” means, for the individual scientist, the elaboration and refinement of the schema representing the paradigm. It entails adding new elements (subschemas) to a schema, or altering some elements, or sometimes deleting subschemas altogether.
When we say that “a paradigm is born,” in cognitive terms this means that a common schema has been established in the minds of the scientists who work in that science. It also suggests that the schema is initially a mere skeleton, a backbone of related concepts, ideas, theories, and so forth, that need to be elaborated, refined, enlarged, even reshaped without destroying the overall skeleton.
A schema held in a person's mind is something personal; a paradigm belongs to a community. So when an individual scientist absorbs a paradigm and it becomes a schema in her mind, her interpretation of the paradigm may be quite different from a fellow-scientist's interpretation of the same paradigm.
Which brings us back to our story. Around 1945/1946, such a paradigm was just born. Those few people in the world who were engaged in the development of computers and computing would acquire and hold in their minds schemas representing the newborn paradigm. For someone like von Neumann, his computational schema would have two connected but separate subschemas: one representing the ideas and concepts contained in the EDVAC report, the other representing his understanding of the universal Turing machine. For someone like Mauchly or Presper Eckert, a subschema corresponding to the Turing machine was probably absent.
But people like Mauchly, Eckert, Goldstine, Burks, and others who developed the ENIAC, as well as von Neumann, would also possess a schema representing the idea of the ENIAC. There would be linkages between the ENIAC schema and the stored-program computer schema, for there were obvious common concepts. For example, central to the ENIAC schema was the idea of using vacuum tubes to implement both memory and arithmetic units. The stored-program computer schema also had a place for vacuum tubes, according to the EDVAC report.37