These two models together formed the seed of a new subparadigm, and around this seed an elaborate and mathematically sophisticated theory of sequential machines emerged during the course of the 1960s6—a theory that contributed, on the one hand, to automata theory7 and, on the other, to an elegant and formal foundation for logic design.8 As an example of the latter, a computer's control unit (see Chapter 12) is an instance of a sequential machine. At any given time the device is in one of a finite set of states, S = {s1, s2, …, sm}. When in state si (that is, when the present state of the control unit is si), the device issues one or more control signals from a finite alphabet of control signals C = {c1, c2, …, cn}; the signals issued constitute collectively the unit's output oj. The input iq to the control unit comes from the "rest of the computer." Thus, the next state of the control unit, sk, will be determined by both the present state si and the input iq. Such a control unit is an instance of the Moore machine.

FIGURE 15.1 The Structure of a Sequential Circuit.
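
To make the idea concrete, here is a minimal sketch of such a control unit in Python (a modern idiom, of course, not anything from the period). The states, input conditions, and control signals are hypothetical illustrations; the point is only that the signals issued depend on the present state alone, whereas the next state depends on the present state and the input, which is the defining property of a Moore machine.

```python
# A minimal, hypothetical sketch of a Moore-style control unit.
# Output is a function of the present state only; the next state is a
# function of the present state and the input.

class MooreControlUnit:
    def __init__(self, transitions, outputs, start_state):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # state -> control signals issued
        self.state = start_state

    def step(self, condition):
        signals = self.outputs[self.state]            # depends on state alone
        self.state = self.transitions[(self.state, condition)]
        return signals

# Hypothetical fetch/decode/execute cycle with a single input condition.
transitions = {
    ("fetch", "ok"): "decode",
    ("decode", "ok"): "execute",
    ("execute", "ok"): "fetch",
}
outputs = {
    "fetch": {"read_memory", "increment_pc"},
    "decode": {"decode_opcode"},
    "execute": {"alu_operate", "write_register"},
}

unit = MooreControlUnit(transitions, outputs, "fetch")
for _ in range(3):
    print(unit.state, "->", sorted(unit.step("ok")))
```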

IV

Sequential machines were also called finite automata—finite because the number of possible states such a machine can have is finite. A sequential machine, then, has finite memory. In contrast, the Turing machine (see Chapter 4, Section IV), with its infinite tape, has infinite memory. So even though the control part of a Turing machine can only have a finite number of states, the machine as a whole is an "infinite automaton." The "machines" involved here were not physical devices; they were not material computational artifacts. The Turing machine (as previously pointed out in Chapter 4, Section IV) is a purely abstract artifact, having no physical reality. A sequential machine, in contrast, has the same liminal quality a program has. It is fundamentally abstract: like the Turing machine, it is a mathematical entity that can be studied, analyzed, and manipulated entirely in the abstract, as mathematical objects are. On the other hand, a sequential machine can serve as a design for a physical digital circuit. The implementation of a sequential machine design would be a material artifact.

Collectively, the study of sequential machines and infinite automata came to be called automata theory, a field of study that came into its own during the 1950s.9 Automata theory treats both such machines as abstract artifacts. By the end of the next decade, its status as a subparadigm within the computer science paradigm was firmly established. A marker was the publication of texts during the second half of the 1960s—texts that became seminal in characterizing the "state of the art" of the field, most notably Marvin Minsky's Computation: Finite and Infinite Machines (1967),10 Michael Arbib's Theories of Abstract Automata (1969),11 and Formal Languages and Their Relation to Automata (1969) by John Hopcroft and Jeffrey Ullman.12

The study of Turing machines, of course, was concerned with the fundamental problem that had led Alan Turing to create his abstract machine in the first place: what does it mean to compute? Computability, decidability, and solvability were the issues with which automata theorists remained concerned. The theory of computing was the broad term that encompassed this subsubparadigm (as it were) of automata theory, and a number of key books explored it from the late 1950s through the 1960s.13

V

However, there was another face to automata theory that had more practical appeal, and this went back to Noam Chomsky's celebrated monograph Syntactic Structures (1957) (see Chapter 13, Section XVIII). We recall that the focus of that book was the nature and form of the grammar (syntactic rules) that would generate and account for sentences of a natural language such as English. Early in the book, Chomsky explored the idea of a grammar as a machine.

Imagine a finite-state machine with, possibly, a very large set of possible states. It has an "initial" or "starting" state and a "final" state. Beginning in its starting state, the machine goes from one state to another in sequence, each change of state being accompanied by the production of a symbol (say, an English word), until it reaches the final state and stops. We can call the sequence of symbols produced by the machine a sentence. Each distinct path from a starting state to a final state thus generates a distinct sentence. We can call the set of all such sentences producible by this machine a language.

Chomsky called any language that can be produced by this sort of finite-state machine a finite-state language, and the machine itself a finite-state grammar.14

Chomsky's finite-state grammar is, in fact, a finite-state or sequential machine wherein there is no input, and the outputs (words) are produced as a function of the state only; it is, in other words, a Moore machine. Chomsky would go on to describe several types of grammars and characterize the languages they would generate.15
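
A brief sketch in Python may make this concrete. The states and vocabulary below are hypothetical illustrations in the spirit of Chomsky's examples; each state emits one word, and any path from the starting state to the final state spells out a sentence of the finite-state language.

```python
# A minimal, hypothetical finite-state grammar in the Moore style described
# above: each state emits a word, and a walk from START to FINAL produces a
# sentence. The set of all sentences so producible is the finite-state language.
import random

OUTPUT = {"THE": "the", "MAN": "man", "MEN": "men",
          "COMES": "comes", "COME": "come"}

NEXT = {                      # state -> possible next states
    "START": ["THE"],
    "THE":   ["MAN", "MEN"],
    "MAN":   ["COMES"],
    "MEN":   ["COME"],
    "COMES": ["FINAL"],
    "COME":  ["FINAL"],
}

def generate_sentence():
    state, words = "START", []
    while state != "FINAL":
        state = random.choice(NEXT[state])
        if state != "FINAL":
            words.append(OUTPUT[state])   # the word depends on the state alone
    return " ".join(words)

print(generate_sentence())  # "the man comes" or "the men come"
```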

In identifying grammars with automata, Chomsky summoned up a question of great interest—not to linguists, but to computer scientists interested in the theory and design of compilers for programming languages. One of the language types Chomsky identified, which he labeled "Type 2," came to be known to computer scientists as context-free languages (see Chapter 13, Section XIX). The grammars that produce context-free languages came to be called context-free grammars. As we have seen, the syntax of Algol 60 was defined by a context-free grammar (see Chapter 13, Section XIX). And because a compiler's first task is to ensure the syntactic correctness of a program written in its language, the segment of a compiler responsible for this task—called a parser—must be able to "recognize" that programs in a language like Algol 60 are, in fact, context free.

A parser is itself a computational device. And the question was raised: what kind of computational machine is needed to recognize a context-free programming language? More generally, if there are several types of grammars, what must be the power of an automaton for it to recognize a particular type of language?

Thus, a subfield of automata theory that emerged during the 1960s concerned itself with the study of automata that corresponded to these different types of languages. A finite automaton (a sequential machine) can recognize only finite-state languages (called by Chomsky "Type 3" languages), which have grammars with syntactic rules (productions) of the form

A → α
A → αB

where A and B are nonterminal symbols and α is a terminal symbol (see Chapter 13, Section XIX). A context-free language has a grammar with productions of the form

A → α

where A is a nonterminal symbol and α is a string of nonterminal and/or terminal symbols. A more general type of language, known as a context-sensitive language (Chomsky's Type 1), has productions of the form

αAβ → απβ

where A is a nonterminal symbol and α, β, and π are strings of terminal and/or nonterminal symbols.
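
The correspondence between Type 3 grammars and finite automata can be shown with a small Python sketch built on hypothetical productions: each nonterminal becomes a state, a production A → αB becomes a transition from state A to state B on terminal α, and a production A → α becomes a transition to an accepting state.

```python
# A small, hypothetical illustration of why a finite automaton suffices for a
# Type 3 grammar. Grammar: S -> aS | aB, B -> b, generating strings of one or
# more a's followed by a single b. Nonterminals become states; "ACCEPT" marks
# a completed derivation.

TRANSITIONS = {
    "S": {"a": ["S", "B"]},   # S -> aS and S -> aB
    "B": {"b": ["ACCEPT"]},   # B -> b
}

def recognize(string, state="S"):
    """True if the automaton can consume the whole string and accept."""
    if not string:
        return state == "ACCEPT"
    first, rest = string[0], string[1:]
    return any(recognize(rest, nxt)
               for nxt in TRANSITIONS.get(state, {}).get(first, []))

print(recognize("aaab"))  # True
print(recognize("ab"))    # True
print(recognize("ba"))    # False
```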

Thus, a second area of interest in automata theory that emerged during the 1960s was the determination of the kinds of automata that corresponded to context-free and context-sensitive grammars. A finite automaton would recognize only finite-state languages. The other types of languages required more powerful automata—infinite machines. Turing machines of different kinds were explored as representing different types of grammars.16
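
To see why the other types demand more powerful machines, consider a small hypothetical example: the language of n a's followed by n b's is context free (productions S → aSb and S → ab), but no finite automaton can recognize it, because matching the two counts requires unbounded memory. In the Python sketch below, recursion supplies the stack that a finite automaton lacks.

```python
# Recursive-descent recognizer for the hypothetical context-free grammar
# S -> aSb | ab, whose language (a^n b^n, n >= 1) lies beyond any finite
# automaton. The recursion depth plays the role of a pushdown stack.

def recognize_anbn(s):
    def parse_S(i):
        if i >= len(s) or s[i] != "a":   # both productions begin with 'a'
            return None
        j = parse_S(i + 1)               # try S -> aSb
        if j is not None and j < len(s) and s[j] == "b":
            return j + 1
        if i + 1 < len(s) and s[i + 1] == "b":
            return i + 2                 # fall back to S -> ab
        return None

    return parse_S(0) == len(s)

print(recognize_anbn("aaabbb"))  # True
print(recognize_anbn("aabbb"))   # False
```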

Of course, no practical parsing algorithms would be designed along such machine principles because Turing machines consume a great deal of “time” moving back and forth along their tapes. But automata theory provided a normative standard by which to compare the different types of languages and their automatic recognition.

VI

A compiler is a programming system—meaning that it does not implement a single algorithm but, rather, a collection of algorithms that interact with one another in some fashion. To put this another way, a compiler's overall goal is to translate programs written in some high-level, machine-independent programming language into object code for a particular computer. This task is complex enough to be decomposable into several subtasks or subgoals, each of which necessitates an algorithmic solution of its own (Figure 15.2). The compiling task begins with what came to be called lexical analysis. The linguistic origin of this term is clear, because a lexeme is the linguist's term for a minimal meaningful unit of language—roughly, a word. In lexical analysis, the input source program is transformed into a string of symbols in which all the (usually) multicharacter or variable-length identifiers appearing in the source program (such as reserved words in Algol like begin and end, and variable identifiers) are replaced by single or fixed-length symbols. The output of lexical analysis (which will also include a "symbol table" recording the correspondence of fixed-length symbols with the original identifiers) is passed on to the parser, which performs syntax analysis to confirm that the program is syntactically legal, obeying the grammar of the original programming language. Assuming syntactic correctness, the parser's output, in some appropriate form, is passed to the "code generator," which produces object code for the target machine. As part of the code generator, or as a subsequent compiler phase, there is the task of "code optimization," which makes the object code more efficient in terms of the amount of memory space required to store it or the amount of time required to execute it.

FIGURE 15.2 Structure of a Compiler.
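
As an illustration of the first phase, here is a minimal sketch of a lexical analyzer in Python. The token names and the Algol-flavored input are hypothetical; the sketch replaces variable-length identifiers with fixed-length codes and builds the symbol table described above.

```python
# A minimal, hypothetical lexical analyzer: variable-length identifiers are
# replaced by fixed-length codes, and a symbol table records the correspondence.
import re

KEYWORDS = {"begin", "end"}

def lex(source):
    tokens, symbol_table = [], {}
    for lexeme in re.findall(r"[A-Za-z]+|\d+|:=|[+;]", source):
        if lexeme in KEYWORDS:
            tokens.append(("KEYWORD", lexeme))
        elif lexeme.isalpha():
            # assign each distinct identifier a fixed-length code
            code = symbol_table.setdefault(lexeme, f"ID{len(symbol_table)}")
            tokens.append(("IDENT", code))
        elif lexeme.isdigit():
            tokens.append(("NUMBER", lexeme))
        else:
            tokens.append(("OP", lexeme))
    return tokens, symbol_table

tokens, table = lex("begin count := count + 1 end")
print(tokens)  # [('KEYWORD', 'begin'), ('IDENT', 'ID0'), ('OP', ':='), ...]
print(table)   # {'count': 'ID0'}
```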

These different subtasks, in turn, may demand other subsubtasks, and these engender their own problems: deciding which data structures should be used to represent the program and its data at the various stages of the compiling process, and how best to access elements of those data structures; how to represent the syntax of a programming language so as to enable more efficient parsing algorithms; how best to analyze a source program for the purpose of parsing; and how to represent a parser's output so as to facilitate efficient code generation strategies.

Thus, as a class of system programs, compilers spawned a whole range of research problems during the 1960s having to do with the invention of (especially context-free) grammars,17 algorithms for parsing,18 code generation strategies,19 and code optimization techniques.20 The first books on compilers appeared, both on compiling techniques for particular programming languages21 and, more generally, on compiling algorithms and strategies.22 Surveys and overview papers on particular aspects of compilers were published.23

Inventing algorithms for various components of the compiling task was not the only issue of interest. There was the practical problem of writing compilers.
