Examples of language-functions: mathematics, art, expressive gesture, myth.
One of the most important language-functions is, of course, speech.
In most multiple speaker/hearer situations, several language-functions occur at once: A talking to B . . . B talking to A . . . C listening to what A and B say, etc. (In Art, on the other hand, there is usually only one: artist to audience. The language-function that goes from audience to artist is, of course, criticism.)
The language itself is the way, within a single speaker/hearer, an interpretive field is connected to a generative field.
54. The trouble with most cybernetic models of language (those models that start off with “sound waves hitting the ear”) is that they try to express language only in terms of an interpretive field. To the extent that they posit a generative field at all, they simply see it as an inverse of the interpretive field.
In ordinary human speech, the interface of the interpretive field with the world is the ear: an incredibly sensitive microphone that, in its flexibility and versatility, still has not been matched by technology. The interface of the generative field with the world is two wet sacks of air and several guiding strips of muscle, laid out in various ways along the air track, and a variable-shaped resonance box with a variable opening: the lungs/throat/mouth complex. This complex can produce a great many sounds, and in extremely rapid succession. But it can produce nothing like the range of sounds the ear can detect.
Language, whatever it is, in circuitry terms has to lie between these two interfaces, the ear and the mouth.
Most cybernetic models, to the extent that they approach the problem at all, see language as a circuit to get us from a sensitive microphone to an equally sensitive loudspeaker. A sensitive loudspeaker just isn't in the picture. And I suspect if it were, language as we know it would not exist, or at least be very different.
Try and envision circuitry for the following language tasks:
We have a sensitive microphone at one end of a box. At the other, we have a mechanically operable squeeze-box/vocal-cord/palate/tongue/teeth/lip arrangement. We want to fill up the box with circuitry that will accomplish the following: Among a welter of sounds (bird songs, air in leaves, footsteps, traffic noise) one is a simple, oral, human utterance. The circuitry must be able to pick out the human utterance, store it, analyze it (in terms of breath duration, breath intensity, and the various stops that have been imposed on the stream of air by vocal cords, tongue, palate, teeth, lips) and then, after a given time, reproduce this utterance through its own squeeze-box mechanism.
This circuitry task is both much simpler and much more complicated than getting a sound out of a loudspeaker. Once we have such a circuit, however, well before we get to any “logic,” “syntax,” or “semantic” circuits, we are more than halfway to having a language circuit.
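Here is a minimal sketch of what that first circuit might amount to in software, assuming sounds arrive as already-labeled symbolic events rather than raw waveforms; the SoundEvent type, its fields, and the phone symbols are illustrative inventions, not part of the argument:

    # Toy sketch of the first circuit: pick the human utterance out of ambient
    # noise, store it as an analysis (duration, intensity, articulatory "stops"),
    # and later reproduce it. Everything here is an illustrative stand-in.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SoundEvent:
        source: str          # "human", "bird", "traffic", ...
        phones: List[str]    # crude stand-in for the analyzed stops and vowels
        duration: float      # breath duration, in seconds
        intensity: float     # breath intensity, arbitrary units

    class UtteranceCircuit:
        def __init__(self):
            self.store: List[SoundEvent] = []

        def listen(self, events: List[SoundEvent]) -> None:
            # "Pick out the human utterance" among the welter of sounds.
            for e in events:
                if e.source == "human":
                    self.store.append(e)

        def reproduce(self) -> List[str]:
            # After a given time, drive the squeeze-box with the stored analysis.
            if not self.store:
                return []
            return self.store[-1].phones   # the articulatory program to re-execute

    circuit = UtteranceCircuit()
    circuit.listen([
        SoundEvent("bird", ["tweet"], 0.3, 0.2),
        SoundEvent("human", ["dh", "ax", "b", "eh", "d"], 1.1, 0.7),
        SoundEvent("traffic", ["rumble"], 5.0, 0.9),
    ])
    print(circuit.reproduce())   # ['dh', 'ax', 'b', 'eh', 'd']

Nothing in the sketch does more than recognize, store, and play back; that is the point.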
Consider:
We now want to modify this circuit so that it will perform the following task as well:
Presented with a human utterance, part of which is blurred (either by other sounds or because the utterer said it unclearly), our circuit must now be able to give back the utterance correctly, using phonic overdeterminism to make the correction: letting X stand for the blurred phoneme, if the utterance is “The pillow lay at the foot of the Xed” or “She stood at the head of the Xairs,” our circuitry should be able to reproduce the most likely phoneme in place of the blur, X.
I think most of us will agree that, if we had the first circuit, getting to the second circuit would be basically a matter of adding a much greater storage capacity, connected up in a fairly simple (i.e., regular) manner with the circuit as it already existed.
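A sketch of how that extra storage might do the correcting, assuming the circuit keeps whole remembered utterances and that letters can stand in for phonemes; resolve_blur and STORED_UTTERANCES are hypothetical names:

    # Toy sketch of the second circuit: fill in a blurred phoneme (written "X")
    # from the most likely match among previously stored utterances.

    from collections import Counter

    STORED_UTTERANCES = [
        "the pillow lay at the foot of the bed",
        "she stood at the head of the stairs",
        "he sat at the foot of the bed",
    ]

    def resolve_blur(utterance: str) -> str:
        """Replace the single 'X' in utterance with the likeliest phoneme."""
        candidates = Counter()
        for stored in STORED_UTTERANCES:
            # Align word by word; where the blurred word matches a stored word
            # everywhere except at the X, count that stored sound as a candidate.
            for heard, known in zip(utterance.split(), stored.split()):
                if "X" in heard and len(heard) <= len(known):
                    i = heard.index("X")
                    tail = heard[i + 1:]
                    if known.startswith(heard[:i]) and known.endswith(tail):
                        candidates[known[i:len(known) - len(tail)]] += 1
        if not candidates:
            return utterance
        return utterance.replace("X", candidates.most_common(1)[0][0])

    print(resolve_blur("the pillow lay at the foot of the Xed"))   # ... the bed
    print(resolve_blur("she stood at the head of the Xairs"))      # ... the stairs

The only machinery added to the first circuit is a larger store and a counting pass over it.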
Let us modify our circuit still more:
We present an utterance with a blurred phoneme that can resolve in two (or more) ways:
“Listen to the Xerds.” (Though I am not writing this out in phonetic notation, it is assumed that the phonic component of the written utterance is what is being dealt with.)
Now in this situation, our very sensitive microphone is still receiving other sounds as well. The circuitry should be such that, if it is receiving, at the same time as the utterance, or has received fairly recently, some sound such as cheeping or twittering (or the sounds of pencils and rattling paper), it will resolve the blurred statement into “listen to the birds” (or, respectively, “listen to the words”); and if the accompanying sound is a dank, gentle plashing . . . Again, this is still just a matter of more storage space to allow wider recognition/association patterns.*
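A sketch of the context-sensitive version, assuming the circuit keeps a short list of recently heard non-speech sounds and a learned table associating them with words; the labels and the table are invented for illustration:

    # Toy sketch of the third modification: the same blurred utterance resolves
    # differently depending on what else the microphone has picked up recently.

    RECENT_SOUND_ASSOCIATIONS = {
        "cheeping": "birds",
        "twittering": "birds",
        "rattling paper": "words",
        "pencil scratching": "words",
    }

    def resolve_with_context(utterance: str, recent_sounds: list) -> str:
        """Resolve the blurred word 'Xerds' by association with ambient sounds."""
        for sound in reversed(recent_sounds):    # most recent sound first
            if sound in RECENT_SOUND_ASSOCIATIONS:
                return utterance.replace("Xerds", RECENT_SOUND_ASSOCIATIONS[sound])
        return utterance                         # no resolving context was heard

    print(resolve_with_context("listen to the Xerds", ["footsteps", "cheeping"]))
    # -> "listen to the birds"
    print(resolve_with_context("listen to the Xerds", ["pencil scratching"]))
    # -> "listen to the words"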
The next circuitry recomplication we want is for our circuit to be such that, when presented with a human utterance, ambiguous or not, it can come back with a recognizable paraphrase. To do this, we might well need not only a sensitive microphone but a sensitive camera, a sensitive micro-olfact, and a micro-tact as well, along with ways of sorting, storing, and associating the material they collect. Basically, however, as far as the specific language circuitry is concerned, it is still a matter of greater storage capacity, needed to allow greater associational range.
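A sketch of the paraphrase step under the same assumption, that paraphrase is only wider association: words stored against the same non-verbal impressions become interchangeable in the output. The association table here is an invented example:

    # Toy sketch of paraphrase by association: swap each word for one stored
    # against the same sights, smells, and textures, when such a word exists.

    ASSOCIATIONS = {
        "pillow": ["cushion"],
        "foot": ["bottom", "end"],
        "bed": ["cot"],
    }

    def paraphrase(utterance: str) -> str:
        """Return a recognizable paraphrase by swapping in stored associates."""
        out = []
        for word in utterance.split():
            out.append(ASSOCIATIONS[word][0] if word in ASSOCIATIONS else word)
        return " ".join(out)

    print(paraphrase("the pillow lay at the foot of the bed"))
    # -> "the cushion lay at the bottom of the cot"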
I think that most people would agree, at this point, that if we had a circuit that could do all these tasks, even within a fairly limited vocabulary, though we might not have a circuit that could be said to know the language, we would certainly have one that could be said to know a lot about it.
One reason to favor the above as a model of language is that, given the initial circuit, the more complicated versions could conceivably evolve by ordinary natural-selection and mutation processes. Each new step is still basically just a matter of adding lots of very similar or identical components, connected up in very similar ways. Consider also: Complex as it is, that initial circuitry must exist, in some form or another, in every animal that recognizes and utters a mating call (or warning) to or from its own species, among the welter, confusion, and variety of wild forest sounds.
The usual cybernetic model for language interpretation:

[diagram omitted: a chain of six boxes]

where each box must be a different kind of circuit, the first four (and, arguably, all six) probably different for each language, strikes me as a pretty hard thing to “grow” by ordinary evolutionary means, or to program on a tabula rasa neural net.
The circuitry I suggest would all be a matter of phonic recognition, phonic storage, and phonic association (short of the storage and associational employment of other sensory information). A great deal of recognition/storage/association would have to be done by the circuitry to achieve language. But nothing else would have to be done, other than what was covered in our original utterance-reproduction circuit.
Not only would the linguistic bugaboo “semantics” disappear (as experiments indicate that it may have already) but so would morphology; and syntax and phonic analysis would simply absorb one another, so to speak.
Would this really be so confusing?
I think not. It is only a rather limited view of grammar that initially causes it to appear so.
Think of grammar solely as the phonic redundancies that serve to transform a heard utterance from the interpretive field, through the range of associations in the hearer/speaker's memory that includes “his language,” into the hearer/speaker's generative field as an utterance.
In the qui, quae, quod of Latin, for instance, I'm sure the Roman brain (if not the Roman grammarian) considered the redundancy of the initial “qu” sound as grammatically significant (in my sense of “grammar”), just as it considered significant, say, the phonic redundancy between the “ae” at the end of “quae” and the “ae” at the end of “pullae.” (We must get rid of the notion of grammar as something that applies only to the ends of words!) In English, the initial sounds of the, this, that, these, those, and there are all grammatically redundant in a similar way. (The “th” sound indicates, as it were, “indication”; the initial “qu” sound, in Latin, indicates “relation,” just as the terminal “ae” sound indicates, in that language, “more than one female.”*) What one can finally say of this “grammar” is: When a phonic redundancy does relate to the way that a sound is employed in conjunction with other sounds/meanings, then that phonic element of the grammar is regular. When a phonic redundancy does not so relate, that element is irregular. (The terminal “s” sound on “these” and “those” is redundant with the terminal “s” of loaves, horses, and sleighs: it indicates plurality, and is therefore regular with those words. The terminal “s” on “this” is irregular with them. The terminal “s” sounds at the end of “is,” “wants,” “has,” and “loves” all imply singularity. Should the terminal “s” on “this” be considered regular with these others? I suspect in many people's version of English it is.) For all we know, in the ordinary English hearer/speaker's brain, “cream,” “loam,” “foam,” and “spume” are all associated, by that final “m” sound, with the concept of “matter difficult to individuate”; in other words, the “m” is a grammatically regular structure of that particular word group. Such associations with this particular terminal “m” may explain why most people seldom use “ham” in the plural, though nothing empirically or traditionally grammatical prevents it. They may also explain why “cream,” when pluralized, in most people's minds immediately assumes a different viscosity (i.e., referentially, becomes a different word; what the dictionary indicates by a “second meaning”). I suspect that, in a very real sense, poets are most in touch with the true “deep grammar” of the language. Etymology explains those sound-redundancy/meaning-associations that are historical. Others, which are accidental, may be no less meaningful.
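A sketch of this notion of grammar as phonic redundancy, assuming each stored word carries the sound it shares with its group and a usage feature; “regular” and “irregular” then fall out of a simple comparison. The lexicon and the feature tags are illustrative only:

    # Toy sketch: a shared sound is regular for a word when it lines up with how
    # that word is used, irregular when it does not.

    LEXICON = {
        # word: (terminal sound, usage feature it is stored with)
        "loaves": ("s", "plural"),
        "horses": ("s", "plural"),
        "sleighs": ("s", "plural"),
        "those":  ("s", "plural"),
        "this":   ("s", "singular"),
    }

    def classify_redundancy(words: list, sound: str, feature: str) -> dict:
        """Mark each word regular or irregular for `sound` against `feature`."""
        result = {}
        for w in words:
            terminal, usage = LEXICON[w]
            if terminal != sound:
                result[w] = "no redundancy"
            else:
                result[w] = "regular" if usage == feature else "irregular"
        return result

    print(classify_redundancy(["loaves", "horses", "those", "this"], "s", "plural"))
    # {'loaves': 'regular', 'horses': 'regular', 'those': 'regular', 'this': 'irregular'}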
All speech begins as a response to other speech. (As a child you eventually speak through being spoken to.) Eventually this recomplicates into a response to speech-and-other-stimuli. Eventually, when both speech and other stimuli are stored in memory and reassociated there, this recomplication becomes so complex that it is far more useful to consider certain utterances autonomous: the first utterance in the morning concerning a dream in the night, for example. But even this can be seen as a response to speech-and-other-than-speech, in which the threads of cause, effect, and delay have simply become too intertwined and tangled to follow.
55. Quine inveighs against propositions, as part of logic, on the justifiable grounds that they cannot be individuated. But since propositions, if they are anything, are particular meanings of sentences, the impossibility of individuating them is only part of a larger problem: the impossibility of individuating meanings in general. What the logician who says (as Quine does at the beginning of at least two books) “To deny that the Taj Mahal is white is to affirm that it is not white” (in the sense of “nonwhite”) is really saying is:
“Even if meanings cannot be individuated, let us, for the duration of the argument, treat them as if they can be. Let us assume that there is some volume of meaning-space that can be called white and be bounded. Therefore, every point in meaning-space, indeed, every volume in meaning-space, can be said to either lie inside this boundary, and be called ‘white,’ or outside this boundary, and be called ‘nonwhite,’ or, for the volumes that lie partially inside and partially outside, we can say that some aspect of them is white.”
The problem is that, like the color itself, the part of meaning-space that can be called “white” fades, on one side and another, into every other possible color. And somehow, packed into this same meaning-space, but at positions distinctly outside this boundary around white, or any other color for that matter, we must also pack “freedom,” “death,” “grief,” “the four-color-map problem,” “the current King of France,” “Pegasus,” “Hitler's daughter,” “the entire Second World War and all its causes,” as well as “the author of Waverley”: all in the sense, naturally, of “nonwhite.”
Starting with just the colors: In what sort of space could you pack all possible colors so that each one was adjacent to every other one, which would allow the proper fading (and bounding*) to occur? It is not as hard as it looks. Besides the ordinary three coordinates for volume, if you had two more coordinates, both for color, I suspect it could be rather easily accomplished. You might even do it with only two spatial and two color axes. Four coordinates, at any rate, is certainly the minimum number you need. Conceivably, getting the entire Second World War and all its causes in might require a few more.
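A toy illustration of the coordinate count, assuming two spatial and two color coordinates (hue and lightness, chosen arbitrarily): “white” becomes a bounded region of the color sub-space, and any point can fade continuously out of it. Nothing here is offered as a real model of meaning-space:

    # Toy sketch: a point carries two spatial and two color coordinates, so a
    # color region can be bounded while still fading into its neighbors.

    def is_white(point) -> bool:
        """A point is 'white' when its color coordinates fall inside a bounded box."""
        x, y, hue, lightness = point
        return lightness > 0.9        # any hue counts once it is light enough

    def fade_toward(point, target_color, step=0.1):
        """Move a point's color coordinates a small step toward another color."""
        x, y, hue, lightness = point
        target_hue, target_lightness = target_color
        return (x, y,
                hue + step * (target_hue - hue),
                lightness + step * (target_lightness - lightness))

    p = (3.0, 4.0, 0.6, 0.95)                     # a spatially located, nearly white point
    print(is_white(p))                            # True
    print(is_white(fade_toward(p, (0.6, 0.2))))   # False once it has darkened enough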