The Science of Language

Author: Noam Chomsky

On the conceptual side, it's totally different. Maybe we don't know the right things, but everything that is known about animal thought and animal minds is that the analogues to concepts – or whatever we attribute to them – do happen to have a reference-like relation to things. So there is something like a word-object relation. Every particular monkey call is associated with a particular internal state, such as “hungry,” or a particular external state, such as “There are leaves moving up there, so run away.”
JM: As Descartes suggested.
NC: Yes. That looks true of animal systems, so much so that the survey of animal communication by Randy Gallistel (1990) just gives it as a principle. Animal communication is based on the principle that internal symbols have a one-to-one relation to some external event or an internal state. But that is simply false for human language – totally. Our concepts are just not like that. Aristotle noticed it; but in the seventeenth century it became a vocation. Take, say, chapter 27 of Locke's An Essay Concerning Human Understanding – the chapter on persons that he added to the Essay. He realizes very well that a person is not an object. It's got something to do with psychic continuity. He goes into thought experiments: if two identical-looking people have the same thoughts, is there one person, or two people? And every concept you look at is like that. So they seem completely different from animal concepts.[C]

In fact, we only have a superficial understanding of what they are. It was mainly in the seventeenth century that this was investigated. Hume later recognized that these are just mental constructions evoked somehow by external properties. And then the subject kind of tails off and there's very little that happens. By the nineteenth century, it gets absorbed into Fregean reference-style theories, and then on to modern philosophy of language and mind, which I think is just off the wall on this matter.

. . . But to get back to your question, I think you're facing the fact that the human conceptual system looks as though it has nothing analogous in the animal world. The question arises as to where animal concepts came from, and there are ways to study that. But the origin of the human conceptual apparatus seems quite mysterious for now.
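Gallistel's one-to-one principle lends itself to a very simple formal picture: a fixed lookup table from signals to world- or body-states, with nothing further behind it. The Python sketch below is illustrative only; the call inventory is invented for the example and comes from neither Gallistel nor the interview.

```python
# Gallistel's (1990) principle for animal communication: each internal
# symbol (here, a call) stands in a one-to-one relation to a particular
# external event or internal state. The call inventory is invented.
CALL_TO_STATE = {
    "alarm-aerial": "predator overhead",
    "alarm-ground": "predator on the ground",
    "food-call": "food source present",
    "hunger-call": "internal state: hungry",
}

# One-to-one: no two calls map to the same state.
assert len(set(CALL_TO_STATE.values())) == len(CALL_TO_STATE)

def interpret(call: str) -> str:
    """Interpretation is exhausted by the lookup; there is no further
    structure to consult."""
    return CALL_TO_STATE[call]

print(interpret("alarm-aerial"))  # -> predator overhead
```

The contrast drawn in the interview is that nothing like this table can be written for concepts such as PERSON or RIVER: their identity conditions turn on things like psychic continuity, not on any fixed external correlate.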
JM: What about the idea that the capacity to engage in thought – that is, thought apart from the circumstances that might prompt or stimulate thoughts – might have come about as a result of the introduction of the language system too?
NC: The only reason for doubting it is that it seems about the same among groups that separated about fifty thousand years ago. So unless there's some parallel cultural development – which is imaginable – it looks as if it was sitting there somehow. So if you ask a New Guinea native to tell you what a person is, for example, or a river . . . [you'll get an answer like the one you would give.] Furthermore, infants have it [thought]. That's the most striking aspect – that they didn't learn it [and yet its internal content is rich and intricate, and – as mentioned – beyond the reach of the Oxford English Dictionary].
Take children's stories; they're based on these principles. I read my grandchildren stories. If they like a story, they want it read ten thousand times. One story that they like is about a donkey that somebody has turned into a rock. The rest of the story is about the little donkey trying to tell its parents that it's a baby donkey, although it's obviously a rock. Something or another happens at the end, and it's a baby donkey again. But every kid, no matter how young, knows that that rock is a donkey, that it's not a rock. It's a donkey because it's got psychic continuity, and so on. That can't be just developed from language, or from experience.
JM: Well, what about something like Distributed Morphology? It might be plausible that at least some conceptual structure – say, the difference between a noun and a verb – is directly due to language as such. Is that plausible?
NC: Depends on what you mean by it. Take the notion of a donkey again. It is a linguistic notion; it's a notion that enters into thought. So it's a lexical item and it's a concept. Are they different? Take, say, Jerry Fodor's notion of the language of thought. What do we know about the language of thought? All we know about it is that it's English. If it's somebody in East Africa who has thoughts, it's Swahili. We have no independent notion of what it is; in fact, we have no reason to believe that there's any difference between lexical items and concepts. It's true that other cultures will break things up a little differently, but the differences are pretty slight. The basic properties are just identical. When I give examples in class like river and run these odd thought experiments [concerning the identities of rivers – what a person is willing to call a river, or the same river – that you find in my work], it doesn't matter much which language background anyone comes from; they all recognize it in the same way in fundamental respects. Every infant does. So, somehow, these things are there. They show up in language; whether they are 'there' independently of language, we have no way of knowing. We don't have any way of studying them – or very few ways, at least.
We can study some things about conceptual development apart from language, but they have to do with other things, such as perception of motion, stability of objects, things like that. It's interesting, but pretty superficial as compared with whatever those concepts are. So the question whether it came from language seems beyond our investigative capacities; we can't understand infant thought very far beyond that.
But then the question is, where did it come from? You can imagine how a genetic mutation might have given Merge, but how does it give our concept of psychic identity as the defining property of entities? Or many other such properties quite remote from experience.
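Merge is standardly stated as binary set formation – Merge(X, Y) = {X, Y} – and the force of the single-mutation speculation is that this one operation, applied recursively, already yields unbounded hierarchical structure. A minimal sketch in Python, with lexical items modeled as bare strings (an assumption of the example, not of the text):

```python
# Merge as binary set formation: Merge(X, Y) = {X, Y}. A syntactic
# object is either a lexical item (modeled as a string) or a frozenset
# built by an earlier application of Merge. Illustrative sketch only.
from typing import Union

SyntacticObject = Union[str, frozenset]

def merge(x: SyntacticObject, y: SyntacticObject) -> frozenset:
    """Form the two-membered set {X, Y}."""
    return frozenset([x, y])

# Recursive application gives unbounded hierarchy from a finite lexicon.
np = merge("the", "book")   # {the, book}
vp = merge("read", np)      # {read, {the, book}}
clause = merge("will", vp)  # {will, {read, {the, book}}}
print(clause)
```

Granting that an operation this simple could arise by mutation, nothing in it explains where concepts like psychic identity come from – which is the point being made here.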
JM: I've sometimes speculated about whether or not lexical concepts might be in some way or another generative. It seems plausible on the face of it – it offers some ways of understanding it.
NC: The ones that have been best studied are not the ones we have been talking about – the ones that are [sometimes] used [by us] to refer to the world, [such as WATER and RIVER] – but the relational ones, such as the temporal[ly relational] ones – stative versus active verbs[, for example] – or relational concepts, concepts involving motion, the analogies between space and time, and so on. There is a fair amount of interesting descriptive work [done on these]. But these are the parts of the semantic apparatus that are fairly closely syntactically related, so [in studying them] you're really studying a relational system that has something of a syntactic character.
The point where it becomes an impasse is when you ask, how is any of this used to talk about the world – the traditional question of semantics. Just about everything that is done – let's suppose everything – in formal semantics or linguistic semantics or theory of aspect, and so on, is almost all internal [and syntactic in the broad sense]. It would work the same if there weren't any world. So you might as well put the brain in a vat, or whatever. And then the question comes along: well look, we use these to talk about the world; how do we do it? Here, I think, philosophers and linguists and others who are in the modern intellectual tradition are caught in a kind of trap, namely, the trap that assumes that there is a reference relation.[C]
I've found it useful and have tried to convince others – without success – to think of it on an analogy with phonology. The same question arises. All the work in phonology is internal [to the mind/brain]. You do assume that narrow phonetics gives some kind of instructions to the articulatory and auditory systems – or whatever system you're using for externalization. But that's outside of the faculty of language. It's so crazy that nobody suggests that there is a sound–symbol relation; nobody thinks that the symbol æ, let's say (the "a" in cat), picks out some mind-external object. You could play the game that philosophers do; you could say that there's a four-dimensional construct of motions of molecules that is the phonetic value of æ. And then æ picks that out, and when I say æ (or perhaps cat) you understand it because it refers to the same four-dimensional construct. That's so insane that no one – well, almost no one, as you know – does it. What actually happens – this is well understood – is that you give instructions to, say, your articulatory apparatus, and it converts them into motions of molecules in different ways in different circumstances, depending on whether you have a sore throat or not, or whether you're screaming, or whatever. And somebody else interprets it if they are close enough to you in their internal language and their conception of the world and understanding of circumstances, and so on; to that extent, they can interpret what you are saying. It's a more-or-less affair. Everyone assumes that that is the way the sound side of language works.
So why shouldn't the meaning side of language work like that: no semantics at all – that is, no reference relation – just syntactic instructions to the conceptual apparatus, which then acts? Now – once you're in the conceptual apparatus and action – you're in the domain of human action. And whatever the complexities of human action are, the apparatus – sort of – thinks about them in a certain way. And other people who are more or less like us, or think of themselves in the same way, or put themselves in our shoes, get a passably good understanding of what we're trying to say. It doesn't seem that there's any more than that.[C]
Supplemental material from the interview of 20 January 2009
JM: I'll switch to what you called "semantic information" in a 2007 lecture at MIT on the perfection of the language system, and elsewhere. You mentioned that at the semantic interface (SEM) of the language faculty you get two kinds of semantic information: one concerning argument structure, which you assumed to be due to external Merge, and another concerning topic, scope, and new information – matters like these – which you assumed to be due to internal Merge.
NC: Well, pretty closely. There are arguments to the contrary, such as Norbert Hornstein's theory of control, which says that you pick up theta roles. So I don't want to suggest that it's a closed question by any means, but if you adopt a god-like point of view, you sort of expect that if you're going to have two different kinds of Merge, they should be doing different things. I don't have proof. But the data seem to suggest that it's pretty close to true – so close to true that it seems too much of an accident. The standard cases of argument structure come from external Merge, and the standard cases of discourse orientation and stuff like that come from internal Merge.
JM: It's a very different kind of information.
NC: It's very different, and if we knew enough about animal thought, I suspect that we would find that the external Merge parts may even be in some measure common to primates. You can probably find things like an actor-action schema with monkeys. But they can't do very much with it; it's like some kind of reflection of things that they perceive. You see it in terms of Cudworth-style properties, Gestalt properties, causal relations; it's a way of perceiving.
JM: Events with n-adic properties – taking various numbers of arguments, and the like.
NC: Yes, that kind of thing. And that may just be what external Merge gives you. On the other hand, there's another kind of Merge around, and if it's used, it's going to be used for other properties. Descriptively, it breaks down pretty closely to basic thematic structure on the one hand, and discourse orientation, information structure, scopal properties, and so on, on the other.
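The descriptive split can be made concrete with a sketch of the two cases. Ordered pairs are used below instead of sets so that the copy left behind by internal Merge stays visible, and the example sentence is the sketch's own, not the interview's.

```python
# External vs. internal Merge, sketched with ordered pairs so the copy
# created by internal Merge is easy to see. Illustrative only.

def contains(obj, term):
    """True if term occurs anywhere inside obj."""
    if obj == term:
        return True
    return isinstance(obj, tuple) and any(contains(p, term) for p in obj)

def external_merge(x, y):
    """Combine two independent objects: the argument-structure cases."""
    return (x, y)

def internal_merge(obj, term):
    """Re-merge a term drawn from inside obj at its edge, leaving a copy
    in the original position: displacement, the discourse-oriented cases
    (topic, scope, new information)."""
    assert contains(obj, term), "internal Merge must target a term of obj"
    return (term, obj)

# External Merge builds the thematic core of "you will read what".
core = external_merge("you", external_merge("will", external_merge("read", "what")))

# Internal Merge displaces "what" for scope: what [you will read <what>].
question = internal_merge(core, "what")
print(question)  # ('what', ('you', ('will', ('read', 'what'))))
```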
JM: It looks like pragmatic information . . .
NC: After all, the interface is semantic-pragmatic.[C]
There is a lot of discussion these days of Dan Everett's work on a Brazilian language, Pirahã – it's described in the New Yorker, among other places. David Pesetsky has a long paper on it with a couple of other linguists [(Nevins, Pesetsky, and Rodrigues 2007)], and according to them, it's just like other languages. It's gotten into the philosophical literature too. Some smart people have taken it up – a very good English philosopher wrote a paper about it, and it's embarrassingly bad. He argues that it undermines Universal Grammar, because it shows that language isn't based on recursion. Well, if Everett were right, it would show that Pirahã doesn't use the resources that Universal Grammar makes available. But that's as if you found a tribe of people somewhere who crawled instead of walking. They see other people crawl, so they crawl. It doesn't show that you can't walk. It doesn't show that you're not genetically programmed to walk [and do walk, if you get the relevant kind of input that triggers it and are not otherwise disabled]. What Everett claims probably isn't true anyway, but even if it were, that would just mean this language has limited lexical resources and is not using internal Merge. Well, maybe not: Chinese doesn't use it for question-formation. English doesn't use a lot of things; it doesn't use Baker's polysynthesis option. No language uses all the options that are available.
