As everyone knows, Noam Chomsky is one of the world’s reigning experts on the nature of language and on the brain’s acquisition of it. He is also widely known for his views on various socio-political issues around the globe. I have attended several conferences where his views on language acquisition were the main topic, and at one of these Chomsky himself was actually a participant. I will not pretend that I understand all of the views and issues involved in this very complex topic, but here I shall attempt to track the development of his thought.
As I understand it, Chomsky’s initial and very influential theory was that the brain arrives already programmed to perform certain operations on the verbal input it receives from the surrounding human linguistic world. Thus, barring various defects, all humans come equipped with the innate ability to participate in human linguistic interchange. The human brain is structured in advance to receive and make sense of the human speech sounds around it, whatever linguistic input it initially encounters. Thus any child born anywhere can learn its mother tongue naturally.
Over the years Chomsky received, and successfully answered, many challenging questions and objections. However, at the conference I attended at MIT in Cambridge, Massachusetts in 1984, he openly admitted that he had found it necessary to alter his initial views. Chomsky now claimed, in a paper entitled “Changing Perspectives on Knowledge and Use of Language,” that although all humans are born with the same cranial apparatus governing how we deal in general with linguistic input, that input is so varied that it must be, and is, sorted out by intermediary processes which respond to it first. Thus the innate structural apparatus is not sufficient to fully explain how the brain processes the linguistic input.
Chomsky suggested that it would be helpful to posit an additional cognitive apparatus, functioning between the initial receptors and the linguistic output itself, which accounts for the immense diversity of that output. He pictured this additional apparatus as a set of switches whose settings are fixed by the interaction between the basic structure of the brain and the empirical input received through linguistic and social interaction. Thus Chomsky admitted that his initial, somewhat rationalistic, views were insufficient to cover the huge variety of subtle differences which are part and parcel of regular human discourse but differ from language to language.
Thus there was now a more “empirical” or experiential dimension to Chomsky’s approach than he had at first acknowledged. Although the young speaker of a language is no longer seen as arriving at birth already programmed for, and openly receptive to, any and all languages, he or she does come with a brain capable of acquiring any language. In addition, each would-be speaker also comes equipped with a set of adjustable switches which can be set in accordance with daily experiential input. These switches are perhaps set by virtue of the concrete experience of each speaker through ongoing encounters with one’s environment, both physical and social.
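Purely by way of illustration, and not anything Chomsky himself wrote, one might caricature this two-part picture in a few lines of Python: an innate, fixed inventory of binary switches whose settings get fixed by exposure to input. All the parameter names and evidence below are invented:

```python
# A toy caricature of the "switches" picture; every name here is invented.
# The innate endowment: a fixed inventory of binary parameters, initially unset.
parameters = {"verb_before_object": None, "subjects_may_drop": None}

def hear(evidence: dict) -> None:
    """Experience sets each innate switch the first time the input bears on it."""
    for name, value in evidence.items():
        if name in parameters and parameters[name] is None:
            parameters[name] = value   # set once, by exposure

# The same innate apparatus, fed different input, ends with different settings.
hear({"verb_before_object": True})   # word-order evidence in the child's input
hear({"subjects_may_drop": False})   # this input always voices its subjects
print(parameters)   # {'verb_before_object': True, 'subjects_may_drop': False}
```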
So the question of language acquisition is addressed with a more complex model than Chomsky originally proposed. Language acquisition is now seen as a two-step phenomenon involving both preset categories and ongoing experience. This approach seems to me to be more in line, for instance, with that of the later Wittgenstein, for whom language is immensely flexible and to a large degree open-ended. Incidentally, Professor Chomsky seems now to have taken up a semi-permanent position at the University of Arizona in Tucson, where he is a frequent speaker on both linguistic and political issues.
2 responses to “Noam Chomsky and Language Acquisition”
Here’s a philosophical and scientific question that can’t get solved. I’ve been messing around with ChatGPT recently; the introduction of AI as a tool for all sorts of automation of human cognition has great promise – to an extent. It raises the question of the tension between art and science. Can art ever be replicated by machines, even with gazillions of variables accounted for? Can nuance ever be rationalized into complex systems of thought and interpretation? As you once asked, can momentum ever be defined in a real game? I’m a tech fan, I love all this new stuff. But I’m also fascinated by how people get trained to learn and do new things. Is Chomsky wasting our time? Is his mapping of the brain and cognitive genome getting us anywhere? Is there any value in Wittgenstein’s Tractatus after we get to the Blue Book?
Somehow I think I messed up with my response, so here it is again. I don’t think the one can ever be reduced to the other (à la Polanyi). We can still see the results in bodily behavior, etc., and reason from that. Only in that he sets us up for the Investigations :O) Which is worthwhile. Paz, jerry
I once saw a cartoon in which one guy says to another: “I’ve made a careful study of logic and it’s all a bunch of hooey.” It’s really impossible to do without logic, so we might as well try to get it right. “Deductive” logic is what is usually taught in college classes and involves understanding syllogisms and the like. Very simply put, a deductive syllogism involves a major premise, like “All dogs are sweet”, followed by a minor premise like “Bozo is a dog”, and a conclusion, “Therefore Bozo is sweet.” Of course, everything here depends on both premises being correct. Strictly speaking, an argument whose conclusion follows from its premises is “valid” in virtue of its form alone; if both premises are also true, the argument is “sound” and the conclusion follows conclusively. If even one premise is false, the argument, however valid in form, is unsound.
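To see the mechanics at work, here is a minimal Python sketch of the Bozo syllogism, treating the major premise as a subset relation; all the names and sets are invented for illustration:

```python
# Hypothetical data: the major premise "All dogs are sweet" says that
# the set of dogs is contained in the set of sweet things.
dogs = {"Bozo", "Rex", "Fido"}
sweet_things = dogs | {"Grandma"}            # dogs plus other sweet things

all_dogs_are_sweet = dogs <= sweet_things    # major premise (subset test)
bozo_is_a_dog = "Bozo" in dogs               # minor premise (membership test)

# With both premises true, the conclusion follows conclusively.
if all_dogs_are_sweet and bozo_is_a_dog:
    print("Bozo" in sweet_things)            # True: Bozo is sweet
```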
“Inductive” logic is a horse of a different color. It is the basis of scientific investigation, and technically it can never provide absolutely conclusive results, only varying degrees of probability. Here the most interesting thinker to follow is John Stuart Mill, an outstanding and very influential nineteenth-century thinker. In addition to his work in ethical theory as the “inventor” of Utilitarian moral theory, Mill wrote A System of Logic, in which he laid out the basic principles behind inductive logic, the basis for all scientific reasoning that rests on high probabilities. The methods are five, and they are usually referred to as “Mill’s Methods”. The focus is usually on determining the cause of a given phenomenon.
The first of Mill’s methods is that of “Agreement”. If all or almost all of the instances of a given phenomenon share a single relevant circumstance, then that circumstance is taken to be the cause of the stated effect. For instance, if everyone at the dinner ate the pie and everyone got sick, the pie can be called the cause of the effect, namely their sickness.
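As a rough sketch, with invented dinner-party data, one might mechanize the Method of Agreement in Python by intersecting the circumstances of every instance of the effect:

```python
# Invented data: the circumstances surrounding each guest who got sick.
sick_guests = [
    {"pie", "soup", "wine"},
    {"pie", "salad"},
    {"pie", "wine", "bread"},
]

# The circumstance in which all instances "agree" is the candidate cause.
common = set.intersection(*sick_guests)
print(common)   # {'pie'}: the pie is the likely cause of the sickness
```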
The second of Mill’s methods is called the “Method of Difference.” If two instances of a phenomenon have every characteristic in common except for one, then that one is the cause of the difference between them. Here, if every guest who ate the pie got sick, while the one guest who did not eat it stayed well, the pie can safely be called the cause of the sickness.
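Again with invented data, the Method of Difference amounts to a set subtraction between two otherwise identical instances:

```python
# Invented data: two guests alike in every circumstance but one.
sick_guest = {"pie", "soup", "wine"}      # ate the pie and got sick
healthy_guest = {"soup", "wine"}          # skipped the pie, stayed well

# The single circumstance present only where the effect occurred:
print(sick_guest - healthy_guest)         # {'pie'}: the likely cause
```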
The third of Mill’s methods is the combined Method of Agreement and Difference. By combining the first two methods one can reinforce the findings of each, making one’s conclusion doubly strong. This joint method is often used, indeed needed, in determining the causes of various complex diseases.
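Sketching the joint method with invented data, a candidate cause must survive both tests: present in every case of the effect, and absent wherever the effect is absent:

```python
# Invented data: instances where the sickness occurred and where it did not.
sick = [{"pie", "soup"}, {"pie", "wine"}]
well = [{"soup", "bread"}, {"wine"}]

agreement = set.intersection(*sick)   # shared by every positive instance
ruled_out = set.union(*well)          # present even where no one got sick
candidates = agreement - ruled_out    # survives both tests
print(candidates)                     # {'pie'}: doubly supported as the cause
```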
The fourth of Mill’s methods is called the method of “Residues”, or “leftovers.” Here we are asked to subtract from a given phenomenon every portion already known to be the effect of established causes; whatever factor remains, the residue, is the cause of the remaining effect. Thus, for instance, by subtracting the weight of a truck from the weight of the truck AND its cargo we can determine the weight of the cargo. This method can be used to determine the cause in a given single case and does not require multiple instances.
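The truck example can be written out as simple subtraction; the figures below are invented:

```python
# Invented figures: subtract the known part; the residue is what remains.
truck_and_cargo_kg = 12_400   # weighed together
truck_alone_kg = 9_100        # known from a previous weighing

cargo_kg = truck_and_cargo_kg - truck_alone_kg
print(cargo_kg)               # 3300: the weight attributed to the cargo
```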
The fifth of Mill’s Methods is called the method of “Concomitant Variation”. This method is appropriate when the factors involved cannot be completely removed from their respective instances. It allows the scientist to track the degree to which the factors vary together. When variation in the proposed cause is consistently matched by corresponding variation in the effect, we can say that the former is the cause of the latter. This method is used a great deal with astronomical and biological phenomena, where the key factors cannot be removed.
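As a sketch with invented dose-and-response figures, tracking concomitant variation amounts to checking how closely two series vary together, for instance via a Pearson correlation:

```python
# Invented dose/response figures; neither factor can be removed entirely.
dose = [1.0, 2.0, 3.0, 4.0, 5.0]
response = [2.1, 3.9, 6.2, 8.0, 9.9]

# Pearson correlation computed by hand, with no external libraries.
n = len(dose)
mean_x = sum(dose) / n
mean_y = sum(response) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(dose, response))
sd_x = sum((x - mean_x) ** 2 for x in dose) ** 0.5
sd_y = sum((y - mean_y) ** 2 for y in response) ** 0.5
print(round(cov / (sd_x * sd_y), 3))   # 0.999: the two vary in step
```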
In all of the above cases it must be remembered that scientific explanations of causation are always only probable at best. Inductive knowledge, unlike deductive knowledge, is always open-ended, or more or less probable. Such is the logic of logic. Deductive logic works within “closed systems”, as it were, while inductive logic works with respect to “open-ended” systems. The former, if done correctly, offers us “air-tight” reasoning, while the latter can only provide varying degrees of “likelihood.”
2 responses to “Logic of Logic”
Well, of course logicians and scientists have to concur on a fairly fixed meaning for the terms used in a logical argument. That is precisely the rub in logic. Terms are often metaphorical no matter how “objectively stringent” we try to make them. They have different meanings in different contexts and can have different locutionary and illocutionary values. They are always “fuzzy”, so no need to be too fussy about logic. Even logical rules, such as that of identity, can be hard to grasp (how much of an entity has to be the same as another in order to be identical?). Logic has its uses in the presumption of the ordinary (even if slippery) meaning of terms and can help organize a term paper; but truth is not promised in its deceptive clarity. As Wittgenstein said, the meaning of a term is its function, not its reference.
Really well put, Dr. Jenkins!!! Even for a “youngster” :O) Paz, jerry