<<Back to Note Summary Page

<<Back to Cognition Class Home Page

---------------------------------------------------------------------------------------

LANGUAGE

Definition: Language is a SHARED SYMBOLIC SYSTEM FOR COMMUNICATION.

Language:

- is SYMBOLIC: it consists of arbitrary associations between a particular sound, gesture, or visual object and a referent. For example, in English, adding an "s" at the end of a word marks the plural. This is an arbitrary rule (other languages express the plural in different ways).

- is SHARED: people who speak the same language share the same set of arbitrary connections between symbols and referents.

- enables COMMUNICATION: we use language to convey meaning. In fact, language allows us to convey different levels of meaning (the literal meaning as opposed to the actual or metaphorical meaning, for example).

Even though languages are partly arbitrary, and therefore differ from one another in many respects, other aspects of language are common to all of them and define what a language is (as opposed to what is not a language). These properties that are shared by every language are called "UNIVERSALS OF LANGUAGE" (Hockett, 1960).

I want to examine some of these properties.

Semanticity. All languages are semantic, that is, they convey meaning.

Arbitrariness. There is no inherent connection between the units of language (sounds, signs, written words) and the meanings that these units refer to. (Whale vs. microorganism: can you tell which is the bigger from the names?)

Flexibility. Because the connections between symbols and referents are arbitrary, language is very flexible. Language changes continuously, new words are created, old words change meaning or disappear.

Naming. A language has a name for each (known) object. However, what makes language so peculiar is that we usually also have names for things that are not concrete objects but concepts, such as freedom, justice, mental processes, or knowledge. Any time a new object or concept is introduced, a new name is created to refer to it. Or, often, an old word is used to convey a different meaning (computer).

Displacement. Displacement is the ability to talk about something that is not here and now. We can talk about the past or the future, we can anticipate events that have not happened yet. We can talk about places we have never visited. This is one of the most important features of language. If you want a demonstration of the importance of this property, try to talk using only the present tense for 5 minutes.

Productivity. Another important feature of language: we can combine linguistic symbols in completely new ways. We can produce sentences nobody has said before and we can talk about something completely new, using the set of symbols that our language has. When we speak we usually do not REPEAT the same sentences over and over, but rather we create new sentences to express what we think.

Grammaticality. A consequence of productivity (the fact that language production is creative rather than repetitive) is that languages always have combination rules. The way we put words together always follows certain rules (syntactic or grammatical rules). A grammar is the complete set of rules that will generate or produce all the acceptable sentences and will not generate any unacceptable, ill-formed sentences.

-------------------------------------------------------------------

Grammar and five levels of analysis

Linguists usually propose three levels of analysis for language: phonology, syntax, and semantics. However, George Miller noted that two other levels are very important: conceptual knowledge and beliefs; these two aspects are essential to the use and understanding of a linguistic message.

For example, look at the sentence:

Mary and John saw the mountains while they were flying to California.

We all understand that "While Mary and John were flying to California, they saw the mountains". However, another interpretation of the sentence that is syntactically correct is "While the mountains were flying to California, John and Mary saw them". Why don't we interpret the sentence in this way, even for a moment? According to Miller, this is because we have knowledge, in our semantic memory, that mountains do not fly. However, this is not information that we can find in a dictionary, right? This is something that we know because of our conceptual knowledge about the world.

The second interpretation of the sentence is also contrary to our beliefs. So, if somebody says: "While the mountains were flying to California, John and Mary saw them", we will probably answer (or think): "I don't believe you", or maybe think that the speaker is trying to be sarcastic, or that she is telling a story.

The Whorfian hypothesis.

Whorf noted that the Eskimo languages have a large number of words for snow. He proposed the linguistic relativity hypothesis, which states that language shapes the way we think about (and, according to the stronger version, the way we perceive) events. One example of an application of the Whorfian hypothesis would be the use of the pronouns "she" and "her" instead of "he" and "his" when we talk about a generic person.

The current position is that probably the opposite is true. It is our experience with the world that shapes our language. So, if we have a lot of experience with snow, and if different types of snow are absolutely vital for our survival, we will probably have several words for it.

[By the way, the "Eskimo snow thing" seems to be false. Martin (1986) showed that each Eskimo dialect seems to have only two words for snow (snowflakes and snow on the ground), and he describes the series of successive deformations of the original finding that went from 2 words for snow to 4 to 5 to "many".]

-------------------------------------------------------------------

PHONOLOGY

Across all known spoken languages, about 200 different phonemes have been identified. However, every language uses only a small part of all the possible phonemes. For example, English contains about 46 phonemes.

An important concept in phonology is PHONEMIC COMPETENCE. If I present you with strings of letters, you can always decide whether a string could form a legal English word or not. This is because we have an "implicit" knowledge of the phonetic rules of our native language. We cannot explicitly state these rules, but we can use them to classify whether a string is legal or not. The same is true for several aspects of language, for example for syntax.
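This kind of implicit knowledge can be caricatured in a few lines of code. The sketch below classifies letter strings by checking whether their initial consonant cluster appears in a hand-picked list of English word onsets; the list is illustrative and far from a complete phonology, but it captures the flavor of the judgment ("blick" could be an English word, "bnick" could not):

```python
# Toy sketch of phonemic competence: decide whether a string could
# begin a legal English word. The onset list below is a hand-picked,
# illustrative subset, not a real description of English phonotactics.

LEGAL_ONSETS = {
    "", "b", "bl", "br", "c", "cl", "cr", "d", "dr", "f", "fl", "fr",
    "g", "gl", "gr", "h", "j", "k", "l", "m", "n", "p", "pl", "pr",
    "r", "s", "sk", "sl", "sm", "sn", "sp", "spr", "st", "str", "t",
    "th", "thr", "tr", "v", "w", "wh", "y", "z",
}
VOWELS = set("aeiou")

def could_be_english(word: str) -> bool:
    """True if the word's initial consonant cluster is a legal onset."""
    word = word.lower()
    onset = ""
    for ch in word:        # collect consonants up to the first vowel
        if ch in VOWELS:
            break
        onset += ch
    return onset in LEGAL_ONSETS

print(could_be_english("blick"))   # True: a plausible non-word
print(could_be_english("bnick"))   # False: violates English phonotactics
```

Note that the function makes the judgment without any dictionary lookup, just as speakers judge non-words they have never seen.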

I've already mentioned that effects such as categorical perception are functional to our understanding of a language against a background of noise. Moreover, there is enormous variability: the same consonant pronounced in different words sounds different, in part because of the co-articulation problem (we start to articulate the next phoneme while we are still articulating the previous one). Categorical perception is one solution.

In class I showed you the spectrograms of /b/ and /d/ sounds in different words. As you can see from the spectrograms, the same consonant is physically different in different words, yet it sounds the same to us. This illustrates the concept of categorical perception in phonology.

Another mechanism that improves our language comprehension is top-down processing. We use the context of the sentence and of the conversation to interpret the sounds that we hear. Pollack and Pickett (1964) showed that, if we splice single words out of a conversation and present them in isolation, the comprehension rate is only 47%. However, if these words are presented in context, comprehension increases as a function of the number of words presented. Other studies show that both semantic and syntactic organization improve comprehension. The conclusion is that when we are listening to speech we process in parallel the phonological, semantic, and syntactic structure of the sentence, and all these different sources of information, together with extralinguistic information (for example, the context, the environment, who the speakers are, etc.), allow us to comprehend speech.

One autobiographical example. Sometimes my husband tries to speak Italian with me. I usually do not understand him until I realize that he is speaking Italian. Then suddenly what he is saying makes sense. (He always gets offended, by the way.) This depends on the fact that I EXPECT him to speak English, and I try to interpret what he says on the basis of this expectation.

Another seemingly problematic issue is speech segmentation. In our native language we have the impression that words are PHYSICALLY separated from each other. However, if we look at the spectrogram of fluent speech, we find that there is no physical cue that marks the separation between one word and the next (actually, it is more often true that there is a pause within a word than between words!). In fact, if you listen to a language that you don't know, your impression is of a continuous stream without interruption. So, what information do we use to segment speech? Some researchers at the University of Oregon are working on this problem, and the idea is that we use all possible cues (meaning, context, stress pattern, syntactic information) to parse speech in a meaningful way.

-------------------------------------------------------------------

SYNTAX

A big part of English syntax is about word order. As I told you last time, other languages do not put so much emphasis on word order. For example, German and Latin use grammatical declensions at the end of words to indicate the grammatical role of a word (for example, subject or object). In this case, word order is not so necessary for sentence comprehension. In English, however, changing the word order often changes the meaning of the sentence.

For example:

Bill told the men to deliver the piano on Monday

Bill told the men on Monday to deliver the piano

have different meanings, even though the only difference is the position of "on Monday".

CHOMSKY'S TRANSFORMATIONAL GRAMMAR

Chomsky was a linguist who proposed a theory of language that was very influential and very important for cognitive psychology. What is the difference between linguists and psycholinguists? It corresponds in some way to the difference between the algorithmic and hardware implementation levels proposed by David Marr: linguists study language at an abstract level and investigate the rules and principles found in languages. Psycholinguists, on the other hand, study how people use these rules and principles and which mental processes underlie the use of language.

Distinction between Competence and Performance

In a similar way, Chomsky proposed a distinction between competence and performance in the use of a language.

Competence is the internalized knowledge of a language and its rules. According to Chomsky, fully fluent speakers of a language have a perfect knowledge of these rules.

Performance is the actual language behavior that a speaker generates, the ability to follow these "ideal" rules in practice. Performance is influenced by psychological factors such as working memory.

How would behaviorists react to such a distinction between performance (the behavior that we can observe) and competence (an ideal knowledge that cannot be directly measured)?

On the other hand, this is quite an intuitive distinction. Often (but not always!), the reason why we make a mistake in our native language is not our ignorance of linguistic rules, but a failure in applying those rules. For example, we may get distracted and forget what we were talking about, or we may have a slip of the tongue and use the wrong word. It is true, though, that this distinction creates some problems for the experimental study of language. According to Chomsky, the way to explore competence is to ask native speakers to judge the correctness of a sentence. We are indeed quite accurate in judging whether a sentence is correct (grammatical), even though we sometimes make mistakes when we speak and do not always use grammatical sentences.

A second important distinction in Chomsky's theory of language is that between deep structure of a sentence and surface structure. The deep structure is the meaning, what we want to express, whereas the surface structure is the particular form in which the sentence is actually expressed.

To understand this distinction, let's analyze the following sentence:

"The shooting of the hunters was awful".

As you can see, this sentence is ambiguous and can be interpreted in two different ways. The hunters may be the agents of the shooting or the victims of the shooting. This means that the same identical surface form can be associated with two different meanings or deep structures.

(In class:

Time flies like an arrow
Fruit flies like a banana)

According to Chomsky, language uses a generative grammar, that is, a set of rules that allow us to generate any possible "well-formed" (i.e., grammatical) sentence in our native language. Phrase structure rules are the rules that operate on words and allow us to build and interpret grammatical sentences; transformational rules operate on entire sentences and are used to produce different surface structure outputs from a particular deep structure (that is, to express our ideas and thoughts in a particular grammatical form: active, passive, interrogative, etc.).

Phrase Structure Rules

This set of rules allows us to parse the surface structure of a sentence and to attribute a functional value to each part. For example, each sentence can be divided into a noun phrase and a verb phrase. Each of these phrases can, in turn, be divided into simpler units (D = Determiner, N = Noun, V = Verb). This parsing process allows us to determine the grammatical relationships between the parts of the sentence, and therefore the meaning of the sentence. Applying phrase structure rules reveals the hierarchical structure of sentences, as well as the internal structure of the various parts and their relationships with each other.

The importance of this theory is that a great number of sentences that share the same structure can be described using the same small set of rules. We implicitly use this type of analysis to facilitate speech understanding and production. For example, after an article such as "the" we expect a noun. Applying this type of structure also allows us to judge the grammaticality of a sentence.
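The mechanical character of phrase structure rules is easy to see if we write them as a program. The sketch below implements a deliberately tiny toy grammar (S -> NP VP, NP -> D N, VP -> V NP) with a made-up five-word lexicon; it builds the hierarchical tree for sentences the rules generate and returns nothing for strings they cannot generate:

```python
# Phrase structure rules as a toy recursive-descent parser.
# Grammar (illustrative, not a real model of English):
#   S -> NP VP     NP -> D N     VP -> V NP

LEXICON = {"the": "D", "a": "D",
           "boy": "N", "girl": "N", "piano": "N",
           "kisses": "V", "plays": "V"}

def parse_np(words, i):
    """NP -> D N: return (tree, next position) or (None, i)."""
    if (i + 1 < len(words)
            and LEXICON.get(words[i]) == "D"
            and LEXICON.get(words[i + 1]) == "N"):
        return ("NP", ("D", words[i]), ("N", words[i + 1])), i + 2
    return None, i

def parse_vp(words, i):
    """VP -> V NP."""
    if i < len(words) and LEXICON.get(words[i]) == "V":
        np, j = parse_np(words, i + 1)
        if np:
            return ("VP", ("V", words[i]), np), j
    return None, i

def parse_s(sentence):
    """S -> NP VP: return the hierarchical tree, or None if ill-formed."""
    words = sentence.lower().split()
    np, i = parse_np(words, 0)
    if np:
        vp, j = parse_vp(words, i)
        if vp and j == len(words):
            return ("S", np, vp)
    return None

print(parse_s("The boy kisses the girl"))   # a nested S(NP, VP) tree
print(parse_s("Kisses the boy the girl"))   # None: the rules cannot generate it
```

Notice that the parser does exactly what the notes describe: it assigns a functional value (D, N, V, NP, VP) to each part and exposes the hierarchical structure, which is also what lets it judge grammaticality.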

Transformational rules

Phrase structure rules are not always sufficient to determine the relationship between meaning (deep structure) and the form in which this meaning can be conveyed. The same meaning can be translated into a certain number of surface forms. For example, we can express the same concept in an active form (Antonella teaches this class) or in a passive form (This class is taught by Antonella).

The transformational rules allow one to express a meaning (deep structure) in a certain number of different forms. For example, if the deep structure is {(boy kisses girl)} different transformational rules can generate the following surface forms:

(1) The boy kisses the girl
(2) The girl was kissed by the boy
(3) Was the girl kissed by the boy?

Sentence (1) is the basic and simplest form in which the deep structure {(boy kisses girl)} can be expressed. Chomsky called this form the KERNEL SENTENCE. The other two sentences can be derived from (1) using a set of transformational rules.
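The idea of one deep structure feeding several surface forms can be sketched as code. The "transformations" below are crude string templates, purely for illustration (a real transformational grammar operates on tree structures, and would derive the participle "kissed" from the verb rather than receive it by hand):

```python
# Toy transformational rules: one deep structure, several surface forms.
# The templates are illustrative only; real transformations operate on
# phrase structure trees, not strings.

def kernel(agent, verb, patient):
    """Kernel sentence: the basic active declarative form."""
    return f"The {agent} {verb} the {patient}"

def passive(agent, verb, patient, participle):
    """Passive transformation: the patient moves to subject position.
    (The participle would normally be derived from the verb.)"""
    return f"The {patient} was {participle} by the {agent}"

def passive_question(agent, verb, patient, participle):
    """Question transformation applied to the passive form."""
    return f"Was the {patient} {participle} by the {agent}?"

deep = ("boy", "kisses", "girl")          # the deep structure {(boy kisses girl)}
print(kernel(*deep))                      # The boy kisses the girl
print(passive(*deep, participle="kissed"))          # The girl was kissed by the boy
print(passive_question(*deep, participle="kissed")) # Was the girl kissed by the boy?
```

The point is that all three outputs share one input: the surface forms differ, the deep structure does not.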

Limitations of Chomsky's theory

Cognitive psychologists began to criticize Chomsky's theory in the 1970s. The main criticism concerned the emphasis that Chomsky's theory put on the syntactic component of language; semantics had a secondary role in this theory. Chomsky seemed to suggest that what we want to express in a sentence is subordinate to how we syntactically organize the sentence, whereas psychologists believe that language is primarily the expression of meaning.

-------------------------------------------------------------------

Psychological theories of SYNTAX

Some syntactic rules are psychologically relevant. For example, when we were studying working memory, we analyzed a difference between acceptable and unacceptable sentences that may be related to our processing limitations:

"The delivery person that the secretary met departed"

is a correct sentence. However, if we add a third embedded clause using the same structure:

"The salesperson that the doctor that the nurse despised met departed"

the sentence is no longer acceptable: even though it is built with the same rule, it is nearly impossible to understand. This limitation, which is found in every language, seems to be related to the limits of our working memory.
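A tiny generator makes the point vivid: the embedding rule itself is trivial to apply mechanically, yet its output quickly outruns human working memory. This sketch nests "that the ..." clauses and then appends the verbs inside-out, reproducing the two example sentences above:

```python
# Toy generator for center-embedded relative clauses. The rule is easy
# for a program to apply at any depth; humans fail after about two.

def embed(nouns, verbs):
    """Build 'The N1 that the N2 that the N3 ... V3 V2 V1'-style sentences.
    nouns are nested left to right; verbs then attach inside-out."""
    sentence = f"The {nouns[0]}"
    for noun in nouns[1:]:
        sentence += f" that the {noun}"
    for verb in verbs:
        sentence += f" {verb}"
    return sentence

# One embedding: easy to understand.
print(embed(["delivery person", "secretary"], ["met", "departed"]))
# Two embeddings: the rule is the same, comprehension collapses.
print(embed(["salesperson", "doctor", "nurse"],
            ["despised", "met", "departed"]))
```

The first call prints "The delivery person that the secretary met departed"; the second prints the three-clause sentence from the notes, showing that nothing in the rule itself stops the embedding; working memory does.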

The modern approach to syntax is to use experimental paradigms to study how we produce syntactic structures. Empirical data suggest that semantics and syntax strongly interact in building a sentence and that sentence planning occurs on-line: we do not plan the entire sentence before starting the utterance, but we plan while we are speaking. Also, syntactic structures seem to be applied automatically and can be primed by previously used syntactic forms.

-------------------------------------------------------------------

LEXICAL AND SEMANTIC FACTORS

How do we understand the meaning of a sentence? First, we must notice that semantics and syntax work interactively. As we saw already, word order, a syntactic element, strongly influences the meaning of sentences. Another example of the interaction between syntax and semantics is what is called the "focus" of the utterance. For example, read these three sentences:

(1) I'm going downtown with my sister at four o'clock.
(2) It's at four o'clock that I'm going downtown with my sister.
(3) It's my sister I'm going downtown with at four o'clock.

In each sentence the "focus", the most salient information in the sentence, is emphasized through the syntactic structure of the sentence (and it is always at the beginning of the sentence; what does this suggest about our cognitive processes in language?).

However, semantics often overpowers syntax. For example, a study by Fillenbaum (1974) showed that syntactically correct sentences with an anomalous meaning tend to be normalized by changing the syntactic form. For example, most of the participants in this experiment reported the sentence "Don't print that or I won't sue you" as "Don't print that or I will sue you". This is another example of a top-down process: what is familiar to us tends to guide our interpretation of language.

Case grammar (Fillmore, 1968)

Fillmore proposed that an important aspect of linguistic comprehension is the semantic role played by the content words in a sentence. He pointed out that when we try to understand language we do not pay so much attention to the grammatical role of each word (subject, object, verb), but rather to its semantic role. Fillmore called these semantic roles semantic cases or case roles. To understand better what he meant by case roles, read the following sentences:

(1) The key will open the door
(2) The janitor will open the door with the key

In the first sentence "key" is the subject, whereas in the second sentence "key" is the object of the preposition "with". However, in both sentences the semantic role of key is the same: key is the instrument used to open the door. Fillmore proposed that the case role, which does not always correspond to the grammatical role, is an essential aspect of language comprehension.

In an extension of Fillmore's case grammar, Bresnan proposed that our lexicon (i.e., our "mental dictionary", the portion of long-term memory in which words and word meanings are stored) stores not only the meaning of a word, but also its syntactic and semantic case roles. For example, according to Bresnan, the word "hit" is stored in our mental lexicon with the information that an animate agent and a recipient (animate or inanimate) are required, and that some instrument must be stated or implied.

The case grammar approach makes two predictions about language comprehension: (1) the listener begins to analyze the sentence immediately and (2) sentence analysis is a process of assigning each word to a particular semantic case role, with each assignment contributing its part to overall sentence comprehension.
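A Bresnan-style lexical entry, and the incremental role assignment it supports, can be sketched with a small dictionary. The entry format (role names, the "animate"/"any"/"optional" labels) is hypothetical, invented for this illustration; the point is only that the lexicon itself tells the listener which roles a verb still needs:

```python
# Sketch of a Bresnan-style lexicon: each verb is stored together with
# the semantic case roles it requires. The entry format is hypothetical,
# made up for illustration.

CASE_LEXICON = {
    "hit":  {"agent": "animate", "recipient": "any", "instrument": "optional"},
    "open": {"agent": "optional", "object": "any", "instrument": "optional"},
}

def missing_roles(verb, filled_roles):
    """Return the required roles a listener still has to assign."""
    entry = CASE_LEXICON[verb]
    return [role for role, requirement in entry.items()
            if requirement != "optional" and role not in filled_roles]

# "The janitor will open the door with the key": all roles assigned.
print(missing_roles("open", {"agent", "object", "instrument"}))  # []
# "The key will open the door": key = instrument, door = object; still complete.
print(missing_roles("open", {"instrument", "object"}))           # []
# "The boy hit...": the entry for "hit" says a recipient is still needed.
print(missing_roles("hit", {"agent"}))                           # ['recipient']
```

This matches prediction (2) above: comprehension proceeds by assigning each incoming word to a case role, and the verb's lexical entry defines when the assignment is complete.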

An interesting illustration of these principles is provided by "garden-path sentences" like the following:

After the musician had played the piano was quickly taken off the stage.

Don't you have the impression that in the middle of the sentence you have to stop and start again, changing the first interpretation of the sentence?

In a garden-path sentence such as this one, we tend to consider "After the musician had played the piano" as a subject-verb-object clause, and this is our first interpretation. However, when we read "the piano was quickly taken off the stage" we realize that piano is actually the subject of the second clause and is not related to "played", so we have to go back and reinterpret the first part as well ("After the musician had played"). The idea is that as we read the sentence we start to assign semantic roles to the different words, but at some point our first interpretation does not work anymore (wrong path!) and we have to go back and reassign the semantic roles.

One way to study this type of sentence and the comprehension processes involved is to measure eye movements during reading. We can measure how long a person fixates on each word during reading. It has been shown that fixation time is proportional to the difficulty of a sentence.

In a study by Rayner et al (1983), normal sentences and garden-path sentences were presented, and fixation time was measured.

The importance of these results is that they show that we analyze sentences on-line. We do not read the complete sentence BEFORE starting the analysis process; we start to analyze the sentence from the beginning. At the disambiguation point ("was" in our example), we realize that we have to reinterpret the meaning, and our processing time increases, as indicated by the increased duration of eye fixations during reading.

---------------------------------------------------------------------------------------
