Syntactic bootstrapping is a theory in linguistics which proposes that children learn word meanings by recognizing the syntactic categories (such as nouns, adjectives, etc.) and structure of their language. Children have innate knowledge of the links between syntactic and semantic categories and can use their observations about syntax to make inferences about word meaning. Learning words in one's native language can be challenging because the extralinguistic context of use does not give specific enough information about word meanings. This problem can be overcome by using information present in a word's syntactic category. Once conclusions are made about a word's syntactic category, a child can then infer aspects of the word's meaning.
History
The first appearance of empirical evidence of syntactic bootstrapping comes from 1957 research done by Roger Brown. In his research, Brown demonstrated that preschool-aged children could use their knowledge of different parts of speech to distinguish the meaning of nonsense words in English. The results of Brown’s experiment provided the first evidence showing that children could use syntax to infer meaning for newly encountered words.
Roger Brown thus initiated the study of syntactic bootstrapping without naming it. In 1990, Lila Gleitman coined the term “syntactic bootstrapping,” adapting Steven Pinker's earlier use of the term "bootstrapping" in reference to semantic bootstrapping. According to Gleitman's hypothesis, verbs are learned later than other parts of speech because the linguistic information that supports their acquisition is not available during the early stages of language acquisition. Since the acquisition of verb meaning is pivotal to children's language development, syntactic bootstrapping seeks to explain how children acquire these words.
Logic of the hypothesis
The syntactic bootstrapping hypothesis is based on the idea that there are universal/innate links between syntactic categories and semantic categories. Learners can therefore use their observations about the syntactic categories of novel words to make inferences about their meanings. This hypothesis is intended to solve the problem that the extralinguistic context of use is uninformative by itself about a novel word's meaning.
When children are presented with a sentence that includes an unfamiliar verb, they need to make the correct associations between a new word and what it refers to. They might look to extralinguistic context clues to help them determine what the meaning of that verb is, but the environment does not give specific enough evidence to determine that meaning. While some researchers thought that cross-situational learning could help children to learn words, Gillette et al. (1999) have shown that this kind of learning procedure is especially difficult for verbs. Rather, they proposed that children use syntactic information, such as the position of words within sentences, to help them learn word meanings.
For example, a child hears the sentence, “The cat meeped the bird.” If the child is familiar with the way arguments of verbs interact with the verb, he will infer that "the cat" is the agent and that "the bird" is the patient. Then, he can use these syntactic observations to infer that "meep" is a behavior that the cat is doing to the bird.
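The inference described in this example can be sketched as a toy procedure. This is a deliberate simplification invented for illustration, not a model from the literature: it assumes a rigid English "The X VERBED the Y" frame and simply reads roles off word order.

```python
# Toy sketch: a hypothetical learner uses fixed English SVO word order
# to assign participant roles to the noun phrases around an unfamiliar
# verb. The frame assumption and function name are invented here.

def infer_roles(sentence):
    """Guess roles in a simple 'The X VERBED the Y' sentence.

    Assumes a rigid Det-N V Det-N frame; real parsing is far richer.
    """
    words = sentence.rstrip(".").split()
    # Expected frame: ['The', 'cat', 'meeped', 'the', 'bird']
    agent, verb, patient = words[1], words[2], words[4]
    return {"agent": agent, "verb": verb, "patient": patient}

roles = infer_roles("The cat meeped the bird.")
# From this structure, the learner can infer that "meep" names
# something the cat is doing to the bird.
```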
Landau and Gleitman found when studying the acquisition of the verbs look and see by a blind child that contextual clues appeared to be insufficient to explain her understanding of these verbs. They considered the possibility that perceptual verbs might be used more by the blind child's mother when talking about nearby objects, since the child had to touch objects to perceive them. However, perceptual verbs in the sample were not more common than other verbs when the child was touching or near to objects. Therefore, it seemed unlikely that the blind child learned look and see solely from hearing these verbs while touching objects. However, Landau and Gleitman did find that look and see were consistently used in distinctive syntactic frames. A blind child could use this difference in syntactic context to differentiate the two verbs, even though the child could not physically look or see. This is an example of a child using syntax to bootstrap her acquisition of verb meanings.
Sensitivity to syntactic categories
Numerous experiments show that children can distinguish between syntactic categories, including those of Fisher, Klinger, and Song (2006), Waxman and Booth (2001), and Mintz (2005).
In Roger Brown’s original experiment, preschool children were shown pictures along with a novel word. The pictures would be associated with a novel verb, mass noun, or count noun. The children were asked to point out the picture corresponding with the novel word, “niss,” “sib,” or “latt.” The experimenters asked them questions like “Do you know what it means to sib?” and then “This is [a picture of] sibbing. Now show me another picture of sibbing.” More than half of the children identified the meaning appropriate to the syntactic category of the novel word, indicating their ability to distinguish between syntactic categories.
Children's ability to identify syntactic categories may be supported by prosodic bootstrapping.
Acquiring verb meanings
Gillette et al. (1999) performed experiments which found that participants who were provided both environmental and syntactic contexts were better able to infer what muted word was uttered at a particular point in a video than when only the environmental context was provided. In the experiment, participants were shown muted videos of a mother and infant playing. At a particular point in the video, a beep would sound and participants had to guess what word the beep stood for; the beeped-over word was always a noun or a verb. Experimental results showed that participants correctly identified nouns more often than verbs. This shows that certain contexts are conducive to learning certain categories of words, like nouns, while the same context is not conducive to learning other categories, like verbs. However, when the scene was paired with a sentence containing all novel words but the same syntactic structure as the original sentence, adults were better able to guess the verb. This shows that syntactic context is useful for the acquisition of verbs.
Fisher (1996) proposes that children can use the number of noun phrases in a sentence as evidence about a verb's meaning. She argues that children expect the noun phrases in a sentence to map one-to-one with participant roles in the event described by that sentence. For example, if a toddler hears a sentence that contains two noun phrases, she can infer that that sentence describes an event with two participants. This constrains the meaning that the verb in that sentence can have.
Fisher (1996) presented 3 and 5 year old children a video in which one participant caused a second participant to move. Children who heard that scene described by a transitive clause containing a novel verb, associated the subject of the verb with the agent. Children who heard the scene described by an intransitive clause associated the subject with either the agent or the patient. This shows that children make different inferences about meaning depending on the transitivity of the sentence.
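Fisher's one-to-one mapping idea can be sketched as a toy heuristic: count the noun phrases in a clause and infer how many participants the described event has. The NP detection below is a crude determiner-counting trick invented purely for illustration, not a claim about how children parse.

```python
# Toy sketch of the one-to-one mapping idea (Fisher 1996): one noun
# phrase per participant role. NP detection via a determiner list is
# an invented simplification for this example.

DETERMINERS = {"the", "a", "an"}

def count_noun_phrases(sentence):
    words = sentence.lower().rstrip(".").split()
    return sum(1 for w in words if w in DETERMINERS)

def inferred_participants(sentence):
    # One-to-one mapping: each NP maps to one participant in the event.
    return count_noun_phrases(sentence)

# Transitive clause: two NPs, so a two-participant (e.g. causal) event.
assert inferred_participants("The duck gorped the bunny.") == 2
# Intransitive clause: one NP, so a one-participant event.
assert inferred_participants("The duck gorped.") == 1
```

On this sketch, the transitive and intransitive frames license different guesses about the novel verb's meaning, mirroring the different inferences the children in the experiment made.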
Early abstraction
Because the syntactic bootstrapping hypothesis is formulated in terms of links between syntactic and semantic categories, it assumes that learners are predisposed to treat syntax-semantics links in ways that generalize across many verbs. For example, children look for abstract grammatical relations such as subject and object in order to infer event relations such as agent or patient. They apply these inferences generally across verb categories and not only to specific verbs.
Acquiring attitude verbs
Acquiring the meaning of attitude verbs, which refer to an individual’s mental state, poses a challenge for word learners, since these verbs do not correlate with any physical aspects of the environment: words such as 'think' and 'want' have no physically observable correlates. Children must therefore rely on evidence beyond observation to learn verbs referring to abstract mental concepts, and syntactic bootstrapping offers a candidate source of such evidence in the distinctive syntactic frames these verbs occur in.
Acquiring adjective meanings
When it comes to acquiring the meanings of adjectives, Syrett and Lidz (2010) found that children also rely on the syntactic frames in which the adjectives are presented. The authors tested what type of meaning children would assign to a novel gradable adjective (GA). Gradable adjectives modify nouns by putting them on a scale with other nouns. Some examples include big, dry, full, and tall. There are furthermore two subclasses of GAs. Maximal GAs operate on a scale with an upper bound: these include adjectives like full. Relative GAs do not, and are only interpreted with respect to a particular reference point: for example, a big ant is only "big" with respect to other ants. These adjectives can occur with different types of adverbs: relative GAs can occur with intensifiers like very, but not with adverbs like completely. Maximal GAs, however, can be modified by completely (for example, completely full is natural, while completely big is not).
Syrett & Lidz found that learners attributed a relative or maximal GA meaning to a novel adjective on the basis of the adverb frame that it occurred in. This shows that learners use the syntactic frames of adjectives as evidence about their meanings.
An experiment by Wellwood, Gagliardi, and Lidz (2016) showed that four-year-olds associate unknown words with a quality meaning when they are presented with adjective syntax, and with a quantitative meaning when they are presented with determiner syntax. For example, in "Gleebest of the cows are by the barn," "gleebest" would be interpreted as "many" or "four," a quantity. Yet children associate the same unknown word with a quality interpretation when the word is presented in an adjective position. In the sentence "The gleebest cows are by the barn," "gleebest" would be interpreted as "striped" or "purple," a quality. This shows that children use syntax to identify whether a word is an adjective or a determiner, and use that category information to infer aspects of the word's meaning.
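The determiner/adjective contrast from this experiment can be sketched as a toy classifier: the same novel word gets a quantity reading in determiner syntax ("gleebest of the cows") and a quality reading in adjective syntax ("the gleebest cows"). The frame detection below is a deliberately naive string pattern invented for illustration.

```python
# Toy sketch of the Wellwood, Gagliardi & Lidz (2016) contrast: the
# syntactic frame of a novel word determines whether it is read as a
# quantity or a quality. The pattern check is an invented heuristic.

def infer_meaning_type(sentence, novel_word):
    words = sentence.lower().rstrip(".").split()
    i = words.index(novel_word)
    if i + 1 < len(words) and words[i + 1] == "of":
        return "quantity"   # determiner frame: "gleebest of the cows"
    return "quality"        # adjective frame: "the gleebest cows"

assert infer_meaning_type("Gleebest of the cows are by the barn.", "gleebest") == "quantity"
assert infer_meaning_type("The gleebest cows are by the barn.", "gleebest") == "quality"
```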
Arguments against syntactic bootstrapping
Steven Pinker presents his theory of semantic bootstrapping, which hypothesizes that children use the meaning of words to start to learn the syntax of their language. Gleitman (1990) counters Pinker’s ideas by asserting that context is insufficient to supply word meaning, as a single context can allow for multiple interpretations of an uttered sentence. She explains that simply observing objects and events in the world does not provide sufficient information to infer the meanings of words and sentences. Pinker, however, argues that semantic bootstrapping and syntactic bootstrapping aren't conflicting ideas, and that semantic bootstrapping makes no claims about learning word meanings. He argues that since semantic bootstrapping is a hypothesis about how children acquire syntax, while syntactic bootstrapping is a hypothesis about how children acquire word meanings, the opposition between the two theories does not necessarily exist.
Pinker agrees that children do use syntactic categories to learn semantics and accepts syntactic bootstrapping, but argues that Gleitman applies the hypothesis too broadly and that there is insufficient evidence to support all of Gleitman's claims. Pinker argues that while children can use syntax to learn certain semantic properties within a single frame, such as the number of arguments a verb takes or the types of those arguments (agent, patient), there are serious problems with the claim that children can infer these semantic properties from the syntax when a verb is found in a wide range of syntactic frames. Pinker uses the verb "sew", which occurs in many different frames, as an example.
Pinker argues that the syntax provides information about possible verb frames but does not help a learner "zoom in" on a verb's meaning after hearing it in multiple frames. According to Pinker, the many frames in which "sew" occurs can do nothing for learners other than clue them in to the fact that sewing is some sort of activity. Furthermore, Pinker disagrees with Gleitman's claim that the ambiguities in the situations where a word is used can only be resolved by using information about how the word behaves syntactically.
Some languages allow the noun phrase arguments of a verb to go unpronounced, which poses a challenge for syntactic bootstrapping: how could children distinguish transitive verbs from intransitive ones via the syntax when a subject or object NP can be left unpronounced? Lee and Naigles (2005) examined Mandarin Chinese, which allows the null realization of both NP and PP arguments. They found that despite this, transitive verbs in their corpus data were more likely to be followed by an overt NP than either intransitive or 'overlapping' verbs (intransitives with a source or direction argument). Thus, even when verbal arguments are commonly dropped, transitive verbs appear with overt objects significantly more often than intransitives, so the verb types can still be reliably distinguished, and the syntax-semantics relations enabling syntactic bootstrapping still hold in these languages.
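The statistical logic of this finding can be sketched as a toy corpus computation: even with argument drop, a verb's transitivity is recoverable from how often it is followed by an overt object NP across utterances. The mini-corpus, verb choices, and function name below are invented for illustration.

```python
# Toy sketch of the Lee & Naigles (2005) logic: transitivity can be
# recovered from the rate of overt objects across a corpus, even when
# objects are sometimes dropped. The data here is invented.

from collections import defaultdict

def object_rates(tagged_corpus):
    """tagged_corpus: list of (verb, has_overt_object) pairs."""
    counts = defaultdict(lambda: [0, 0])  # verb -> [with_object, total]
    for verb, has_object in tagged_corpus:
        counts[verb][0] += int(has_object)
        counts[verb][1] += 1
    return {v: with_obj / total for v, (with_obj, total) in counts.items()}

# A transitive verb ("push") sometimes drops its object; an intransitive
# verb ("sleep") occasionally appears with a following NP anyway.
corpus = [("push", True), ("push", True), ("push", False),
          ("sleep", False), ("sleep", False), ("sleep", True)]
rates = object_rates(corpus)
# The higher overt-object rate for "push" signals its transitivity.
assert rates["push"] > rates["sleep"]
```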