How Children Acquire Language:
A New Answer
How do babies acquire language? What do babies know when they start to speak? Prevailing views about the biological foundations of language assume that very early language acquisition is tied to speech. Universal regularities in the timing and structure of infants' vocal babbling and first words have been taken as evidence that the brain must be attuned to perceiving and producing spoken language, per se, in early life. Indeed, a frequent answer to the question "how does early human language acquisition begin?" is that it results from the development of the neuroanatomical and neurophysiological mechanisms involved in the perception and production of speech. Put another way, the view of human biology at work here is that evolution has rendered the human brain neurologically "hardwired" for speech.

Over the past 20 years I have been investigating these issues through intensive studies of hearing babies acquiring spoken languages (English or French) and deaf babies acquiring signed languages (American Sign Language, ASL, or Langue des Signes Québécoise, LSQ), from birth through 48 months. The most striking finding to emerge from these studies is that speech, per se, is not critical to the human language acquisition process. Irrespective of whether an infant is exposed to a spoken or a signed language, both are acquired on an identical maturational time course. Further, hearing infants acquiring spoken languages and deaf infants acquiring signed languages exhibit the same linguistic, semantic, and conceptual complexity, stage for stage. If sound and speech are critical to normal language acquisition, how can we account for these persistent findings? For signed and spoken languages to be acquired in the same manner, human infants at birth may not be sensitive to sound or speech, per se. Instead, infants may be sensitive to particular patterns encoded within the input, regardless of its modality.
I propose that humans are born with a sensitivity to particular distributional, rhythmical, and temporal patterns unique to natural language structure, along specific physical dimensions (the temporal "sing-song" prosodic patterning and the bite-sized, maximally-contrasting syllable segments--both levels of language organization found in spoken and signed languages). If the input language contains these specific patterns, infants will attempt to produce them--regardless of whether they encounter the patterns on the hands or on the tongue. One novel implication is that language modality, be it spoken or signed, is highly plastic and may be neurologically set after birth. Put another way, babies are born with a propensity to acquire language; whether it comes to them as speech, sign, or in some other form does not appear to matter to the brain. As long as the language input has the crucial properties above, human babies will attempt to acquire it.
Deaf children exposed to signed languages from birth acquire these languages on the same maturational time course as hearing children acquire spoken languages. Deaf children acquiring signed languages do so without any modification, loss, or delay in the timing, content, or maturational course associated with reaching all the linguistic milestones observed in spoken language. Beginning at birth, and continuing through age 3 and beyond, speaking and signing children exhibit identical stages of language acquisition. These include (a) the "syllabic babbling stage" (7-10 months), as well as other developments in babbling, including "variegated babbling" (10-12 months) and "jargon babbling" (12 months and beyond), (b) the "first word stage" (11-14 months), (c) the "first two-word stage" (16-22 months), and the grammatical and semantic developments beyond. Surprising similarities are also observed in the timing of onset and the use of gestures in deaf and hearing children. Signing and speaking children produce strikingly similar pre-linguistic (9-12 months) and post-linguistic (12-48 months) communicative gestures. Deaf babies do not produce more gestures, even though linguistic "signs" (the analogue of the "word") and communicative gestures reside in the same modality, and even though some signs and gestures are formationally and referentially similar. Instead, deaf children consistently differentiate linguistic signs from communicative gestures throughout development, using each in the same ways observed in hearing children. Throughout development, signing and speaking children also exhibit remarkably similar complexity in their utterances.
The Discovery of Manual Babbling
In trying to understand the biological roots of human language, researchers have naturally tried to find its "beginning." The regular onset of vocal babbling--the bababa and other repetitive, syllabic sounds that infants produce--has led researchers to conclude that babbling represents the "beginning" of human language acquisition (albeit of language production). Babbling--and thus early language acquisition in our species--is said to be determined by the development of the anatomy of the vocal tract and the neuroanatomical and neurophysiological mechanisms subserving the motor control of speech production. In the course of conducting research on deaf infants' transition from pre-linguistic gesturing to first signs (9-12 months), I discovered a class of hand activity containing linguistically-relevant units that was different from all other hand activity at this time. To my surprise, these deaf infants appeared to be babbling with their hands. Additional studies were undertaken to understand the basis of this extraordinary behavior. The findings that we reported in Science revealed unambiguously a discrete class of hand activity in deaf infants that was structurally identical to the vocal babbling observed in hearing infants. Like vocal babbling, manual babbling was found to (i) possess a restricted set of phonetic units (unique to signed languages), (ii) have syllabic organization, and (iii) be used without meaning or reference. This hand activity was wholly distinct from all infants' other rhythmic hand activity, be they deaf or hearing, and even from all infants' communicative gestures. The discovery of babbling in another modality was exciting: it confirmed the hypothesis that babbling represents a distinct and critical stage in the ontogeny of human language.
However, it disconfirmed existing hypotheses about why babbling occurs--in particular, the view that babbling is neurologically determined by the maturation of the speech-production mechanisms, per se. Specifically, it was thought that the "baba" CV (consonant-vowel) alternation that infants produce is determined by the rhythmic opening and closing of the mandible (jaw). But manual babbling is also produced with rhythmic, syllabic (open-close, hold-movement hand) alternations. How can we explain this? Where does this common structure come from? A new series of studies is currently under way to examine the physical basis of this extraordinary phenomenon (see the Optotrak studies below, "The Physics of Manual Babbling").
The Physics of Manual Babbling
Where does the common structure in vocal and manual babbling come from? Is manual babbling really different from all babies' other rhythmic hand movements? I have hypothesized that the common structure observed across manual and vocal babbling is due to the existence of "supra-modal constraints," with the rhythmic oscillations of babbling being key. Manual and vocal babbling alike are produced in rhythmic, temporally-oscillating bundles, which I have hypothesized may, in turn, be yoked to constraints on the infant's perceptual systems. The next challenge was to figure out how to study this. I recently conducted a new study of manual babbling with my colleague at McGill, David Ostry, and students Siobhan Holowka de Belle, Lauren Sergio, and Bronna Levy. We used the powerful "OPTOTRAK" Computer Visual-Graphic Analysis System: the precise physical properties of all infants' manual activity were measured by placing tiny Light-Emitting Diodes (LEDs) on infants' hands and feet. The LEDs transmitted light impulses to cameras that, in turn, sent signals to the OPTOTRAK system. This information was then fed into computer software that we designed to provide information analogous to the spectrographic representation of speech, but adapted here for the spectrographic representation of sign. Thus, for the first time, we were able to obtain recordings of the timing, rate, path movement, velocity, and "f0" for all infant hand activity, and to obtain sophisticated 3-D graphic displays of each. This work is presently in press in Nature (2001).
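The article does not describe the actual software used in the Optotrak study, but the kind of analysis it sketches--recovering movement velocity and the dominant rhythmic oscillation frequency of the hand from sampled 3-D LED positions--can be illustrated with a minimal sketch. Everything here (the `movement_profile` function, the 200 Hz sampling rate, the synthetic trajectory) is a hypothetical assumption for illustration, not the study's actual pipeline:

```python
import numpy as np

def movement_profile(positions, fs):
    """Given 3-D marker positions (N x 3, in metres) sampled at fs Hz,
    return the per-sample speed (m/s) and the dominant oscillation
    frequency (Hz) of the movement.  Hypothetical illustration only."""
    positions = np.asarray(positions, dtype=float)
    # Velocity: frame-to-frame displacement scaled by the sampling rate.
    velocity = np.diff(positions, axis=0) * fs
    speed = np.linalg.norm(velocity, axis=1)
    # Spectrum of each velocity component; the strongest non-DC peak
    # across components marks the dominant oscillation frequency.
    spectrum = np.abs(np.fft.rfft(velocity, axis=0))
    freqs = np.fft.rfftfreq(velocity.shape[0], d=1.0 / fs)
    peak_per_bin = spectrum[1:].max(axis=1)   # skip the DC bin
    dominant = freqs[1:][np.argmax(peak_per_bin)]
    return speed, dominant

# Synthetic check: a "hand" oscillating at 1 Hz with 5 cm amplitude.
fs = 200.0
t = np.arange(0, 5, 1 / fs)
pos = np.stack([0.05 * np.sin(2 * np.pi * 1.0 * t),
                np.zeros_like(t),
                np.zeros_like(t)], axis=1)
speed, f_dom = movement_profile(pos, fs)   # f_dom comes out near 1 Hz
```

On real marker data, the location of such a spectral peak is the sort of quantitative measure that would allow different classes of rhythmic hand activity to be compared objectively, rather than by eye.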
Bilingualism and Early Brain Development
I. Bilingual hearing babies acquiring a signed and a spoken language from birth, and bilingual hearing babies acquiring two different signed languages from birth (and no speech): An additional test of the hypothesis that speech is critical to the acquisition process is presently under way in my laboratory, involving two critical populations: (1) "bilingual" hearing infants who are being exposed to a signed and a spoken language (i.e., one parent signs, one parent speaks), and (2) "bilingual" hearing infants who are being exposed to two distinct signed languages (ASL and LSQ) but are receiving no spoken language input whatsoever. With regard to group (1), bilingual signing/speaking children achieve all linguistic milestones in both modalities at the same time (e.g., vocal and manual babbling, first words and first signs, first grammatical combinations of words and signs, respectively, and beyond; see Petitto et al., in press, Journal of Child Language). Further, infants in both groups (1) and (2) exhibit their linguistic and semantic-conceptual milestones on the identical overall maturational time course seen in monolingual children (Petitto, 2000), with their specific developmental patterns being identical to those observed in the typical case of bilingual hearing babies exposed to two spoken languages (e.g., spoken French and spoken English; more below).
II. Discovery of common timing in bilingual and monolingual children: In the course of conducting the above research on maturational timing mechanisms in hearing babies acquiring signed and spoken languages from their bilingual parents, we discovered that our young hearing controls--bilingual children learning spoken French and English--were achieving all major linguistic milestones in each of their respective languages on the identical time course, and on the identical time course as monolinguals (Petitto, 1994, 1997). Significance: Prevailing research on very young bilinguals had reported that bilingual babies under 20 months exhibited language delay and confusion relative to monolingual babies because they ostensibly had a single, fused representation of their two native languages, which they were able to sort out only over the first three years of life. By contrast, my findings suggested that very young bilingual babies have highly distinct representations of their two native languages, quite probably from birth. I have further advanced a hypothesis about which mechanisms in the human brain may enable the very young baby to differentiate between its two native languages from birth, and I have offered the field an explanation as to why the perception of "delay" and "confusion" in young bilinguals has prevailed, both among scientists and the public (see Petitto et al., 2001, Journal of Child Language).