Cerebrum Article

Children Need Natural Languages, Signed or Spoken

Each year in America, some 8,000 children are born deaf or hard-of-hearing. How will they learn to communicate—and at what pace, with what success, and with what implications for later education? New systems and technology offer deaf children some additional alternatives, but now research on the nature of language and how the brain acquires it is compelling a second look at the critical advantages of sign language. Sign languages, research has shown, are “natural language”—a gift of evolution to humans that appears essential to normal development. Three University of Rochester neuroscientists ask: What are the implications for the 90 percent of deaf children born into settings where signing is not used?

Published: January 1, 2003

All human languages, even those as apparently different as English, Mandarin Chinese, and Navajo, share striking similarities. All spoken languages draw their sounds from a small subset of the possible sounds humans can produce. All combine these sounds in sequences to form words, phrases, and sentences. In every culture, there are words for a similar set of concrete and abstract concepts that refer to objects and actions. Also surprisingly similar are the principles by which people combine these words to form phrases and sentences; indeed, only a few of the many possible orderings of words occur in the world’s thousands of distinct languages.

Children are equally capable of learning any language. Barring drastically adverse circumstances, they acquire their native tongue on a similar timetable, regardless of culture or family circumstances. When incapable of hearing or speaking—for example, when born deaf, in impossibly noisy environments, or in cultures where long periods of silence are imposed (as in certain aboriginal cultures when a woman is widowed)—humans readily develop sign languages, using their hands and eyes to express themselves. Only in recent decades have we learned that these sign languages are truly natural human languages. They are learned by infants in similar ways and there are many commonalities (and some provocative differences) in how the brain processes them. As we shall see, the left hemisphere of the brain, in particular the left perisylvian cortex, appears to be the agent for this human specialization for language, spoken or signed.

No other species possesses a natural language. Animals do communicate, but their systems do not rely on principles for ordering and combining words or elements, and (despite some publicity to the contrary) animals reared in human families do not appear to be capable of readily acquiring a human language, even when signed language is used. Special, shared abilities for complex communication are unique to our species.

Aside from spoken or signed natural languages, humans have created artificial systems of communication for special purposes—Morse code is one well-known example. Two such systems created for the deaf community are Cued Speech and Signed English. We are now discovering, however, that these systems, unlike sign languages, do not provide adequate natural language input for deaf children. In fact, they can have a negative impact on the deaf child’s developing ability to communicate with language. What has become increasingly clear from studying the characteristics of natural languages, spoken or signed, and how the brain processes them, is the critical importance of providing even the youngest deaf children with access to a natural language.

The Grammar of a Natural Sign Language

A widespread misconception is that there is one universal sign language. Sign languages are specific to their communities, so that different sign languages are found in different countries.

Perhaps the most thoroughly studied natural system is American Sign Language (ASL), the visual-gestural language predominant in the deaf community of the United States and other parts of North America. Although its regional distribution overlaps that of the English-speaking community, ASL bears no resemblance to spoken English. It is not some kind of translation or awkward representation of English; nor is it pantomime. In fact, in terms of its structure, ASL shares more with Navajo or Japanese than with English. ASL is also more similar to French Sign Language (LSF) than to British Sign Language (BSL) because Laurent Clerc—one of the first teachers of the deaf in nineteenth-century America—was French. Like all natural languages, however, ASL and LSF have evolved separately since their origins.

During the last 20 years, linguists have shown that sign languages exhibit all the grammatical characteristics of spoken languages, including phonology, morphology, and syntax. As the pioneering work of William Stokoe, Ursula Bellugi, and Ed Klima initially showed, signs are not icons or global wholes but rather, like speech, are created by combining basic phonological units. In speech, these units are formed by the placement of the tongue in the mouth or by the shape of the lips. In ASL, the hands are the medium for the three basic phonological units: hand shape, hand position with respect to the body, and hand movement.

Signs, like speech, are created by combining basic phonological units. In American Sign Language (ASL), these include hand shape, hand position with respect to the body, and hand movement. The signs for “candy” and “apple” differ only in hand shape; the signs for “candy” and “Chinese” differ only in hand position. Courtesy of Ted Supalla

As in spoken languages, these phonological units are constrained in ways that differ from one sign language to another. For example, only particular hand shapes are acceptable in ASL, while others are used in BSL. The limited number of acceptable hand shapes, positions, and movements constitute the building blocks of the language, which are then combined to form “syllables” and “words” (that is, signs). By linking the parts, signs are built, just as words in an oral language are created by linking sounds.

Also, as in spoken languages, the basic classes of words, including nouns, verbs, adjectives, pronouns, and adverbs, are combined to form sentences. ASL has what linguists call a basic Subject-Verb-Object word order (sign order) that can be used to indicate who is doing what to whom. However, in contrast to English, ASL also has other means of indicating who is doing what to whom, thanks to its complex morphology.

Morphology is the system for forming or altering words in a language. In many languages, a prefix or suffix can be added to the word stem to add some information about that word, such as gender, tense, or number. In English, for example, the past tense is often indicated by adding the suffix “ed” to the stem of the verb (walk-ED). In ASL, the marking is done through gesture. The richness of ASL morphology, though, is apparent in the structure of its verbs. Typical ASL verbs are marked for agreement in person and number with both subject and object, as well as for timing of the action (completed or ongoing, occurring once or habitually) and other grammatical features common to verbs in many spoken languages. As in spoken languages with complex morphology, when verbs are marked for subject and object, word order in ASL is relatively free.

As shown by Ted Supalla, ASL verbs of motion (fly, bounce) are particularly complex, with stems that indicate path (straight, curved); manner (sliding, hopping); orientation (side to side, top to bottom); and size and shape of the moving object.1 For example, describing the aimless wandering of a drunk is done in ASL with a single verb, whose many components indicate a tall thing moving forward, while meandering back and forth and wobbling along the path. Navajo has a similar structure. In contrast, in English this same event requires a 10-to-15 word sentence, but each word has only one or two meaningful parts. Languages of the world vary in how they construct such expressions, and ASL falls within the usual types of variation.

The basic Subject-Verb-Object order of ASL is not required when the signer marks the subject, object, or verb phrase as the topic of the sentence. In these constructions, the topic phrase is moved to the beginning of the sentence and is marked by a special facial expression. This is a particularly intriguing property of sign languages: Facial expressions, ordinarily used to convey emotion, here become formal grammatical devices.2

In sum, while sign languages use a different modality (hand shapes and motions instead of spoken words), the grammatical properties of ASL are familiar to students of spoken languages. The principles of word and sentence structure are common to both signed and spoken languages.

Signed vs. Spoken

This is not to say, however, that there are no differences between sign languages and spoken languages. First, because sign languages have nonlinguistic roots (in gesture and pantomime), some iconic characteristics remain. One is that the association between a word and its meaning, which is usually arbitrary in spoken languages, is often more transparent in sign languages. Thus, the sign “bird” in ASL evokes the opening and closing of a bird’s beak; the sign “tree” suggests a tree waving in the wind. Although the visual form of such signs resembles their referents in some way, signs, like words, are conventionalized. For example, the sign for “tree” in Chinese Sign Language only sketches the trunk, whereas the Danish sign outlines the shape of the crown at the top of the trunk. These signs for “tree” are in some sense iconic, but they are conventionalized in different—and mutually unintelligible—ways in different sign languages. Also, although some signs have these iconic features, most signs are arbitrary. An important note here is that studies of sign perception, memory, and acquisition show that iconic and arbitrary signs are processed similarly by the brain, which focuses on the form and structure rather than any iconic meaning.

Sign languages also differ from spoken languages in certain ways that result from being gestured and seen instead of spoken and heard. Each medium offers different capabilities. Of course, separate signs are articulated in sequence, just as words are in spoken languages. Many ASL signs, however, consist of elements combined with one another simultaneously, or even nested inside one another. Few signs show elements combined in linear sequence, which is common in spoken languages like English. When we say “running” or “goodness” in English, “run” and “ing” or “good” and “ness” are articulated in sequence. But in ASL verbs of motion, the shape of the hand is one morpheme while its path of movement is another; these two are articulated simultaneously. Signed and spoken languages have these strikingly different physical representations, but, as noted above, their grammatical properties are surprisingly similar.

Learning to Speak, Learning to Sign

In reviewing studies of infant development, Elissa Newport, along with Richard Meier (now at the University of Texas), has argued that if deaf infants are exposed to ASL from birth, they achieve the same milestones during language acquisition as do speaking infants, and at approximately the same intervals.3 By 12 weeks, most hearing infants produce vowel-like sounds called cooing. By 20 weeks, vocalizations begin to include more consonant sounds, a stage called babbling. While initially these vocalizations are similar around the world, by the time babies are 8 to 10 months of age the sounds resemble the narrower range of sounds used in the surrounding language. Isolated words are produced at about one year, usually common nouns that describe everyday objects and social words such as “bye bye.”

Deaf infants also produce vocal sounds in early development; but those exposed to a signed language go on to show their language milestones through gestures. As in the acquisition of spoken languages, infants acquiring ASL babble prior to the time of producing their first lexical item—that is, they produce gestures that resemble signing but are not recognizable or apparently meaningful. At about one year, if not earlier, the first recognizable ASL signs are produced, one at a time, in isolation.

Sometime during the second year, vocabulary grows dramatically for both hearing and deaf infants. Short (two-word) sentences display considerable control over the structure of the language being articulated. Thus, young learners of English typically say “Daddy eat” but “eat pizza,” exhibiting word order that recognizes the distinction between subject (Daddy) and object (pizza). As do their speaking counterparts, young signers go through a two-sign sentence stage, in which they display the full range of childhood semantic relations and use word order to express subjects and objects. Interestingly, as in spoken languages with complex morphology, children at this stage rely on word order to convey grammatical relationships, despite the fact that the adult language they are exposed to shows considerable word-order flexibility. Thus, as is the case in young speakers, young signers do not merely mimic the language they are absorbing.

Between three and five years of age, normal patterns of grammar acquisition include expansion of the syntax and morphology, with classic phenomena such as over-adherence to certain patterns (for example, “goed” instead of “went”), demonstrating that children readily learn the rules of the language. Questions, negation, passive, and other grammatical constructions are acquired at this time. Similar patterns of development can be observed in the acquisition of ASL. Indeed, across all natural languages, spoken or signed, a similar pattern of acquisition occurs. In the earliest stages, uninflected forms are used; then some morphemes are acquired but not well coordinated; and finally the morphemes are correctly and smoothly coordinated. For languages with simple morphology, this last stage is reached around three to four years of age; for languages with complex morphology (like ASL, Russian, or Navajo), errors on some idiosyncratic morphemes continue as late as age seven or eight.

The discovery that similar milestones are shared by language learners worldwide, whether the language is spoken or signed, reinforces the proposal by Eric Lenneberg (a distinguished neuropsychologist writing in the 1960s and 1970s) and others that language learning has a significant biological basis. The unfolding of these milestones depends on learning, of course, but also on the timing of the brain’s maturation. Noninvasive brain-imaging techniques that have become common during the past decade open a new window on the neural bases of language-processing in adults. Soon we may be able to see how neural control and the representation of language develop and change in infants and children.

How the Brain Processes Language

In studies going back more than a century, brain scientists have established the prominent role of the left hemisphere’s perisylvian cortex in processing spoken language. The origin of this left-hemisphere specialization for language is unknown. One hypothesis is that specialization has evolved to enable humans to control the sophisticated motor movements required to speak, as well as the auditory ability to perceive what is spoken. This hypothesis is consistent with the fact that the temporal and frontal areas of the left hemisphere appear to be specialized for processing and producing speech sounds. An alternative hypothesis is that these brain specializations are involved in processing the grammatical structures of natural languages. In this view, these brain areas should be recruited in the processing of all natural languages, whether or not they are spoken.

Studying signed languages offers a singular perspective on these questions. As we have seen, signed languages require structured grammatical processing but involve visual perception of signs. Commonalities in brain mechanisms for spoken and signed languages must therefore reflect similarities in processing linguistic structure, which is independent of the differing sensorimotor channels. A critical first step is to ascertain whether sign languages and spoken languages activate the same or different cortical areas.

Using functional magnetic resonance imaging (fMRI), Bavelier and her colleagues, in particular Helen Neville from the University of Oregon, Eugene, and David Corina from the University of Washington, Seattle, showed that the same regions of the brain’s left hemisphere are activated during the processing of ASL sentences by deaf native signers of ASL as during the processing of spoken or written English sentences by hearing native speakers of English.4 In particular, the regions involved include Wernicke’s area (the posterior part of the superior temporal gyrus and the supramarginal gyrus); the inferior frontal cortex known as Broca’s area; and other cortical regions in the left hemisphere more recently associated with language processing. Similar studies carried out in the United Kingdom by the group of Ruth Campbell and Mairead MacSweeney at University College London have established that deaf native British Sign Language (BSL) users also recruit typical left language areas when asked to produce or comprehend isolated signs or full sentences of BSL. These results are consistent, as well, with evidence from deaf stroke patients.5

These studies push us to conclude that the left hemisphere’s involvement in language is independent of the particular senses used to produce and perceive that language. This would support the hypothesis that certain regions of the perisylvian cortex in the left hemisphere do indeed mediate the processing of grammatical aspects of natural languages, regardless of their modality.

At the same time, there are fairly predictable differences in how the brain processes spoken versus signed languages. First, audiovisual processing of English by native hearing speakers activates the brain’s auditory cortex, whereas signing leads to a greater recruitment of the visual/motion brain areas.

More surprisingly, perhaps, some studies have observed that when native signers process ASL sentences, there is a greater involvement of the brain’s right hemisphere (specifically, the superior temporal lobe and the parietal lobes) and more symmetry in the activation of the left and right hemispheres than when hearing subjects process spoken English. Does this suggest that signed language is less lateralized to the left hemisphere because the left hemisphere possesses an innate specialization for processing auditory language? Alternatively, the processing of visual gestures may impose additional demands on the brain, requiring the participation of more cortical regions than does the processing of auditory (speech) signals. The increased symmetry is found only in people who began signing when they were young, not in those first exposed to sign language after puberty. This suggests that the symmetry is not simply or solely the result of greater sensory demands during sign processing. These are provocative questions, but because the ASL materials used in this study were fairly complex, further investigation is needed to pinpoint the particular aspects of sign language that are activating various regions of the cortex.

Imaging studies show that when native deaf signers of ASL process ASL sentences and when native English speakers process spoken or written English sentences, the same regions of the brain’s left hemisphere are activated. Thus it appears that the left hemisphere’s involvement in language is the same, whether the language is seen or heard. © 2003 Christopher Wikoff

Artificial vs. Natural

While natural languages arise spontaneously and spread by means of unrestricted interaction among people who use them (for example, ASL, which emerged within the North American deaf community), artificial or devised communication systems are invented by specific individuals, such as educators of deaf children. Typically, these devised communication systems must be explicitly taught, involving many hours of practice and feedback; they do not develop naturally or easily among users. Speech-reading (lip-reading) is one such devised system, but there are others: Cued Speech, Signed English, and several versions of Manually Coded English that rely on invented signs to represent English grammatical features.

Because devised systems are invented by individuals rather than arising spontaneously among users, they do not exemplify the unfettered natural tendencies of humans to develop gestural languages. In fact, the devised systems studied by linguists have been found to violate the universal structural principles of both spoken and signed natural languages—even when the system was intended to match a particular spoken language. Is this because the inventors were unfamiliar with linguistic principles and so created their systems without considering the implicit constraints and pressures of the circumstances in which natural languages evolve? Whatever the case, children do not readily learn these devised systems.6 As a consequence, the use of devised systems tends to be confined to the classroom; they do not spontaneously spread to a wider community or to broader use in everyday communication.

We know something about how the brain processes these devised systems. While natural sign languages recruit the usual left-hemisphere language regions in native signers, the devised system of lip-reading leads to a rather anomalous pattern of brain activation in congenitally deaf people.7 When asked to lip-read, deaf participants who had been trained orally displayed robust activation of the right hemisphere, though with large individual differences, as well as overall reduced and diffuse activation in the left temporal areas. In contrast, hearing individuals asked to lip-read displayed robust activation of the left hemisphere.

These results point to the conclusion that, when they process English visually, the deaf may not rely on the same brain systems as do hearing native speakers. Although no brain imaging studies have investigated the neural bases of Manually Coded English, it would not be surprising if, as with the results reported for the lip-reading system, it turns out that these devised systems do not rely on the typical left-hemisphere language brain structures. This has profound implications for how we educate deaf children.

Why Natural Language Is Indispensable

Natural sign languages such as ASL and BSL provide natural and efficient communication between deaf children and their peers, enabling the children to develop cognitively and emotionally. Unfortunately, fewer than one in ten deaf children is born within a community or family where signing is readily available—usually, of course, within a deaf family. For the 90 percent who are born to hearing parents, access to a natural language (signing) is limited or nonexistent, especially during the first few years of life.

To make things worse, deafness is often not diagnosed until a child is about two years old. The choice of a consistent system for rehabilitation and schooling may then be delayed for lack of agreement about the best methods, with proponents of learning speech (the “oralist” tradition) on one side and the signing community on the other. Nor is either one of these options always right for every child, because deaf children vary greatly in their level of remaining hearing and their experience with language. Nevertheless, despite the complexities, research provides clear answers on some pivotal issues.

First, early exposure to a natural language is critical for language acquisition. This is the cardinal conclusion of more than 30 years of research by linguists, psychologists, and other investigators. The ability to learn a language—spoken or signed—declines with age; exposure early in life is essential for native performance.8 Given the pivotal role of early exposure to a natural language, it is important to challenge the view, still widespread in the field of deaf education, that early exposure to a sign language might be detrimental to training a deaf child to speak oral English. According to this view, signing competes with the arduous process of oral training—a variant on the debate over bilingualism and the ability of children to master several natural languages at once. But research on bilingualism has been extremely productive over the last 20 years, and what it has concluded is that children can in fact master several natural languages at once. There may be some costs of doing so (a slight delay in language acquisition and a preference for one of the languages), but, overall, children exposed to two natural languages become proficient users of both. The notion that sign language competes with oral education is further undercut by the recent findings of Rachel Mayberry and her collaborators that early acquisition of a signed language in deaf children actually facilitates the subsequent learning of English,9 apparently by providing the child with an early natural language.

Second, providing infants and young children with a natural language—and a linguistic community where they are readily understood—unquestionably fosters their emotional and cognitive growth. Without a natural way to communicate their desires, fears, and other feelings, deaf children in a hearing environment are often isolated, with negative emotional consequences.

Oral Training and Cochlear Implants

Although we believe that exposing profoundly deaf children to a natural sign language can only benefit their language, emotional, and cognitive skills, we do not wish to say that oral training should be forgotten. Reading and writing English is a widespread and vexing problem among deaf people. The typical deaf individual reads at the fourth-grade level; the majority find reading arduous and as a result few deaf adults achieve the educational or occupational levels that they might otherwise reach. One hopeful development is advanced technology that may enhance the possibilities for communicating remotely through sign language (for example, visual telephoning). But the treasure trove of information available through reading increases daily, guaranteeing that literacy will remain indispensable in everyday life. To improve English literacy in the deaf community, intensive training with English appears to be necessary.

In recent years, more parents have turned to cochlear implants for their deaf children. The cochlear implant, a surgically implanted device that links an external sound processor to the auditory nerve, converts sound into weak electrical signals that stimulate the nerve directly, bypassing the damaged parts of the ear. Cochlear implants, however, still cannot make a profoundly deaf child become a native user of oral language. Success of implants, thus far, at least, has been gauged by the ability of children to understand low-level speech sounds (such as the ability to discriminate between a “p” and a “b”), usually not by the children’s ability to comprehend and use full normal speech. Available research indicates that how well children with cochlear implants perform on simple audiological tests does not translate directly into skill in receiving and expressing language.

Much research is still needed on how cochlear implants affect the ability to acquire spoken language. The questions are basic. How complex an English phonology are cochlear implant users able to develop? What is the nature of the syntax and morphology they derive from their experience with the sound patterns of English? Recent brain-imaging studies of late-deafened adults who received cochlear implants indicate that they rely on the normal left-hemisphere language areas while listening to speech. This means that the brain’s standard system for processing spoken language may be at work in this population. Interestingly, these individuals were also found to recruit additional areas of the brain, including the primary visual cortex—probably because they have learned to rely more heavily on visual cues during language interactions. This additional brain activation was observed even when participants were listening to speech in the absence of visual input.10

An open question, though, is whether cochlear implants in congenitally and profoundly deaf children can provide sufficient hearing to support language acquisition. A few promising studies show that children with “oral-only” training who receive implants often outperform a control group. In this research, however, the control group consists not of deaf native signers but of deaf children in a “total communication” program. The total communication approach uses a mixture of techniques including sign, writing, mime, and speech. It tends to expose children only to impoverished versions of English and of ASL, so that they attain fluency in neither. The few studies of cochlear implants that have compared deaf native signers and deaf oral-only children indicate that performance in these two groups is similar. The oral-only children do not do better.

Overall, cochlear implantation is the right choice for individuals with late-onset deafness or moderate hearing loss. This is because two keys to the success of implants are the amount of remaining hearing (the more the better) and the length of deprivation (the shorter the better) before implantation surgery. The situation is more controversial when we look at prelingual, profoundly deaf children. Some information of possible relevance is available from animal studies. Research on a strain of congenitally deaf white cats by the group of Rainer Klinke and Andrej Kral in Germany showed that by adulthood a cat’s deprived auditory cortex has lost the ability to respond adequately to cochlear implants. Therefore, implantation needs to happen while the primary auditory cortex is still developing and maturing. We do not yet know if this is true for humans. Pediatric clinical trials indicate that children who received implants before two years of age performed better on acoustic tests than those who received implants when they were between two and three. We still do not know, however, how successful these children will be in acquiring natural language.

Another vital consideration is what language exposure to give implanted children whose success with auditory language is unclear. Given the uncertainty in predicting the outcomes of cochlear implantation, it may be wise to expose children to a full natural language like ASL as a safety net.

Take-Home Lessons for Deaf Education

A decade or more of brain research has highlighted the critical role for a child’s cognitive development of experiences in the earliest years. Experience with language is no exception. Only early exposure to a natural language, spoken or signed, will enable a child’s language skills to blossom. When hearing is compromised, language development is at risk. For a child hard of hearing, surgical intervention or a hearing aid may restore sufficient hearing to support speech, but the only natural language fully accessible to the profoundly deaf child is signing.

Deaf education could benefit from better evaluation of how English is acquired by both the hard of hearing and the deaf. When comparing different educational methods for the deaf, it would also be helpful to include tests of cognitive and emotional skills. Members of the hearing community assume that deaf individuals who can speak the dominant language—for example, English—will have more opportunities in life than those who do not. Yet, there may be emotional and cognitive costs to pay for those individuals with only partial success at oral methods. Deaf individuals are sometimes described as suffering from social ailments, such as being withdrawn and shy. How much of this can we attribute to a lack of early or full language communication skills? Deaf individuals have also been shown to have delays in the development of what is called “social cognition,” such as on tests of the ability to attribute beliefs to others. Note that this deficit is restricted to deaf children born to hearing parents and not exposed to sign language; deaf children born to deaf parents and exposed to sign early in life perform on these tests as well as or better than their hearing counterparts.

The time has come to try a truly bilingual approach to educating deaf children, where the child is immersed in a full natural sign language for part of the day and trained orally and in written English at other times. The best research by neuroscientists, developmental psychologists, and linguists supports the idea that such an approach can maximize the abilities with which all children are born to master both languages.

References

  1. Supalla, T. “Structure and acquisition of verbs of motion in American Sign Language.” Unpublished doctoral dissertation, University of California, San Diego, 1982.
  2. Liddell, S. American Sign Language Syntax. The Hague, Netherlands: Mouton, 1980.
  3. Newport, EL, and Meier, RP. “The acquisition of American Sign Language.” In Slobin, DI (Editor). The Cross-Linguistic Study of Language Acquisition, Volume 1. Hillsdale, NJ: Lawrence Erlbaum Associates, 1985.
  4. Neville, HJ, et al. “Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience.” Proceedings of the National Academy of Sciences 1998; 95: 922-929.
  5. Poizner, H, Klima, ES, and Bellugi, U. What the Hands Reveal About the Brain. Cambridge, MA: MIT Press, 1987.
  6. Supalla, S. “Manually Coded English: the modality question in signed language development.” In Siple, P, and Fischer, S (Editors). Theoretical Issues in Sign Language Research, Volume 2: Psychology. Chicago: University of Chicago Press, 1991: 85-109.
  7. MacSweeney, M, et al. “Speechreading circuits in people born deaf.” Neuropsychologia 2002; 40: 801-807.
  8. Newport, EL. “Maturational constraints on language learning.” Cognitive Science 1990; 14: 11-28.
  9. Mayberry, RL, Lock, E, and Kazmi, H. “Linguistic ability and early language exposure.” Nature 2002; 417: 38.
  10. Giraud, AL, and Truy, E. “The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.” Neuropsychologia 2002; 40: 1562-1569.