American Sign Language phonology

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages.

Six types of signs have been suggested: one-handed signs made without contact, one-handed signs made with contact (excluding on the other hand), symmetric two-handed signs (i.e. signs in which both hands are active and perform the same action), asymmetric two-handed signs (i.e. signs in which one hand is active and one hand is passive) where both hands have the same handshape, asymmetric two-handed signs where the hands have differing handshapes, and compound signs (that combine two or more of the above types).[1] The non-dominant hand in asymmetric signs often functions as the location of the sign. Monosyllabic signs are the most common type of signs in ASL and other sign languages.[2]
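
This typology can be captured in a simple enumeration. Below is a minimal sketch in Python; the class and member names are hypothetical labels, not established notation:

```python
from enum import Enum, auto

class SignType(Enum):
    """The six suggested sign types, as listed above (labels are illustrative)."""
    ONE_HANDED_NO_CONTACT = auto()       # one hand, no contact
    ONE_HANDED_CONTACT = auto()          # one hand, contact (not with the other hand)
    TWO_HANDED_SYMMETRIC = auto()        # both hands active, same action
    ASYMMETRIC_SAME_HANDSHAPE = auto()   # one active, one passive; same handshape
    ASYMMETRIC_DIFF_HANDSHAPE = auto()   # one active, one passive; different handshapes
    COMPOUND = auto()                    # combines two or more of the above
```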

Phonemes and features

Signs consist of units smaller than the sign itself. These are often subdivided into parameters: a handshape with a particular orientation, which may perform some type of movement, in a particular location on the body or in the "signing space", together with non-manual signals. The last may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often equated with spoken-language phonemes, although sign language phonemes allow more simultaneity in their realization than phonemes in spoken languages. Phonemes in signed languages, as in oral languages, consist of features. For instance, the /B/ and /G/ handshapes are distinguished by the number of selected fingers: [all] versus [one].
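
As a rough illustration of this decomposition into parameters, the following Python sketch models a sign as a bundle of parameter values; all field names and example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Sign:
    """A sign as a bundle of parameter values (field names are illustrative)."""
    handshape: str                  # e.g. "/B/" or "/G/"
    orientation: str                # e.g. "palm-down"
    movement: str                   # e.g. "straight", "arc", "none"
    location: str                   # place on the body or in signing space
    nonmanual: list = field(default_factory=list)  # e.g. ["raised eyebrows"]

# The /B/ vs. /G/ contrast above reduces to one feature, [selected fingers]:
SELECTED_FINGERS = {"/B/": "[all]", "/G/": "[one]"}
```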

Most phonological research focuses on handshape. A complication in most studies of handshape is that elements of a manual alphabet are often borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory.[3] In addition, allophones are sometimes treated as separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or cheremes[4]).

In some phonological models, movement is a phonological prime.[5][6] Other models treat movement as redundant, since it is predictable from the locations, hand orientations, and handshape features at the start and end of a sign.[7][8] Models in which movement is a prime usually distinguish path movement (i.e. movement of the hand[s] through space) and internal movement (i.e. an opening or closing movement of the hand, a hand rotation, or finger wiggling).
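
The redundancy view can be pictured as deriving a movement value from a sign's start and end states. A minimal sketch, assuming each state records only a location, a handshape, and an orientation:

```python
def derive_movement(start: dict, end: dict) -> str:
    """Read movement off the start and end states of a sign, per models that
    treat movement as redundant rather than a prime (representation assumed)."""
    if start["location"] != end["location"]:
        return f"path movement: {start['location']} -> {end['location']}"
    if start["handshape"] != end["handshape"]:
        return "internal movement: opening/closing of the hand"
    if start["orientation"] != end["orientation"]:
        return "internal movement: hand rotation"
    return "no movement"

# Example: a sign moving from neutral space to the forehead.
print(derive_movement(
    {"location": "neutral space", "handshape": "G", "orientation": "palm-in"},
    {"location": "forehead", "handshape": "G", "orientation": "palm-in"},
))  # -> "path movement: neutral space -> forehead"
```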

Allophony and assimilation

Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from stretched at the side of the hand to folded in the palm of the hand. Allophony may be free, but it is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded in the palm in signs where the radial side of the hand touches the body or the other hand.
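
Because this conditioning is rule-like, it can be sketched as a function from context to surface realization. All labels below are hypothetical:

```python
from typing import Optional

def realize_B(contact: Optional[str] = None) -> dict:
    """Select a surface realization (allophone) of the /B/ handshape from its
    context, following the conditioning described above (labels hypothetical)."""
    fingers, thumb = "straight", "at side of hand"        # default realization
    if contact == "fingertips touch body":
        fingers = "flexed"                                # bent at lowest joint
    elif contact == "radial side touches body or hand":
        thumb = "folded in palm"
    return {"fingers": fingers, "thumb": thumb}

print(realize_B("fingertips touch body"))
# -> {'fingers': 'flexed', 'thumb': 'at side of hand'}
```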

Assimilation of sign phonemes to neighboring signs is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location of the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. It has also been observed that one-handed signs are articulated with two hands when followed by two-handed signs.
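
The location assimilation described for THINK can be sketched as a rule that fires when the following sign is articulated low. The height scale, location labels, and threshold are illustrative assumptions:

```python
# Hypothetical height scale for body locations (larger = higher on the body).
HEIGHT = {"forehead": 3, "cheek": 2, "chin": 1, "chest": 0}

def assimilate_location(citation: str, following: str) -> str:
    """Lower a sign's point of contact (e.g. THINK, normally at the forehead)
    when the following sign's location is below the cheek."""
    if HEIGHT[following] < HEIGHT["cheek"]:
        return "lowered"          # articulated below its citation-form location
    return citation

print(assimilate_location("forehead", "chest"))  # -> "lowered"
print(assimilate_location("forehead", "cheek"))  # -> "forehead"
```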

Phonotactics

As yet, little is known about ASL phonotactic constraints (or those of other signed languages). The Symmetry and Dominance Conditions[9] are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign move if the hands do not have the same handshape specifications, and that the non-dominant hand have an unmarked handshape. However, since these conditions appear to hold in more and more signed languages as cross-linguistic research expands, it is doubtful whether they are specific to ASL phonotactics.
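
Since the two conditions are stated as well-formedness constraints on two-handed signs, they can be sketched as simple checks. The representation and the set of unmarked handshapes below are assumptions for illustration:

```python
UNMARKED_HANDSHAPES = {"B", "A", "S", "C", "O", "1", "5"}  # assumed set

def satisfies_symmetry(hand1: dict, hand2: dict) -> bool:
    """Symmetry Condition: if both hands move, they share (or mirror) handshape,
    orientation, and movement. Mirroring is abstracted to equality here."""
    if not (hand1["moves"] and hand2["moves"]):
        return True
    return all(hand1[k] == hand2[k]
               for k in ("handshape", "orientation", "movement"))

def satisfies_dominance(dominant: dict, nondominant: dict) -> bool:
    """Dominance Condition: if the handshapes differ, only the dominant hand
    moves and the non-dominant handshape must be unmarked."""
    if dominant["handshape"] == nondominant["handshape"]:
        return True
    return (not nondominant["moves"]
            and nondominant["handshape"] in UNMARKED_HANDSHAPES)
```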

Prosody

ASL conveys prosody through facial expression and upper-body position. Head position, eyebrows, eye gaze, blinks, and mouth positions all convey important linguistic information in sign languages.

Some signs have obligatory facial components that distinguish them from other signs. An example of this sort of lexical distinction is the sign translated 'not yet', which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features, the sign would be interpreted as 'late'.[10]

Though some non-manual signals serve a number of functions, proficient signers have no more difficulty decoding what raised eyebrows mean in a specific context than speakers of English have in interpreting the pitch contour of a sentence in context. The use of the same facial changes, such as eyebrow height, to convey both prosody and grammatical distinctions is similar to the overlap of prosodic pitch and lexical or grammatical tone in a tone language.[11]

Like most signed languages, ASL has analogues of speaking loudly and whispering in oral language. "Loud" signs are larger and more separated, and one-handed signs are sometimes even produced with both hands. "Whispered" signs are smaller, off-center, and sometimes (partially) blocked from the sight of unintended onlookers by the signer's body or a piece of clothing. In fast signing, particularly in context, sign movements are smaller and there may be less repetition. Signs occurring at the end of a phrase may show repetition or may be held ("phrase-final lengthening").

Phonological processing in the brain

The brain processes language phonologically by first identifying the smallest units in an utterance and then combining them to make meaning. In spoken language, these smallest units are often referred to as phonemes: the smallest sounds identifiable in a spoken word. In sign language, the smallest units are often referred to as the parameters of a sign (i.e. handshape, location, movement, and palm orientation), and these smallest parts can be identified within a produced sign. This cognitive process can be described as segmentation and categorization: the brain recognizes the individual parts within the sign and combines them to form meaning,[12] much as spoken language combines sounds to form syllables and then words. Even though the modalities differ (spoken vs. signed), the brain processes both similarly, through segmentation and categorization.

Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from ordinary hand movements, much as it differentiates spoken words from meaningless sounds. More specifically, the brain distinguishes actual signs from the transitional movements between them, just as words in spoken language can be identified separately from the non-linguistic sounds or breaths that occur between them. Multiple studies have revealed enhanced brain activity during the processing of sign language compared to the processing of mere hand movements. For example, during awake brain surgery on a deaf patient, neural activity was recorded and analyzed while the patient watched videos in American Sign Language. Greater activity occurred while the patient was perceiving actual signs than during the transitions between signs.[13] This indicates that the brain segments the units of the sign and identifies which units combine to form meaning.

One observed difference in the localization of phonological processing between spoken and sign language is the activation of brain areas specific to auditory versus visual stimuli. Because of the difference in modality, different cortical regions are stimulated depending on the type of language. Spoken language produces sounds, which engage the auditory cortices in the superior temporal lobes; sign language produces visual stimuli, which engage the occipitotemporal regions. Yet both modalities activate many of the same regions known for language processing in the brain.[14] For example, the left superior temporal gyrus is stimulated by language in both spoken and signed form, even though it was once assumed to respond only to auditory stimuli.[15] Whatever the modality, spoken or signed, the brain processes language by segmenting the smallest phonological units and combining them to make meaning.

References

  1. ^ Battison, Robbin (2011). "Analyzing Signs". Linguistics of American Sign Language (5th ed.). Washington, DC: Gallaudet University Press. pp. 209–210. ISBN 978-1-56368-508-8.
  2. ^ Sandler, Wendy (2008). "The Syllable in Sign Language: Considering the Other Natural Language Modality". Ontogeny and Phylogeny of Syllable Organization, Festschrift in Honor of Peter MacNeilage. New York: Taylor & Francis. p. 384.
  3. ^ Battison, Robbin (1974). "Phonological Deletion in American Sign Language". Sign Language Studies. 1005 (1): 1–19. doi:10.1353/sls.1974.0005. ISSN 1533-6263. S2CID 143890757.
  4. ^ Landar, Herbert; Stokoe, William C. (April 1961). "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf". Language. 37 (2): 269. doi:10.2307/410856. ISSN 0097-8507. JSTOR 410856.
  5. ^ Perlmutter, David M. (1993). "Sonority and Syllable Structure in American Sign Language". Current Issues in ASL Phonology. Elsevier. pp. 227–261. doi:10.1016/b978-0-12-193270-1.50016-9. ISBN 9780121932701. Retrieved 2022-04-14. A slightly different version appeared in Linguistic Inquiry 23(3), pp. 407–442 (1992).
  6. ^ Sandler, Wendy (December 1999). "Diane Brentari (1999). A prosodic model of sign language phonology. Cambridge, Mass.: MIT Press. Pp. xviii+376". Phonology. 16 (3): 443–447. doi:10.1017/s0952675799003802. ISSN 0952-6757. S2CID 60944874.
  7. ^ van der Hulst, Harry (August 1993). "Units in the analysis of signs". Phonology. 10 (2): 209–241. doi:10.1017/s095267570000004x. ISSN 0952-6757. S2CID 16629421.
  8. ^ Demey, Eline (2003-12-31). "Review of Van der Kooij (2002): Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic Implementation and Iconicity". Sign Language & Linguistics. 6 (2): 277–284. doi:10.1075/sll.6.2.11dem. ISSN 1387-9316.
  9. ^ Battison, Robbin (1974). "Phonological Deletion in American Sign Language". Sign Language Studies. 1005 (1): 1–19. doi:10.1353/sls.1974.0005. ISSN 1533-6263. S2CID 143890757.
  10. ^ Liddell (2003)
  11. ^ Weast, Traci (2008). Questions in American Sign Language: A Quantitative Analysis of Raised and Lowered Eyebrows (PhD dissertation).
  12. ^ Petitto, L. A.; Langdon, C.; Stone, A.; Andriola, D.; Kartheiser, G.; Cochran, C. (November 2016). "Visual sign phonology: insights into human reading and language from a natural soundless phonology". WIREs Cognitive Science. 7 (6): 366–381. doi:10.1002/wcs.1404. ISSN 1939-5078. PMID 27425650.
  13. ^ Leonard, Matthew K.; Lucas, Ben; Blau, Shane; Corina, David P.; Chang, Edward F. (November 2020). "Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language". Current Biology. 30 (22): 4342–4351.e3. doi:10.1016/j.cub.2020.08.048. PMC 7674262. PMID 32888480.
  14. ^ MacSweeney, M. (2002-07-01). "Neural systems underlying British Sign Language and audio-visual English processing in native users". Brain. 125 (7): 1583–1593. doi:10.1093/brain/awf153. ISSN 1460-2156. PMID 12077007.
  15. ^ Petitto, Laura Ann; Zatorre, Robert J.; Gauna, Kristine; Nikelski, E. J.; Dostie, Deanna; Evans, Alan C. (2000-12-05). "Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language". Proceedings of the National Academy of Sciences. 97 (25): 13961–13966. doi:10.1073/pnas.97.25.13961. ISSN 0027-8424. PMC 17683. PMID 11106400.

Bibliography

  • Battison, R. (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
  • Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
  • Hulst, H. van der (1993). "Units in the analysis of signs". Phonology 10, 209–241.
  • Liddell, S. K. & Johnson, R. E. (1989). "American Sign Language: The phonological base". Sign Language Studies 64, 197–277.
  • Perlmutter, D. (1992). "Sonority and syllable structure in American Sign Language". Linguistic Inquiry 23, 407–442.
  • Sandler, W. (1989). Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
  • Stokoe, W. (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf (1993 reprint ed.). Silver Spring, MD: Linstok Press.
  • Van der Kooij, E. (2002). Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic Implementation and Iconicity. PhD thesis, Universiteit Leiden.
