
Logographic or Phonemic??


I am Deaf myself and involved in a Deaf Nonprofit that is interested in the writing of sign languages. We are presently active proponents of the SignWriting system for a number of reasons.


1) SignWriting has incredible flexibility in depicting the phonetic or phonemic details of ASL. Compared to HamNoSys, SignFont, or Stokoe notation, SignWriting's flexibility is very handy, and it is especially useful when it comes to ASL storytelling and poetry.


2) SignWriting is rather easy to learn ... at least on the reading side. We have done a pilot SW literacy class and have found that the reading of SignWriting is easily picked up by all of the students. While writing will take longer than reading, it is certainly not impossible or too difficult. Most did well on their writing assignments. In fact, a 60-year-old deaf woman who had never seen the SignWriting system before this fall was able to read through the Level 2 Goldilocks book that Valerie Sutton published.


3) You made the following statement in your comments:


"The signs of SignWriting are overly complex in all of their detail, which makes them too cumbersome to be written by hand and renders them impractical for everyday usage. Such an approach is unnecessary and is the result of taking techniques that are meant for spoken languages and using them on a signed language."


On the contrary ... SignWriting is not using approaches meant for spoken languages; that was exactly the failing of the previous systems I mentioned. HamNoSys, Stokoe, and SignFont all focused on making written sign language a linear stream. That works for spoken languages because the production of speech is a linear stream. Sign languages, however, are three-dimensional, so you have to look at them differently. Sign languages are visual and use 3D space as part of their grammar. They also use facial expressions and movement modifications as part of their grammar. So there are a number of factors that must be considered in accurately depicting sign language on the written page.


Valerie has done an excellent job by focusing on movement. By describing the movements rather than the linguistic meaning behind the signs, she has made it easy to capture the necessary elements of each sign. Movement becomes even more critical when you consider deaf poetry or storytelling. In ASL storytelling or poetry, there are sections where the signer will not necessarily sign what they say. They will use mime and gesture intermingled with signing to express their point. While I have not been able to find a linguistic analysis of these larger segments of signing style, they too are bound by language rules, which I still want to study. Stokoe, HamNoSys, and SignFont fail miserably at capturing these elements of ASL storytelling and poetry. These elements also appear in dialogue when a person is sharing a story.


My concern is that a logographic system will also fail to capture these segments of mime and gesture, which are not focused on words, per se, but on painting a specific picture in the mind of the listener using the tools available to the signer. These elements must be considered in order for a writing system for sign languages to be successful.


4) SignWriting does have different means of making the writing simpler. For example, there are handwriting conventions that make writing the print version easier. Valerie has also developed a shorthand version of the SignWriting system which is much easier to write. However, because she is focusing on completing all the details of an IMWA (International Movement Writing Alphabet), she has not been able to put the time into writing up all the details of this shorthand system. According to those who used it many years ago, it shows much promise as a handwriting version that can be understood as well as the print version. Lastly, you only have to write the details that are necessary when your audience already knows the sign language you are using. So you can reduce the number of symbols used in a particular sign if the reduced form will still be understood. If, however, you are writing for an audience who may not be as familiar with your sign language, then you have the option of adding the extra details necessary for your audience to reproduce the signs. In a way, this is similar to how Hebrew omits vowel pointing for experienced readers but includes it for inexperienced readers.


I have been researching writing systems for sign languages since 1988. So far in my research, Valerie Sutton's system is probably the most practical and reasonable for everyday use. It is also well equipped to handle any kind of movement a signer performs in their discourse. I myself have tried to come up with a different system, but I keep coming back to SignWriting as one of the best systems yet developed for writing sign languages. Unlike HamNoSys, SignFont, or Stokoe, SignWriting is actively used today as an everyday writing system by people in 27 different countries. It has been critiqued and tested by deaf people in Valerie's own community in California, and now our deaf nonprofit has also tested this writing system with a number of deaf people, who have expressed excitement at how easy it was to match the movements to their own signing.


I welcome a discussion on these matters, and I am interested to see how a logographic system would handle:


- mime

- gestures

- directional verbs (i.e. verbs whose subjects, objects, and/or locational referents are built into the sign). Examples include: give (I-give-to-you, I-give-to-each-of-you, I-give-to-all-of-you, I-give-to-some-of-you, you-give-to-me, etc.), have surgery (on my shoulder, on my knee, on my hand, etc.), etc.

- movement modifications (i.e. I work, I work slowly, I work fast, I work repeatedly, I work carelessly, I work over a long period of time). Each of these can be signed with just the signs "I" and "WORK," but the modifications of face and movement alter the meaning of the signs.

- "suprasegmental phonemes" such as the facial expressions that indicate Yes/No questions or WH questions, and the facial expressions that mark the topic section of a topic-comment sentence.


This is a starting place for a discussion on the pros and cons of a phonemic system like SignWriting versus a logographic system such as the one you propose. It is important to consider how these elements of grammar can be included.


Let me close by saying that I am not here to bash your proposed system, but merely to challenge you with the issues that need to be resolved for a writing system for sign languages to be successful. I know for a fact that I and other deaf people want a writing system that can accurately express what we sign, so that when I read it on paper, I am seeing what I sign and all the grammatical information that my face, body, and hands convey. If that can be done with a logographic system, more power to it, but it will need to cover those elements so that a deaf reader can accurately pull that information from the writing. Personally, it would take a great deal to convince me that a logographic system is better than a phonemic system like SignWriting, because of the intrinsic flexibility built into a phonemic system for expressing new vocabulary and movements. But I am willing to participate in a discussion to see what the merits of your system would be.


For the record, I believe that there is value in a writing system for sign languages. Given the present hearing cultural bias that assumes a language is only a language once it has been written, ASL and the other 114 sign languages around the world need a written voice as well, so that they can show that they too are functionally equivalent to spoken languages. Further, writing has great value in preserving language and culture. Stories, poetry, and other literary arts can be more easily preserved in writing than on video. Both are valuable tools, but only writing can be done anywhere, at any time, and edited easily without great expense. Writing also makes it possible to translate static materials in a spoken language into static materials in a sign language at less cost, and makes it easier to update the sign language materials when the spoken language materials change.


It has also been shown (outside the United States) that for people to become truly literate in a second language, they must first be taught literacy in their first language; then the second language can be learned well. I believe a writing system will be a valuable tool for deaf people to become more proficient in the national spoken language while at the same time valuing their own first language.


Well, I guess this post is long enough. :) I hope that this will be part of a fun and invigorating discussion.

