Honestly, I'd never heard of CS until this forum. Using ASL while growing up along with speech therapy was all I needed. I did perfectly fine.
I am glad that you did perfectly fine Alley Cat.
I am only looking for more detail, in an attempt to answer the questions the best that I can.
A phoneme is the smallest unit of sound in a language. A morpheme is the smallest unit of sound that carries meaning.
That's just a simplified definition and it does no justice for those who use ASL. All signed languages have phonemes and they don't convey sounds!
Wikipedia defines phoneme CORRECTLY:
In human language, a phoneme is the smallest posited structural unit that distinguishes meaning; phonemes carry no semantic content themselves. In theoretical terms, phonemes are not the physical segments themselves, but cognitive abstractions or categorizations of them.
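To make that definition concrete: linguists usually demonstrate that two segments are distinct phonemes with a "minimal pair" - two words that differ in exactly one segment yet mean different things. A rough sketch of that test (the segment spellings here are informal, not real IPA transcription):

```python
def is_minimal_pair(a, b):
    """Return True if two phoneme sequences differ in exactly one position.

    A minimal pair is evidence that the differing segments are
    separate phonemes, since swapping them changes the word's meaning.
    """
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

# "pat" vs "bat": swapping /p/ for /b/ changes the meaning,
# so /p/ and /b/ are distinct phonemes in English.
print(is_minimal_pair(["p", "ae", "t"], ["b", "ae", "t"]))  # True

# Identical sequences show no contrast.
print(is_minimal_pair(["p", "ae", "t"], ["p", "ae", "t"]))  # False
```

Note that nothing in this test depends on sound as such - the "segments" could just as well be handshapes, locations, and movements, which is why signed languages have phonemes too.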
Hmm, not sure, but isn't that still the same thing? Forgive me if I am reading it incorrectly, but this is what I am seeing, loml:
Translation:
Is this what you meant?
Just curious.
What exactly is the purpose of cueing over ASL? If you had to debate an ASL lecturer as a Cued Speech representative, what would you offer against your opponent in front of a room of 100 students ready to take either course?
But it seems to me like CS tries to communicate the sounds of one language, but visually, not as the sounds they actually are. So it isn't really communicating language in the same way as other languages, since it's trying to convey one of those methods (sound) by forcing it into another one (visual).
lsfoster - Thank you for responding.
Would you please clarify some of the contents of your post?
What I mean is that, to me, cued speech doesn't seem to work like most other languages do. Instead of trying to present "visual" information visually, it tries to present "aural" information visually. It just kind of feels like trying to shove a square peg in a round hole.
Cued Speech provides complete visual access to phonemic information. Auditory-Verbal therapy supports the development of auditory skills. Therefore, Cued Speech should be considered a part of an auditory-verbal approach. Cued information supports the development of auditory perception, discrimination, and comprehension, and it clarifies potentially ambiguous information.
With consistent and appropriate use of hearing aids, cochlear implants, and/or other technologies, many individuals have more access to auditory information than before. However, the degree to which an individual who is deaf or hard of hearing can comprehend auditory information is unpredictable and inconsistent.
Early, accurate, and consistent cueing with individuals who are deaf or hard of hearing enables them to develop language, which is processed in the auditory cortex of the brain. Recent functional magnetic resonance imaging (fMRI) research has shown that deaf cuers process cued language in the auditory cortex. This is also consistent with research showing that the visual and auditory cortices are interconnected in individuals with normal hearing.
Clear and accurate cues provide complete visual access to phonemic and environmental information. Thus cueing reinforces the auditory input the child receives. Such reinforcement supports the continuing development of auditory perception, discrimination, and comprehension.
Cued Speech:
- should be used as soon as possible after diagnosis to begin the process of establishing phonemic awareness and discrimination of language, regardless of the use of assistive listening technologies.
- clarifies visually the information the child accesses through audition.
- is especially necessary when hearing aids or implants are removed or compromised (e.g., bedtime, bath time, in the pool, in noisy environments, etc.).
- assures full communication when technology is not sufficient to provide access to every sound or phoneme (e.g., during classroom discussion when speakers overlap).
When information is missing or unclear, that impacts the language learning process, both receptively and expressively. In order to maximize language development and emergent literacy skills, individuals must have 100 percent access to the language all around them.
I feel like there are more effective methods out there. If you want to communicate your "English" sentence, why not use SEE, which is still based on visual elements? Or if you just want to communicate visually, ASL, which is a complete language on its own. I don't see the point of trying to make deaf people "see" the sounds you're making, which they still can't hear. It just feels like another attempt to shove deaf people into the "spoken English" world.
I mean, I don't think anyone would try to incorporate visual parts of language somehow aurally into English for blind people. (Correct me if I'm wrong, though.) I just can't imagine anyone saying, "I know, I'll make a new sound and intonation and everything that means I'm raising one eyebrow, and another for shrugging, and another for both together, etc..., and I'll weave them into my sentence when I'm talking to a blind person." Let alone something like, "Instead of speaking English to them, I'll invent a new system of sounds which represent different mouth and facial shapes involved in speaking English, and I'll make those noises, so that they can understand what my face and mouth would be doing if I were speaking normally. That's how we'll communicate." It seems pretty obvious that the best way to communicate with a blind person is not by trying to make them "see" you talk, so I don't know why you would try to make a deaf person "hear" you talk.
I hope any of that made sense...
CS manual cues, supplementing the Tadoma method, may result in improved speech reception for the deaf-blind.
Reed, Rabinowitz, Durlach, et al. (1992). "Analytic Study of the Tadoma Method: Improving Performance Through the Use of Supplementary Tactual Displays." Journal of Speech and Hearing Research, 35, 450-465.
Wow, I can't believe that cuem is still being practised. My dad practically communicated with me using it, and I hated it with a passion (dunno why) when I was younger; all I wanted was for my dad to learn sign language, not cuem. Even nowadays I will deliberately ignore my dad when he uses it, and I berate him for it by responding in sign language just to confuse him. I can lipread perfectly well, but cuem uses more brain cells and befuddles me! Luckily I can now speak and lipread well, with occasional use of sign language.
I thought cuem was dying out because it wasn't pushed enough to become another language. I remember that back in the late 70s/early 80s cuem was all the rage, and I was forbidden to use sign language and had to learn cuem *GGGRRR*
I am so glad that ASL and BSL and other visual sign languages have retained their integrity as languages.
Not that I'm dissing cuem; it was just the thing that really got my back up. I unreservedly apologise to practising users of cuem.