The Art of Cueing

Honestly, I'd never heard of CS until this forum. Using ASL while growing up along with speech therapy was all I needed. I did perfectly fine.

I am glad that you did perfectly fine, Alley Cat.
 
I am only looking for more detail, in an attempt to answer the questions the best that I can.

loml,

I applaud your attempts to explain cued speech in spite of the nasty and ignorant comments you are getting from the same person who consistently attacks your posts about cued speech.

Keep up the good work and thanks for sharing.

Rick
 
I am a cuer myself, and I don't think most people really understand the concept of cueing at all; even native cuers themselves aren't sure, and that's okay. English speakers don't even KNOW why they "follow" the rules of English grammar, and they tend to equate "phoneme" with "sound." The problem is that a phoneme is not really a "sound" but rather an abstract unit that is linguistically relevant to the language.

For example, Americans and Britons speak very differently... you know they're speaking the same language, but you notice something different in their speech - the accents don't match at all, yet we're able to comprehend them well. Why? Simple: we quickly recognize their speech patterns, which confirms the abstract idea that they are sharing the same language we use.

Cued Speech is not a language; no one ever said it was. But, even more confusingly, cued English IS a language. By the same logic, signing is not a language (which is accurate: signing is NOT a language), but ASL is a language. Speech is not a language either. Spoken English is.

Cued speech, or cuem (I prefer cuem), is a modality. Signing is a modality for ASL, LSF, et al. Speech is a modality for English, French, et al. Since cuem works within the constraints of spoken languages, it only stays within those languages that were traditionally spoken.

Since cuem can be adapted to any language that was traditionally spoken, a language that uses cuem is also a full language. In fact, in France it's considered "complete spoken language" (again, a misnomer on their part, because cueing doesn't need speech).

Does this get through your head:

signing is not a language
cuem is not a language
speech is not a language
writing is not a language

They are modalities.

BUT...

ASL is a language expressed through signing.
English is a language expressed through speech, cuem, or writing.

No one ever claimed cued speech itself is a language, but if someone says, "cued English is a language," that's correct, because English is a language; it's just expressed through cuem. If I tell you that I use "spoken English," does that mean I don't use a language?
 
A phoneme is the smallest unit of sound contained in a language. A morpheme is the smallest unit of sound that carries meaning.
 
That's just a simplified definition and it does no justice for those who use ASL. All signed languages have phonemes and they don't convey sounds!

Wikipedia defines phoneme CORRECTLY:

In human language, a phoneme is the smallest posited structural unit that distinguishes meaning, though they carry no semantic content themselves. In theoretical terms, phonemes are not the physical segments themselves, but cognitive abstractions or categorizations of them.
 
Yes, it is a simplified explanation, and since this is a thread on cued speech, and the wonders of cued speech in conveying phonemic information of spoken language, I related the explanation to that.

And, I don't use Wikipedia for my definitions. If I want to quote a definition for phoneme and morpheme using technical jargon, I will consult one of my linguistic texts. If I want to explain it simply to someone I will rely on the knowledge I have gained from those linguistic texts and instead of copying and pasting, demonstrate understanding by actually using my own words.

Can you explain to me, for instance, what a cognitive abstraction or categorization of a phoneme is?
 
Hmm, not sure, but isn't that still the same thing? Forgive me if I am reading incorrectly, but I am seeing this, loml:



Translation:


Is this what you meant?

Just curious.
What exactly is the purpose of cueing over ASL? If you had to debate an ASL lecturer as a cueing representative, what would you offer against your opponent if you were presenting to a room of 100 students ready to take either course?

The idea is that cued speech makes English visible to deaf people, since ASL is a separate language with its own syntax. The belief is that deaf people's reading and writing skills suffer from not having access to English in its spoken form; hence the invention of cued speech.
 
But it seems to me that CS tries to communicate the sounds of one language visually, not as the sounds they actually are. So it isn't really communicating language in the same way other languages do, since it's trying to convey one medium (sound) by forcing it into another one (vision).

lsfoster - Thank you for responding.

Would you please clarify for me some of the contents of your post.

What I mean is that, to me, cued speech doesn't seem to work like most other languages do. Instead of presenting "visual" information visually, it tries to present "aural" information visually. It just kind of feels like trying to shove a square peg into a round hole. I feel like there are more effective methods out there. If you want to communicate your English sentence, why not use SEE, which is still based on visual elements? Or if you just want to communicate visually, ASL, which is a complete language on its own? I don't see the point of trying to make deaf people "see" the sounds you're making, which they still can't hear. It just feels like another attempt to shove deaf people into the "spoken English" world.

I mean, I don't think anyone would try to incorporate the visual parts of language somehow aurally into English for blind people. (Correct me if I'm wrong, though.) I just can't imagine anyone saying, "I know, I'll make a new sound and intonation and everything that means I'm raising one eyebrow, and another for shrugging, and another for both together, etc., and I'll weave them into my sentence when I'm talking to a blind person." Let alone something like, "Instead of speaking English to them, I'll invent a new system of sounds which represent the different mouth and facial shapes involved in speaking English, and I'll make those noises so that they can understand what my face and mouth would be doing if I were speaking normally. That's how we'll communicate." It seems pretty obvious that the best way to communicate with a blind person is not by trying to make them "see" you talk, so I don't know why you would try to make a deaf person "hear" you talk.

I hope any of that made sense...
 
Thank you for finding time, in what I am sure is a busy life, to make a response. I will try my utmost to answer your questions.

What I mean is that, to me, cued speech doesn't seem to work like most other languages do. Instead of presenting "visual" information visually, it tries to present "aural" information visually. It just kind of feels like trying to shove a square peg into a round hole.

I certainly can see where you are coming from on this. I am not a neuroscientist, nor am I versed in the neuroplasticity of the brain. The exact pathways used are beyond my understanding. I do know from experience, and from discussion with other cuers, that when a person "reads the cues," the "inner voice" of the brain is used. Are you familiar with the "inner voice," and have you experienced it?

I invite you to read this:

Cued Speech provides complete visual access to phonemic information. Auditory-Verbal therapy supports the development of auditory skills. Therefore, Cued Speech should be considered a part of an auditory-verbal approach. Cued information supports the development of auditory perception, discrimination, and comprehension, and it clarifies potentially ambiguous information.

With consistent and appropriate use of hearing aids, cochlear implants, and/or other technologies, many individuals have more access to auditory information than before. However, the degree to which an individual who is deaf or hard of hearing can comprehend auditory information is unpredictable and inconsistent.

Early, accurate, and consistent cueing with individuals who are deaf or hard of hearing enables them to develop language, which is processed in the auditory cortex of the brain. Recent functional magnetic resonance imaging (fMRI) research has proven that deaf cuers process cued language in the auditory cortex. This is also consistent with research showing that the visual and auditory cortexes are interconnected in individuals with normal hearing.

Clear and accurate cues provide complete visual access to phonemic and environmental information. Thus cueing reinforces the auditory input the child receives. Such reinforcement supports the continuing development of auditory perception, discrimination, and comprehension.

Cued Speech:

- should be used as soon as possible after diagnosis to begin the process of establishing phonemic awareness and discrimination of language, regardless of the use of assistive listening technologies.
- clarifies visually the information the child accesses through audition.
- is especially necessary when hearing aids or implants are removed or compromised (e.g., bedtime, bath time, in the pool, in noisy environments, etc.).
- assures full communication when technology is not sufficient to provide access to every sound or phoneme (e.g., during classroom discussion when speakers overlap).

When information is missing or unclear, that impacts the language learning process, both receptively and expressively. In order to maximize language development and emergent literacy skills, individuals must have 100 percent access to the language all around them.

CUEDSPEECH.org > About NCSA > Position Statements > The Inclusion of Cued Speech in an Auditory-Verbal Environment


I feel like there are more effective methods out there. If you want to communicate your English sentence, why not use SEE, which is still based on visual elements? Or if you just want to communicate visually, ASL, which is a complete language on its own? I don't see the point of trying to make deaf people "see" the sounds you're making, which they still can't hear. It just feels like another attempt to shove deaf people into the "spoken English" world.

The cue that is received (visually) is processed in the auditory cortex, simultaneously with the visual cue of the lip shape. When speaking English, the mouth makes three distinct shapes: round, flat, or oval. I don't know why; it is simply what it is. The rest of the information in speech is obscure on the lips. This is why cueing, in and of itself, is not a speech tool. This entire process happens very, very quickly for the brain. (I think brains are very amazing!)

I mean, I don't think anyone would try to incorporate the visual parts of language somehow aurally into English for blind people. (Correct me if I'm wrong, though.) I just can't imagine anyone saying, "I know, I'll make a new sound and intonation and everything that means I'm raising one eyebrow, and another for shrugging, and another for both together, etc., and I'll weave them into my sentence when I'm talking to a blind person." Let alone something like, "Instead of speaking English to them, I'll invent a new system of sounds which represent the different mouth and facial shapes involved in speaking English, and I'll make those noises so that they can understand what my face and mouth would be doing if I were speaking normally. That's how we'll communicate." It seems pretty obvious that the best way to communicate with a blind person is not by trying to make them "see" you talk, so I don't know why you would try to make a deaf person "hear" you talk.

I hope any of that made sense...

Cueing is not a new system of sound; it conveys the sounds of spoken English, French, etc. (56 or more languages). Have you had the opportunity to read the biography of Dr. Cornett?

Here is a link: Cued Speech and R. Orin Cornett

There is some information available about deaf-blind (Tadoma) and CS:

CS manual cues, supplementing the Tadoma method, may result in improved speech reception for the deaf-blind.

Reed, Rabinowitz, Durlach, et al. (1992), "Analytic Study of the Tadoma Method: Improving Performance Through the Use of Supplementary Tactual Displays," Journal of Speech and Hearing Research, Vol. 35, 450-465, April 1992.

CUEDSPEECH.org > Cued Speech > Research > CS / Deaf Blind

Cueing is multi-sensory; one of the most informative ways to understand it is to learn cueing yourself. This is like me describing to my daughter what her first kiss will be like. :)
 
Cueing is not multi-sensory. It provides for the visual sense only.

The rest of the stuff posted comes directly from the NCSA, which stands to make a profit off of CS. Can you say "bias"? Not to mention that it can easily be refuted by actual research.
 
Wow, I can't believe that cuem is still being practised. My dad practically communicated with me using it, and I hated it with a passion (dunno why) when I was younger; all I wanted was for my dad to learn sign language, not cuem. Even nowadays I will deliberately ignore my dad when he uses it, and I berate him for it by responding in sign language just to confuse him. I can lipread perfectly well, but cuem uses more brain cells and befuddles me! Luckily I can now speak and lipread well, with occasional use of sign language.

I thought cuem was dying out cos it wasn't pushed enough to become another language. I remember back in the late '70s/early '80s that cuem was all the rage; I was forbidden to use sign language and had to learn cuem *GGGRRR*

I am so glad that ASL and BSL and other visual sign languages have retained their standing as languages.

Not that I'm dissing cuem, it just was that thing that really got my back up. I unreservedly apologise to practising users of cuem.
 
It has died out. But the NCSA and certain individuals who work for them refuse to let it stay in the grave. They keep digging it back up and trying to breathe new life into it. :giggle:
 