Help with University Assignment

RobynJoan · New Member · Joined Nov 15, 2015 · Messages: 1 · Reaction score: 0
Hey guys,

So, I'm not deaf, and I want to apologise if anybody feels like I am intruding here; I will understand if this post is removed.

However, a few other students and I are working on a product design for translating sign language into spoken word, and I was hoping to get some feedback on the idea from any of you.

The basic idea is this:

A motion-detection and muscle-movement recognition device that wraps in two bands around the wrist and forearm. It detects movements of the fingers, hands, and arms to help translate your chosen sign language into your chosen spoken language. A discreet speaker lets users either detach it from the device and clip it to clothing, e.g. on the collar or lapel, or keep it attached to the device. The speaker transmits the chosen spoken language and has a simple volume control numbered 1-10 for easy use. The device will also come with a free app that translates spoken word into text.
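At minimum, a device like the one described would have to turn raw band readings into a discrete sign label before any translation could happen. A minimal sketch of that recognition stage in Python — the sensor channels, readings, and sign vocabulary here are all invented for illustration, not part of the actual design:

```python
import math

# Hypothetical feature vector from the two bands: a few muscle-sensor
# channels plus wrist orientation. Real hardware would produce far more.
SIGN_TEMPLATES = {
    # sign label -> illustrative "average" sensor reading for that sign
    "HELLO":     [0.9, 0.1, 0.2, 0.7],
    "THANK-YOU": [0.2, 0.8, 0.6, 0.1],
    "YES":       [0.5, 0.5, 0.9, 0.3],
}

def classify(reading):
    """Return the sign label whose template is nearest to the reading
    (nearest-centroid matching, the simplest possible recognizer)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGN_TEMPLATES, key=lambda s: dist(SIGN_TEMPLATES[s], reading))

print(classify([0.85, 0.15, 0.25, 0.65]))  # a reading close to HELLO
```

Even this toy version hints at the hard part: it only maps one static reading to one label, with no handling of movement over time, co-articulation between signs, or grammar.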

The link to the survey is below. Any help you would be able to give me would be much appreciated, thanks guys!!

Survey : http://goo.gl/forms/K5l0Rq10iY
 
I'm sorry, but I don't believe in the idea at all. How much do you know about language processing? Have you ever seen a product that can reliably transcribe speech correctly? As a hard-of-hearing person, I can assure you that no such thing exists. What makes you believe you could do both correct transcription of movements and translation into a completely different language?

Ok, so let's say you capture a pointing movement in a video. How would you know how to translate that? Depending on the direction of the pointing, it could mean he, she, it, that, this, here, or anything you are referring to, such as a person (Peter, Mary, Claire, or John) or a thing or a place. In Swedish sign language, pointing upwards could mean God and pointing downwards could mean basement, but in a slightly different context maybe something else. When signing with people it becomes apparent how the pointing should be interpreted, but I just don't believe you will be able to translate a pointing correctly even if you stay on the project for years. What if the signer is describing what a surface looks like? How would you automatically describe that signed description?
 
Not to mention having to wear a gadget all the time on the chance that it will be needed.

The sensors would have to include both hands, arms, and all fingers. It would have to see the face for facial grammar and expressions. It would have to detect body shifts for indicating role shifts.

As Swedeafa posted, the referents would have to be identified to make sense.
 
Wouldn't movement be different for each person? Their own personal "accent", if you want to call it that?
 
I'm guessing that each unit would have to be "trained" to learn each user's signing style, similar to how voice recognition software is trained to understand an individual's speech.
 

Didn't consider the training part. hmmm..
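The "training" idea in the exchange above can be sketched: each unit stores labeled examples recorded from its own user and classifies new readings against them, so the same reading can mean different things on two people's units. A hedged 1-nearest-neighbour sketch (the sensor values and signs are invented; a real system would need far richer data):

```python
import math

class PersonalSignRecognizer:
    """Learns one user's signing style from labeled examples
    (1-nearest-neighbour, the simplest trainable recognizer)."""
    def __init__(self):
        self.examples = []  # list of (feature_vector, sign_label) pairs

    def train(self, reading, label):
        self.examples.append((reading, label))

    def classify(self, reading):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        # Return the label of the stored example nearest to this reading.
        return min(self.examples, key=lambda ex: dist(ex[0], reading))[1]

# Two users sign with different personal styles ("accents"):
alice = PersonalSignRecognizer()
alice.train([0.9, 0.1], "YES")
alice.train([0.1, 0.9], "NO")

bob = PersonalSignRecognizer()
bob.train([0.1, 0.9], "YES")   # Bob's "YES" looks like Alice's "NO"
bob.train([0.9, 0.1], "NO")

reading = [0.8, 0.2]
print(alice.classify(reading), bob.classify(reading))  # YES NO
```

This is the same enrollment-then-recognize pattern speech dictation software uses: the model is useless until the individual user has supplied labeled samples of their own style.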
 
I've seen so many people sign so differently — speed, lack of signs between words (like fill in the blanks) — that I doubt it would work except for very basic stuff, if even that. Between the more advanced signers or the less advanced, as well as the individual variation... it would probably be totally worthless.

Reminds me of the movie Congo, with Koko.
 
Even the different ways people fingerspell. I have seen some lazy hands, or forced letters. It would be like text-to-speech with different handwriting. Some things would be interpreted wrong.

 