Technology companies are teaming up with the University of Illinois on the Speech Accessibility Project, an effort to improve how well AI understands the speech of people with speech disabilities.
Karen Panetta, a professor of electrical and computer engineering at Tufts University and an IEEE Fellow who is not involved in the project, said in an email, “Being able to devise new interventions and screening tools will help us be more proactive in early detection of conditions in children and help us customize more specific therapies for a patient’s condition.”
Speech recognition is used daily in many software applications and voice assistants. But experts say these systems do not recognize all speech, particularly speech affected by disorders. According to the project’s blog, the effort aims to make these technologies more accommodating by training them on a dataset of speech samples.
The project will collect speech samples from people with conditions that affect speech, such as ALS, Parkinson’s disease, Down syndrome, and cerebral palsy, giving the machine-learning models a much wider range of speech patterns to analyze.
Matthew Luken, the vice president of Deque Systems, a company that supports digital accessibility initiatives, told Lifewire in an email interview, “For example, if the original dataset only included a single iteration of ‘wash,’ then someone from the Pacific Northwest who might pronounce the word as ‘war-sh’ would not be able to interact with the program properly.
“Both pronunciations need to be in the dataset for comparison.”
He added, “Voice technology took this approach—originally designed with a narrow set of examples, which did not include people with disabilities.”
“Companies are now going back and adding more inclusive examples to their data sets. This means that in future versions of the software, people with a stutter will have a higher chance of communicating with the software.”
A Pressing Need
Katy Schmid, the director of Education and Technology at The Arc, an organization serving people with intellectual and developmental disabilities, said in an email that people with physical, cognitive, and sensory disabilities depend on AI technology to understand the world around them.
She said, “There are still access barriers that exist when it comes to using technology that can leverage a person with disabilities [or] speech differences to receive the proper accommodations they need for meaningful employment.”
She added, “Students with disabilities could much more easily complete their schoolwork and advance their education and potential post-secondary and college opportunities if AI [and] speech recognition is created so that it can recognize all speech, not just speech that is considered to be clear or ‘normal.’”
Panetta says AI is already being used to learn speech patterns and recognize facial expressions in order to build new therapeutic processes for autistic children. She said, “Then, using this information, the AI can help therapists identify specific positions and patterns and guide the patient with interventions and therapy.”
For Panetta, the research is personal as well as professional. She said, “it’s important to me that my child be able to fully participate in society and flourish in all the activities he enjoys, including education and his passion for music.”