
This AI Tool Uses 80 Hours of Video to Translate Sign Language Sentences

(Image credit: PureFluent)

An artificial intelligence (AI) tool has been created by a research team from the Barcelona Supercomputing Center (BSC-CNS) and the Universitat Politècnica de Catalunya (UPC) with the goal of bridging communication barriers for those who are deaf or hard of hearing.

The technology focuses on automatic sign language translation, offering a promising solution to a barrier that sign language users frequently encounter.

AI Sign Language

Although voice assistants such as Alexa and Siri have made tremendous advances, none of their applications support sign language.

This restriction makes it challenging for those who use sign language as their main form of communication to connect with technology and use digital services designed only for spoken languages.

The researchers' open-source software paves the way toward more accessible communication. Automatic sign language translation is improving thanks to the integration of computer vision, natural language processing, and machine learning techniques.

Beyond text: AI model digests 80 hours of video to learn sign language (Image credit: News Atlas)

The system, which is still at an experimental stage, uses a Transformer machine learning model to translate sign language utterances recorded on video into text, facilitating communication for people who use sign language.
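Transformer models like the one described here are built around attention, which lets the model weigh every video frame against every other when producing text. The following is a minimal, illustrative sketch of scaled dot-product attention over toy frame features; it is not the BSC/UPC code, and the dimensions and feature values are invented for the example:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # frame-to-frame similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# Toy example: 4 video frames, each encoded as an 8-dimensional feature vector
rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 8))
context, attn = scaled_dot_product_attention(frames, frames, frames)
```

In a real translation system, the frame features would come from a video encoder, and a decoder would attend over them to emit text tokens one at a time.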

Although the work initially focuses on American Sign Language (ASL), the system could potentially be applied to other sign languages, given suitable data for training and translation.


80 Hours of Videos 

According to Laia Tarrés, a researcher at BSC and UPC, the team built on its prior work, How2Sign, which published the data needed to train the algorithms.

The dataset consists of more than 80 hours of video in which ASL interpreters translate video instructions for DIY projects and cooking recipes.

Using this available data, the team created new open-source software that learns the mapping between video and text.

SLAIT – Real-time Sign Language Translator with AI (Image credit: slait.ai)

Although there is still much room for improvement, the researchers see this effort as an important first step toward developing real applications that can help people.

The ultimate objective is to refine the tool further, paving the way for accessible technologies that meet the needs of people who are deaf or hard of hearing.

The tool has already been featured as part of the “Code and Algorithms” exhibit at the Fundación Telefónica space in Madrid.

In the exhibition “Sense in a Calculated World,” BSC is prominently featured in a number of artificial intelligence initiatives.

It will also be shown as part of a major upcoming exhibition on artificial intelligence opening in October at the Centre de Cultura Contemporània de Barcelona (CCCB).

“This open tool for automatic sign language translation is a valuable contribution to the scientific community focused on accessibility, and its publication represents a significant step towards the creation of more inclusive and accessible technology for all,” said Tarrés in a statement.


By Prelo Con

Following my passion by reviewing the latest tech. Just love it.
