
Inclusive technology: sign language translated in real time

by admin

Priyanjali Gupta, an Indian computer engineering student, has created an artificial intelligence model that recognizes signs in video frames and translates them into spoken language, helping make the world of technology increasingly inclusive.

Priyanjali Gupta, a young Indian computer engineering student, created an artificial intelligence model in 2022 that translates American Sign Language (ASL) into English in real time. A third-year data science student at the Vellore Institute of Technology in the Indian state of Tamil Nadu, she was inspired by a video on real-time sign language detection by the Australian software engineer Nicholas Renotte, and built her inclusive-technology model using the object detection application programming interface (API) of the TensorFlow software library.
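To give a sense of how such a pipeline works, here is a minimal, hypothetical sketch of running an exported TensorFlow Object Detection model on a single webcam frame. The model path, label ids, and confidence threshold are illustrative assumptions, not details taken from Gupta's repository.

```python
# Sketch: classify the sign in one webcam frame with an exported
# TensorFlow Object Detection model. Paths and labels are hypothetical.
import cv2
import numpy as np
import tensorflow as tf

LABELS = {1: "Hello", 2: "I love you", 3: "Thank you",
          4: "Please", 5: "Yes", 6: "No"}  # example class ids

# Load an exported detection model (hypothetical path).
detect_fn = tf.saved_model.load("exported_model/saved_model")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # OpenCV captures BGR; detection models typically expect RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # The API expects a batched uint8 tensor of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    best = int(np.argmax(scores))
    if scores[best] > 0.5:  # illustrative confidence threshold
        print(f"Detected sign: {LABELS.get(classes[best], 'unknown')} "
              f"({scores[best]:.0%} confidence)")
```

Running this in a loop over successive frames, rather than on one capture, is what turns the same model into the "real time" translator described above.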

It all started a year earlier, when her mother spurred her on and made her reflect on what she could create with her skills and knowledge. One day, while chatting with Alexa, the idea of an inclusive technology to help bridge the communication gap between hearing and deaf people took shape. "The dataset was created manually by running the Python Image Collection file, which collects images from the webcam for the signs 'Hello, I love you, Thank you, Please, Yes and No'," she writes in her post on GitHub, the platform that hosts software projects. Her LinkedIn post went viral, with more than 65,000 reactions and 1,400 comments from people who liked the idea. "The model, for now, is trained on single frames, but to be able to detect video it needs to be trained on multiple frames, and I'm currently researching that," Priyanjali says.
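For flavor, the snippet below is a rough sketch of the kind of image-collection step that GitHub post describes: capturing a few webcam frames per sign and saving them into labelled folders for later annotation. The folder names, counts, and delays are illustrative, not taken from the actual project file.

```python
# Sketch: collect labelled webcam images for a handful of signs.
import os
import time
import uuid
import cv2

SIGNS = ["hello", "i_love_you", "thank_you", "please", "yes", "no"]
IMAGES_PER_SIGN = 15  # illustrative count

cap = cv2.VideoCapture(0)
for sign in SIGNS:
    folder = os.path.join("collected_images", sign)
    os.makedirs(folder, exist_ok=True)
    print(f"Collecting images for '{sign}' in 3 seconds...")
    time.sleep(3)  # time to get the sign ready in front of the camera
    for _ in range(IMAGES_PER_SIGN):
        ok, frame = cap.read()
        if not ok:
            break
        # Unique filename per capture, grouped by sign label.
        cv2.imwrite(os.path.join(folder, f"{uuid.uuid1()}.jpg"), frame)
        time.sleep(2)  # pause so hand poses vary between captures
cap.release()
```

Images gathered this way are typically annotated with bounding boxes and then fed to the object detection training step sketched earlier.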


However, creating a deep learning model from scratch for sign detection is not simple. "Making a deep neural network solely for sign detection is quite complex," she told Interesting Engineering. "I'm just an amateur student, but I'm learning. And I believe that, sooner or later, our open source community, which has much more experience than me, will find a solution."

ASL is the third most spoken language in the United States, after English and Spanish. In Italy, too, the community that uses Italian Sign Language (LIS) numbers around 40,000 people; if hearing signers are also counted, the figure reaches 100,000.

However, applications and technologies that translate sign language into spoken language have not yet caught on. Still, with the worldwide boom of the Zoom platform used to communicate during the pandemic, sign language has returned to the spotlight. One example is the work of Google AI researchers, who presented a real-time sign language detection model that can identify people who are signing with up to 91% accuracy.

"Researchers and developers are doing their best to find a solution that can be implemented. However, I believe the first step is to normalize sign language and other modes of communication with people with disabilities, and to work toward closing the communication gap," says Priyanjali.

Indeed, in an increasingly technological world where apps and devices aim to make daily life easier, it is important to research and use technology inclusively, gradually reducing the communication gap and facilitating access to services that are too often out of reach for people with disabilities.

