New Text-to-Speech Functions in iOS 17 Revolutionize Communication for Speech Impaired Users

by admin
iOS 17 Introduces New Text-to-Speech Functions for Users with Aphasia

For individuals who have lost their ability to speak or are at risk of aphasia, the latest iOS 17 update brings two new text-to-speech functions that let text be spoken aloud by a synthesized voice on the phone. These features, known as “Input to Read” and “Personal Voice,” aim to improve communication for people affected by speech impairments.

The “Input to Read” function gives people with aphasia instant access to a keyboard and has the phone read their typed text aloud in a designated voice. Users can also pre-write commonly used words and sentences for convenient one-tap playback. Once the “Input to Read” switch is enabled, triple-pressing the side button on the phone brings up the read-aloud keyboard. The function also lets users select synthetic voices in various languages and set up preferred phrases and sentences.
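For readers curious about what sits underneath such a feature, third-party apps can drive the same system speech engine through Apple’s AVSpeechSynthesizer API in AVFoundation. The Swift sketch below is purely illustrative: the PhraseSpeaker helper and the sample phrases are invented for this example and are not part of iOS, but the calls show how typed text and saved phrases can be spoken with a chosen system voice.

```swift
import AVFoundation

// Minimal illustrative sketch: speak typed text or a saved phrase with a system voice.
// PhraseSpeaker is a hypothetical helper, not Apple's implementation of "Input to Read".
final class PhraseSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    // Commonly used phrases the user has saved for one-tap playback.
    var savedPhrases = ["I'm on my way.", "Could you write that down, please?"]

    func speak(_ text: String, languageCode: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        synthesizer.speak(utterance)
    }
}

// Usage: read typed text aloud, or replay a saved phrase with one call.
let speaker = PhraseSpeaker()
speaker.speak("Hello, I will be typing my replies today.")
speaker.speak(speaker.savedPhrases[0])
```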

When it comes to voice options, there are three choices for Cantonese, with Siri’s voice generally considered the best. Beyond Cantonese, users can select voices in Mainland Mandarin and Taiwanese Mandarin, as well as accents from Sichuan, Shaanxi, and Liaoning. In English, users can choose accents from different regions or even opt for novelty voices with ghost or alien characteristics. Chinese voices take up approximately 500MB of storage, while English voices require only about 300MB even at high quality. Users can audition each voice before downloading it, and pitch and reading speed can be adjusted by tapping the voice’s information icon. If users are unsatisfied with a voice after downloading it, they can simply swipe left to delete it.
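Developers can also list the installed voices and adjust pitch and speaking rate in code, mirroring the adjustments described above. The sketch below assumes the standard AVFoundation speech APIs; the “zh-HK” language code for Cantonese and the sample values are illustrative assumptions.

```swift
import AVFoundation

// Illustrative sketch: enumerate installed Cantonese voices, then speak a line
// with adjusted pitch and speaking rate. Values shown are arbitrary examples.
let cantoneseVoices = AVSpeechSynthesisVoice.speechVoices()
    .filter { $0.language == "zh-HK" }

for voice in cantoneseVoices {
    // Name, identifier, and quality (default vs. enhanced) of each installed voice.
    print(voice.name, voice.identifier, voice.quality.rawValue)
}

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "你好")
utterance.voice = cantoneseVoices.first
utterance.pitchMultiplier = 1.1                            // allowed range is 0.5–2.0
utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9  // slightly slower than default
synthesizer.speak(utterance)
```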

Another notable feature introduced in iOS 17 is “Personal Voice,” designed for individuals who have recently been diagnosed as being at risk of losing their speech. Using the device’s on-device machine learning, this function learns the user’s voice and lets them speak in their own voice when answering phone calls, using FaceTime, or communicating through other apps. However, the “Personal Voice” function is currently only available on iPhone 14 or later and supports English exclusively.
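For developers, iOS 17 also exposes Personal Voice to third-party apps through AVFoundation: an app can request authorization and, if the user agrees, synthesize speech with the voice the user trained. The sketch below is a hedged illustration of that flow and assumes the user has already created a Personal Voice on a supported device and granted access.

```swift
import AVFoundation

// Keep the synthesizer alive; speech stops if it is deallocated mid-utterance.
let synthesizer = AVSpeechSynthesizer()

// iOS 17+: ask the user for permission to use their Personal Voice in this app.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Look for installed voices flagged as personal voices.
    let personalVoices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.voiceTraits.contains(.isPersonalVoice) }
    guard let myVoice = personalVoices.first else { return }

    let utterance = AVSpeechUtterance(string: "This is my own voice, synthesized on device.")
    utterance.voice = myVoice
    synthesizer.speak(utterance)
}
```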

To create a personal voice, users follow on-screen instructions and read about 15 minutes of randomly selected text so the device can learn and mimic their voice. The reading does not have to be done in one sitting; users can record a few sentences whenever they have time. It is, however, recommended to read in a quiet environment to ensure accurate results.

The iOS 17 update brings new possibilities and enhanced communication capabilities for individuals with aphasia or speech impairments. By incorporating text-to-speech functions, Apple aims to empower these users to express themselves and communicate more effectively through their mobile devices.
