iOS 17 Introduces New Text-to-Speech Functions for Users with Aphasia
For individuals who have lost the ability to speak, or who are at risk of losing it, the iOS 17 update introduces two new text-to-speech features that let the phone's synthesized voice speak on their behalf. These features, known as "Input to Read" and "Personal Voice," aim to improve communication for people affected by speech impairments.
The "Input to Read" function gives individuals with aphasia quick access to a keyboard and has the phone read their typed text aloud in a designated voice. Users can also pre-write frequently used words and sentences for convenient one-tap playback. With the "Input to Read" switch enabled, triple-clicking the side button brings up the reading keyboard. The function also lets users choose synthetic voices in a range of languages and save preferred phrases and sentences.
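The feature itself is built into iOS and needs no app code, but the same system speech engine is exposed to developers through AVFoundation's `AVSpeechSynthesizer`. A minimal sketch of reading typed text aloud, assuming a Cantonese voice has already been downloaded in Settings:

```swift
import AVFoundation

// Keep a reference to the synthesizer so speech is not cut off
// when the object would otherwise be deallocated.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    // "zh-HK" requests a Cantonese voice; falls back to nil if none is installed.
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-HK")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate  // speed, like the Settings slider
    utterance.pitchMultiplier = 1.0                      // pitch, also user-adjustable
    synthesizer.speak(utterance)
}

speak("你好")
```

The rate and pitch properties here correspond to the per-voice adjustments iOS exposes in the accessibility settings.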
When it comes to voice options, there are three choices for Cantonese, with Siri's voice generally considered the best of the three. Beyond Cantonese, users can select Mainland Mandarin, Taiwanese Mandarin, and regional accents from Sichuan, Shaanxi, and Liaoning. In English, users can choose accents from different regions or even opt for novelty voices with ghost or alien characteristics. A Chinese voice takes up roughly 500 MB of storage, while even a high-quality English voice requires only about 300 MB. Users can audition each voice before downloading it, and can fine-tune pitch and reading speed by tapping the voice's information icon. If a downloaded voice proves unsatisfactory, swiping left deletes it.
Another notable feature introduced in iOS 17 is "Personal Voice," aimed at individuals recently diagnosed with conditions that put them at risk of losing their speech. Using on-device machine learning, the feature learns the user's voice and lets them speak with it when answering phone calls, using FaceTime, or communicating through other software. However, "Personal Voice" is currently available only on iPhone 14 or above and supports English exclusively.
To create a personal voice, users follow on-screen instructions and read roughly 15 minutes of randomized prompts so the device can learn to mimic their voice. The recording does not have to be done in one sitting; users can read a few sentences whenever they have time. A quiet environment is recommended, however, to ensure accurate training.
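Once trained, a Personal Voice can also be used by third-party apps, subject to the user's permission. A hedged sketch, assuming the iOS 17 authorization and voice-trait APIs (`requestPersonalVoiceAuthorization`, `isPersonalVoice`), of how an app might locate and speak with the user's own voice:

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Ask the user for access to their Personal Voice (iOS 17+).
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal voices are listed alongside system voices,
    // distinguished by a voice trait.
    let personalVoices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.voiceTraits.contains(.isPersonalVoice) }

    if let voice = personalVoices.first {
        let utterance = AVSpeechUtterance(string: "Hello, this is my own voice.")
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}
```

This is the mechanism that lets communication apps beyond Phone and FaceTime adopt the user's trained voice.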
The iOS 17 update brings new possibilities and enhanced communication capabilities for individuals with aphasia or speech impairments. By incorporating text-to-speech functions, Apple aims to empower these users to express themselves and communicate more effectively through their mobile devices.