Image credit: MacStories
Apple today unveiled a number of new technologies designed to improve accessibility for voice, vision, and cognitive needs. The tools are expected to arrive on iPhone, iPad, and Mac later this year. Apple, a recognized pioneer in mainstream accessibility, emphasizes that these features were developed with input from the disability community.
The upcoming Assistive Access feature for iOS and iPadOS is designed for people with cognitive impairments. Assistive Access streamlines the iPhone and iPad user experience, with a focus on making it simpler to communicate with loved ones, share photos, and listen to music. For instance, the Phone and FaceTime apps are combined into a single app.
The design uses large icons, higher contrast, and clearer text labels to make the screen easier to understand. Users can also personalize these visual elements, and their choices carry over to every app that supports Assistive Access.
Users who are blind or have low vision can already use the Magnifier app to find nearby doors, people, or signs. Apple is now adding a feature called Point and Speak, which uses the device's camera and LiDAR scanner to help visually impaired users interact with real-world objects that carry multiple text labels.
For example, a low-vision user could use Point and Speak to distinguish between the "popcorn," "pizza," and "power level" buttons when heating food in a microwave: as the device recognizes each label, it reads the text aloud. Point and Speak will support English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese, and Ukrainian.
One of the most intriguing features in the group is Personal Voice, which provides a synthesized voice that sounds like you rather than Siri. The feature is intended for people at risk of losing the ability to speak due to conditions such as ALS. To create a Personal Voice, the user reads a set of randomized text prompts aloud into the microphone for roughly 15 minutes. The audio is then processed locally on the iPhone, iPad, or Mac using machine learning to produce the Personal Voice. It sounds much like Acapela's "my own voice" service, which integrates with other assistive technology.
It's easy to see how a collection of distinctive, finely tuned text-to-speech models could be harmful in the wrong hands. Apple says this personalized voice data is never shared with anyone, not even Apple. In fact, because some households may share a login, Apple says it doesn't even link a voice to an Apple ID. Instead, users who want a Personal Voice created on their Mac to be accessible on their iPhone, or vice versa, must explicitly opt in.
At launch, Personal Voice can only be created in English, and only on devices powered by Apple silicon.
These accessibility capabilities are expected to arrive across a range of Apple products this year. As for existing services, Apple announced Thursday that it is extending SignTime to Germany, Italy, Spain, and South Korea. SignTime gives Apple Store and Apple Support customers access to on-demand sign language interpreters.