The tech industry’s accessibility-related products and launches this week


Every third Thursday of May, the world commemorates Global Accessibility Awareness Day, or GAAD. And as has become common in recent years, major tech companies are using this week to share their latest accessibility-minded products. From Apple and Google to Webex and Adobe, the industry’s biggest players have launched new features to make their products easier to use. Here’s a quick summary of this week’s GAAD news.

Apple’s launches and updates

First up: Apple. The company had a huge set of updates to share, which makes sense since it typically releases most of its accessibility-centric news around this time each year. For 2023, Apple introduced Assistive Access, a setting that, when enabled, changes the iPhone and iPad home screen to a layout with fewer distractions and icons. You can choose between a row-based and a grid-based layout; the latter results in a 2×3 arrangement of large icons. You can decide which apps appear, and most of Apple’s first-party apps can be used here.

The icons themselves are larger than usual, with high-contrast labels that make them more legible. When you tap an app, a back button appears at the bottom of the screen for easier navigation. Assistive Access also includes a new Calls app that combines Phone and FaceTime features into one customized experience. Messages, Camera, Photos and Music have also been adapted for the simpler interface, and they all feature high-contrast buttons, large text labels and tools that, according to Apple, “help trusted supporters customize the experience for the individual they support.” The goal is to offer a less distracting or confusing system to those who may find the typical iOS interface overwhelming.

Apple also launched Live Speech this week, which works on iPhone, iPad and Mac. It lets users type what they want to say and have the device read it out loud, and it works not only for in-person conversations but also for phone and FaceTime calls. You can also save shortcuts for phrases you use frequently, such as “Hi, can I have a tall vanilla latte?” or “Excuse me, where is the bathroom?” The company also introduced Personal Voice, which lets you create a digital voice that sounds like your own. This can be helpful for those at risk of losing their ability to speak due to conditions that affect the voice. The setup process involves reading a randomized set of text prompts aloud for about 15 minutes on an iPhone or iPad.

For those with visual impairments, Apple is adding a new Point and Speak feature to detection mode in Magnifier. It uses an iPhone or iPad’s camera, LiDAR scanner and on-device machine learning to understand where a person has placed their finger, then scans the target area for words and reads them out to the user. For example, if you hold up your phone and point at different parts of a microwave or washing machine’s controls, the system will read out the labels, such as “Add 30 seconds,” “Defrost” or “Start.”

The company made several other smaller announcements this week, including updates that let Macs pair directly with Made-for-iPhone hearing devices, as well as phonetic suggestions for text editing in Voice Control.

Google’s new accessibility tools

Meanwhile, Google is introducing a new visual question and answer (or VQA) tool in the Lookout app, which uses AI to answer follow-up questions about images. Eve Andersson, the company’s accessibility lead and senior director of Products for All, told Engadget in an interview that VQA is the result of a collaboration between her inclusion team and DeepMind.


To use VQA, open Lookout and start image mode to scan a photo. After the app tells you what’s in the scene, you can ask follow-up questions to gather more detail. For example, if Lookout says the image depicts a family having a picnic, you might ask what time of day it is or whether there are trees around them. This lets the user determine how much information they want from a photo, rather than being limited to an initial description.

Figuring out how much detail to include in an image description is often difficult, because you want to provide enough to be useful but not so much that you overwhelm the user. “What is the right amount of detail to provide to our users in Lookout?” Andersson said. “You never really know what they want.” She added that AI can help determine the context of why someone is asking for a description or more information, and deliver the appropriate amount of it.

When it launches in the fall, VQA will offer a way for users to decide when to ask for more and when they’ve learned enough. Of course, since it’s powered by AI, the generated answers may not be accurate, so there’s no guarantee the tool will work perfectly, but it’s an interesting approach that puts control in the hands of users.

Google is also expanding Live Caption to work in French, Italian and German later this year, and it’s bringing Maps’ wheelchair-accessible place labels to more people around the world.

Microsoft, Samsung, Adobe and more

Many more companies had news to share this week, including Adobe, which is rolling out a feature that uses AI to automate the process of generating the tags that make PDFs friendlier to screen readers. This uses Adobe’s Sensei AI, and it will also indicate the correct reading order. Because this could dramatically speed up the process of tagging PDFs, people and organizations could potentially use the tool to work through backlogs of old documents and make them more accessible. Adobe is also launching a PDF Accessibility Checker to “enable large organizations to quickly and efficiently evaluate the accessibility of existing PDFs at scale.”

Microsoft also had some smaller updates to share, specifically around Xbox. It has added new accessibility settings to the Xbox app on PC, including options to disable background images and animations, so users can reduce potentially disruptive, confusing or triggering components. The company also expanded its support pages and added accessibility filters to its web store to make it easier to find games optimized for accessibility.

Meanwhile, Samsung announced this week that it is adding two new levels of ambient sound settings to the Galaxy Buds 2 Pro, bringing the total number of options to five. This gives those who use the earbuds to listen to their surroundings greater control over how loud those sounds are. They can also select different settings for each ear, as well as choose clarity levels and create custom hearing profiles.

We also learned that Cisco, the company behind the Webex video conferencing software, is working with speech recognition company Voiceitt to add transcriptions that better support people with non-standard speech. This builds on Webex’s existing live translation feature and uses Voiceitt’s AI to familiarize itself with a person’s speech patterns so it can better understand what they want to communicate. Voiceitt then transcribes what is said, and the captions appear in a chat bar during calls.

Finally, we also saw Mozilla announce that Firefox 113 would be more accessible thanks to an improved screen reader experience, while Netflix revealed a sweet reel showing off some of its latest assistive features and developments from the past year. In its announcement, Netflix said that while it has “made strides in accessibility, (it knows) there is always more work to do.”

That sentiment holds true not just for Netflix, nor for the tech industry alone, but for the world at large. While it’s nice to see so many companies take the opportunity this week to release and highlight accessibility-minded features, it’s important to remember that inclusive design shouldn’t and can’t be a once-a-year affair. I was also pleased to see that, despite the current generative AI frenzy, most companies didn’t shoehorn the buzzword into every assistive feature or announcement this week without good reason. Andersson, for example, said “we typically think about user needs” and described a problem-first approach, as opposed to figuring out where a given technology can be applied as a solution.

Though it’s probably at least partly true that the announcements around GAAD are something of a PR and marketing game, in the end some of the tools launched this week may actually improve the lives of people with disabilities or different needs. I call that a net gain.

