“Feedback from end users with disabilities is critical. Get feedback during the design phase, during programming, and in testing before your app is released,” recommends Googler Scott Adams to all Android developers, “not after the release. It takes longer, but you’re making a better product.” Automated tests, while helpful, are not enough on their own. At the same time, barrier-free design helps in unexpected places.
Around a billion people live with a motor or sensory impairment of some kind. On top of that, there are situational restrictions that affect everyone: you may not be able to read a screen in bright sunlight, hear audio output over loud noise, or press a button because both hands are full. That is why Google is constantly investing in accessibility and is relying more and more on machine learning.
In addition, the data company has now secured the support of Samsung. Together they have given the TalkBack screen reader for Android its biggest update since its introduction twelve years ago. There are a dozen new finger gestures, most of them customizable. TalkBack also now works together with the Voice Access voice control, so that TalkBack can carry out voice commands.
More languages and sounds for Live Transcribe
The Live Transcribe speech and sound recognition app has also been updated. It now supports more than 80 languages and dialects, which are expected to work offline later this year; audio recordings will then no longer have to be sent to any server. Users can already type text directly in the app, which makes two-way conversations much easier, and add their own words to the dictionary.
In addition, Live Transcribe’s sound detection will soon be individually customizable. If a household has an unusual doorbell that sounds like birds chirping, for example, the phone can be trained to alert the deaf user to the “ringing”. The connection with wearables can be life-saving, for instance when a fire alarm is detected and the user is woken by the vibration of their smartwatch.
Recognizing the speech of two people separately in a conversation or interview is still too difficult. The spoken words often overlap, which confuses the artificial intelligence (AI). In this area, the Austrian company Philips Speech has evidently gained a head start: its SmartMike Duo is the first device to record and transcribe human dialogue in real time, separated by speaker.
Accessibility is SEO
For Android users with poor vision, there will soon be a new magnifying glass that enlarges only a section of the screen, while the rest remains visible in the background at its original size. The old magnifier, which enlarges the entire screen, is retained in parallel. A screen-brightness update has already rolled out on some phones: brightness can now be turned down much further than before. This makes reading easier in very dark surroundings and for light-sensitive people in general; anyone who suffers from migraines will thank Google for it.
Google reminds programmers to pay attention to the basic building blocks. These include good contrast between foreground and background (usually at least 4.5:1), sufficiently large touch targets, alt texts for images, and proper labels for icons and other graphic elements. Incidentally, these descriptions in the source code are indispensable not only for screen readers but also for implementing voice commands – and they support machine analysis of the website or app.
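The 4.5:1 figure comes from the WCAG guidelines, which define contrast as the ratio of the relative luminances of the lighter and the darker color. A minimal sketch of that calculation (the function names are my own; the linearization and weighting constants are WCAG’s):

```python
def channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    """Relative luminance of an sRGB color (0.0 = black, 1.0 = white)."""
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio between two colors; always >= 1."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black on white reaches the maximum ratio of 21:1; the mid-gray
# #767676 on white sits just above the 4.5:1 threshold for body text.
print(round(contrast((0, 0, 0), (255, 255, 255)), 1))        # 21.0
print(contrast((118, 118, 118), (255, 255, 255)) >= 4.5)     # True
```

The ratio is symmetric, so it does not matter which color is treated as foreground.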
Accessibility is therefore also search engine optimization – and search engines are increasingly crawling within apps. In its new page experience ranking, Google’s search engine pays special attention to accessibility. However, labels can be omitted for elements the user cannot interact with and that the screen reader and voice control should expressly ignore.
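In an Android layout, both rules can be expressed directly. The IDs, drawables, and string resources below are placeholders, but the attributes are the standard Android ones: `contentDescription` provides the label that TalkBack reads aloud and Voice Access matches against, while `importantForAccessibility="no"` hides purely decorative views from both.

```xml
<!-- An interactive image: the contentDescription is what TalkBack
     announces and what Voice Access matches ("tap Send message").
     48dp meets the recommended minimum touch-target size. -->
<ImageButton
    android:id="@+id/send_button"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:src="@drawable/ic_send"
    android:contentDescription="@string/send_message" />

<!-- A purely decorative divider: no label, and explicitly
     excluded from screen readers and voice control. -->
<View
    android:layout_width="match_parent"
    android:layout_height="1dp"
    android:background="@color/divider"
    android:importantForAccessibility="no" />
```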