If you've ever used a screenreader, or even watched somebody use one, you've probably encountered some pretty confusing user interfaces. This is understandable in smaller/hobby apps written by an individual – these folks simply don't have the resources to prioritize accessibility. But even in well-respected apps, published by industry-leading tech giants, built by talented engineering teams, the UX is sometimes still super awkward. What's going on?
In this post, I'll cover some of the challenges we face when making apps accessible to screenreader users, and how we can leverage the next generation of design tools to do a better job.
Accessibility is a massive topic, but this post will focus specifically on screenreaders. There are already plenty of great tools and resources for things like WCAG color contrast checking and text size guidelines; there are far fewer for screenreader UI design tools and patterns.
We often discuss accessibility as if it's a checkbox on a list of requirements, e.g. "is this UI accessible?" Things that can be measured automatically, like color contrast, fit that mental model pretty well. When it comes to screenreaders, however, that mental model is much less accurate. "Good" screenreader design is very subjective: just like with any visual UX, the same screenreader UX might be intuitive for some users and confusing for others. It's hard to measure any UX automatically – it really has to be experienced by users before we know whether it works.
For this reason, I think there's a big opportunity for design tools to let us explore screenreader UX earlier in the development process. The earlier we can figure out the right UX, the less time and effort it will take to build, and the more likely we can still make big changes if needed. While there are tools for validating UIs once they've already been built by engineers (e.g. in Chrome/Xcode), this is fairly late in the development process, and big changes to the UX at this point may not be feasible.
Let's walk through an example of adding support for VoiceOver, Apple's built-in screenreader, to an iOS app. VoiceOver allows users to swipe left and right to navigate through each accessible element on the screen, reading aloud details about each one. Users can also drag their finger around the screen, and VoiceOver will read aloud the element under their finger. There are some other useful gestures, but we'll focus on these two.
Suppose we're building a food delivery app, and we're designing a component for choosing options in a list:
How would we want this to work for VoiceOver users?
If we don't write any code specifically to support VoiceOver, the operating system will try to guess the correct navigation order of the elements in the UI. Here's what the navigation order would most likely be:
Each of the text elements is automatically detected as an accessible element, and their order is determined by their position on the screen. In this image, I've outlined each text element's touch target with a black outline, and I've numbered each element to indicate their order. When a user navigates to a text element, VoiceOver will read its contents aloud.
The term "touch target" is an informal name for the area that the operating system will register as a touch on a UI element, which may be different from the area of the UI element itself.
Depending on how they're coded, the checkboxes might be automatically detected (before or after the text), or they might be ignored entirely. It can be a bit hard to predict precisely, so we probably want to define the elements and order we think will be best, rather than letting the system guess. Either way, the system won't know exactly what to read aloud ("checked" and "unchecked"? "topping added" and "topping not added"?) when the user interacts with the checkbox unless we specify these details.
In order for this UI to be usable by screenreader users, we could include the checkboxes in the navigation order, and provide accessibility labels for VoiceOver to read aloud (let's assume we use "topping added" and "topping not added"). The result would look like this:
The UI is now usable. When the user navigates to a checkbox, VoiceOver will say "topping not added", and they can toggle the checkbox.
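On iOS, Solution #1 boils down to exposing each checkbox as its own accessibility element, with a label and value for VoiceOver to read. Here's a minimal UIKit sketch of what that might look like – the `CheckboxView` class and the exact strings are my own illustration, not from the app:

```swift
import UIKit

// Hypothetical checkbox view, shown as its own accessibility element.
final class CheckboxView: UIView {
    var isOn = false {
        didSet { updateAccessibility() }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Expose this checkbox to VoiceOver as a standalone element.
        isAccessibilityElement = true
        accessibilityLabel = "Topping"
        accessibilityTraits = .button
        updateAccessibility()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func updateAccessibility() {
        // VoiceOver reads the value after the label,
        // e.g. "Topping, topping added".
        accessibilityValue = isOn ? "topping added" : "topping not added"
    }

    // Called when a VoiceOver user double-taps this element.
    override func accessibilityActivate() -> Bool {
        isOn.toggle()
        return true
    }
}
```

Note that the text label next to each checkbox remains a separate accessibility element – which is exactly what makes this solution awkward.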
However, there are a couple of things that make this UI awkward:

- The touch targets are tiny, which makes it hard to find each element by dragging a finger around the screen.
- Each checkbox and its text label are separate elements, so the user has to zig-zag back and forth between them to figure out which checkbox belongs to which topping.
If somebody asked, "is this UI accessible to screenreader users?", I think we could answer "yes". I would imagine it meets the legal requirements for ADA compliance (or similar laws in other countries). At the same time, I don't think we could call it "well-designed".
We can improve both the navigation and touch target size without modifying our visual design at all. In this version, we combine the checkbox and text element into a single large touch target:
When the user navigates to a row, VoiceOver will read (roughly) "Button, Bacon, Topping added. Double tap to remove topping." They can quickly drag their finger through the list to find the right topping without having to zig-zag back and forth between checkbox and text.
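In UIKit terms, Solution #2 means making the whole row a single accessibility element whose label, value, and hint combine the topping name and checkbox state. Again, this is a hedged sketch with hypothetical class and string names:

```swift
import UIKit

// Hypothetical row view containing a checkbox and a text label, e.g. "Bacon".
final class CheckboxRowView: UIView {
    var toppingName = "Bacon"
    var isOn = true {
        didSet { updateAccessibility() }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Making the row itself an accessibility element removes its
        // subviews (checkbox, text) from VoiceOver's navigation order,
        // giving one large touch target per row.
        isAccessibilityElement = true
        accessibilityTraits = .button
        updateAccessibility()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func updateAccessibility() {
        // Read as roughly: "Button, Bacon, topping added.
        // Double tap to remove topping."
        accessibilityLabel = toppingName
        accessibilityValue = isOn ? "topping added" : "topping not added"
        accessibilityHint = isOn
            ? "Double tap to remove topping"
            : "Double tap to add topping"
    }

    override func accessibilityActivate() -> Bool {
        isOn.toggle()
        return true
    }
}
```

Splitting the state into `accessibilityValue` and the action into `accessibilityHint` (rather than cramming everything into the label) lets VoiceOver users who have disabled hints skip the instructional text.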
When building an app, it's very easy to end up with Solution #1. Solution #1 is a direct translation of a visual design to a screenreader design, without considering what it would actually be like to use. In order to come up with Solution #2, we had to take a step back and rethink our UI from a different perspective. We also had to have a decent understanding of how VoiceOver works, since otherwise we wouldn't know that Solution #1 would be awkward in the first place.
At a big tech company, there's plenty of time, money, and people who know how screenreaders work, so why do we still often end up with Solution #1?
In my experience, on typical product teams at big companies, screenreader accessibility is considered relatively late in the product development process. Here are a few reasons why this is often the case, and why it causes problems.
The product teams I've seen are usually made up of: a product manager, a couple product designers, and a handful or two of software engineers. Which of these people are responsible for making sure an app is accessible to screenreader users?
The answer I hear most often is some variation of "all of them", but that's sort of a cop out. All of them want to do the right thing, but that's different than sharing the responsibility equally when something goes wrong.
In my experience, the answer tends to be the engineering team. Engineers are ultimately the ones who do the work of adding screenreader support to the code. At many companies, accessibility support is a requirement for the software they write, so it's part of their performance measurement and/or they're penalized if something goes wrong. As a consequence, engineers are often the people who know best how VoiceOver works, and they end up deciding the screenreader UX.
Engineers deciding the screenreader UX is problematic:

- Engineers typically have less training and experience in UX design than designers do.
- By the time engineers get involved, the visual design has already been decided, so the screenreader UX has to be retrofitted onto it.
- Engineers have little time to explore and iterate on alternatives – their job is to build the design they were given.
All of these tend to lead us to Solution #1 – a direct translation of a visual UI to a screenreader UI, after the visuals have already been decided and are costly to change.
To come up with Solution #2, we need to shift the responsibility back to the designer. Designers have the most knowledge of the problem and user, the most control over the visuals, and the most time to explore and iterate on possible UI solutions before the product is built. It's the product designer's responsibility to decide the UX of the product, and this should include the UX for screenreader users.
The UX should be the designer's responsibility, but current design tools make this hard in practice. To facilitate good screenreader design, a tool should:

- Educate designers about how screenreaders work and the constraints they impose
- Visualize the path a screenreader will take through a UI
- Let designers prototype and test a UI with a real screenreader
As far as I know, the tools designers use today don't do any of these things. I haven't found any prototyping tools for testing with screenreaders. I also haven't found any vector design tools that help visualize the path a screenreader will take (although, this isn't their purpose, so it's understandable). Only tools typically intended for engineers, such as Xcode, really help with education and testing around assistive technologies. While designers can and do use Xcode, it's not nearly as well-suited for the same kind of fast iteration and exploration as design tools.
In order to assist designers in building screenreader-accessible interfaces, our tools must understand how screenreaders work. Operating systems and browsers have (mostly) well-defined rules for how a screenreader will interpret an interface – an engineer building an app must work within these constraints. If a design tool allows these rules to be broken, any design created within it may be impossible to implement.
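One concrete example of such a rule on iOS: once a view is marked as an accessibility element, VoiceOver stops descending into its subviews entirely. A design tool that let a designer nest one focusable element inside another would be describing a UI that UIKit can't express. A small sketch of the constraint (view names are illustrative):

```swift
import UIKit

let row = UIView()
let checkbox = UIView()
let label = UILabel()
row.addSubview(checkbox)
row.addSubview(label)

// Platform rule: marking the parent as an accessibility element
// hides all of its descendants from VoiceOver.
row.isAccessibilityElement = true
checkbox.isAccessibilityElement = true  // has no effect – never reached

// A design tool that modeled the row and the checkbox as independently
// focusable elements would produce a design that can't be implemented.
```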
Component-based tools are somewhere between tools traditionally intended for designers (like Sketch) and tools intended for engineers (like Xcode). They tend to strike a balance between quick iteration and engineering flexibility, allowing designers and engineers to quickly build production-quality UI components. I call them "component-based" because they often focus on building small, reusable pieces of the UI, rather than entire pages or screens.
Component-based tools are well-suited for screenreader design:

- They work within the constraints of the underlying platform, so designs that would break screenreader rules can be caught early.
- They support quick iteration, so designers can explore multiple screenreader UX options before anything is built.
- They produce real UI code, so accessibility decisions made in the tool carry through to the shipped product.
I recently added better accessibility support to the component-based tool I work on, Lona. One of the improvements I made was adding an "Accessibility Overlay" option which automatically draws black boxes and numbers around each of the accessible elements in a component. This helps designers and engineers visualize the screenreader UX before testing on device.
Here's the example we've been using so far, the `Toppings` component, displayed in Lona:
The numbered boxes trace the path that a screenreader would take. This lets the designer/engineer preview the screenreader UX quickly and easily, without needing to build the app onto their phone. This shortens the feedback cycle between designing and experiencing the screenreader UX. While testing on a real device is still necessary for some things, the black boxes can be used to validate navigation order and touch target size while designing. Lona can then generate UI code for this component that matches the navigation order shown when using the "Accessibility Overlay".
Lona currently supports generating this code for iOS, React on the web (with keyboard navigation support), and React Native (with some limitations due to platform accessibility support, as of v0.59).
`Toppings` is currently hard-coded to contain 3 copies of the `CheckboxRow` component. We can see that first the title, "Extra toppings", would be read aloud. Then, each subsequent swipe would move down through the list of checkboxes. In reality, the list of checkboxes would be dynamic, so the `Toppings` component is a bit contrived – but the `CheckboxRow` component is totally realistic.
Here's what the `CheckboxRow` looked like in Solution #1 (the clumsy solution) above:
The black boxes show us that the touch targets are very small, and that the checkbox and label would be read aloud separately.
For Solution #2, I grouped the checkbox and text together into a single larger touch target. The info for both will be read aloud together, to provide enough context for the user:
It's a little hard to tell how well this component will work without more context, which is why I created the `Toppings` component above as a demonstration.
Component-based design tools are proliferating. These days it feels like a new one launches every week. I'm definitely not up-to-date on all of them, but I haven't spotted any standout screenreader-focused features yet. I think that's a missed opportunity. Even if these features do exist, they're not being communicated or marketed very well.
Component-based design tools should inherently be good at screenreader design, since they enforce the constraints of the underlying operating systems and web browsers. Hopefully a few of the folks working on component-based tools will read this post and introduce (or share) some amazing features!
If there are features of component-based tools that I've overlooked, let me know and I'm happy to mention them here. It's very possible there are some great ones I've missed.
The software development process at big companies often leads to an awkward screenreader UX. The tools that designers currently use don't give them enough visibility or control over the accessibility of the apps they design. As a result, we frequently translate visual interfaces to screenreader interfaces directly without understanding if the result will be easy to use.
Component-based design tools should change that. Designers and engineers shouldn't need to be accessibility experts – our tools and processes should have that expertise baked-in, and guide us along the way.