
Screenreader-aware Design Tools

July 09, 2019

Building a great screenreader UX is challenging

If you've ever used a screenreader, or even watched somebody use one, you've probably encountered some pretty confusing user interfaces. This is understandable in smaller/hobby apps written by an individual – these folks simply don't have the resources to prioritize accessibility. But even in well-respected apps, published by industry-leading tech giants, built by talented engineering teams, the UX is sometimes still super awkward. What's going on?

In this post, I'll cover some of the challenges we face when making apps accessible to screenreader users, and how we can leverage the next generation of design tools to do a better job.

Why the focus on screenreaders?

Accessibility is a massive topic, but this post will focus specifically on screenreaders. There are already plenty of great tools and resources for things like WCAG color contrast checking and text size guidelines, but far fewer resources for screenreader UI design tools and patterns.

We often discuss accessibility as if it's a checkbox on a list of requirements, e.g. "is this UI accessible?" Things that can be measured automatically, like color contrast, fit that mental model pretty well. When it comes to screenreaders, however, the model breaks down. "Good" screenreader design is subjective: just like any visual UX, the same screenreader UX might be intuitive for some users and confusing for others. It's hard to measure any UX automatically – it really has to be experienced by users before we know whether it works.

For this reason, I think there's a big opportunity for design tools to let us explore screenreader UX earlier in the development process. The earlier we can figure out the right UX, the less time and effort it will take to build, and the more likely we can still make big changes if needed. While there are tools for validating UIs once they've already been built by engineers (e.g. in Chrome/Xcode), this is fairly late in the development process, and big changes to the UX at this point may not be feasible.

An exercise

Let's walk through an example of adding support for VoiceOver, Apple's built-in screenreader, to an iOS app. VoiceOver allows users to swipe left and right to navigate through each accessible element on the screen, reading aloud details about each one. Users can also drag their finger around the screen, and VoiceOver will read aloud the element under their finger. There are some other useful gestures, but we'll focus on these two.

The initial design

Suppose we're building a food delivery app, and we're designing a component for choosing options in a list:

Initial design of our food delivery app, with a title and 3 labeled checkboxes for adding toppings

How would we want this to work for VoiceOver users?

The platform default

If we don't write any code specifically to support VoiceOver, the operating system will try to guess the correct navigation order of the elements in the UI. Here's what the navigation order would most likely be:

Our app design, with numbered black boxes outlining the accessible UI elements. The outlines are not around the checkboxes, indicating that the checkboxes would not be accessible to screenreaders

Each of the text elements is automatically detected as an accessible element, and their order is determined by their position on the screen. In this image, I've outlined each text element's touch target with a black outline, and I've numbered each element to indicate their order. When a user navigates to a text element, VoiceOver will read its contents aloud.

The term "touch target" is an informal name for the area that the operating system will register as a touch on a UI element, which may be different than the area of the UI element itself.

Depending on how they're coded, the checkboxes might be automatically detected (before or after the text), or they might be ignored entirely. It can be a bit hard to predict precisely, so we probably want to define the elements and order we think will be best, rather than letting the system guess. Either way, the system won't know exactly what to read aloud ("checked" and "unchecked"? "topping added" and "topping not added"?) when the user interacts with the checkbox unless we specify these details.
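
To make this concrete, here's a minimal UIKit sketch of what "specifying these details" could look like for a single row (essentially the setup that Solution #1 below uses). The view and property names here are hypothetical and the layout is omitted – the point is just that the elements, their order, and their spoken text are all declared explicitly rather than guessed by the system:

```swift
import UIKit

final class ToppingsView: UIView {
    private let titleLabel = UILabel()               // "Extra toppings"
    private let baconLabel = UILabel()               // "Bacon"
    private let baconCheckbox = UIButton(type: .custom)
    // ...remaining rows and layout omitted

    private func configureAccessibility() {
        // Expose the checkbox to VoiceOver and spell out exactly what
        // should be read aloud when it's focused.
        baconCheckbox.isAccessibilityElement = true
        baconCheckbox.accessibilityLabel = baconCheckbox.isSelected
            ? "Topping added"
            : "Topping not added"

        // Declare the navigation order ourselves instead of letting the
        // system infer it from on-screen position.
        accessibilityElements = [titleLabel, baconLabel, baconCheckbox]
    }
}
```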

Solution #1 (awkward)

In order for this UI to be usable by screenreader users, we could include the checkboxes in the navigation order, and provide accessibility labels for VoiceOver to read aloud (let's assume we use "topping added" and "topping not added"). The result would look like this:

Our app design, now with outlines around the checkboxes to indicate that they support screenreader navigation. Each checkbox outline is small and separated from the outline around the associated label

The UI is now usable. When the user navigates to a checkbox, VoiceOver will say "topping not added", and they can toggle the checkbox.

However, there are a couple things that make this UI awkward:

  • Navigating to a checkbox will read "topping added" or "topping not added" aloud, but the user won't know which topping it's referring to until they navigate to the text element afterward. In other words, there's an implicit visual association between the checkbox and text that may be lost to users with visual impairments.
  • The touch targets are very small. When there are a lot of options like this, users may drag their finger around the screen to quickly find what they're looking for. We should try to use large touch targets so it's easier for them to land in the right place.

If somebody asked, "is this UI accessible to screenreader users?", I think we could answer "yes". I would imagine it meets the legal requirements for ADA compliance (or similar laws in other countries). At the same time, I don't think we could call it "well-designed".

Solution #2 (better)

We can improve both the navigation and touch target size without modifying our visual design at all. In this version, we combine the checkbox and text element into a single large touch target:

Our app design, now with larger outlines that surround each pair of checkbox and label

When the user navigates to a row, VoiceOver will read (roughly) "Button, Bacon, Topping added. Double tap to remove topping." They can quickly drag their finger through the list to find the right topping without having to zig-zag back and forth between checkbox and text.
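
Here's a rough UIKit sketch of how that grouping could be implemented for a single row – again with hypothetical names and the layout omitted. The whole row becomes one accessibility element, its label, value, and hint are announced together, and double-tapping the focused row toggles the checkbox:

```swift
import UIKit

final class CheckboxRow: UIView {
    private let checkbox = UIButton(type: .custom)
    private let textLabel = UILabel()
    private let toppingName: String
    private var isToppingAdded = false {
        didSet { updateAccessibility() }
    }

    init(toppingName: String) {
        self.toppingName = toppingName
        super.init(frame: .zero)
        // ...layout of checkbox and textLabel omitted

        // The whole row is a single accessible element, so the touch
        // target spans the full row.
        isAccessibilityElement = true
        accessibilityTraits = .button
        updateAccessibility()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    private func updateAccessibility() {
        // Combine the topping name and its state so VoiceOver announces
        // them together.
        accessibilityLabel = toppingName
        accessibilityValue = isToppingAdded ? "Topping added" : "Topping not added"
        accessibilityHint = isToppingAdded
            ? "Double tap to remove topping"
            : "Double tap to add topping"
    }

    // Called when a VoiceOver user double-taps the focused row.
    override func accessibilityActivate() -> Bool {
        isToppingAdded.toggle()
        checkbox.isSelected = isToppingAdded
        return true
    }
}
```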

The point

When building an app, it's very easy to end up with Solution #1. Solution #1 is a direct translation of a visual design to a screenreader design, without considering what it would actually be like to use. In order to come up with Solution #2, we had to take a step back and rethink our UI from a different perspective. We also had to have a decent understanding of how VoiceOver works, since otherwise we wouldn't know that Solution #1 would be awkward in the first place.

At a big tech company, there's plenty of time, money, and people who know how screenreaders work, so why do we still often end up with Solution #1?

Why we get it wrong

In my experience, on typical product teams at big companies, screenreader accessibility is considered relatively late in the product development process. Here are a few reasons why this is often the case, and why it causes problems.

Engineering owns screenreader accessibility

The product teams I've seen are usually made up of: a product manager, a couple product designers, and a handful or two of software engineers. Which of these people are responsible for making sure an app is accessible to screenreader users?

The answer I hear most often is some variation of "all of them", but that's sort of a cop-out. All of them want to do the right thing, but that's different from sharing the responsibility equally when something goes wrong.

In my experience, the answer tends to be the engineering team. Engineers are ultimately the ones who do the work of adding screenreader support to the code. At many companies, accessibility support is a requirement for the software they write, so it's part of their performance measurement and/or they're penalized if something goes wrong. As a consequence, engineers are often the people who understand VoiceOver best, and they end up deciding the screenreader UX.

Engineers deciding the screenreader UX is problematic:

  • Engineers haven't researched the problem and the users nearly as much as designers have, so they're not well-suited to decide the UX. They should weigh in, of course, but they shouldn't have the final responsibility.
  • Building a great screenreader UX can require visual changes, and engineers often don't have the agency to make those. They can point out issues and make suggestions to the designer, but this back-and-forth is ineffective – if the screenreader UI and visual UI are essentially designed by two different people, it's more likely the result will be inconsistent/incoherent. Additionally, the designer may have already decided on a lot of the UI, and has perhaps even gotten sign-off from the leadership team, so making visual changes may not be desirable.
  • Accessibility support is added to the code only after the UI is already built, at which point any significant changes may be very costly. Even non-visual changes, e.g. changing the UI hierarchy to improve the touch targets, can be quite time-consuming. Ideally engineers would catch issues when reviewing mockups and before actually building the UI, but engineers are mainly looking for issues that prevent usability, rather than UX improvements.

All of these tend to lead us to Solution #1 - a direct translation of a visual UI to a screenreader UI, after the visuals have already been decided and are costly to change.

To come up with Solution #2, we need to shift the responsibility back to the designer. Designers have the most knowledge of the problem and user, the most control over the visuals, and the most time to explore and iterate on possible UI solutions before the product is built. It's the product designer's responsibility to decide the UX of the product, and this should include the UX for screenreader users.

Designers don't have the right tools

The UX should be the designer's responsibility, but current design tools make this hard in practice. To facilitate good screenreader design, a tool should:

  1. educate the designer on how screenreaders work for the platform(s) they're designing for
  2. show/speak a preview of the screenreader experience so the designer can test and iterate (it's hard to know if a design is easy to use without actually trying it)
  3. enable an accurate handoff or code export for engineers to integrate into the final product

As far as I know, the tools designers use today don't do any of these things. I haven't found any prototyping tools for testing with screenreaders. I also haven't found any vector design tools that help visualize the path a screenreader will take (although this isn't their purpose, so it's understandable). Only tools typically intended for engineers, such as Xcode, really help with education and testing around assistive technologies. While designers can and do use Xcode, it's not nearly as well-suited to the kind of fast iteration and exploration that design tools enable.

Screenreader-aware design tools

In order to assist designers in building screenreader-accessible interfaces, our tools must understand how screenreaders work. Operating systems and browsers have (mostly) well-defined rules for how a screenreader will interpret an interface – an engineer building an app must work within these constraints. If a design tool allows these rules to be broken, any design created within it may be impossible to implement.

Component-based tools

Component-based tools are somewhere between tools traditionally intended for designers (like Sketch) and tools intended for engineers (like Xcode). They tend to strike a balance between quick iteration and engineering flexibility, allowing designers and engineers to quickly build production-quality UI components. I call them "component-based" because they often focus on building small, reusable pieces of the UI, rather than entire pages or screens.

Component-based tools are well-suited for screenreader design:

  • Fundamentally, these tools must limit the designer to the realm of what's possible in the target platform (e.g. web, iOS, etc), since otherwise the resulting design can't be used in production. Thus, the tools must understand the rules and limitations of the target platform. If these tools can understand the accessibility rules of the target platform, then they can enable designers to build and test screenreader-accessible components.
  • By allowing designers and engineers to collaborate on "real" components, i.e. the designer edits the exact same component that the engineer ships to production, these tools avoid potential human error in the handoff or implementation of the component's screenreader support.

An example, Lona

I recently added better accessibility support to the component-based tool I work on, Lona. One of the improvements I made was adding an "Accessibility Overlay" option which automatically draws black boxes and numbers around each of the accessible elements in a component. This helps designers and engineers visualize the screenreader UX before testing on device.

Here's the example we've been using so far, the Toppings component, displayed in Lona:

The Toppings component, presented in the Lona app. This is our app design from before, with the same numbered black outlines

The numbered boxes trace the path that a screenreader would take. This lets the designer/engineer preview the screenreader UX quickly and easily, without needing to build and run the app on a phone, which shortens the feedback cycle between designing and experiencing the screenreader UX. While testing on a real device is still necessary for some things, the black boxes can be used to validate navigation order and touch target size while designing. Lona can then generate UI code for this component that matches the navigation order shown in the "Accessibility Overlay".
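
To give a rough sense of how an overlay like this could work – this is just an illustrative sketch of the idea, not Lona's actual implementation – here's a small UIKit debug view that outlines each accessible element and numbers it in an approximation of VoiceOver's order:

```swift
import UIKit

/// Debug-only overlay in the spirit of an "Accessibility Overlay": it draws
/// a numbered black box over each accessible element in a view hierarchy.
/// Assumes the overlay and `root` are in the same window.
final class AccessibilityOverlayView: UIView {
    func highlightElements(in root: UIView) {
        subviews.forEach { $0.removeFromSuperview() }

        for (index, element) in accessibleViews(in: root).enumerated() {
            let frame = element.convert(element.bounds, to: self)
            let box = UILabel(frame: frame)
            box.layer.borderColor = UIColor.black.cgColor
            box.layer.borderWidth = 2
            box.font = .boldSystemFont(ofSize: 12)
            box.text = " \(index + 1)"
            addSubview(box)
        }
    }

    // Depth-first walk of the view tree. For simple layouts this roughly
    // matches the top-to-bottom, left-to-right order VoiceOver uses, but a
    // real tool would need to follow the platform's ordering rules exactly.
    private func accessibleViews(in view: UIView) -> [UIView] {
        if view.isAccessibilityElement { return [view] }
        return view.subviews.flatMap { accessibleViews(in: $0) }
    }
}
```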

Lona currently supports generating this code for iOS, React on the web (with keyboard navigation support), and React Native (with some limitations due to platform accessibility support, as of v0.59).

This component, Toppings, is currently hard-coded to contain 3 copies of the CheckboxRow component. We can see that first the title, "Extra toppings", would be read aloud. Then, each subsequent swipe would move down through the list of checkboxes. In reality, the list of checkboxes would be dynamic, so the Toppings component is a bit contrived – but the CheckboxRow component is totally realistic.

Here's what the CheckboxRow looked like in Solution #1 (the awkward solution) above:

The CheckboxRow component, presented in the Lona app. The black outlines are small and don't associate the checkboxes and labels

The black boxes show us that the touch targets are very small, and that the checkbox and label would be read aloud separately.

For Solution #2, I grouped the checkbox and text together into a single larger touch target. The info for both will be read aloud together, to provide enough context for the user:

The CheckboxRow component, presented in the Lona app. The black outlines are larger and associate checkboxes and labels

It's a little hard to tell how well this component will work without more context, which is why I created the Toppings component above to demonstrate it.

Other examples?

Component-based design tools are proliferating. These days it feels like a new one launches every week. I'm definitely not up-to-date on all of them, but I actually haven't spotted any amazing screenreader-focused features yet. I think that's a missed opportunity. Even if these features do exist, they're not being communicated/marketed very well yet.

Component-based design tools should inherently be good at screenreader design, due to how they enforce limitations of the underlying operating systems & web browsers. Hopefully a few of the folks working on component-based tools will read this post and introduce/share some amazing features!

If there are features of component-based tools that I've overlooked, let me know and I'm happy to mention them here. It's very possible there are some great ones I've missed.

Takeaway

The software development process at big companies often leads to an awkward screenreader UX. The tools that designers currently use don't give them enough visibility or control over the accessibility of the apps they design. As a result, we frequently translate visual interfaces to screenreader interfaces directly without understanding if the result will be easy to use.

Component-based design tools should change that. Designers and engineers shouldn't need to be accessibility experts – our tools and processes should have that expertise baked-in, and guide us along the way.

What next?

  • If you're considering using a screenreader-aware, component-based design tool (and don't mind being a super early adopter), check out Lona. The screenreader accessibility support is still a work-in-progress, but it's already quite powerful.
  • If you're looking to prioritize screenreader accessibility but aren't ready for a new set of tools, consider doing something like what the Capital One mobile team did: rewrite your development process to focus on accessibility. This is a totally different, albeit higher-effort solution that addresses many of the problems I outlined above.
  • If you're a designer interested in joining a team that cares a lot about this stuff, send Kathryn Gonzalez a DM on Twitter: @ryngonzalez. Kathryn is the head of design infrastructure & systems at DoorDash, helped edit this post, and is just generally a super fun person to work with. Their team is hiring! (I don't get a referral bonus 😂)