
Screenreader-aware Design Tools

February 06, 2019

Building a great screenreader UX is challenging

If you've ever used a screenreader, or even watched somebody use one, you've probably encountered some pretty confusing user interfaces. This is understandable in smaller/hobby apps written by an individual -- these folks simply don't have the resources to prioritize accessibility. But even in well-respected apps, published by industry-leading tech giants, built by talented engineering teams, the UX is sometimes still super awkward. What's going on?

In this post, I'll cover some of the challenges we face when making apps accessible to screenreader users, and how we can leverage the next generation of design tools to do a better job.

Why the focus on screenreaders?

Accessibility is a massive topic, but this post will focus specifically on screenreaders. There are already plenty of great tools and resources for things like WCAG color contrast checking and text size guidelines, but there aren't nearly as many for screenreader UI design tools and patterns.

I think this is the case for a couple reasons:

  • It's hard for computers to automatically check whether a UI is actually accessible to screenreader users or not. The computer has no way of knowing exactly which parts of the interface are important and whether they're labeled in an intuitive way or not. E.g. if an image isn't labeled, is that a mistake, or is the image purely decorative? If a checkbox is labeled "setting enabled", is there enough context to know what setting that's referring to?
  • "Good" screenreader design is very subjective: just like with any visual UI, the same screenreader UI might be intuitive for some users and confusing for others. We often discuss accessibility as if it's a checkbox on a list of requirements, e.g: "is this UI accessible?" Things like color contrast fit into that mental model pretty well. When it comes to screenreaders, it's just as important to ask "is this UI easy to use?"

An exercise

Let's walk through an example of adding support for VoiceOver, Apple's built-in screenreader, to an iOS app. VoiceOver allows users to swipe left and right to navigate through each accessible element on the screen, reading aloud details about each one. Users can also drag their finger around the screen, and VoiceOver will read aloud the element under their finger. There are some other useful gestures, but we'll focus on these two.

The initial design

Suppose we're building a food delivery app, and we're designing a component for choosing options in a list:

Initial design

How would we want this to work for VoiceOver users?

The platform default

If we don't write any code specifically to support VoiceOver, the operating system will try to guess the correct navigation order of the elements in the UI. Here's what the navigation order would most likely be:

Initial design with accessibility defaults

Each of the text elements is automatically detected as an accessible element, and their order is determined by their position on the screen. In this image, I've numbered each of the text elements to indicate their order. When a user navigates to a text element, VoiceOver will read its contents aloud.

Depending on how they're coded, the checkboxes might be automatically detected (before or after the text), or they might be ignored entirely. It can be a bit hard to predict precisely, so we probably want to define the elements and order we think will be best, rather than letting the system guess. Either way, the system won't know exactly what to read aloud ("checked" and "unchecked"? "topping added" and "topping not added"?) when the user interacts with the checkbox unless we specify these details.

Solution #1 (awkward)

In order for this UI to be usable by screenreader users, we could include the checkboxes in the navigation order, and provide accessibility labels for VoiceOver to read aloud (let's assume we use "topping added" and "topping not added"). The result would look like this:

Initial design with awkward accessibility

The UI is now usable. When the user navigates to a checkbox, VoiceOver will say "topping not added", and they can toggle the checkbox.
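
Here's a minimal UIKit sketch of what Solution #1 might look like. It assumes the checkbox is a custom UIButton subclass; the names are illustrative, not taken from any real codebase:

```swift
import UIKit

// Sketch: each checkbox becomes its own accessible element with a label,
// so VoiceOver has something meaningful to announce when the user lands on it.
final class CheckboxButton: UIButton {
    var isToppingAdded = false {
        didSet { updateAccessibility() }
    }

    func setUpAccessibility() {
        // Include the checkbox in VoiceOver's navigation order...
        isAccessibilityElement = true
        accessibilityTraits = .button
        // ...and describe its current state.
        updateAccessibility()
    }

    private func updateAccessibility() {
        accessibilityLabel = isToppingAdded ? "Topping added" : "Topping not added"
    }
}
```

Note that the text label next to each checkbox is still its own separate element -- which is exactly what makes this version awkward.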

However, there are a couple things that make this UI awkward:

  • Navigating to a checkbox will read "topping added" or "topping not added" aloud, but the user won't know which topping it's referring to until they navigate to the text element afterward. In other words, there's an implicit visual association between the checkbox and text that may be lost to users with visual impairments.
  • The touch targets are very small. When there are a lot of options like this, users may drag their finger around the screen to quickly find what they're looking for. We should try to use large touch targets so it's easier for them to land in the right place.

If somebody asked, "is this UI accessible?", I think we could answer "yes". I would imagine it meets the legal requirements for ADA compliance (or similar laws in other countries). At the same time, I don't think we could call it "well-designed".

Solution #2 (better)

We can improve both the navigation and touch target size without modifying our visual design at all:

Initial design with better accessibility

When the user navigates to a row, VoiceOver will read (roughly) "Button, Bacon, Topping added. Double tap to remove topping." They can quickly drag their finger through the list to find the right topping without having to zig-zag back and forth between checkbox and text.
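
In UIKit, this roughly means collapsing each row into a single accessible element. Here's a hedged sketch; the class and property names are mine, not from the mockups above:

```swift
import UIKit

// Sketch: the entire row is one accessible element. Because the row itself is
// an accessibility element, VoiceOver skips its child views (the checkbox and
// the text label) and announces the row's label, value, and hint instead.
final class CheckboxRowView: UIView {
    var topping = "" {
        didSet { updateAccessibility() }
    }
    var isToppingAdded = false {
        didSet { updateAccessibility() }
    }

    private func updateAccessibility() {
        isAccessibilityElement = true
        accessibilityTraits = .button
        accessibilityLabel = topping
        accessibilityValue = isToppingAdded ? "Topping added" : "Topping not added"
        accessibilityHint = isToppingAdded
            ? "Double tap to remove topping"
            : "Double tap to add topping"
    }

    // VoiceOver's double tap activates the focused element; toggling here means
    // the whole row responds, not just the small checkbox.
    override func accessibilityActivate() -> Bool {
        isToppingAdded.toggle()
        return true
    }
}
```

Because the accessible element is now the full row, the touch target for direct touch exploration is much larger as well.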

The point

When building an app, it's very easy to end up with Solution #1. Solution #1 is a direct translation of a visual design to a screenreader design, without considering what it would actually be like to use. In order to come up with Solution #2, we had to take a step back and rethink our UI from a different perspective. We also had to have a decent understanding of how VoiceOver works, since otherwise we wouldn't know that Solution #1 would be awkward in the first place.

At a big tech company, there's plenty of time, money, and people who know how screenreaders work, so why do we still often end up with Solution #1?

Why we get it wrong

In my experience, on a typical product team at a big company, screenreader accessibility is considered relatively late in the product development process. Here are a few reasons why that's often the case, and why it causes problems.

Engineering owns screenreader accessibility

The product teams I've seen are usually made up of a product manager, a couple of product designers, and a handful or two of software engineers. Which of these people is responsible for making sure an app is accessible to screenreader users?

The answer I hear most often is "all of them", but that's sort of a cop out -- if everybody is responsible, it's essentially the same as if nobody is. Everyone wants to do the right thing, but wanting to is different from sharing the responsibility equally when something goes wrong.

In practice, the answer tends to be the engineering team. Engineers are ultimately the ones who do the work of adding screenreader support to the code. At many companies, accessibility support is a requirement for the software they write, so it's part of their performance measurement and/or they're penalized if something goes wrong. As a consequence, engineers are often the ones who know VoiceOver best, and they end up deciding the screenreader UX.

Engineers deciding the screenreader UX is problematic:

  • Engineers haven't researched the problem and user nearly as much as designers, so they're not well-suited to decide the UX. They should weigh in of course, but they shouldn't have the final responsibility.
  • Building a great screenreader UX can require visual changes, and engineers often don't have the agency to make those. They can point out issues and make suggestions to the designer, but this back-and-forth is ineffective. The designer may have already decided on a lot of the UI, and has perhaps even gotten sign-off from the leadership team, so making visual changes may not be desirable.
  • Accessibility support is added to the code only after the UI is already built, at which point any significant changes may be very costly. Even non-visual changes, e.g. changing the UI hierarchy to improve the tap areas, can be quite time-consuming. Ideally engineers would catch issues when reviewing mockups, before actually building the UI, but at that stage they're mainly looking for issues that block usability, not for UX improvements.

All of these tend to lead us to Solution #1 -- a direct translation of a visual UI to a screenreader UI, after the visuals have already been decided and are costly to change.

To come up with Solution #2, we need to shift the responsibility back to the designer. Designers have the most knowledge of the problem and user, the most control over the visuals, and the most time to explore and iterate on possible UI solutions before the product is built. It's the product designer's responsibility to decide the UX of the product, and this should include the UX for screenreader users.

Designers don't have the right tools

The UX should be the designer's responsibility, but current design tools make this hard in practice. To facilitate good screenreader design, a tool should:

  1. educate the designer on how screenreaders work for the platform(s) they're designing for
  2. show/speak a preview of the screenreader experience so the designer can test and iterate (it's hard to know if a design is easy to use without actually trying it)
  3. enable an accurate handoff or code export for engineers to integrate into the final product

As far as I know, the tools designers use today don't do any of these things. I haven't seen any prototyping tools for testing with screenreaders, and I haven't seen any vector design tools that help visualize the path a screenreader will take (although that isn't their purpose, so it's understandable). Only tools typically intended for engineers, such as Xcode, help with education and testing around assistive technologies. Designers can and do use Xcode, but it's not nearly as well-suited to fast iteration and exploration as design tools are.

Screenreader-aware design tools

In order to assist designers in building screenreader-accessible interfaces, our tools must understand how screenreaders work. Operating systems and browsers have (mostly) well-defined rules for how a screenreader will interpret an interface -- an engineer building an app must work within these constraints. If a design tool allows these rules to be broken, any design created within it may be impossible to implement.

Component-based tools

Component-based tools are well-suited for screenreader design. These tools are already aware of the accessibility rules of the underlying platforms, and thus can enable designers to build and test screenreader-accessible components.

Component-based tools allow designers and engineers to collaborate on "real" components, i.e. the designer edits the exact same component that the engineer ships to production. The designs are then automatically converted into code, avoiding any potential human error in the handoff or implementation of the component.

An example, Lona

I recently added better accessibility support to the component-based tool I work on, Lona. One of the improvements I made was adding an "Accessibility Overlay" option which automatically draws black boxes and numbers around each of the accessible elements in a component. This helps designers and engineers visualize the screenreader UX before testing on device.

Here's the example we've been using in Lona:

Toppings component example

This component, Toppings, is currently hard-coded to contain 3 copies of the CheckboxRow. In reality, the list of checkboxes would be dynamic, so the Toppings component is a bit contrived -- but the CheckboxRow component is totally realistic.

Here's what the CheckboxRow looked like in Solution #1 above:

CheckboxRow component awkward example

For Solution #2 I changed it to have a single accessible element:

CheckboxRow component better example

It's a little hard to tell how well this component will work without more context, which is why I created the Toppings component to demonstrate it.

Other examples?

Component-based design tools are proliferating. These days it feels like a new one launches every week. I'm definitely not up-to-date on all of them, but I actually haven't spotted any amazing screenreader-focused features yet. I think that's a missed opportunity. Even if these features do exist, they're not being communicated/marketed well enough yet.

Component-based design tools should inherently be good at screenreader design. Hopefully a few of the folks working on component-based tools will read this post and prove me wrong by introducing/sharing some amazing features!

Takeaway

The software development process at big companies often leads to an awkward screenreader UX. The tools that designers currently use don't give them enough visibility or control over the accessibility of the apps they design. As a result, we frequently translate visual interfaces to screenreader interfaces directly without understanding if the result will be easy to use.

Component-based design tools should change that. Designers and engineers shouldn't need to be accessibility experts -- our tools and processes should have that expertise baked-in, and guide us along the way.

What next?

  • If you're considering using a screenreader-aware, component-based design tool (and don't mind being a super early adopter), check out Lona. The screenreader accessibility support is still a work-in-progress, but it's already quite powerful.
  • If you're looking to prioritize screenreader accessibility but aren't ready for a new set of tools, consider doing something like what the Capital One mobile team did: rewrite your development process to focus on accessibility. This is a totally different, albeit higher-effort solution that addresses many of the problems I outlined above.