How Accessibility Guidelines Fit into Automation
Accessibility automation is an important part of building quality software. Something is easier to maintain when a computer can help test it. You can also bake in a contract for how a component should work, so that other developers (or your future self) are less inclined to break it without anyone noticing.
While automation can be helpful, it’s also challenging. Sometimes tooling doesn’t support your use case (or you’re locked into an old version of a library), and it’s time-consuming to experiment until you find solutions that work.
There is also a misconception that everything in accessibility can be automated. It cannot. We still need humans to test things while building accessible digital experiences.
What can be automated, and what can’t
Estimates of how much of WCAG we can automate hover around 50% of accessibility issues by volume, and that figure comes from a single vendor, Deque Systems. You can review their rule descriptions for axe-core to see what their tooling covers.
Related: Accessibility Conformance Testing (ACT) Rules Group at the W3C
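One common way to run those rules in a test suite is jest-axe, which wraps axe-core as a Jest matcher. A minimal sketch, with inline markup standing in for a rendered component:

```js
import {axe, toHaveNoViolations} from 'jest-axe';

expect.extend(toHaveNoViolations);

test('markup has no axe-core violations', async () => {
  // In a real suite, this would be your rendered component's container.
  const container = document.createElement('div');
  container.innerHTML = '<img src="logo.png" alt="Company logo">';
  document.body.appendChild(container);

  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```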
When it comes to accessibility automation, you have to write feature tests that cover various parts of your app. Your engineering team knows the most about how its components should work, so your tests can be tailored to the aspects that can be automated: keyboard support, ARIA states, labels, and more.
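For example, here’s a minimal sketch of such a test using React Testing Library, Jest, and jest-dom. The DisclosureButton component and its ARIA contract are hypothetical; substitute your own component and its expected behavior:

```jsx
import {render, screen} from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
import DisclosureButton from './DisclosureButton'; // hypothetical component

test('disclosure button exposes its name and expanded state', async () => {
  render(<DisclosureButton>Show details</DisclosureButton>);
  const user = userEvent.setup();

  // The role and accessible name can be programmatically determined.
  const button = screen.getByRole('button', {name: 'Show details'});
  expect(button).toHaveAttribute('aria-expanded', 'false');

  // Activating the control should update its ARIA state.
  await user.click(button);
  expect(button).toHaveAttribute('aria-expanded', 'true');
});
```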
Accessibility test APIs like axe-core historically couldn’t detect click events on DIVs, much as blind users couldn’t discover them. Tooling that can detect a click event on a DIV still can’t tell whether it’s a legitimate pattern or a broken one. This is one reason why manual testing is still critical.
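To illustrate, both snippets below wire up the same click handler (a hypothetical save function), but only the second is keyboard-focusable and exposed to assistive technology as a button. A scanner that spots the listener on the DIV can’t know which one the author intended:

```jsx
// Fires on click, but isn't focusable or announced as a button:
<div onClick={save}>Save</div>

// Focusable, keyboard-operable, and exposed with a button role:
<button onClick={save}>Save</button>
```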
Create your own coverage
Write feature tests for your components that assert accessibility functionality that can be programmatically determined. Keyboard tests are a great example, and they’re fun to write with the latest tools like Testing Library and Jest. But it can be tricky to capture in automated tests how a user interacts with a real browser.
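Here’s a sketch of such a keyboard test using Testing Library’s user-event, which simulates keystrokes in jsdom rather than driving a real browser; that gap is exactly the tricky part mentioned above. The Menu component and its expected behavior are hypothetical:

```jsx
import {render, screen} from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
import Menu from './Menu'; // hypothetical component

test('menu opens and moves focus with the keyboard', async () => {
  render(<Menu label="Actions" items={['Edit', 'Delete']} />);
  const user = userEvent.setup();

  // Tab should move focus to the menu trigger.
  await user.tab();
  const trigger = screen.getByRole('button', {name: 'Actions'});
  expect(trigger).toHaveFocus();

  // Enter opens the menu; ArrowDown should move focus into it.
  await user.keyboard('{Enter}');
  await user.keyboard('{ArrowDown}');
  expect(screen.getByRole('menuitem', {name: 'Edit'})).toHaveFocus();
});
```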
Let’s dig into what it takes to write automated accessibility tests so we can bake in quality effectively. It’s about striking the right balance of automation, reliable code patterns, QA, and persistence, and about not writing tautological tests or tests that end up commented out.
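To make “tautological” concrete: a test that merely restates its own query tells you nothing about what users experience. A sketch of the difference, with a hypothetical Alert component:

```jsx
import {render, screen} from '@testing-library/react';
import '@testing-library/jest-dom';
import Alert from './Alert'; // hypothetical component

// Tautological: getByText already throws if the text is missing,
// so the assertion can never fail on its own.
test('renders the message', () => {
  render(<Alert message="Saved!" />);
  expect(screen.getByText('Saved!')).toBeInTheDocument();
});

// Meaningful: asserts the semantics assistive technology relies on.
test('announces the message with an alert role', () => {
  render(<Alert message="Saved!" />);
  expect(screen.getByRole('alert')).toHaveTextContent('Saved!');
});
```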
My hope is that you don’t succumb to a culture of “it’s too hard to make this accessible.” Find a way to work through it and bake meaningful accessibility assertions into your process.