Over the years, I’ve come to see accessibility as foundational to front-end application quality. Not optional, not an afterthought, not a debate. As the product manager for Cypress Accessibility, I’ve been able to talk to hundreds of people about how they and their companies manage accessibility in their approach to development and quality. One thing is clear: lots of us are working in less than ideal conditions when it comes to making applications usable for people with disabilities.
Even with the best of intentions and greenfield projects, mistakes are made and reach production. The difference between “compliance with accessibility standards” and true usability can make us feel that the goalposts are constantly moving – which, in a sense, they are: we are trying to satisfy the needs of many human beings! In many companies, there are tensions and disagreements around the need to ship accessible software in the first place and around how to set the quality bar.
And then there’s legacy code. Written long before teams were aware of accessibility concerns, when they didn’t know the enormous difference simple HTML choices can make in the independence and success of disabled customers using web applications. After years of development, some accessibility problems become “load-bearing” core parts of the user experience that are complicated and expensive to change – and this can prevent us from taking our first steps.
In the rest of this post I’ll talk about implementing accessibility standards in legacy applications. We’ll use examples from the Cypress Realworld App to demonstrate and link to live reports where possible.
Making progress
Meryl Evans writes in her Progress over Perfection blog post:
The trick is to start on the journey and stay the course. Even if you plan to launch a redesigned website built with accessibility in mind, you can still make your new content accessible.
At Cypress, we’ve talked to many customers with large legacy projects who want to make them accessible and find the challenge overwhelming. We work with them on an incremental approach using Cypress Accessibility to automatically detect a small, manageable subset of issues and then gradually increase the coverage and accountability.
We recommend this approach because we too have legacy codebases, and we are on the same journey as our customers in moving towards a better experience. We’ve taken the “progress over perfection” approach by commissioning our first accessibility audit, applying increased scrutiny to new work, and implementing a subset of Axe Core® rules as blockers for pull requests to prevent regressions. This allows us to gradually increase our quality bar to specific standards and ensure that everybody is informed and aware, without creating an impossible situation where developers cannot make forward progress.
There has been a lot of thinking over the years about what goes into measuring an organization’s accessibility maturity (the W3C’s proposed Accessibility Maturity Model is one recent example). When it comes to accessibility and test automation, I find it useful to think even smaller than the organization level, and to look at projects as being in one of a few different stages:
- Not started - accessibility was never considered, and automation will help to document and reveal the size of the backlog
- Improving - work is being carried out against a backlog of known issues, automation confirms that progress is being made and that fixes don’t themselves introduce new issues
- Maintaining - the product is in good shape, and automation is primarily used to catch new problems early and avoid the silent build-up of accessibility debt
Later in this series of accessibility posts, we'll look at how the kind of automation discussed here fits into a broader accessibility program. Today we'll focus on automation as a starting point for setting a standard and building momentum. While I use examples from our own accessibility automation product, the general patterns and principles apply to any scenario where you plan to grapple with a backlog of accessibility issues in an existing project.
Incrementally adopting accessibility automation standards
Where incremental adoption makes the most sense is during the “improving” stage for a legacy project. When a team is new to accessibility, it won’t be possible to fix everything at once, but we can still very quickly implement “a standard” by identifying the following:
- Which accessibility rules are currently passing or failing in our application
- Which will be the first failing rules that we’ll focus on improving
- What are the first pages or components where those rules will be fixed
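One lightweight way to make that standard concrete is to write it down as a small, team-owned config that everyone can see and review. This is just a sketch – the shape, rule IDs, and paths below are illustrative, not a Cypress API:

```javascript
// A hypothetical "current standard" for a legacy project.
// Rule IDs follow Axe Core naming; the routes are made up for illustration.
const accessibilityStandard = {
  // The failing rules we've agreed to fix first
  rulesInScope: ['button-name', 'label'],
  // The pages or flows where those rules will be fixed first
  firstTargets: ['/signin', '/bankaccounts'],
}

console.log(accessibilityStandard.rulesInScope)
```

Even a tiny artifact like this turns “we should be more accessible” into a concrete, reviewable agreement that can grow over time.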
Since Cypress Accessibility is based on the Axe Core® library, there are about 100 rules available to run automatically, depending on your chosen configuration. Only a subset of these rules will apply to any given project, and of those, some will be passing, some failing, and some marked incomplete across the entire project. Here’s an example of the 17 Axe Core® failures from the Cypress Realworld App, a demo banking application:
View this list of failed elements in the Realworld App Cypress Cloud project.
This report was generated automatically by Cypress Accessibility when the tests were recorded to Cypress Cloud, and it covers all the pages, components, and user flows rendered during the tests. Since this is the app we use for demonstrating testing practices, its test coverage is high, and we can be confident that this represents “the list” of problems that Axe Core® automation could detect.
Note: it goes without saying here that this kind of automation does not detect all possible issues, but automated checks are still implemented in orgs of all maturity levels because they provide fast feedback and catch a lot of genuine problems. Manual testing and audits are still carried out at these organizations, but focus on problems that are less suitable for automation and require human judgment and a nuanced interpretation of the standards.
What to work on first
When the team is first introduced to accessibility, the primary concerns are understanding the purpose of accessibility testing and building momentum around the process by helping the team experience success and progress early.
First, the purpose of accessibility testing is to detect problems in the code or content of your application that might prevent a disabled user from independently perceiving, understanding, and operating your application. The core principles underlying web accessibility are abbreviated as POUR, which stands for Perceivable, Operable, Understandable, and Robust; this article by Homer Gaines explains them in more detail.
Understanding these principles helps your team get the most out of accessibility automation because they help us build a mental model for how we need to think about accessibility when a problem is surfaced by an automated check. A failing check is a clue that the application is not delivering on at least one of these principles, and we must think about them in order to choose the best solution to the problem.
This emphasizes that accessibility is about clear communication of the nature and structure of what is happening in the application to disabled users, not about implementing arbitrary code changes until the accessibility tests are passing.
With this in mind, we can pick the first few issues to work on. When recommending what to address first, I always look for issues that meet a few criteria:
- It’s easy to understand and fix the issue
- The issue represents a tangible usability barrier - fixing it will make things better
- The issue does not require looping in Design or Product – engineers can fix it on their own
- The issue can be fully fixed across the application in a small number of steps. This is useful because as soon as these first issues are fully passing, we are going to close the door on them and start to block new code changes if they introduce regressions.
- Issue is conceptually related to the other first issues
A good way to find these starter issues is to filter by severity, to see only the rules marked as Critical by Axe Core.
View this list of critical failed elements in the Realworld App Cypress Cloud project.
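If you are working with raw Axe Core® output instead of the Cypress Cloud UI, the same narrowing can be done in a few lines, since each violation object carries an `impact` field. A minimal sketch with illustrative data:

```javascript
// Illustrative violations in the shape of axe-core results,
// where each entry has an "id" and an "impact" severity.
const violations = [
  { id: 'button-name', impact: 'critical' },
  { id: 'color-contrast', impact: 'serious' },
  { id: 'label', impact: 'critical' },
]

// Keep only the critical-impact rules to find starter issues.
const critical = violations.filter((v) => v.impact === 'critical')

console.log(critical.map((v) => v.id)) // -> [ 'button-name', 'label' ]
```

The same filter works for `serious`, `moderate`, and `minor` as you expand your scope later.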
Within this list of 5 critical rules, two themes emerge:
- 3 of the rules relate to elements missing descriptive text
- 2 of the rules relate to the HTML parent/child relationships of various elements
Of these two themes, I would choose the missing text as the easiest topic to understand and demonstrate. The “Form elements must have labels” rule and the “Buttons must have discernible text” rules both have low failing element counts, so I would target these for the team to fix and get to “zero” before moving on to the alternative text issue related to images.
These quick wins will help establish the pattern of understanding and fixing the problems, and then preventing regressions. Within Cypress Accessibility we can see the details of each of these issues, where they are detected, and the work needed to fix them, so turning this into actionable backlog items is pretty straightforward:
The “Buttons must have discernible text” elements are two variations of a “thumbs-up” button that has no visible label:
The “Form elements must have labels” issue applies to this slider component that appears in several popups throughout the user journey tests:
The first problem is usually easy to solve: developers just need to make sure the button has an appropriate label applied in the code.
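To build intuition for what the check is looking for, here’s a simplified sketch of the kind of logic behind a “discernible text” rule. This is not the real Axe Core® implementation – the real accessible-name computation considers more sources (such as `aria-labelledby`) – and the property names here are illustrative:

```javascript
// Simplified sketch: a button has a "discernible" name if at least one
// naming source produces non-empty text. Not the real Axe Core logic.
function hasDiscernibleText(button) {
  const ariaLabel = (button.ariaLabel || '').trim()
  const text = (button.textContent || '').trim()
  const title = (button.title || '').trim()
  return Boolean(ariaLabel || text || title)
}

// An icon-only button with no text fails...
console.log(hasDiscernibleText({ textContent: '' })) // -> false
// ...while adding an aria-label makes it pass.
console.log(hasDiscernibleText({ ariaLabel: 'Like transaction' })) // -> true
```

In practice the fix is a one-line markup change – visible text, an `aria-label`, or a reference to existing text – chosen to match how the button is used.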
For the second, it’s a little more complicated. These are the handles of a slider that sets some dynamic values. It’s likely that we could get the automation to pass simply by adding labels to these inputs, but it’s worth pausing to ask if the overall pattern is correct. Given that the developer didn’t include labels for the inputs, it’s possible this component itself needs more attention from an accessibility perspective, because it seems intended to communicate two movable points at either side of a range.
Treating accessibility issues detected by automation as “hotspots” for generally checking on the implementation of a specific component is a great way to get the most out of those automated checks and strengthen your overall accessibility.
Preventing regressions
Cypress Accessibility provides a Results API, which is how you can fail your pipeline based on specific accessibility results. Eventually, you will probably want to do this when any accessibility issue appears, but at first, you may use this as a way to enforce the currently-targeted subset of standards you require a project to meet.
One common pattern is to simply identify the list of rules that are currently known to be failing in your project, and fail your build if any new rules start to fail. The key parts of the implementation would be as follows:
```javascript
const { getAccessibilityResults } = require('@cypress/extract-cloud-results')

// Rules already known to fail in this project. Failures of these rules
// are tolerated for now; anything else fails the build.
const rulesWithExistingViolations = [
  'button-name',
  // ...
]

getAccessibilityResults().then((results) => {
  // Any failing rule that isn't on the known list is a regression.
  const newRuleViolations = results.rules.filter((rule) => {
    return !rulesWithExistingViolations.includes(rule.name)
  })

  if (newRuleViolations.length > 0) {
    throw new Error(
      `${newRuleViolations.length} rule regressions were introduced and must be fixed.`
    )
  }
})
```
By doing this, you can create a division between the “in-progress” and “completed” aspects of accessibility in the project that can be adapted over time to reflect your current goals, and eventually removed as you make the switch from “improving” to “maintaining”.
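If you want to go a step further, the same idea can be tightened into a “ratchet” that also fails the build when a known rule’s violation count grows. This is a sketch of the pattern, not part of the Results API – the `total` field, `findRegressions` helper, and baseline numbers are all assumptions for illustration:

```javascript
// Hypothetical per-rule baseline: how many violations we currently tolerate.
const baseline = { 'button-name': 2, 'label': 3 }

// Flag any rule that exceeds its baseline; rules absent from the
// baseline default to zero, so newly failing rules are always flagged.
function findRegressions(rules, baseline) {
  return rules.filter((rule) => (rule.total ?? 0) > (baseline[rule.name] ?? 0))
}

const results = [
  { name: 'button-name', total: 2 }, // at baseline: allowed
  { name: 'label', total: 5 },       // grew past baseline: regression
  { name: 'image-alt', total: 1 },   // newly failing rule: regression
]

console.log(findRegressions(results, baseline).map((r) => r.name))
// -> [ 'label', 'image-alt' ]
```

As rules get fixed, you lower the baseline numbers toward zero, so the guardrail only ever tightens.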
Here is the effect a script like the one above might have when run in a GitHub Actions pipeline:
This is a simplified version of the output for a real Cypress project where a pull request introduced a new failure of “Buttons must have discernible text” that needed to be fixed before the PR was merged. Importantly, all the functional tests are passing here, meaning that Cypress “saw” everything that was supposed to happen during the test run, and the results are based on that full surface area.
Looking to the future
Accessibility is not a single project of burning down a backlog to reach a final “completely accessible” state. It’s an ongoing process that requires cooperation and understanding across multiple different teams and departments at an organization to get right. Legacy applications can be some of the hardest to deal with because the workload can be overwhelming, and the decisions around design and implementation may have been made years ago, by people who no longer work at your company, and in ways that make it much harder to improve accessibility than it is in greenfield projects that are designed to be accessible from the start.
Automation can help you identify issues and create some guardrails for developers to make incremental improvements to a legacy application without becoming overwhelmed or getting pulled in too many directions at once. If you have legacy applications that you know are going to stay around, there is little benefit to users in waiting for the next big rewrite that may never come. Instead, get started with small improvements and build some momentum in fixing issues, along with empathy and understanding of the needs of your users. This applies regardless of the accessibility testing approach you choose, but if you are interested in how Cypress Accessibility can help you in the journey, reach out to set up a trial.