Challenging Jakob Nielsen’s claim on accessibility

Renowned user experience guru Jakob Nielsen published a post offering generative AI as an alternative to accessibility measures, which he claims have failed to make computers usable for disabled users. He has been criticized for the way he presented his thoughts, which can easily be read as dismissing accessibility altogether.

Nielsen claims that he sees all computer users as equal, making no distinctions based on their ability to use a computer, and that user experience and interface designers should do the same:

Where I have always differed from the accessibility movement is that I consider users with disabilities to be simply users. This means that usability and task performance are the goals. It’s not a goal to adhere to particular design standards promulgated by a special interest group that has failed to achieve its mission.

He then offers two reasons why he thinks accessibility has failed. The first is that "accessibility is too expensive for most companies," so instead of making an effort to meet the needs of disabled users, companies either forgo accessibility altogether or work through a checklist of items without verifying the results with actual disabled people. That last point actually contravenes the evaluation guidance of the W3C's Web Accessibility Initiative (WAI): involving actual disabled users is one of the final steps towards compliance.

The second reason is that "accessibility is doomed to create a substandard user experience." He goes on to dismiss the present approach to auditory interfaces, arguing that they poorly translate a two-dimensional visual user interface designed for sighted people.

At the end of his argument he offers generative AI as the core of an interface generator that would present a visual interface to sighted users and, to blind users, an auditory interface not based on the visual version.

He may well be correct that AI will one day play a substantial role in presenting computing interfaces tailored to each user's condition, but that day has yet to arrive, and it may take some time.

The current accessibility solutions for disabled users based on W3C standards do interpret the visual interface rather than being designed from the ground up for non-sighted users, which makes them less than ideal. But when such a renowned leader in experience design dismisses these efforts entirely, companies may take his advice at face value and use it as an excuse to avoid investing in accessibility and assistive technologies at all. That's harmful.

Per Axbom, the Swedish designer and thought leader on human-centered design, has a much more comprehensive breakdown of his objections to Nielsen's proposal, worth reading in its entirety. The crux of his argument is that Nielsen is advocating for radically customized individual interfaces, not just general interface approaches for groups of people with different abilities. Axbom calls distinct experiences for individuals "an extreme take with very little foundation in feasibility or desirability".

Update 7 March:

I just came across several more reactions and responses to Nielsen’s ignorant claims about digital accessibility and they are livid.

Designer and accessibility advocate Eric W. Bailey called Nielsen "an asshole" in a very short post, but he also included links to a handful of other people's thoughts on the matter that are worth reading.

A much more diplomatic response came from Léonie Watson, a board member at the W3C, urging Nielsen to rethink his views on accessibility, drawing on her own disability and how she has managed to experience, and contribute to the development of, the web as a blind person.

In fact, just a couple of weeks prior to Nielsen's post, she wrote about how the leading AI tools would present misinformation or incomplete information, skip rules and guidelines, and even fail when tasked with delivering "all the code that I need to create a set of accessible tabs for a website".
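For context on what Watson was asking for: the WAI-ARIA Authoring Practices describe a tabs widget as a `tablist` of `tab` elements wired to `tabpanel` elements via `aria-controls` and `aria-labelledby`, with a roving `tabindex` and `aria-selected` marking the active tab. Here is a minimal sketch of that markup as a TypeScript string builder (the function name and the `tab-N`/`panel-N` ids are my own illustration, and the full pattern also requires keyboard handling, such as arrow keys moving focus between tabs, which is omitted here):

```typescript
interface Tab {
  label: string;   // visible tab button text
  content: string; // panel body
}

// Build the static markup for a WAI-ARIA tabs widget, with the first
// tab selected by default. Inactive tabs get tabindex="-1" (roving
// tabindex) and inactive panels get the `hidden` attribute.
function renderAccessibleTabs(tabs: Tab[]): string {
  const buttons = tabs
    .map((tab, i) => {
      const selected = i === 0;
      return (
        `<button role="tab" id="tab-${i}" aria-controls="panel-${i}" ` +
        `aria-selected="${selected}" tabindex="${selected ? 0 : -1}">` +
        `${tab.label}</button>`
      );
    })
    .join("");

  const panels = tabs
    .map((tab, i) => {
      const hidden = i === 0 ? "" : " hidden";
      return (
        `<div role="tabpanel" id="panel-${i}" aria-labelledby="tab-${i}"` +
        `${hidden}>${tab.content}</div>`
      );
    })
    .join("");

  return `<div role="tablist">${buttons}</div>${panels}`;
}
```

Even this static skeleton shows why a checklist approach falls short: the roles and attributes are only half the pattern, and a widget that renders them but skips the keyboard behavior still fails real users.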

So, as with any information delivered by AI, we still need to verify its validity: whether it's factual, whether it works, whether it applies. Maybe one day that won't be as urgent, but for now, especially in delivering a universal digital experience, AI still needs human supervision and oversight to minimize mistakes. Which is ironic, because we rely on digital tools to minimize our own.