By Jakob Nielsen

Accessibility Has Failed: Try Generative UI = Individualized UX

Summary: Traditional methods for accessibility have been tried for 30 years without substantially improving computer usability for disabled users. It’s time for a change, and AI will soon come to the rescue with the ability to generate a different user interface for every user, optimized for that person’s unique needs.

 

Accessibility has failed as a way to make computers usable for disabled users. My metrics for usable design are the same whether the user is disabled or not: whether it’s easy to learn the system, whether productivity is high when performing tasks, and whether the design is pleasant — even enjoyable — to use.


Assessed this way, the accessibility movement has been a miserable failure. Computers are still difficult, slow, and unpleasant for disabled users, despite about 30 years of trying. (I started promoting accessibility in 1996 when I worked at Sun Microsystems, but by no means claim to have been the first accessibility advocate.)


Where I have always differed from the accessibility movement is that I consider users with disabilities to be simply users. This means that usability and task performance are the goals. It’s not a goal to adhere to particular design standards promulgated by a special interest group that has failed to achieve its mission.


There are two reasons accessibility has failed:

  • Accessibility is too expensive: with the current, clumsy implementation, most companies cannot afford everything that’s needed. There are too many different types of disabilities for most companies to conduct usability testing with representative customers for every kind of disability. Most companies either ignore accessibility altogether because they know they won’t be able to create a UX that’s good enough to attract sufficient business from disabled customers, or they spend the minimum necessary to pass simplistic checklists but never run usability studies with disabled users to confirm or reject the usability of the resulting design.

  • Accessibility is doomed to create a substandard user experience, no matter how much a company invests, particularly for blind users who are given a linear (one-dimensional) auditory user interface to represent the two-dimensional graphical user interface (GUI) designed for most users.

Old Users and Low-Literacy Users Are the Exception

Before turning to my recommendation for helping disabled users in general, let me mention that two huge groups of users can indeed be helped with current approaches: old users and low-literacy users.


By “old” users, I mainly mean people older than 75 years who start to exhibit major aging symptoms, such as weakened memory. These users need simplified navigation, simplified comparison features that do not require retaining information in short-term memory, and simplified explanations. Such simplification is difficult, and the easiest UX requires the most design effort and the most rounds of iterative user testing, as subsequent design versions are refined. Make It Easy has always been one of my main UX slogans, but achieving this goal is hard work.


“Make It Easy” poster: originally made with Ideogram, then upscaled with Leonardo.


As an aside, the above image is a good example of the difference between the usability approach and the accessibility approach to supporting disabled users. Many accessibility advocates would insist on an ALT text for the image, saying something like: “A stylized graphic with a bear in the center wearing a ranger hat. Above the bear, in large, rugged lettering, is the phrase ‘MAKE IT EASY.’ The background depicts a forest with several pine trees and a textured, vintage-looking sky. The artwork has a retro feel, reminiscent of mid-century national park posters, and uses a limited color palette consisting of shades of green, brown, orange, and white.” (This is the text I got from ChatGPT when I asked it to write an ALT text for this image.)


On the other hand, I don’t want to slow down a blind user with a screen reader blabbering through that word salad. Yes, I could — and should — edit ChatGPT’s ALT text to be shorter, but even after editing, a description of the appearance of an illustration won’t be useful for task performance. I prefer to stick with the caption that says I made a poster with the UX slogan “Make It Easy.”


Returning to the old users, the United States currently has about 23 million people aged 75 or above. The combined net worth of these users is about $23 trillion. Investing in improved usability is worth it to capture your share of this money. (This is even more true when you consider that making a website easier to use for the elderly will also make it easier for younger users. While they may not need this as much, they’ll still appreciate the increased usability.)


We should also remember that the definition of “old users” starts around 45 for some usability guidelines related to declining eyesight. Again, just looking at the United States, 140 million users will benefit if we shun tiny type.


The second immense group of disabled customers consists of low-literacy users. These are people who can read, but just not very well. I admit that I don’t target this group in my writings (for example, this article has a 13th-grade reading level, meaning that readers must be at least in the first year of college to understand it). But if you serve a broad consumer audience, you must support low-literacy users. My usual guideline is to write at an 8th-grade reading level for this audience.


Estimates from international reading research show that about 40% of the adult population in the United States can be classified as having low literacy. (This is about 100 million customers. Think dollar signs when you read these statistics.) Countries like Japan, Singapore, South Korea, and China have higher literacy levels, but still have on the order of 30% low-literacy adults. (Japan is the only country measured with a low-literacy level of around 25%.) When considering countries like Chile, Mexico, and Turkey, low-literacy levels surpass 80% of the adult population. (Since the research focused on rich and middle-income countries, we don’t have measures from impoverished developing countries with terrible school systems, but their numbers are likely even worse.)


It’s not easy to write copy at an 8th-grade reading level, and generative AI will often miss the target if you instruct it to create text at a specified reading level. Thus, you should always check your copy with a readability tool before publication. That said, it’s a clear guideline to follow, and AI can definitely help with simplifying the exposition of complex topics.
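To make the check concrete, here is a minimal Python sketch of the Flesch-Kincaid grade formula with a crude syllable heuristic (the sample copy and the heuristic are my own illustration, not taken from any particular readability tool; a dedicated tool will give more reliable estimates):

```python
import re

def estimate_syllables(word: str) -> int:
    """Crude syllable count: runs of vowels, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(estimate_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

copy = "Pick a plan. Pay online. Get help any time you need it."
print(f"Estimated grade level: {flesch_kincaid_grade(copy):.1f}")
# Aim for roughly 8 or below when writing for a broad consumer audience.
```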


Thus, supporting low-literacy readers is very feasible for companies that serve a broad consumer audience.


Supporting Disabled Users Requires Generative UI

In theory, you could handcraft an optimized user experience for each major category of disabled users. You could conduct user testing with representative members of each of these groups, and you could iterate on your design until it meets your targeted usability criteria.

This will never happen, so let’s forget about an unreachable ideal.


We need an approach that scales, and that can support users with a wide range of conditions. Luckily, this is now emerging in the form of generative UI.


“Generative UI” is simply the application of artificial intelligence to automatically generate user interface designs, leveraging algorithms that can produce a variety of designs based on specified parameters or data inputs. Currently, this is usually done during the early stages of the UX design process, and a human designer further refines the AI-generated draft UI before it is baked into a traditional application. In this approach, all users see the same UI, and the UI is the same each time the app is accessed. The user experience may be individualized to a small extent, but the current workflow assumes that the UI is basically frozen at the time the human designer signs off on it. I suggest the term “first-generation generative UI” for frozen designs where the AI only modifies the UI before shipping the product.


I foresee a much more radical approach to generative UI emerging shortly — maybe in 5 years or so. In this second-generation generative UI, the user interface is generated afresh every time the user accesses the app. Most important, this means that different users will get drastically different designs. This is how we genuinely help disabled users. But freshly generated UIs also mean that the experience will adapt to the user as he or she learns more about the system. For example, a simplified experience can be shown to beginners, and advanced features surfaced for expert users.


Moving to second-generation generative UI will revolutionize the work of UX professionals. We will no longer be designing the exact user interface that our users will see, since the UI will be different for each user and generated at runtime. Instead, UX designers will specify the rules and heuristics the AI uses to generate the UI.
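As a purely hypothetical sketch of what that shift could look like (all names and structures below are invented for illustration, not an existing framework), the designer’s deliverable becomes a set of constraints rather than a finished screen, and those constraints are adapted to each user before anything is generated:

```python
from dataclasses import dataclass

@dataclass
class GenerationRules:
    """Designer-authored constraints the generator must respect (hypothetical)."""
    max_nav_items: int = 7             # cap navigation breadth
    reading_grade_ceiling: int = 8     # simplify copy to this grade or below
    require_text_alternatives: bool = True
    surface_advanced_features: bool = False

@dataclass
class UserProfile:
    vision: str = "sighted"            # e.g. "sighted", "low-vision", "blind"
    reading_grade: int = 8
    expertise: str = "beginner"        # "beginner" or "expert"

def rules_for(user: UserProfile, base: GenerationRules) -> GenerationRules:
    """Adapt the designer's baseline rules to one user before the UI is generated."""
    rules = GenerationRules(**vars(base))
    rules.reading_grade_ceiling = min(base.reading_grade_ceiling, user.reading_grade)
    rules.surface_advanced_features = user.expertise == "expert"
    if user.vision == "low-vision":
        rules.max_nav_items = min(base.max_nav_items, 5)   # fewer, larger targets
    return rules

print(rules_for(UserProfile(vision="low-vision", reading_grade=6), GenerationRules()))
```

The concrete screen is then the generator’s output at runtime; the rule set is what the UX team designs, tests, and maintains.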


Don’t panic.


While exponentially magnified, the loss of exact designer control inherent in generative UI is very similar to the change introduced by responsive web design. Before responsive design, many web designers aimed at pixel-perfect control over their creations. But with responsive design, this became impossible, because design elements would move around the screen (and sometimes appear or disappear), depending on each user’s viewport size.


Don’t panic, even though second-generation generative UI will mean major changes to your design approach, including losing much fine-grained control. (Midjourney)


The following infographic compares the traditional accessibility approach with the new generative UI approach. Traditionally, the computer made a single graphical user interface to represent the underlying features and data. A sighted user would simply use this GUI directly. A blind user would first employ a screen reader to linearize the GUI and transform it into words. This stream of words would then be spoken aloud for the user to listen to. This indirection clearly produces a terrible user experience: with 2D, the sighted user can visually scan the entire screen and pick out elements of interest. In contrast, the blind user is forced to listen through everything unless he or she employs a feature to skip over (and thus completely miss) some parts.
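A toy sketch (purely illustrative, not how any actual screen reader is implemented) makes the problem visible: a depth-first walk turns a scannable 2-D layout into one long stream that must be heard in order:

```python
# A checkout page as a sighted user perceives it: a 2-D structure, scannable at a glance.
gui = {
    "role": "page", "label": "Checkout",
    "children": [
        {"role": "navigation", "label": "Main menu",
         "children": [{"role": "link", "label": t} for t in ("Home", "Shop", "Support", "Account")]},
        {"role": "form", "label": "Payment details",
         "children": [{"role": "textbox", "label": "Card number"},
                      {"role": "button", "label": "Pay now"}]},
    ],
}

def linearize(node: dict) -> list[str]:
    """Depth-first walk: the whole screen becomes one sequence of utterances."""
    spoken = [f'{node["role"]}: {node["label"]}']
    for child in node.get("children", []):
        spoken.extend(linearize(child))
    return spoken

for utterance in linearize(gui):
    print(utterance)   # the blind user must listen through (or skip past) every line
```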


With generative UI, an AI accesses the underlying data and features and transforms them into a user interface that’s optimized for the individual user. This will likely be a GUI for a sighted user and an auditory user interface for a blind user. Sighted users may get UIs that look similar to what they had before, though the generative UI will be optimized for each user with respect to reading level and other needs. For the blind user, the generative UI bypasses the representation of data and features in a 2-D layout that will never be optimal when presented linearly.


Besides creating optimized 1-D representations for blind users, generative UI can also optimize the user experience in other ways. Since it is slower to listen than to visually scan text, the version for blind users can be generated to be more concise. Furthermore, text can be adjusted to each user’s reading level, ensuring easy comprehension for everybody.
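As a toy illustration of this idea (the data, names, and output formats are invented for the example), the generator starts from the underlying data and features and emits a different representation per user: a scannable layout spec for a sighted user, and a short, concise auditory script for a blind user:

```python
# Hypothetical sketch: one set of underlying data and features, two generated UIs.
features = {
    "task": "Track order",
    "data": {"order": "#18274", "status": "Shipped", "arrival": "Thursday",
             "carrier": "UPS", "items": 3, "history": ["Ordered", "Packed", "Shipped"]},
}

def generate_ui(features: dict, user: dict):
    d = features["data"]
    if user["vision"] == "blind":
        # Concise 1-D auditory script: listening is slow, so say only what matters first.
        return (f'Your order {d["order"]} has shipped and should arrive {d["arrival"]}. '
                'Say "details" for the full history.')
    # 2-D layout spec for a sighted user: richer, scannable at a glance.
    return {
        "layout": "two-column",
        "panels": [
            {"title": "Status", "fields": ["status", "arrival", "carrier"]},
            {"title": "History", "fields": ["history"]},
        ],
    }

print(generate_ui(features, {"vision": "blind"}))
print(generate_ui(features, {"vision": "sighted"}))
```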


Feel free to copy or reuse this infographic, provided you give this URL as the source.

 
