Jakob Nielsen

AI as UX Integration Glue

Summary: Users today face a fragmented digital experience, constantly navigating between incompatible apps and websites. AI could serve as the much-needed glue to bind these disparate elements together, enhancing usability and creating a more cohesive user experience. Nonetheless, it is crucial to also focus on strengthening conventional integration techniques to achieve a balanced future for UX.

 

The grand vision of seamlessly integrated user interfaces, conceived in the 1980s, has been mercilessly shattered over the past two decades. We now find ourselves navigating a digital realm characterized by incompatible and isolated applications and websites.

Never the twain shall meet, never mind the hundreds of different services users employ to meet their various needs. Separation, isolation, silos (a popular Silicon Valley term) — whatever you call it, the resulting user experience is substandard.


AI may come to the rescue and provide much-needed glue between our many separated experiences. More on that later. First, let’s review the hope for better integration many UX pioneers felt in the 1980s.


You would not want company staff to work from isolation booths without the ability to communicate. Yet this is how modern software operates. (Midjourney)


The 1980s Hope for Integrated Software

The first wave of integrated software began in the early 1980s, with products such as the Xerox STAR (1981), Lotus 1-2-3 (1983), Apple Lisa (1983), Windows (1985), and Microsoft Office (1989).


In a field study I conducted in 1985 (Nielsen et al., 1986), we examined how business professionals utilized these tools and defined integration as any measure that minimizes or eliminates boundaries between separate applications. The research identified six dimensions of integration:


  1. Application integration. Easy access to multiple applications, using results of one program in another (e.g. using numbers from a spreadsheet to build a business graph).

  2. Media integration. Composite documents consisting of multiple media (e.g. both text and graphics).

  3. Interface integration. Uniform interaction with the different systems.

  4. Systems integration. Use of multiple computer systems working together.

  5. Documentation integration. Manuals, help information, tutorials, etc. are online and accessible together with the application.

  6. Outside integration. Interfacing with the world outside the computer.


The last form of integration, with the non-computer world, is doing well, covering everything from the Internet of Things and QR codes to e-commerce, where a click results in a package at your door the next day.


Documentation integration has succeeded almost too well, with nobody shipping printed manuals these days. Our main problem is that certain major applications, such as ChatGPT, have no documentation at all, leaving users to learn prompt engineering from hearsay on social media.


Integration dimension 2, media integration, is also usually well supported. Particularly with Generative AI, anybody can produce content that combines text, images, video and animation, songs, and many other media formats. Authorship is blossoming in Renaissance style thanks to AI.


The Descent into Isolation

It’s the remaining three dimensions of integration that are causing trouble: application, interface, and systems integration have all been abandoned during the last 20 years.


Separate applications don’t talk. They have widely varying user interfaces, which reduces learnability. And it is still a royal pain to transfer information between separate platforms. If you use nothing but Apple or Google products, you can pay them to ease the pain, but try to transfer photos from an iPhone to Windows. An integrated user experience requires a level of ease that was starting to emerge in the 1980s, but which has since been slapped down hard.


The second wave of integration started with the World Wide Web in 1993 and was progressing fairly well in the late 1990s. The entire idea of the Web is to interlink information across websites. Concepts like navigating the Web assume a degree of integration between sites. However, since circa 2000, websites have increasingly become walled gardens that don’t play well with others.


The World Wide Web provided an integrated user experience in its early years, with copious links between websites and many generic commands in the browser that applied equally to any page visited by the user. (Midjourney)


The advent of smartphones exacerbated the isolation of each application. App stores proliferated, with apps that were diminutive, and not just in name. Apps were highly targeted, which meant that users were swamped with apps. Estimates of the number of apps on an average user’s smartphone range from 35 to 80. At the high end, nobody can remember their apps, let alone quickly locate a rarely used app. I have 154 apps on my phone. Poor me.


App overload is a usability problem in its own right. But the fact that each app lives in its own fortified tower in splendid isolation from the other apps and from the applications on your personal computer disempowers users from controlling their own digital destiny.


Each mobile app is a fortress of its own. The user experience of using a mobile device is an ordeal of swiping between pages of endless app icons, trying to locate the correct one for your current problem. Each task has its own app, and many apps only do one thing and don’t integrate with other apps. (Ideogram)


We are left with cut-copy-paste as the only surviving integration mechanism, and even this one remaining tool in the user’s armament is awkward to use between apps on smartphones.


Cut-and-paste is currently our best approach to fixing the fragmented user experience. Will AI provide stronger glue? (Midjourney)


Can AI Glue Together our Fragmented User Experience?

In a recent article, Antonello Crimi and colleagues from the design agency frog share my dismay at the scattered user experience landscape we face today. Yet, they offer hope that AI could serve as the integrative force among our many disparate applications.


Even though the froggers don’t use this comparison, the hope that AI can serve as UX integration glue is similar to the argument for designing humanoid robots for use in manufacturing: since everything in any factory is already designed to make sense for a human-sized and human-shaped operator, we can automate anything by introducing human-like robots.


Two approaches to mechanizing manufacturing: a robot arm that ends in the tool needed for the job at hand vs. a humanoid robot that can grasp existing tools in its human-like hands and walk around the existing factory layout. (Ideogram)


Similarly, Crimi et al. suggest that the natural-language capabilities of AI agents mean that they can transfer and transform information between applications. After all, even the most siloed app must display its data in a format that can be understood by humans — and thus also by an AI. (This may require GPT-5 or GPT-6 level AI, but we’re talking about the future of computing, not necessarily what works today.) Likewise, all apps must accept input and commands from humans, and that work could also be offloaded onto an AI.

Even if App-1 and App-2 are totally incompatible, both must be compatible with humans and, thus, also with future AI.


As a simple example, assume that you are really interested in a certain fitness class. Your AI agent will know this, just as it knows more or less everything about you. If the fitness center only has an opening for one particular time slot for that class, your AI agent will go ahead and book that slot, add it to your calendar, and reschedule a less important appointment that you previously had in that time slot.
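For readers who want to see the mechanics, here is a minimal sketch of that scenario in Python. Everything in it is hypothetical: the FitnessApp and CalendarApp classes are invented stand-ins for two services that know nothing about each other, and in a real deployment the agent would read each app's human-readable screens and operate its human-facing controls (likely through language or vision models) rather than calling tidy methods like these.

from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timedelta


# Hypothetical stand-ins for two incompatible services: a fitness-booking app
# and a calendar app. Neither exposes an API to the other; the agent is the
# only component that talks to both.
@dataclass
class ClassSlot:
    name: str
    start: datetime


@dataclass
class Appointment:
    title: str
    start: datetime
    priority: int  # higher = more important to the user


class FitnessApp:
    """Illustrative booking front end (not a real service)."""

    def available_slots(self, class_name: str) -> list[ClassSlot]:
        # Pretend the class has exactly one remaining opening.
        return [ClassSlot(class_name, datetime(2025, 3, 3, 18, 0))]

    def book(self, slot: ClassSlot) -> None:
        print(f"Booked {slot.name} at {slot.start:%A %H:%M}")


class CalendarApp:
    """Illustrative calendar (not a real service)."""

    def __init__(self) -> None:
        self.events: list[Appointment] = [
            Appointment("Coffee with Alex", datetime(2025, 3, 3, 18, 0), priority=1)
        ]

    def conflict_at(self, when: datetime) -> Appointment | None:
        return next((e for e in self.events if e.start == when), None)

    def add(self, title: str, when: datetime) -> None:
        self.events.append(Appointment(title, when, priority=5))

    def reschedule(self, event: Appointment, delta: timedelta) -> None:
        event.start += delta
        print(f"Moved '{event.title}' to {event.start:%A %H:%M}")


def agent_book_class(class_name: str, fitness: FitnessApp, cal: CalendarApp) -> None:
    """The agent glues the two apps together on the user's behalf."""
    slot = fitness.available_slots(class_name)[0]   # only one opening exists
    clash = cal.conflict_at(slot.start)
    if clash and clash.priority < 5:                # the class outranks the old plan
        cal.reschedule(clash, timedelta(days=1))
    fitness.book(slot)
    cal.add(class_name, slot.start)


agent_book_class("Yoga Flow", FitnessApp(), CalendarApp())

The point of the sketch is the orchestration in agent_book_class: neither app was designed to cooperate with the other, yet the agent can combine them because each one already had to be operable by a human.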


With one croak, the design agency frog has established itself as one of the very few thought leaders publishing insightful analyses of how AI will reshape UX. (My frog made with Leonardo, not by frog itself, which has a logo with a more severe monochrome rendering of the eponymous animal. What do you expect from a company that spells its name in all lowercase?)


The frog authors mention an interesting aspect of our likely AI-agent future that I haven’t seen discussed elsewhere: the need to maintain personal and brand expression.


For example, I have a certain writing style and perspective on the world of UX, and during the last year, I have also adopted a playful use of generative artwork. These three elements should be retained, even if you access my work through an AI agent with its ability to homogenize everything for integration purposes. Of course, you should be able to ask the AI to “compare Jakob Nielsen’s approach to UX integration with frog’s approach,” and it should integrate the two in a compare-and-contrast presentation. But my material should still retain my style, and frog’s material should retain theirs.


AI and Non-AI Integration Both Needed

I’m a strong believer in integrated software and integrated user experience. I’ve worked on these ideas for 39 years, and they can be important drivers of usability. The loss of integration in mobile apps and walled-garden Internet services is deplorable.


The current lack of integration and the potential for tighter integration create a gap wide enough that we need all hands on deck to bridge it. AI has strong potential as an integrative force, with agents spanning separate apps and data environments. But the user experience can’t consist solely of natural-language interactions with AI agents. We need to retain a strong GUI component in the future of UX, and users need to retain their own sense of agency. The balance between the two forms of interaction remains to be determined. But substantially better UX integration will be needed for the non-agent parts of the user experience of the future. Traditional approaches to integration must be strengthened and grow beyond the copy-paste vestige left over from the ambitions of the 1980s.


References

Antonello Crimi, Jess Leitch, and Jason Severs (2024): “Next-Gen UX: From Apps to Natural Language.” frog.


Jakob Nielsen, Robert L. Mack, Keith H. Bergendorff, and Nancy L. Grischkowsky (1986): “Integrated software usage in the professional work environment: evidence from questionnaires and interviews.” CHI ’86: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1986), pp. 162–167. DOI: 10.1145/22627.22366.

 
