
No More User Interface?

  • Writer: Jakob Nielsen
  • May 8
  • 7 min read
Summary: AI products have evolved from invisibly enhancing classical user interfaces to soon becoming the main avenue for users to engage with digital features and content. This may mean the end of UI design in the traditional sense, refocusing designers’ work on orchestrating the experience at a deeper level.

The 3 main topics of this article. (ChatGPT)


Luke Wroblewski wrote an insightful overview of the evolution of AI products. He defined the following 6 stages:

| Stage | What changed | Real-world example | Core UX challenge |
| --- | --- | --- | --- |
| 1. ML behind the scenes (2016–2022) | Models quietly power key features without exposing themselves. | Google Translate, YouTube recommendations | Match AI output to existing UI patterns so it feels “native.” |
| 2. Chat interfaces (late 2022) | ChatGPT puts the model in the foreground; conversation becomes the product. | ChatGPT & image/video chat clones | Conversation design, prompt scaffolding, error recovery. |
| 3. Retrieval-augmented products (2023) | Adding external context (RAG) boosts quality and trust. | “Ask LukeW” content search, ChatGPT w/ web | Show provenance, citations, and source controls. |
| 4. Tool use & foreground agents (2024–2025) | Models chain tools, plan steps, and let users steer mid-flow. | Augment Agent, Bench workspace | Visualize plans, allow mid-task edits, surface tool boundaries. |
| 5. Background agents (2025→) | Multiple agentic workflows run in parallel and mostly unattended. | Bench scheduling, early multi-agent demos | Dashboards for monitoring, alerting, and confidence signals. |
| 6. Agent-to-agent ecosystems (emerging) | Agents negotiate with one another across products (e.g., Google’s A2A protocol). | Still speculative | Define hand-offs, permissions, and audit trails across silos. |
The 6 stages of AI products. (ChatGPT)


(Read Luke’s full article for the details behind my summary.)


AI Moves from Behind the Scenes to Become the Scene

Simplifying further, we’ve seen a progression: AI started out hidden behind the scenes of traditional user interfaces, has advanced to the current state of AI-first user interfaces (when we’re lucky), and is heading toward a future state in which AI agents are the only way users interact with digital features and content.


That trajectory raises an uncomfortable question: if user interfaces shrink to a few notifications — or disappear altogether — does traditional UI design die with them? Not entirely, but its center of gravity moves. Interaction design’s old mandate was to translate tasks into visual affordances; the new mandate is to shape systems of intent where value is delivered through orchestrated services, guardrails, and feedback loops.


The closest current analog is service design: mapping actors, backstage processes, and frontstage touchpoints so the experience feels coherent even when the customer never sees the plumbing. Yet autonomous AI stretches that canvas. We now design:


  • Policy surfaces (permissions, cost ceilings, ethical boundaries).

  • Confidence conveyors (provenance chips, uncertainty meters, graceful roll-backs).

  • System temperament (how patient or proactive an agent should be in a given context).


We’re trading wireframes for operating manuals to codify the rules that govern agent behavior, escalation paths, and trust signals. Some call this “agent choreography,” others “experience governance.” Whatever label sticks, it demands a toolbox that blends classic IA, conversation design, Ops thinking, and a dash of behavioral economics.
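To make this concrete, here is a minimal sketch of what one such “operating manual” might look like as a typed configuration. Everything here is an illustrative assumption (the AgentPolicy type, field names like costCeilingUSD); no real product exposes this exact API.

```typescript
// Hypothetical sketch: an agent "operating manual" as a typed config.
// All names are illustrative assumptions, not a real API.

type Temperament = "patient" | "balanced" | "proactive";

interface AgentPolicy {
  // Policy surfaces: what the agent may do, and at what cost.
  permissions: string[];        // e.g., ["read:calendar", "send:email-draft"]
  costCeilingUSD: number;       // hard spending limit per task
  forbiddenActions: string[];   // ethical and business boundaries

  // Confidence conveyors: how uncertainty reaches the user.
  requireProvenance: boolean;   // attach sources to every claim
  confidenceThreshold: number;  // below this (0-1), escalate to a human
  allowRollback: boolean;       // keep checkpoints for graceful roll-backs

  // System temperament: how patient or proactive the agent should be.
  temperament: Temperament;
  escalationContact: string;    // who gets alerted when guardrails trip
}

// Example: a cautious policy for an agent that drafts customer emails.
const emailAgentPolicy: AgentPolicy = {
  permissions: ["read:crm", "send:email-draft"],
  costCeilingUSD: 5,
  forbiddenActions: ["send:email-final", "delete:record"],
  requireProvenance: true,
  confidenceThreshold: 0.8,
  allowRollback: true,
  temperament: "patient",
  escalationContact: "support-team@example.com",
};
```

If something like this pans out, a designer’s deliverable might be exactly such a document: reviewed, versioned, and tested like code.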


We still need to keep users in ultimate control, even as AI does all the work and the traditional user interface disappears. (ChatGPT)


So no, UX doesn’t die; it metamorphoses. We’ll still craft humane experiences, but increasingly through policies, protocols, and orchestrations rather than panels and palettes. The sooner we embrace that pivot, the more influence we’ll retain over how AI acts on our users’ behalf.


(In the above discussion, I use the terms “UI” and “UX” according to the definitions in my video about “UI vs. UX” — YouTube, 6 min.)


We will change from doing UI design to orchestrating AI-produced user experiences. (ChatGPT)


Current Action Items

For now, while we still do UI design, here are some action items in light of Luke Wroblewski’s analysis of AI progress:

  • Model exposure: Decide whether your AI belongs in the UI (chat, agent) or under it (auto-complete, ranking).

  • The locus of control keeps moving. Early ML features needed micro-copy; chat demanded conversation design; agents now need orchestration UI (task queues, progress chips, “undo” checkpoints).

  • Mental models lag behind capability. Each jump (chat → agent) resets user expectations. Invest in onboarding that reframes what the system is and how to think about it.

  • Trust shifts from outputs to process. In RAG and agentic stages, provenance indicators, step-by-step visibility, and quick-fix affordances build confidence more than perfect answers.

  • Error handling escalates. Behind-the-scenes ML fails quietly; agents can waste money or spam colleagues. Provide guardrails (cost limits, sandbox modes) and clear escalation paths.

  • Context controls: Let users add, remove, or prioritize sources; surface citations inline.

  • Agent dashboards: Show tasks as cards with status, cost, and “jump-in” actions; enable pause, resume, and rollback. (See the sketch after this list.)

  • Progressive autonomy: Start with single-step suggestions → multi-step agents → scheduled/background runs—letting users opt in at each tier.

  • Cross-agent protocol design: Plan now for secure credential sharing, data-silo boundaries, and conflict resolution between agents.

  • Design ops must speed up. Luke notes AI for coding is racing ahead of other domains (something I’ve also pointed out). Pair up early with AI teams so UI patterns evolve in lockstep with model capability.
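As a companion to the action items above, here is a minimal sketch of the data an agent dashboard might track, with a simple cost guardrail and progressive-autonomy tiers. All type and field names (AgentTaskCard, enforceCostGuardrail, and so on) are illustrative assumptions, not any shipping product’s API.

```typescript
// Hypothetical sketch of the data behind an agent dashboard: task cards
// with status and cost, rollback checkpoints, inline citations, and a
// guardrail that pauses a task before it can waste money.

type TaskStatus =
  | "queued" | "running" | "paused" | "needs-input" | "done" | "rolled-back";

// Progressive autonomy: users opt in one tier at a time.
type AutonomyTier = "suggestion" | "multi-step-agent" | "background-run";

interface AgentTaskCard {
  id: string;
  title: string;
  status: TaskStatus;
  costSoFarUSD: number;
  costLimitUSD: number;   // guardrail: hard spending cap
  tier: AutonomyTier;
  checkpoints: string[];  // snapshot IDs enabling "undo"/rollback
  sources: string[];      // citations surfaced inline (RAG provenance)
}

// Guardrail check: pause the task and request a human "jump-in" action
// once spending reaches the cap.
function enforceCostGuardrail(task: AgentTaskCard): AgentTaskCard {
  if (task.status === "running" && task.costSoFarUSD >= task.costLimitUSD) {
    return { ...task, status: "needs-input" };
  }
  return task;
}

// Example: a background research task that has hit its cost ceiling.
const task: AgentTaskCard = {
  id: "task-42",
  title: "Summarize competitor pricing pages",
  status: "running",
  costSoFarUSD: 5.12,
  costLimitUSD: 5.0,
  tier: "background-run",
  checkpoints: ["ckpt-1", "ckpt-2"],
  sources: ["https://example.com/pricing"],
};

console.log(enforceCostGuardrail(task).status); // "needs-input"
```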


By mapping your product’s current (or aspirational) place on Luke’s timeline (as summarized in my table above), you can focus design effort on the next user-experience inflection point rather than chasing yesterday’s patterns.


There’s still a lot to be done when designing for the current generation of AI products, before we get to the stage where UI may disappear. Do you prefer the monochrome style or the colorful style? Let me know in the comments. (ChatGPT)


🚀 AI climbs from hidden to headline
🎯 Interfaces fade, intent remains
🧭 Designers pivot toward orchestration
🔐 Trust leans on transparent process
⚙️ Guardrails guide autonomous agents


🐛 Old UI cocoons into AI butterfly
🎛 From panels to policies we design
🌀 Orchestrating flows, not screens
📜 Operating manuals replace wireframes
🕹 Users stay pilots above automation


Quiz to Check Your Understanding of This Article

Here are 9 questions about this article. The answers are provided after the photo at the end of the quiz. Please do not scroll down to the answers until you have written down your answers to all the questions.

 

1. Which of the following UX objectives best captures the new mandate proposed in the article for designers in an AI-first future?

A. Shaping systems of intent through orchestrated services, guardrails, and feedback loops
B. Translating tasks into visual affordances inside static wireframes
C. Maximizing screen real estate with minimalist iconography
D. Reducing development cost by eliminating human-factors research


2. In the article’s six-stage progression, why are agent dashboards necessary once AI moves into background agents?

A. They reduce model hallucinations by shrinking context windows
B. They provide monitoring, alerting, and confidence signals for largely unattended workflows
C. They replace conversation design with skeuomorphic icons
D. They eliminate the need for provenance indicators


3. The article argues that trust in AI systems shifts from output quality to process transparency beginning at which transitional point?

A. During unseen ML-powered features
B. When chat interfaces first appear
C. As retrieval-augmented and agentic stages introduce provenance cues and step-by-step visibility
D. Only after full agent-to-agent ecosystems emerge


4. Which pairing of old mandate vs. new mandate for interaction design is stated in the article?

A. Old – translate tasks into visual affordances; New – orchestrate services and guardrails
B. Old – create conversational agents; New – design physical hardware
C. Old – define brand tone; New – optimize server latency
D. Old – maintain color contrast; New – manage typography scale


5. When mental models “lag behind capability,” the article advises onboarding should do what?

A. Display detailed latency graphs to users
B. Reframe what the system is and how to think about it after each capability jump
C. Hide advanced features until users progress through levels
D. Replace chat interfaces with static FAQs


6. A product currently offers chat but plans to chain multiple tools under AI control. Using the article’s framework, which design focus should come next?

A. Visualizing multi-step plans and allowing mid-task edits
B. Implementing static breadcrumb navigation
C. Removing context-control affordances
D. Replacing citations with emojis


7. The article likens future AI UX work most closely to which existing discipline, and why?

A. Graphic design, because visual affordances remain central
B. Service design, because mapping backstage processes ensures coherence even when the UI is invisible
C. Copywriting, because microcopy guides conversation tone
D. Gamification, because reward loops incentivize usage


8. “Agent choreography” or “experience governance” demands a hybrid toolbox. Which combination best reflects the skills the article highlights?

A. Brand strategy, video editing, performance marketing
B. Hardware prototyping, embedded systems, supply-chain management
C. Information architecture, conversation design, operations thinking, behavioral economics
D. Motion graphics, AR modeling, social-media analytics


9. If agent-to-agent ecosystems mature into standard practice, which design task emphasized in the article becomes paramount?

A. Optimizing skeuomorphic widgets for small screens
B. Selecting typefaces that mimic handwriting
C. Animating micro-interactions for delight
D. Defining hand-offs, permissions, and audit trails across product silos


Take the test! Write down your own answers before you check the answer key after the following image. (ChatGPT)


In the future, people may think that clicking GUI widgets is an antique habit from the prehistory of computing. (ChatGPT)


Quiz Answers

Question 1: A. The article says designers must orchestrate intent, guardrails, and feedback instead of merely drawing screens.


Question 2: B. Background agents run without supervision; dashboards keep users informed and in control.


Question 3: C. RAG and early agent stages shift trust to visible sources, steps, and fix-it affordances rather than perfect single answers.


Question 4: A. It explicitly contrasts the old task-to-widget translation with the new systems-of-intent orchestration.


Question 5: B. Onboarding must reset user expectations every time capability jumps (chat → agent).


Question 6: A. Tool-chaining agents require plan visualization and mid-flow edit affordances.


Question 7: B. Like service design, future UX maps actors and backstage flows that users may never directly see.


Question 8: C. The new craft blends IA, conversation design, Ops, and behavioral economics—labeled “agent choreography.”


Question 9: D. Multi-agent ecosystems hinge on well-defined hand-offs, permissions, and auditable trails between silos.
