2025 Year in Review: Themes, Trends, Status, Top 10 Articles
- Jakob Nielsen
- 33 min read
Summary: AI disrupted UX practice across multiple vectors: research methodology, interface ontology, temporal dynamics, economic structures, and distribution channels. UX professionals must reconceptualize their discipline from artifact-centric design toward orchestrating intent-driven systems while cultivating individual agency as the primary professional competency.

(Nano Banana Pro)
I published about 330,000 words across 100 articles on UX Tigers in 2025 (including this article, but excluding articles for the last two weeks of the year). This is roughly equivalent to three books, though I have long since stopped publishing books.
I also published 112 videos on my YouTube channel, with a blend of explainers, songs, and demonstrations of new AI models. (Including one April Fool’s joke video.) See my separate article on developments in AI video in 2025 and my top 10 videos, as well as the special highlights reel I made to showcase improvements in AI songs throughout the year.
I publish all of this for free, so if somebody doesn’t like my content (whether text, videos, or cartoons), I’ll gladly refund them the zero cents they paid me.
Across the year, the most consistent throughline is that AI stops being a feature inside an interface and starts becoming the interface itself. In “Hello AI Agents,” the claim is blunt: as autonomous agents act on the user’s behalf, people won’t use your site so much as they’ll use their agent, and the design target shifts accordingly, toward agents, policies, and intent orchestration rather than page flows and pixel decisions.
That theme echoes again in the pieces that circle “No More User Interface?” and “Generative UI from Gemini 3 Pro,” where the question isn’t whether screens disappear tomorrow, but whether their strategic importance collapses once the user delegates interaction work to a single, general-purpose mediator. The practical consequence is that UX work becomes less about shaping steps and more about shaping outcomes: what the system is allowed to do, what it must not do, how it explains itself, and how it stays aligned with user goals when the user is no longer clicking through your carefully designed funnels.

Main takeaway from 2025: Goodbye UI design, hello AI-First design. (NotebookLM)
Designers, the old order is dead. Screens vanish. Research accelerates. Titles mean nothing, only agency. Your value lies not in pixels but in vision. Master the prompt. Embrace exploration. Design for time, not just space. Speak to humans and machines alike. Adapt relentlessly or become obsolete.
Top Themes of 2025
My writing in 2025 mostly falls within these 10 themes that are discussed further below:
Relentless Change: AI evolved so rapidly in 2025 that no theme persisted. UX shifted from designing screens to steering systems that change faster than design systems can update. Adaptability became essential.
The AI Career Pivot: Agency Over Titles: AI proficiency became basic literacy. Careers shifted from ladders to unpredictable paths. The critical skill is agency: initiating action amid uncertainty while pivoting from deliverables to strategic outcomes.
User Research at Machine Speed: AI democratizes research through autonomous investigation and synthesis. Qualitative and quantitative methods merge. Synthetic users may handle initial testing, reserving human time for nuanced validation.
The Death of the User Interface (Generative UI): The GUI paradigm is obsolete. Interaction shifts from commands to intent-based delegation. UX design moves from pixel decisions to defining constraints, guardrails, and intervention points.
Time Is the Experience: Think-Time and Slow AI: Time is UX’s most neglected dimension. Interfaces should respect cognitive latency. Long-running AI agents require progress visibility, checkpoints, and clear cancellation options.
Helping Users Say the Right Thing to AI: Prompting is a literacy challenge requiring scaffolding. UI should provide examples, labeled slots, and feedback. Prompt augmentation amplifies user intent without replacing authorship.
Creativity in the Age of AI: Creation becomes exploration rather than execution. The “AI Sandwich” positions humans for creative spark and curation while AI generates volume, extending experienced professionals' productive careers.
Visual Communication Goes Mainstream: AI democratizes visual communication beyond trained designers. Non-specialists produce visuals as quickly as they can describe them. The differentiator shifts to conceptual clarity: showing the right thing.
Transformative AI Economics: AI reshapes organizational structure and transaction costs. Human-in-the-loop isn't automatically beneficial. UX changes from interface concerns to questions of value creation and capture.
Reaching Audiences in the Post-Website Era: Discovery shifts from PageRank to DeepRank with AI as intermediary reader. Content must satisfy dual audiences: humans seeking insight and AI systems needing extractable claims.

My 10 main themes this year. (Nano Banana Pro)
Theme 1: Relentless Change
The most prominent theme of the year was that there was no stable theme that stayed constant all year. (Refer to the section later in this article about “Main Changes Through the Year” for some of the main areas where I changed my mind.) I used to say that “the only constant is change,” but even that is no longer true. Change isn’t constant; it’s accelerating.

Our reality is changing at an accelerated pace. Something new every week. Something revolutionary every month. (Seedream 4.5)
My writing oscillated between macro forces and micro frictions, and that oscillation is precisely the theme. One week, I wrote about how people actually use AI in day‑to‑day work, how AI is starting to “do” user research, and why prompt engineering debates keep resurfacing; the next, I was pulled into the material constraints that shape everything: compute, electricity, and the economics that decide which features survive contact with budgets.
The AI models change weekly, but the UX questions do not. What happens to trust when an assistant sounds empathic? What happens to learning when summaries cannibalize clickthrough? What happens to control when long-running systems don’t fit turn‑taking chat?
My 100 articles serve as a living diary of 2025, a watershed year where Artificial Intelligence transitioned from a promising novelty to the fundamental operating system of the digital world. Unlike traditional monthly or quarterly reports, my weekly newsletter reflects a new reality for UX professionals: the industry is now evolving in sprints, necessitating continuous, almost aggressive, learning to stay relevant.
What made my “change” coverage feel coherent is not agreement; it is insistence on the same evaluative stance. My posts treated AI as simultaneously tool, user, coworker, and channel; and I kept checking how those roles collide inside real interfaces. Sometimes the collision is optimistic (new creative workflows, new UI forms, better synthesis); sometimes it is grim (automation pressures, entry-level job erosion, accessibility risk, and the creeping disappearance of “UI” as a stable object of design). My 100 articles from 2025, taken together, read like a year-long argument that UX is no longer a set of screen rules. It is a discipline of steering systems whose behavior changes faster than your design system can be updated.

UX professionals must evolve fast to keep ahead of the tsunami of AI changes. (Seedream 4.5)
The basic skill for a UX professional is no longer design, but adaptability: the ability to digest, filter, and apply a firehose of new information week after week. AI will take care of the design just fine, but humans need the agency to tell it what to do. This leads me to the second theme:
Theme 2: The AI Career Pivot: Agency Over Titles
I stopped considering “a career” as a ladder: now it’s a moving walkway that has abruptly changed direction. My underlying claim is not subtle: AI changes the unit economics of knowledge work, and that inevitably changes what companies value. “AI-First Companies” framed the shift as organizational, not personal: if the firm builds around AI from the start, then the workflows, expectations, and decision cadence change with it. In that world, AI proficiency is not a bonus skill; it becomes basic literacy, like being able to write a coherent email was for all professionals in the previous 2–3 decades.
“AI Is Crossing the Chasm” added a diffusion lens: what mattered in the early-adopter phase (clever hacks, novelty, “prompt tips”) matters less once AI turns into infrastructure and enters the mainstream. The chasm metaphor is important here because it changes the advice. Early on, you can win by being interesting; later, you win by being dependable. That’s a career argument disguised as a product argument: as adoption broadens, the people who thrive are those who can operationalize messy tools into reliable work.
My most explicit “what should I do?” advice was “Use the AI Transition Period to Transition Your Career.” This article treated the present as an interim regime in which rules are fluid enough to allow big moves but stable enough that effort compounds. The suggestion is not a single tactical upskill. It is a posture: treat the transition as permission to re-allocate your time toward higher-leverage skills (framing problems, understanding users, and choosing what not to build) because AI will keep eating the mechanical middle.

Career pivot now! Or perish when we get superintelligence. (Seedream 4.5)
I gave this posture a name in “How to Develop Agency,” which explicitly elevates “agency” as the career skill that matters most when tools are volatile. Agency here is not motivational fluff. It’s the ability to initiate, to test ideas in the world, and to treat uncertainty as a design constraint rather than a reason to wait. Put differently: if AI is changing the map weekly, the most employable person is the one who can still navigate.
Finally, “Future is Lean, Mean, and Scary for UX Agencies” turns the career pivot into an industry pivot. Agencies face compression from both ends: clients expect faster output because AI makes production cheaper, and they question paying premium rates for work that looks “automatable.” The implied survival strategy is to move up the value chain: less “deliver screens,” more “steer outcomes,” and do it with a smaller, sharper team.
I urge UX professionals to pivot toward strategy and orchestration. The value of a designer shifts from making the thing to defining the right thing to make. “Learn UX Strategy” provides a roadmap for this evolution, encouraging designers to learn the language of business, ROI, and competitive advantage. And my “Agency Manga” article used visual storytelling to dramatize this high-stakes environment, showing the emotional toll of this transition. The overarching message is clear: to survive, UX professionals must stop being user advocates and become strategic partners who wield AI to drive business growth.
Theme 3: User Research at Machine Speed
User research, once a bottleneck of slow, manual processes, is being reinvented by AI. This theme explored how new tools are democratizing and accelerating the gathering of insights. “Deep Research” described a new class of AI agents capable of autonomous investigation: scouring the web, reading thousands of papers, and synthesizing findings into comprehensive reports in minutes. This allows researchers to start every project with a PhD-level literature review.

AI tools now make secondary research a no-brainer. Literally. (Seedream 4.5)
My “Qual Quant” article discussed the blurring lines between qualitative and quantitative methods. AI can now analyze thousands of open-ended survey responses (qualitative data) and extract statistically significant trends (quantitative data), giving researchers the “why” at the scale of the “what.” In “Does User-Driven Design Still Need User-Centered Design?” I went one step further, suggesting a future where users, empowered by generative tools, create their own solutions, changing the researcher’s role from finding problems to studying user innovations. Designers are no longer designing a single “best” flow. They’re designing the constraints, defaults, and guardrails that shape what users generate for themselves.
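To make the qual-to-quant mechanics concrete, here is a minimal sketch (my own illustration, not code from any of the articles): tag each free-text answer with a theme, then tally the themes so ordinary statistics apply. In production, the classification step would be an LLM call; a toy keyword matcher stands in for it here.

```typescript
// Sketch of the qual-to-quant pipeline: tag each free-text answer with a
// theme, then tally themes so standard statistics apply. In practice the
// classify() step would be an LLM call; a toy keyword matcher stands in here.

type Theme = "pricing" | "onboarding" | "performance" | "other";

function classify(answer: string): Theme {
  const a = answer.toLowerCase();
  if (/(price|cost|expensive)/.test(a)) return "pricing";
  if (/(sign.?up|tutorial|first use)/.test(a)) return "onboarding";
  if (/(slow|lag|crash)/.test(a)) return "performance";
  return "other";
}

function themeCounts(answers: string[]): Record<Theme, number> {
  const counts: Record<Theme, number> = { pricing: 0, onboarding: 0, performance: 0, other: 0 };
  for (const answer of answers) counts[classify(answer)] += 1;
  return counts;
}

// Three of these four "why" answers land in measurable buckets.
console.log(themeCounts([
  "Way too expensive for what it does",
  "The sign-up flow confused me",
  "It crashes on big files",
  "Love the tiger mascot",
]));
```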

Fuzzy qualitative user data is best for thinking about design. AI now enables qual at scale and can also turn fuzziness into numbers. (Nano Banana Pro)
“12 Steps for Usability Testing: Plan, Run, Analyze, Report” reinforces that, despite automation, human validation remains critical. This article was procedural on purpose. I presented user testing as a craft with a beginning, middle, and end (planning, recruiting, running, analyzing, and reporting), because “we tested” is meaningless unless it changes decisions. My accompanying comic strip post looks playful, but it serves a serious function: knowledge transfer. “Usability Testing Process Explained in Comic Strips” turns the method into a visual narrative so that non-researchers can internalize it without reading a manual. That matters because testing fails most often when it is isolated inside a specialist silo. When the method becomes legible to the whole team, it stops being “research’s thing” and becomes “how we decide.”
However, the methods of testing are evolving. The scaling question arrives as a provocation in “The Usability Scaling Law: Death of User Testing?” My title was intentionally inflammatory, but the underlying concern is practical: if usability problems follow predictable distributions, then research choices become optimization problems. When do you run another test, and when do you stop because you’ve already harvested the big issues? By framing this as a “law,” the piece nudges teams toward thinking in curves and diminishing returns rather than rituals and superstition. With enough usability training data, we will soon see the rise of “Synthetic Users” (AI personas based on real data) for initial rounds of testing, reserving expensive human time for the final, nuanced validation.
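The diminishing-returns arithmetic is easy to make explicit. Here is a minimal sketch using the classic problem-discovery formula that Tom Landauer and I published: the share of problems found with n users is 1 − (1 − λ)^n, where λ was about 0.31 in our original data (it varies by study and product).

```typescript
// Problem-discovery curve: share of usability problems found by n test
// users, given a per-user detection probability L (about 0.31 on average
// in the original Nielsen-Landauer data; it varies by study).
function shareFound(n: number, L = 0.31): number {
  return 1 - Math.pow(1 - L, n);
}

// Print the marginal yield of each additional participant.
for (let n = 1; n <= 8; n++) {
  const marginal = shareFound(n) - shareFound(n - 1);
  console.log(
    `users=${n}  found=${(100 * shareFound(n)).toFixed(0)}%  marginal=+${(100 * marginal).toFixed(0)}%`
  );
}
// With L = 0.31, five users already surface roughly 85% of the problems,
// and each later session adds only a few percent. "When to stop" becomes
// a curve to read, not a ritual to repeat.
```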
“Declining ROI From UX Design Work” provided the economic backdrop: UX returns diminish as a product category matures and the obvious failures are already fixed. The piece argued that this is not tragedy but evidence of victory: usability became normal. Yet the sting is real: when UX is expected, it stops being rewarded as exceptional. I used this insight to push teams away from overinvestment and toward sober ROI thinking, including the idea that AI may temporarily “juice” ROI by opening new paradigms, but the long-term trend still bends toward normal business metrics.

As UX methods have become commonplace, the gains have flattened out. (Seedream 4.5)
Theme 4: The Death of the User Interface (Generative UI)
The GUI walked into my office in '84 and never left, until now. AI bumped it off quietly. Research got fast, interfaces got invisible, and time became the real mark. The game changed overnight.
The most radical theoretical development of 2025 was my declaration that the Graphical User Interface (GUI), which has been the dominant interaction paradigm since 1984, is effectively dead. This theme explores the transition to Intent-Based Interaction. For several years, I have argued that we are moving away from Command-Based interactions (clicking buttons) to a model where users simply state their goal, and the AI handles execution.

It’s not quite a crime scene, because the incessant clicking that characterized the PC era is not fully killed off but is being gradually abandoned. (Nano Banana Pro)
If AI systems can interpret intent and carry out tasks, then the user’s relationship to software shifts from operating tools to requesting outcomes. That reframing doesn’t merely change UI components; it changes what “using” even means, and therefore what designers are responsible for.
“No More UI” and “Generative UI from Google” describe the technological realization of this shift. Instead of designers building static pages, AI models now generate bespoke interfaces in real-time. For example, if a user wants to compare shoe prices, the AI generates a comparison table on the fly. The interface exists only for the moment it is needed and is then discarded. The consensus is that the best UI is no UI: just a completed task.

Reach for something, and a UI to do that will be generated on the fly, just for you. (Seedream 4.5)
(Despite the looming UI death, I still found time to celebrate one of the best ideas from the GUI era, Direct Manipulation, in a triplet of music videos: Opera, Jazz, and Rock, each with singers and performance venues appropriate to that genre.)
“Hello AI Agents: Goodbye UI Design, RIP Accessibility” made the shift concrete by highlighting what breaks when agents replace interfaces. Traditional UI design gives users visible state, predictable controls, and assistive technologies that can hook into structure. Agents, by contrast, can turn interaction into opaque delegation: the system acts elsewhere, over time, across apps. The article insists that this is not just a technical shift but a governance problem: how do you preserve user control, recover from mistakes, and maintain accessibility when the “interface” is an agent making moves on your behalf?
“Vibe Coding and Vibe Design” supplied the cultural layer. Vibe is a way of naming work that is guided by feel and iterative feedback rather than specification. In an AI-mediated world, you can prompt, test, adjust, and converge without fully articulating everything in advance. That can be powerful, but it also creates a new risk: when the product is built by “vibe,” accountability and repeatability become harder. Put together, my articles in this theme argue that UI design is not disappearing because users don’t need interaction; it is disappearing because interaction is being re-encoded as intent, delegation, and generated structure. The UX discipline survives by shifting from pixel decisions to system constraints: clarity of goals, predictability of outcomes, and the user’s ability to intervene when the machine confidently heads the wrong way.
Theme 5: Time Is the Experience: Think-Time and Slow AI
In two articles, I argued that the most neglected UX dimension is not layout; it is time. “Think-Time UX: Design to Support Cognitive Latency” treats waiting as something designers can shape, not merely measure. Cognitive latency differs from network latency. When users are thinking, reading, deciding, or comparing, silence can be productive rather than painful. The article’s premise is that interfaces should respect that rhythm: reduce premature interruptions, provide “parking spots” for partial progress, and make it easy to resume after a pause. The UX goal is to support human cognition instead of racing it.
“Slow AI: Designing User Control for Long Tasks” pushes the time horizon outward. The article frames a future where AI agents run for hours or days, performing batch-like work that doesn’t fit the chat model’s turn-taking pattern. The main design problem becomes control across time: users need ways to start work, monitor it, intervene when necessary, and trust the system without staring at a spinner all afternoon. This introduces interface requirements that many products still treat as afterthoughts: explicit commitments, progress visibility, checkpoints, partial deliverables, and clear cancellation semantics.
(See also my older article: “Time Scales of UX: From 0.1 Seconds to 100 Years.”)
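As a thought experiment, the control surface that “Slow AI” calls for can be written down as a data contract. This is a hypothetical sketch (the names are mine, not any real agent API); the point is which guarantees the interface must expose.

```typescript
// Hypothetical control surface for a long-running agent task: explicit
// commitments up front, visible progress, checkpoints with partial
// deliverables, and cancellation whose semantics are clear in advance.

interface LongTaskCommitment {
  goal: string;                 // what the agent has promised to do
  estimatedCompletion: Date;    // stated up front, revisable with notice
  allowedActions: string[];     // scope the agent must stay inside
}

interface Checkpoint {
  at: Date;
  summary: string;              // human-readable "what has happened so far"
  partialDeliverable?: string;  // something usable even if work stops here
}

interface LongTask {
  commitment: LongTaskCommitment;
  status: "queued" | "running" | "awaiting-user" | "done" | "cancelled";
  progress: number;             // 0..1, honest rather than decorative
  checkpoints: Checkpoint[];
  // Clear cancellation semantics: keep the work so far, or roll it all back.
  cancel(mode: "keep-partial" | "rollback"): Promise<void>;
}
```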
The interesting connection is that both pieces treat “waiting” as a design space with multiple species. Sometimes waiting is a human phase (thinking, sensemaking). Sometimes waiting is a machine phase (processing, searching, acting). Traditional UX advice often collapses these into a single KPI called “response time,” but my articles split them apart. Human think-time should be protected; machine think-time should be communicated and controllable. In other words, the interface needs to know whose time it is, and treat it accordingly.
What emerges is a view of UX as choreography. Fast systems can still feel stressful if they demand constant attention. Slow systems can still feel usable if they provide dependable structure. For AI products, this becomes decisive: when the system can act autonomously, time becomes part of the contract between user and machine. The user needs to know what will happen, when it will happen, and what they can do in the meantime. Without that, “slow AI” isn’t just slow; it becomes uncanny, because the user can’t locate the system’s state or predict its next move. The two articles, together, make a straightforward claim: time is not a backdrop. Time is the experience.
Theme 6: Helping Users Say the Right Thing to AI
Prompting as the new literacy problem: users can get tremendous leverage from AI, but only if they can express intent in ways the system can interpret. “Aided Prompt Understanding” focused on the comprehension side. Before users can control AI better, they need to understand what prompts are doing, what the model is likely to infer, and why small wording changes can swing outcomes. The post frames this as a UX design opportunity: don’t leave users alone with a blank text box and vague hope. Teach them through the interface.
The core move is scaffolding. Prompt understanding is aided when the UI makes implicit structure visible: what inputs the system expects, what constraints exist, and what success looks like. This can be done through examples, labeled slots, progressive disclosure, and “explain what happened” feedback after an output is generated. In effect, the UI becomes a tutor that turns invisible model behavior into something the user can reason about. This is classic usability work applied to a new object: the prompt is the control surface, and the user needs a correct mental model of that surface.
“Prompt Augmentation” moves from comprehension to action. Instead of merely helping users understand prompts, the system helps them produce stronger ones. That can mean rewriting a rough request into a structured prompt, adding missing constraints, proposing alternative formulations, or offering parameter-like toggles that translate into prompt text behind the scenes. The key UX question is where agency sits. Augmentation is useful when it increases user power without stealing authorship. The user should feel that the system is amplifying their intent, not replacing it with a generic corporate voice.
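To illustrate the prompt-builder and parametrization patterns, here is a hypothetical sketch (all names invented) of how labeled UI controls might translate into prompt text behind the scenes, while keeping the user’s own words verbatim so authorship stays with the user:

```typescript
// Sketch of the parametrization / prompt-builder pattern: labeled controls
// become prompt text behind the scenes. The user's request is kept verbatim;
// the UI only adds the structure the model needs.

interface PromptSlots {
  task: string;                          // the user's own words, kept verbatim
  audience?: string;                     // filled from a labeled dropdown
  tone?: "plain" | "formal" | "playful"; // parameter-like toggle
  mustInclude?: string[];                // constraints the user checked off
}

function buildPrompt(slots: PromptSlots): string {
  const parts = [slots.task];
  if (slots.audience) parts.push(`Write for this audience: ${slots.audience}.`);
  if (slots.tone) parts.push(`Use a ${slots.tone} tone.`);
  if (slots.mustInclude?.length) {
    parts.push(`Be sure to cover: ${slots.mustInclude.join("; ")}.`);
  }
  return parts.join("\n");
}

// The user typed only the first line; the UI supplied the rest.
console.log(buildPrompt({
  task: "explain our refund policy",
  audience: "first-time customers",
  tone: "plain",
  mustInclude: ["30-day window", "original payment method"],
}));
```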
When read together, the articles outline a design pattern library for “prompt literacy” that looks a lot like earlier UI history. Early GUIs taught people what buttons do through labels, grouping, and immediate feedback. Prompting needs the same treatment: discoverability of capability, constraints that prevent self-sabotage, and feedback that helps users iterate. The deeper argument is that prompting will remain a usability problem even as models improve, because ambiguity is not a bug, it is the natural state of human language. The interface therefore needs to translate between human ambiguity and system specificity, and it must do so without making users feel stupid for not speaking “AI.”
Theme 7: Creativity in the Age of AI
Creative workflows, the very processes by which designers, researchers, and other professionals bring ideas to life, are undergoing a profound transformation, catalyzed by the rapid advancement and integration of AI into both tools and practices. The stakes are concrete: redefined roles, higher productivity, and new avenues for innovation within the creative industries. Understanding AI’s impact on creative workflows is no longer a futuristic consideration but a present-day necessity for any professional who wants to stay at the forefront of the field.
“A New AI: Creation as Exploration and Discovery” presented my current metaphor for the creative domain. I framed creation as navigating a latent space: less like executing a plan, more like exploring a landscape. That changes the mental model of design work itself: instead of drawing a solution from scratch, the user searches, iterates, and “discovers” outcomes. This is metaphor with operational consequences. If creation becomes exploration, then the interface must support wandering without getting lost: previews, branching, history, and a sense of where you are in the space of possibilities.

Creating by exploring the latent design space to discover new possibilities. As I said in my song about this theme, “creation isn’t making more, it’s finding what we’re searching for.” (Nano Banana Pro)
In December, I proposed an “AI Sandwich” workflow for creative work: humans provide the creative spark and strategic context, AI generates volume and variation, and humans curate, refine, and polish the best outputs.
A significant advantage of delegating the majority of ideation to AI while preserving agency for humans is that it prolongs the productive careers of experienced knowledge workers by decades. As people age, cognitive decline lowers fluid intelligence, which is responsible for the raw creativity of producing fresh ideas. But older people have superior crystallized intelligence, which works well for deciding what should be done.
Theme 8: Visual Communication Goes Mainstream
This theme is about a toolshift that feels small until you realize it changes who gets to communicate visually. Goodbye gatekeeping monopoly of trained visual designers. Hello, indie creators.

AI empowers everybody to become visual communicators and will probably make the old job role of “a visual designer” obsolete by 2030. Accordingly, many of the old gatekeepers have become “AI Haters,” though it would be better for them to help people adjust to their new superpowers. (Nano Banana Pro)
“Nano Banana Pro is the ‘ChatGPT Moment’ for Visual Communication” presented AI image generation not as a novelty filter but as a new baseline capability: an inflection where non-specialists can produce workable visuals quickly. The implication is that “visual thinking” stops being bottlenecked by software mastery. Once the cost of producing an image approaches the cost of describing it, the constraint moves upstream: having something worth showing.

AI will draw whatever you want, but you need to have something to say. (Seedream 4.5)
“2025 in AI Video” extended my argument from still images to moving ones. Video generation changes more than marketing assets; it changes explanation itself. UX work often fails because teams cannot make future behavior concrete enough for stakeholders to understand. As AI video improves, you can prototype not just screens but scenarios.
“The 10 Usability Heuristics in Cartoons” shows what happens when these capabilities meet pedagogy. The post isn’t just “comics are fun.” It’s a claim that teaching UX principles benefits from compression, humor, and recognizability. When the heuristics become cartoons, they become memory hooks. The medium matters: cartoons are forgiving, fast to scan, and emotionally sticky, which is exactly what you want when the goal is recall and shared vocabulary.
“UX Career Pivot for the AI Age: Told in Manga by 4 Different AI Models” pushed the same idea into narrative advice. It uses AI-generated manga as a comparative lens (different models, different storytelling styles) while also smuggling in a serious message about career adaptation. The subtle point is that AI isn’t just generating images; it’s generating editorial forms. You can now choose a communication genre (cartoon, manga, storyboard, explainer video) the way you choose a font, and that choice shapes comprehension.
Across these four articles, AI visuals and videos are treated as democratizers and multipliers. These AI capabilities let practitioners externalize ideas faster, teach principles more widely, and prototype narratives that used to be expensive to produce. At the same time, they raise the stakes on conceptual clarity: when everyone can generate visuals, the differentiator becomes whether the visual actually explains the right thing, at the right level, for the right audience.
Theme 9: Transformative AI Economics
If AI becomes truly transformative, then “how we build” and “who we hire” won’t merely evolve; they will be re-priced. “Transformative AI Changes the Future of Work and Firms” approached AI as a force that reshapes organizational structure. When capabilities expand, transaction costs shift. Some coordination becomes cheaper, some oversight becomes harder, and the boundaries of the firm (what you do in-house versus buy) start to wobble. In other words, my article treats AI as a change in economic gravity, not a feature.
“When Humans Add Negative Value” attacked the comforting compromise of “human-in-the-loop” by pointing out that human involvement can degrade outcomes. The argument is less moral than mechanical: since humans are slow, biased, and inconsistent, we can become a bottleneck or a source of error. The implication for UX is sharp. If humans sometimes reduce quality, then “add a human review” is not automatically a safety valve. It must be designed, justified, and measured like any other intervention.

If humans need to check all AI work, they become a bottleneck. Worse, they often worsen the outcome by injecting human weaknesses into the AI results. (Seedream 4.5)
“AI Scaling Laws 4 & 5: More Engineers and Designers” turned the lens toward inputs: people and process as scaling variables. The premise is that scaling is not only compute and data; it is also the number and type of builders who can productize new capability. That reframes the labor question in a way designers should notice: if scaling laws predict more engineers and designers, then the bottleneck shifts from invention to integration, from “can it be done?” to “can it ship reliably?”
Finally, “Estimating AI’s Rate of Progress with UX Design and Research Methods” uses our UX methods themselves as measurement tools, treating evaluation techniques as instruments for forecasting. It’s an inversion that matters: UX is not only downstream of AI progress; it can be a way to quantify that progress. When usability testing, heuristic evaluation, or research protocols start failing to distinguish machine from expert performance, you have a signal that capabilities have crossed a practical threshold. Taken together, these four articles treat AI and design as economic actors. I argued that the “UX story” is no longer about interfaces; it is about where value is created, who captures it, and what kind of work remains expensive.
Theme 10: Reaching Audiences in the Post-Website Era
This theme treats distribution as part of UX, not a marketing afterthought. “Email Newsletters Build Loyal Audiences” makes the simplest argument: if you want durable attention, you need a channel you control. Newsletters are framed as a direct relationship rather than a traffic tactic, which matters more as platforms and algorithms become less predictable. The user experience here is not the email template; it is the cadence of trust: showing up reliably with something worth opening.
From there, the theme advances into the AI era with “From PageRank to DeepRank: Attracting AI-Driven Traffic to Digital Properties.” The title itself announces the shift: from ranking pages to ranking answers. When users arrive via AI summaries, the “landing” experience changes. You’re no longer competing only in SERPs; you’re competing to be the cited source inside generated output. That changes what it means to be discoverable, and it also changes what it means to be persuasive: the AI is now an intermediary reader.
“GEO Guidelines: How to Get Quoted by AI Through Generative Engine Optimization” takes that intermediary seriously and proposes guidelines for being quoted by AI systems. This is distribution strategy turning into content design: writing in ways that are extractable, attributable, and semantically clear. It’s also a new form of UX because the “user” is dual: people and AI agents. Humans want narrative and insight; AI systems want structure and unambiguous claims. The post treats success as satisfying both without turning your writing into robotic sludge.
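As one illustration of “extractable and attributable” (my example here, not a quote from the GEO guidelines): pair the human-readable prose with machine-readable structure. Schema.org JSON-LD, embedded in the page, hands an AI reader unambiguous facts about who claims what and when. The date below is hypothetical.

```typescript
// Illustration: the same article, described once for machines. Schema.org
// JSON-LD (served in the page as <script type="application/ld+json">)
// gives an AI reader unambiguous, attributable facts; the prose itself
// stays written for humans.

const articleMetadata = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "GEO Guidelines: How to Get Quoted by AI",
  author: { "@type": "Person", name: "Jakob Nielsen" },
  datePublished: "2025-11-01", // hypothetical date
  about: "Generative Engine Optimization",
};

console.log(JSON.stringify(articleMetadata, null, 2));
```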

We’re now producing content and digital services for two very different users: humans and AI. (Seedream 4.5)
Across my articles, the common message is that discoverability is being refactored. First it was search engines; now it’s answer engines. In both regimes, the fundamentals remain stubbornly similar: clarity, credibility, and information architecture. The difference is who does the reading. When an AI system becomes the first reader of your content, you design not just for human scanning but for machine extraction; and you still need the human on the other side to feel served, not manipulated.
Main Changes Through the Year
Here are six main changes in my topics and treatment, sorted by the magnitude of the shift from the first half of the year (January–June) to the second half (July–December).
Is it bad that I change my mind and publish new interpretations and predictions? Should I have known better from the start of the year? Maybe, but I have two excuses:
AI changes so fast that our expectations for what is to come and my recommendations for how to act will have to change as well. It’s impossible for even the best futurist to predict everything accurately, given such substantial change.
On a personal note, I am about to enter my 44th year as a usability expert, and it has taken me a long time to shake off the chains of my legacy experience from my first 40 years, when UI design and traditional user research methods were indeed important. I was extremely committed to a single approach to making computers serve people. Now there’s a new way, and I have to discard 40 years of thinking. That takes some time to take hold.
In any case, whether or not I ought to have changed faster and known better a year ago, it’s better to admit that my thinking has changed based on new insights; thus, my predictions and my advice must change as well. The Danish Prime Minister Jens Otto Krag once said, “You hold a position until you adopt a new one.” While he was sometimes ridiculed (particularly by opponents) for the idea that politicians should not stick to their guns, I think it’s more productive to acknowledge that conditions can change, which also means the preferred solutions should change.

Six areas where I changed my mind during 2025. (Nano Banana Pro)
1. The Competence Inversion (From “Human-in-the-Loop” to “Human Liability”)
Magnitude: Paradigm Shift
The Change: In the first half of 2025, my prevailing narrative was one of augmentation. Articles like “Learn UX Strategy” and “Use the AI Transition Period to Transition Your Career” focused on upskilling. The tone was reassuring: AI is a powerful tool, but the human is the expert pilot who verifies the output to catch hallucinations. I discussed the concept of “AI Stigma,” suggesting that people were afraid to admit they used AI, implying that human work was still considered the premium standard.
By November, this narrative violently inverted. The article “When Humans Add Negative Value” marked my most significant conceptual turning point of the year. It presented data suggesting that for many execution tasks, inserting a human to verify or tweak the AI actually lowered the quality of the output compared to the AI working alone. The late-year articles argue that humans are slower, more biased, and prone to “correcting” accurate AI outputs with erroneous intuition. This shifted the UX strategy from assisting AI to “Agency”: setting the goal and then letting the machine execute without interference. The new professional mandate is not to collaborate, but to direct and then get out of the way.
This reversal shows up again in organizational framing. “AI‑First Companies” described massive automation and workforce transformation where humans shift toward oversight roles while autonomous systems do more of the execution. In early-year writing, the human is the steady center, and AI is the new helper. In late-year writing, AI becomes the operating layer, and humans become the exception handlers. That is a structural inversion, not a feature update.
The deeper shift is that my “risk” story changes. Early-year risk is: “the AI will mislead you.” Late-year risk is: “your intervention will slow it down, distort it, or inject noise.” Once we adopt that lens, the UX problems change too. Trust calibration is no longer primarily about persuading users that AI is competent. It becomes about defining when human judgment is legitimately useful and building interfaces that make non-interference psychologically acceptable. It’s hard for people to do less, especially when “responsibility” has historically meant hands-on control. My late-year stance implicitly challenges that moralized sense of control and replaces it with a performance criterion: intervene only when you can outperform the system on the margin.
2. The Interface Extinction (From “Better UI” to “No More UI”)
Magnitude: Existential
The Change: My early 2025 articles were still deeply rooted in the Graphical User Interface (GUI) paradigm. Topics included “SEO UX” (optimizing search) and “Required Fields” (optimizing forms). The assumption was that designers were still building static screens, menus, and navigation systems. The goal was to use AI to build traditional software faster (“Vibe Coding and Vibe Design”).
In the second half, the articles declared the GUI effectively dead. The conversation shifted to “Generative UI” and “No More UI.” The late-year articles argue that pre-designed screens are obsolete. In this new view, the Best UI is No UI: users simply state an intent, and the AI generates a bespoke, disposable interface for that single moment, or simply executes the task. This moves the field from Interface Design to Intent Specification.
3. The Visual Singularity (From “Text & Code” to “Nano Banana”)
Magnitude: Technological Breakthrough
The Change: For the first ten months, the “AI Revolution” described in my articles was primarily textual and logical. The major milestone discussed was “Claude Sonnet 3.7,” a reasoning model. My articles focused on “Prompt Understanding” and writing logic. Visual AI was treated as a secondary feature, often plagued by artifacts (bad hands, unreadable text).
The arrival of Nano Banana Pro in November changed my coverage completely. Late-year articles describe this as the “ChatGPT moment for visual communication.” Suddenly, the limitation of “AI can’t do text in images” vanished. The articles shifted to presenting “Usability Testing as Comic Strips” and “UX Career Pivot Manga,” promoting a workflow where non-designers create professional-grade visual narratives. The deliverables shifted from text reports to visuals and movies. I now use the very tools these articles discuss (AI visuals) to communicate complex ideas. The medium became the message: in late 2025, if you weren’t using AI for visualizations, you weren’t communicating effectively. With AI visual content we can condense a complex argument without breaking it, and earn attention instead of just demanding it.
4. The Tempo Pivot (From “Instant Speed” to “Think Time”)
Magnitude: Interaction Design
The Change: In the first half of 2025, speed was the metric of UX success. Articles discussed “Small AI Model vs Big AI” with a focus on edge computing and reducing latency. The user expectation was instant gratification (sub-1-second response times), consistent with decades of web performance best practices.
In the year’s second half, the physics of AI interaction changed. Recognizing that reasoning models and complex video generation tools require minutes, hours, or even days to complete tasks, I began to discuss “Slow AI.” The treatment shifted to managing “Think Time.” Late-year articles acknowledge that high-quality AI outputs take time. The UX challenge shifted from reducing latency to designing for latency, using transparent chain-of-thought visualizations to maintain user trust during long waits.
5. The Career Definition (From “Skills” to “Agency”)
Magnitude: Professional Identity
The Change: Early in the year, my advice was educational: “Learn UX Strategy.” The implication was that designers should climb the traditional corporate ladder by learning business theory. I even discussed “UX Agency Future” as a business model question: how will design firms bill for their time?
By July, the term “Agency” was redefined as a personal trait (and I don’t believe there is a future for traditional UX or usability consulting as a business). The article “How to Develop Agency: The Number-One Career Skill for the AI Age” defined a new professional archetype. It wasn’t about knowing how to do the task anymore; it was about having the volition and “Vibe” to decide what should be done. The late-year coverage suggests that as execution costs drop to zero, the only value remaining is Direction.

Designing based on “vibes” may sound like a rather hippie idea, but it’s a very serious thrust in Silicon Valley, aiming to improve business productivity while empowering a broader set of people with agency. (Seedream 4.5)
6. From Prompt Whispering to Latent-Space Navigation
Magnitude: Structural
The Change: In early 2025, I treated prompting as a literacy problem. People want outcomes, but they don’t yet know how to ask, and they don’t know why the system answered the way it did. My response was classic UX: introduce scaffolds, make invisible mechanics legible, and give users tools that reduce the articulation burden. “Prompt Augmentation” was framed as a set of design patterns (style galleries, prompt rewrite, prompt builders) that help people say what they mean without requiring them to become prompt engineers. “Aided Prompt Understanding” went one layer deeper: it treats the prompt-response relationship as a black box that users can’t learn from, and it proposes UX support so users can debug their own requests, iterate faster, and feel in control. The underlying metaphor is conversational craftsmanship: better phrasing in, better results out.
By late 2025, the metaphor shifts. The interface is no longer primarily about describing; it’s about exploring. In “A New AI: Creation as Exploration and Discovery,” the user isn’t depicted as a person who specifies a target clearly and then refines prompts until the AI complies. Instead, the user is a navigator moving through a latent space of possible solutions, discovering options by encountering them and steering by recognition rather than by perfect articulation. That’s a profound change in how you treat the user’s job. Early-year, the user is a writer learning to express intent; late-year, the user is a curator learning to choose among emergent possibilities.
The late-year Generative UI theme advances the same evolution in interface design. In “Generative UI from Gemini 3 Pro,” my headline claim is that interfaces can be synthesized on the fly, individualized to the user and situation rather than selected from a static menu of screens. That changes what “skill” means: if the UI itself adapts, the user’s competence is less about mastering a fixed system and more about directing a system that keeps changing. In that world, classic learnability still matters, but it migrates. The learnability challenge isn’t “where is the button?” It’s “how do I steer a system that can invent new buttons?”
The magnitude here comes from the fact that my early prompting analyses implied continuity with earlier interaction paradigms: users articulate, systems respond, and UX helps people get better at articulation. My late-year position implies a discontinuity: users don’t fully know what they want until they see what they can have, and the system’s role is to surface the space of possibilities in a way that doesn’t drown judgment. That’s a different psychological model of creation: less like writing a brief and more like walking through an unfamiliar city with a good guide. The guide doesn’t demand perfect directions. It shows you interesting streets and watches what you slow down to examine.
Status Check: The State of AI in December 2025
AI is progressing steadily, with new models released seemingly every week. This pace has led some skeptics to claim that AI is “hitting a wall,” bringing into question the various landmark achievements that I expect to see by 2030:
Superintelligence, meaning that AI will be smarter than any living human.
Super-accelerated “pizza-sized” teams of high-agency humans who outcompete legacy enterprise companies through AI synergy.
Media empowerment that enables individual creators to make works that rival, or possibly surpass, what legacy media companies needed large teams to produce.
Fully self-driving cars that will be safer than any human driver in any traffic conditions.
Individualized learning through AI tutoring that educates high-IQ kids to stellar heights never even attempted by the legacy education establishment, while also upskilling low-IQ students and enriching their lives more than old-school schools could do with an assembly-line approach to students.
AI healthcare that discovers new cures and diagnoses and treats patients better than human physicians ever did, especially in developing countries.
These goals are ambitious and may not all be reached by 2030, though I think they are almost guaranteed to be reached by 2035. I think most will indeed be possible by 2030, though the advances in driving, learning, and healthcare may be delayed by the incumbents in those industries and not reach all humans equally fast.
The advances in AI seem to be slowing because each model is only slightly better than its predecessor. That’s a problem with dot-releases. For example, Kling 2.6 is not necessarily super impressive compared with Kling 2.5. It added the ability to maintain consistent character voices from one clip to the next, which is very useful when editing certain types of video. But it’s admittedly an incremental advance, not a revolution in AI filmmaking.
However, consider a full year’s advance in any area of AI, and the improvements are more noticeable. Compare a music video I released in January 2025 with one I made in December 2025: The instrumentation and vocalization are a little better, but the avatar animation and the non-singing sequences are miles apart. To see longer-term change, watch my video from April 2024, which was the first song I attempted with an animated singer: not even two years in the past, and there’s simply no comparison. I expect that if I were to make a music video in late 2027, the delta between my late-2025 attempts and that new video would be as compelling as the delta between early 2024 and late 2025.
AI seems to progress by a full generation roughly every two years, which is what it takes for its capabilities to be fundamentally different. So, for example, we can compare GPT-4 from March 2023 with GPT-5 from August 2025. Reasoning, tool access, native image model — there were so many advances that this new generation was indeed a revolutionary step up in the way that, say, the change from GPT 5.0 to 5.2 was not.
Maybe it’s more fruitful to think of AI as following a tick-tock model of advancement. The “Tick” is the major revolution that comes from two orders of magnitude in effective compute and model parameters, as well as methodology advances such as think-time compute or integrating language models with image generation along the lines of Nano Banana Pro. As I mentioned, there’s only a “tick” every two years or so. But we get plenty of “Tock” moments where the same basic AI paradigm is tweaked for more capabilities, faster execution, or cheaper resource consumption. For example, Nano Banana Pro currently has difficulty always pointing the speech bubble toward the correct character when drawing comic strips. I would not be surprised at all if this problem were fixed sometime in 2026.
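A quick back-of-envelope check on the tick cadence: if a tick requires two orders of magnitude (100×) of effective compute and arrives roughly every two years, the implied doubling time falls out directly.

```typescript
// Two orders of magnitude every ~24 months implies a doubling time of
// 24 * ln(2) / ln(100) months of effective compute.
const doublingMonths = (24 * Math.log(2)) / Math.log(100);
console.log(doublingMonths.toFixed(1)); // ~3.6 months per doubling
```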
In my articles in early 2025, I spent real attention on the “which model is better?” question, and treated new releases like competitive events. “DeepSeek: Does a Small AI Model Invalidate Big Models?” was fundamentally a performance-and-economics comparison: how a compute-efficient model can reach the level of leading systems, what that means for investment, and why the trajectory still points toward much more capability being needed. “Claude Sonnet 3.7 Compared with Previous AI Models” continued the theme in a more direct product-evaluation register: an upgrade arrived, I tested it, and reported how it stacked up against recent competitors. My early-year readers were invited to keep score.
By late 2025, I turned away from scorekeeping and toward measurement frameworks: less “who won this week?” and more “how fast is the entire field moving, and what does that imply for UX work?” In “Estimating AI’s Rate of Progress with UX Design and Research Methods,” the core proposal was longitudinal: track AI vs. human experts across multiple UX methods, discover whether something like a scaling law exists for UX skills, and use that slope to decide when to delegate which tasks to AI. That’s a change in epistemology. Weekly wins are noisy. Rates of change are strategic.
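Here is a minimal sketch of what that longitudinal tracking could look like (all numbers invented for illustration): fit a trend line to AI scores on a fixed UX-method benchmark over time, then extrapolate the date the line crosses a constant human-expert baseline.

```typescript
// Sketch of the "rate of progress" framing: ordinary least squares on
// AI benchmark scores over time, then extrapolate the crossover with a
// constant human-expert baseline. Data and scale are invented.

function linearFit(xs: number[], ys: number[]): { slope: number; intercept: number } {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Months since Jan 2025 vs. AI score on some fixed UX-method benchmark
// (0-100 scale, invented numbers).
const months = [0, 3, 6, 9, 12];
const aiScore = [52, 58, 63, 70, 74];
const humanBaseline = 85;

const { slope, intercept } = linearFit(months, aiScore);
const crossoverMonth = (humanBaseline - intercept) / slope;
console.log(`AI matches the human baseline around month ${crossoverMonth.toFixed(0)}`);
// Weekly wins are noisy; the slope is the strategic signal.
```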
The same late-year instinct showed up in my retrospective on AI video. Instead of praising a single model, “2025 in AI Video” treated the year as a sequence of capability leaps and broke progress into components (avatar expressiveness, dance/movement, audio-video coordination) so that “better” becomes something you can diagnose rather than merely admire. This is the mindset shift: decompose capabilities, track improvement, and predict the next constraints.
What makes this a meaningful change is that it alters how you should respond. A horse-race mentality encourages tool churn and opportunistic experiments. A progress-science mentality encourages portfolio planning: invest in workflows that will get cheaper, but temper redesigning the organization around capabilities according to their improvement speed. In other words, think less like a reviewer and more like a forecaster when assessing AI progress. We should be pushing for empirical forecasting grounded in repeated measurement.
As an example, I made infographics about this section with GPT’s native image model 1.0 (released March 24, 2025) and 1.5 (released December 16, 2025). A 9-month difference for a dot-release is exactly what I call a “tock” advance, and yet it’s noticeable in these images:

GPT Native Image Model 1.0 (best of 8 attempts)

GPT Native Image Model 1.5 (best of 4 attempts)

For comparison, this infographic was made with Nano Banana Pro, also as the best of 4 attempts. I do think it’s a better image model than GPT 1.5, but maybe OpenAI will reclaim the crown in 2026 with a “tick”-style advance to a model 2.

To complete the series: Nano Banana Pro’s visualization of my “tick–tock” model of AI advances.
It’s clear to me that there will be no human visual designers in five years, when we project out the pace of advances evidenced in these examples. (Human art directors will probably remain until maybe 2035. There are signs of progress in AI’s ability to make judgment calls about content quality, but it has a ways to go.)
What about the huge improvements in corporate profitability and human standards of living that I predict AI will bring? They are a little slower in coming than the technical advances, because they require organizational adaptation.
AI is not something you slot into an existing workflow. Doing so might double productivity for isolated tasks, but not double the company’s profitability. We know from the recent Anthropic study of how people use AI that most employees currently hide their AI use from colleagues and bosses, eliminating any chance of cross-departmental synergy. To realize AI’s true potential requires complete restructuring of workflows, making the company AI-First at a minimum (legacy enterprise companies will probably never reach AI-Native status, and AI-Native startups will eventually outcompete most of them).

Strapping a dose of AI on top of the rickety coach of a legacy company’s obsolete workflows and product lines won’t make it an AI-Native company and won’t cause corporate profitability and human living standards to explode. (Seedream 4)
Organizations move slowly. I still expect us to reach superintelligence by 2030, but it may not be until 2035 that our living standards double due to these new capabilities.
Top 10 Articles of 2025
Here’s the list of my 10 most popular articles from 2025, as voted by readers’ clicks and pageviews. Since I published 100 articles, these are the 10% that resonated the most with my audience.
1. Hello AI Agents: Goodbye UI Design, RIP Accessibility. Autonomous agents will transform user experience by automating interactions, making traditional UI design obsolete, as users stop visiting websites in favor of solely interacting through their agent. Focus on designing for agents, not humans. Accessibility will disappear as a concern for web design, as disabled users will only use an agent that transforms content and features to their specific needs.
2. No More User Interface? AI products have changed from invisible enhancement of classical user interfaces to soon become the main avenue for users to engage with digital features and content. This may mean the end of UI design in the traditional sense, refocusing designers’ work on orchestrating the experience at a deeper level.
3. Use the AI Transition Period to Transition Your Career. In the great UX pivot, you have 5 years to trade yesterday’s expertise for tomorrow’s relevance. Legacy UX skills won’t save you in the AI age, but cultivating agency, judgment, and persuasion will. Most importantly, now is the time to prepare for working with superintelligence, before it’s too late and you become obsolete. Don’t cling to a vanishing past.
4. Vibe Coding and Vibe Design. AI transforms software development and UX design through natural language intent specification. This shift accelerates prototyping, broadens participation, and redefines roles in product creation. Human expertise remains essential for understanding user needs and ensuring quality outcomes, balancing technological innovation with professional insight.
5. Future is Lean, Mean, and Scary for UX Agencies. Rapidly increasing in-house UX maturity coupled with AI’s productivity explosion threatens the survival of UX consultancies and design agencies. Major downsizing is imminent for many agencies. Survive by either offering highly specialized, strategic consulting or becoming a provider of automated, scalable design. The window to adapt is closing.
6. Generative UI from Gemini 3 Pro. Google’s new Gemini 3 Pro is making waves, but not just for its top leaderboard scores. The real story? Generative UI: interfaces that AI designs just for you, right when you need them. Users overwhelmingly prefer these custom-made interfaces over regular websites (90% of the time!). Sure, human designers still win by a hair, but with AI improving exponentially and humans staying roughly the same, that won’t last long.
7. The 10 Usability Heuristics in Cartoons. Finally, what you have been waiting for: a humorous take on Jakob Nielsen’s classic 10 usability heuristics explained in 80 cartoons.
8. GEO Guidelines: How to Get Quoted by AI Through Generative Engine Optimization. Being mentioned in AI answers is the new share-of-voice for brands and influencers. Being ignored by AI is like being on page 5 of a Google SERP in the old days. We don’t know for sure yet how to optimize for AI, but some guidelines have started to emerge. The only sure advice is to follow the space and track the placement of your own content across the main AI tools.
9. Prompt Augmentation: UX Design Patterns for Better AI Prompting. Six UX design patterns can help users overcome the AI articulation barrier: Style Galleries, Prompt Rewrite, Targeted Prompt Rewrite, Related Prompts, Prompt Builders, and Parametrization.
10. Top 10 UI Annoyances. Users encounter usability annoyances daily in their computer use. Sometimes annoyances can be sidestepped at the cost of extra delays in achieving the user’s task, and sometimes the cumulative effect of too many annoyances makes users abandon the task, resulting in lost business for the offending company.

These 10 articles published in 2025 received the most page views. (Nano Banana Pro)
