UX Roundup: 2025 Predictions Revisited | AI Paradigm Shifts | AI and Doctor Burnout | Useful AI Sells
- Jakob Nielsen
- 21 min read
Summary: Predictions for UX in 2025 revisited | 2025 saw major paradigm changes in AI | Simple AI reduces doctors’ burnout | Fast sales growth for providers of useful AI services

UX Roundup for December 22, 2025. (Nano Banana Pro)
Merry Christmas

Merry Christmas from UX Tigers and Jakob Nielsen. (Nano Banana Pro)
I hope you have a wonderful and peaceful holiday season.
Watch my UX Christmas Song (YouTube, 2 min.), and compare with my 2024 Christmas Song to appreciate how far we have come in AI video in just one year. (For more examples, tracking the full year, watch my 2025 AI Songs Highlights Reel (YouTube, 15 min.).)

My 2025 UX Christmas Song is now on YouTube. (Nano Banana Pro)

Key differences between my 2024 and 2025 Christmas songs: (A) My workflow (besides stating my Merry Christmas wishes) changed from experimenting with AI (putting together anything that would work) to using AI as an agentic tool in a clear production process. (B) Better sound quality. (C) Lyrics shifted from old-school manual UX in 2024 to modern AI UX in 2025. (D) Characters such as Santa Claus morphed between scenes and even within individual cuts in 2024 (for example, his gloves alternate between black and white). In contrast, the combined advances in Nano Banana Pro for base images and Kling 2.6 for animation ensured Santa’s character consistency in 2025. (E) I didn’t even attempt lip sync in 2024, whereas 2025 has an actual singing avatar.
Predictions for UX in 2025 Revisited
On January 4, 2025, I published two YouTube videos with 6 predictions for the UX field in 2025: an avatar explainer (4:27 min.) and a jazz song (2:47 min.). Let’s see how I did, looking at each of the 6 predictions.
Prediction 1: UX in Transition: The Great AI Metamorphosis
Prediction Summary: UX hits a runway moment: AI will replace handcrafted production. Companies that refuse new tools fade, while adaptable teams thrive by applying timeless user understanding to deliver measurable business value.
What Actually Happened in 2025: The “metamorphosis” I predicted didn’t just occur; it accelerated at a breakneck pace, fundamentally altering the daily reality of the profession. By the third quarter of 2025, the era of “pixel-pushing” (manually drawing rectangles and adjusting padding) had effectively ended for commercial production. Tools like Figma’s advanced AI features and Vercel’s V0 moved from novelty to standard practice, allowing teams to generate high-fidelity UI directly from text prompts or whiteboard sketches.
Adoption wasn’t uniform, but the direction was unmistakable. At the same time, the economic and organizational pressure to “do more with less” never left the stage. Tech layoffs continued through 2025 in waves, keeping design and research leaders focused on efficiency narratives. Even outside the UX function, Reuters reported Amazon’s CEO telling employees that generative AI and agents at scale would reduce the company’s corporate workforce in the coming years. Put together, 2025 looked like the year UX finally stopped arguing about whether AI belonged in the workflow and started arguing about where it belonged, who controlled it, and which parts of the craft were worth protecting as human judgment rather than automated output.
However, the transition was chaotic. While 75% of firms adopted GenAI tools, many struggled with “synthetic genericism”: a flood of polished but identical-looking interfaces. This created a new, desperate demand for “Brand Identity Specialists” who could break the AI homogeneity. The “extinction” I warned of was most visible in the junior market; entry-level roles focused on production (wireframing, UI mockups) nearly vanished, replaced by AI agents. Conversely, senior designers who positioned themselves as “Systems Architects” or “AI Directors” saw their value skyrocket. The turning point was a cliff edge, and those who refused to jump to the new workflow found themselves plummeting into obsolescence. The profession didn’t die, but the job description of “UI Designer” was rewritten to look more like “Product Architect.”
Historically, designers have always aimed to exert vast influence within their organizations, guiding the creative vision and making critical decisions that shape the product. That hope no longer feels realistic when you look at the state of UX more broadly. Instead, this year marked the start of a fundamental shift in responsibilities and a transfer of design control from designers to a complex network of algorithms, automated tools, and business stakeholders.

Correct vs. Wrong Summary: I was largely correct about the shift from handcrafted to AI-driven design. I pushed the “adapt or die” rhetoric a bit too hard; 2025 looked more like uneven migration than sudden extinction.
Score: I was more right than wrong.
Prediction 2: Co-Pilot UX
Prediction Summary: I predicted AI would not replace human UX staff but would act as an intern, coworker, coach, and teacher. I argued productivity would soar, enabling smaller teams to deliver quality work, provided human judgment remained the strategic guide.
What Actually Happened in 2025: In 2025, AI mostly behaved the way I predicted: less like a replacement and more like a relentlessly available teammate. The clearest evidence came from research practice, where adoption stopped being a novelty and became a baseline. User Interviews reported that AI use among researchers jumped year over year to reach 80%. That is not “some teams dabble.” That is “the majority have already integrated it.” At the same time, the report captured why this felt like an intern the professionals still had to supervise: 91% of researchers worried about output accuracy and hallucinations, and sentiment about AI’s current impact was mixed, with more seeing it negatively than positively. High usage paired with high skepticism is exactly what an assistive tool looks like when it is powerful but not fully trustworthy.
Productivity gains were quantifiable and massive. Reports in 2025 indicate that senior designers were outputting the volume of work previously associated with a 3-person squad. This reality birthed the “Super-IC” (Individual Contributor): designers who operated like one-person agencies.
Outside research, the organizational picture matched that same pattern of uneven but accelerating uptake. Gallup’s workplace tracking said that by Q3 2025, 37% of employees reported their organization had implemented AI for productivity, efficiency, and quality. Vendors responded by embedding AI into the surfaces UX teams already inhabit. Figma positioned “Figma AI” as a “creative collaborator” and continued integrating AI features into everyday workflows. Miro, likewise, pushed the idea of AI working inside shared team space, framing its “AI Innovation Workspace” around teams collaborating with AI on the canvas.
AI is revolutionizing UX design through nine major areas: automated design generation (reducing prototyping time by 40–60%), intelligent user research analysis (saving 70–90% of manual analysis time), real-time personalization at scale, automated accessibility testing, predictive analytics that require smaller sample sizes, content strategy optimization, design system intelligence that monitors consistency automatically, conversational interface design, and emotional intelligence.
What surprised me wasn’t whether AI showed up. It was how quickly it became socially invisible. Business Insider, citing Ladders’ analysis, reported that AI mentions in job listings declined even while employers increasingly treated AI fluency as assumed. That’s a classic maturity signal: the skill stops being marketed and starts being required.
So the lived reality of 2025 was a working partnership. AI sped up first drafts, synthesis, and exploration, while humans remained accountable for framing problems, validating claims, and deciding what actually ships.
However, this came with a significant downside I didn’t fully articulate: the Junior Crisis. Because AI effectively handled the “intern” work (summarizing research, basic layout, asset variation), companies stopped hiring human junior staff. This created a broken talent pipeline that the industry is now scrambling to fix. Furthermore, the “Coach” aspect of AI became a critical defense mechanism. As we moved fast, we relied on AI agents to audit our work in real time for accessibility and heuristic violations. The partnership was optimal for business output, but it was exhausting for the humans, who found themselves constantly in “Director Mode,” reviewing and approving the relentless output of their synthetic coworkers. The human judgment I predicted as essential became the single most expensive and sought-after commodity in the market.

Correct vs. Wrong Summary: I was correct about the specific roles AI would play (intern, coworker) and the massive productivity gains allowing for smaller teams. I was right that human judgment would remain critical. I underestimated the negative secondary effect: the decimation of entry-level roles because the AI “intern” was cheaper and faster than human trainees. I underpredicted how quickly AI competence would become “expected” rather than celebrated.
Score: I was more right than wrong.
Prediction 3: Pancake Teams, Builder Leaders
Prediction Summary: I predicted a structural shift toward “pancaking” with flatter organizations with fewer managers, and the rise of “Founder Mode” leadership, where leaders remain hands-on and small teams replace large departments.
What Actually Happened in 2025: “Flattening” stopped sounding like a theory and started showing up as reported numbers. Google became a headline example. Business Insider reported that, in an all-hands meeting, Google said it had cut 35% of managers overseeing teams of three or fewer people over the prior year. That detail matters because it’s exactly the kind of “pancaking” I was talking about: fewer layers, fewer small-team managers, and a stronger push for speed and direct ownership. A middle management squeeze saw the elimination of thousands of “Design Manager” and “Director of UX” roles where the individual was not actively contributing to the work.
At the same time, AI became a common justification for reorganizing, and sometimes for shedding roles. Reuters reported Fiverr laying off 30% of its workforce as part of an AI-focused restructuring aimed at streamlining operations and investing more heavily in AI. And Amazon’s CEO wrote that as generative AI and agents roll out, the company expects efficiency gains that should reduce its corporate workforce over time. Even when companies emphasized “role shifts” more than “role elimination,” the signal to UX orgs was unmistakable: keep the team lean, make it faster, and justify every layer.
Meanwhile, the cultural vocabulary around leadership kept circulating. “Founder Mode” stayed in the air as shorthand for leaders staying close to details rather than retreating into administrative distance. That idea did not magically make every leader better, but it did change expectations: more leaders were pushed to be present in the work, not just present in meetings. We saw the rise of the “Super-IC” (an individual contributor who manages nobody) and the “Pod” model: tiny, autonomous teams of a Product Manager, a Lead Designer, and an Engineer reporting directly to the C-suite or a Founder.
The trade-offs also became clearer. Commentary on flattening warned that while it can save costs and empower employees, it can also create problems around communication, employee development, and morale if you remove layers without rebuilding support systems. In 2025, the best teams didn’t just pancake; they redesigned coaching, decision-making, and craft ownership for the new shape.

Correct vs. Wrong Summary: I was right that delayering became real and that “hands-on leadership” moved from slogan to expectation. I didn’t emphasize enough how much flattening can harm mentorship and career ladders unless teams rebuild support structures.
Score: I was more right than wrong.
Prediction 4: Design That Pays
Prediction Summary: I predicted that aligning UX with business objectives would be critical, and success would depend on demonstrating clear financial impact (revenue/cost) rather than just “user advocacy.”
What Actually Happened in 2025: In 2025, the “Empathy Shield” finally broke. For decades, UX designers defended their work by claiming to be the “voice of the user” (I’m as guilty as anybody of using the empathy shield), but in the tight economic climate of 2025, that argument lost all currency. UX Research teams that could not draw a straight line to ROI faced severe budget cuts, while “Growth Designers” who spoke the language of Customer Acquisition Cost (CAC) and Lifetime Value (LTV) thrived.
The integration of UX into business strategy became absolute. We saw a decline in centralized design studios within companies and a dispersal of designers directly into Revenue and Growth squads. New analytics tools, powered by AI, allowed companies to measure the dollar value of a usability improvement with terrifying precision. If a feature didn’t move a metric, it was killed.
This vindicated my prediction that “business terms” would become the required language of the field. However, this pressure sadly led to a dark pattern renaissance, where desperate teams optimized for short-term revenue over long-term trust; a trend that only began to correct itself late in the year as user churn spiked. The successful UX professionals were indeed those who adapted to this reality, often rebranding themselves as “Product Strategists” to escape the stigma of being “just designers.” Those who couldn't make the pivot to financial justification found themselves marginalized.
As a result, the industry kept publishing practical guidance on measuring and communicating value in business terms. Maze published a 2025 guide on calculating user research ROI and proving impact, with an explicit focus on connecting research work to business outcomes. UXmatters published a 2025 piece that framed the problem bluntly as learning to “speak ROI”. Looppanel wrote directly about building a business-case deck for research tools in 2025, emphasizing ROI, efficiency gains, and strategic value.
What changed in practice was less about discovering metrics (which I and many others have emitted immense wordcount about over the years) and more about packaging UX work as decisions. (One of my songs highlighted the phrase “we’re in the insights business,” which I shamelessly stole from an interview I did with UserTesting’s CEO.) In 2025, I saw more teams frame outputs around funnel movement, support deflection, churn reduction, and engineering rework avoided, because those were the currencies stakeholders recognized. The teams that couldn’t translate into that language didn’t necessarily disappear, but they moved slower, fought harder for funding, and were more likely to be told to “use the tools” rather than hire expertise.

Correct vs. Wrong Summary: I was entirely right. The market ruthlessly punished those who could not articulate their business value. The shift to “business terms” was not optional; it was the gatekeeper for employment. I did not foresee the extent to which this would initially drive a spike in hostile/dark design patterns.
Score: I was right.
Prediction 5: Evergreen UX Principles
Prediction Summary: I predicted that despite AI, core principles like Gestalt, empirical research, and usability heuristics would remain the foundation of effective work, balancing timeless psychology with new tech.
What Actually Happened in 2025: This prediction was validated by the “Usability Crisis” of 2025. As AI tools allowed non-designers to flood the market with auto-generated interfaces, we saw a proliferation of products that looked beautiful but were fundamentally broken. They violated basic laws of cognitive load, visibility, and error prevention. Consequently, my 10 Usability Heuristics didn’t just stay relevant; they became the primary weapon for quality control.
Furthermore, the use of AI personas instead of user testing created a hallucination loop where AI designed for AI. (Fully AI-driven usability analysis will likely become reliable enough to replace most user testing, but not for a few more years.) Companies realized that AI cannot yet simulate behavior, and it certainly cannot simulate trust. Real-world, human-to-human research became a high-fidelity activity used to validate the mountains of AI-generated hypotheses. The principles I cited were not replaced; they were the only things preventing the digital world from becoming a landscape of polished, unusable garbage.
So 2025 didn’t turn UX principles into nostalgia. It turned them into guardrails that let AI-accelerated work remain usable, inclusive, and defensible.

Correct vs. Wrong Summary: I was correct that core principles would remain the foundation. I was right that the challenge would be balancing AI tools with these timeless truths. I was particularly right about the continued need for empirical research, which gained value as a counterweight to synthetic data.
Score: I was mostly right.
Prediction 6: The Perpetual Beta Career
Prediction Summary: I predicted that continuous learning would be critical, requiring professionals to evolve their skillsets to master AI tools, operate in lean teams, and emphasize adaptability to keep pace with change. UX professionals who deny these 3 essentials will be left behind as the world moves on without them.
What Actually Happened in 2025: In 2025, continuous learning stopped feeling like professional self-improvement and started feeling like professional maintenance. The best snapshot of that reality came from how researchers described adapting their craft. User Interviews reported that the most common ways respondents grew capabilities were through experimentation: with tools in their workflow (90%), ways of sharing insights (71%), and collaborating with others (64%).
The “adapt or die” warning was not an exaggeration. The job market of 2025 bifurcated into two distinct tiers. On one side were the “AI-Fluents”: professionals who mastered prompt engineering, model steering, and data analysis. These individuals found exciting opportunities in new hybrid roles like “AI Interaction Designer” and “Product Engineer.”
On the other side were the Traditionalists who refused to engage with the new stack. This group faced a brutal year. Layoffs and hiring freezes disproportionately targeted those who could not demonstrate how they used AI to accelerate their workflows. The skill gap didn’t just widen; it became a canyon. Continuous learning became a daily requirement, as the half-life of a software tool dropped to mere months. The designers who thrived were those who treated their skillset as fluid software, constantly updating.
We also saw the rise of the “Generalist-Specialist” as an evolution of the traditional UX Unicorn concept: designers who used AI to become “good enough” at copywriting, coding, and data analysis, effectively expanding their scope of influence. Those who clung to the specialized “I just do wireframes” mentality were indeed left behind, often transitioned into lower-paid asset management roles or forced out of the industry entirely. The Darwinian nature of the year was exhausting, but it confirmed my prediction absolutely.
The job market added a subtle twist: AI skills became assumed and therefore less visible. Simply saying “I use AI” became insufficient differentiation to land a job; you had to treat mere AI use as table stakes and differentiate on judgment, outcomes, and taste.

Correct vs. Wrong Summary: I was correct about the necessity of upskilling and the risk of being left behind. The specific new capabilities I mentioned (AI tool usage and working in efficient teams) were exactly what employers demanded. I was right that the career landscape would be transformed, validating my “evolve or die” sentiment.
Score: I was right.
Why I Was (Mostly) Right
Overall, my predictions for 2025 proved remarkably accurate. The UX field did transition toward AI integration, organizations did flatten, and core principles remained relevant even as tools changed. Where I was slightly off was in underestimating both the immediate pain of the transition (layoffs and job market contraction) and the challenge of demonstrating marginal ROI as baseline design quality improved industry-wide. The fundamental thesis that UX professionals who embrace change while maintaining focus on user needs and business value would thrive was validated by the year’s developments.
Of my 6 predictions, I scored 2 as being “right,” 1 as being “mostly right,” and 3 as being “more right than wrong.” When I set out on this exercise of revisiting my predictions from a year ago, I expected that I would have been “more wrong than right,” or possibly “mostly wrong” on a few of the predictions, because that’s what usually happens. But no.
Why was my prediction performance so strong? I suspect it’s because this set of predictions stayed well within my legacy expertise as a usability guru. I leveraged 42 years (at the time) of experience with the inner workings of UX to predict how AI would impact the field.
In contrast, if I had made specific predictions about AI, and not about UX, I would probably have racked up a lot of “wrong” scores. For example, I would not have predicted that Google would have the best image-generation model by late 2025, in the form of Nano Banana Pro, even to the extent that I hailed this tool as the birth of visualization for the masses. Given how terribly Google had performed with image generation in 2024, I would have guessed “anybody but Google” as the likely 2025 winner. I might have picked Midjourney or Ideogram, which almost faded into irrelevance in 2025.

(All comic strips in this section made with Nano Banana Pro)
Karpathy’s 6 AI Paradigm Shifts
Andrej Karpathy’s review of AI in 2025 reflects on the rapid evolution of AI. Karpathy is arguably the world’s leading independent AI expert. (I published an overview of his illustrious career in an earlier newsletter, so I won’t repeat it here.) He described AI as having undergone the following 6 paradigm shifts.
2025 was a year of significant momentum for AI progress, defined by several conceptual paradigm changes that altered the technological landscape. These emerging forms of intelligence are proving to be simultaneously smarter and dumber than anticipated. Despite the year’s strong growth and exciting new avenues for exploration, Karpathy thinks the industry has not yet realized even 10% of AI’s potential.
1. Reinforcement Learning from Verifiable Rewards (RLVR). RLVR has emerged as the new de facto production stage for AI, joining pretraining, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). By training against objective, non-gameable rewards in verifiable environments like math or code, AI models spontaneously develop strategies that resemble human reasoning, breaking down problems into intermediate steps. Unlike previous fine-tuning stages, RLVR involves significantly longer optimization runs, consuming compute originally intended for pretraining. This paradigm also introduced a new mechanism to control capability by increasing test-time compute, allowing the AI longer “thinking time” to generate reasoning traces.
2. Ghosts vs. Animals / Jagged Intelligence. The industry has begun to internalize the unique “shape” of AI intelligence, realizing these entities are not analogous to evolving animals but are rather “summoned ghosts.” Unlike human neural nets optimized for survival, AI nets are optimized for objectives like text imitation and reward collection. This results in jagged intelligence: AI can show genius-level polymath capabilities in verifiable domains targeted by RLVR, yet remain cognitively challenged elsewhere. Consequently, trust in standard benchmarks collapsed as they became susceptible to gaming via RLVR and synthetic data.
3. Cursor / New Layer of AI Apps. Tools like Cursor convincingly revealed a new, “thick” layer of AI apps. Beyond providing basic model access, these applications bundle and orchestrate multiple AI calls into complex workflows for specific verticals, balancing performance and cost. They handle necessary context engineering, provide application-specific GUIs, and offer “autonomy sliders” for human-in-the-loop interaction. While labs may graduate generally capable base models, this new app layer is expected to organize and animate teams of them into deployed professionals using private data, sensors, and actuators.
4. Claude Code / AI That Lives on Your Computer. Claude Code emerged as the first convincing demonstration of an AI Agent, loopily stringing together tool use and reasoning for extended problem-solving. Critically, it established a paradigm of AI running locally on the user's computer using private environments, data, and context, rather than via cloud deployments. In the current era of jagged capabilities, this localhost approach proved superior to cloud-based agent swarms. It changed the AI form factor from a distant website to a distinct spirit or entity living directly on the user's machine.
5. Vibe Coding. In 2025, AI crossed a capability threshold enabling “vibe coding” (Karpathy himself coined this term and seems amused that it took off to the extent it did in 2025), where impressive programs are built simply using English, ignoring the underlying code entirely. This paradigm shift empowers regular people to approach programming and enables professionals to create software that otherwise wouldn’t be written, such as quick demos or ephemeral tools meant for single-use debugging. Because code has suddenly become free, malleable, and discardable, vibe coding is expected to significantly alter job descriptions and terraform the software industry.
6. Nano Banana / AI GUI. Models like Google’s Gemini Nano Banana represent a significant shift toward an “AI GUI.” Viewing AI as a major computing paradigm analogous to historical shifts, the current text-based chat interface is seen as primitive, akin to 1980s console commands. (Something I have said repeatedly.) Since humans prefer visual and spatial information over slow text consumption, the AI interface must evolve to communicate via images, infographics, and dynamic layouts. Nano Banana offers an early hint of this future, combining text generation, image generation, and world knowledge into joint capabilities. (I declared Nano Banana Pro to be number 8 of my 10 themes for 2025 under the title “Visual Communication Goes Mainstream.”)

The 6 paradigm shifts Andrej Karpathy identified in AI in 2025. (Nano Banana Pro)
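The RLVR loop (shift #1) can be sketched in a few lines. This is my own toy illustration under simplifying assumptions, not code from any AI lab: the `policy` function stands in for a model sampling candidate answers, and the verifier supplies the objective, non-gameable reward by checking each answer directly.

```python
import random

def verifier(task, answer):
    """Objective, non-gameable reward: the answer either checks out or it doesn't."""
    a, b = task
    return 1 if answer == a + b else 0

def policy(task, temperature):
    """Stand-in for a model's sampled answer: usually right, sometimes off by one."""
    a, b = task
    noise = random.choice([0, 0, 0, 1]) if temperature > 0 else 0
    return a + b + noise

def rlvr_step(tasks, samples_per_task=4):
    """One RLVR-style step: sample several attempts per task and keep only
    the attempts the verifier accepts -- these become the training signal."""
    accepted = []
    for task in tasks:
        for _ in range(samples_per_task):
            answer = policy(task, temperature=1.0)
            if verifier(task, answer):
                accepted.append((task, answer))
    return accepted

random.seed(42)
tasks = [(2, 3), (10, 7), (1, 1)]
kept = rlvr_step(tasks)
print(len(kept), "verified traces kept out of", len(tasks) * 4, "samples")
```

Raising `samples_per_task` is a crude analogue of spending more test-time compute: more attempts per problem increase the chance that at least one verified reasoning trace is found.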
In various interviews, Karpathy has discussed several other important AI trends:
AI Trend 1: The Emergence of Software 3.0. Karpathy delineates a transition from Software 1.0 (explicit code) and Software 2.0 (neural networks) to Software 3.0. In this new paradigm, large language models function as a new class of programmable computers, where natural language prompts serve as the source code. This shift fundamentally changes how humans instruct machines, moving from deterministic logic to probabilistic negotiation with AI models, requiring new skills in prompt design and intent specification.
AI Trend 2: AI as Foundational Utilities. Similar to public utilities like electricity, powerful AI models are becoming foundational infrastructure. Major tech companies invest heavily in training massive models, which are then delivered as services via APIs. This centralizes capital-intensive infrastructure while democratizing access to advanced AI capabilities, allowing developers to build applications on top of these “intelligence utilities” without needing to train their own base models from scratch.
AI Trend 3: From Autonomy Hype to Collaborative AI. Moving past visions of fully autonomous systems replacing humans, the focus shifts to practical, partially autonomous products. Karpathy likens effective AI tools to an “Iron Man suit,” augmenting human capabilities rather than replacing them. The most successful applications in 2025 embrace a human-in-the-loop approach, using AI for heavy lifting while relying on human judgment, oversight, and verification for critical tasks.
AI Trend 4: The Reality Check on AI Agents. While the concept of autonomous AI agents performing complex tasks is popular, Karpathy provides a grounded assessment. He argues that current agents are often “cognitively lacking,” struggling with novel tasks, long-term memory, and reliable execution. True, capable general intelligence agents are likely still a decade away, requiring significant breakthroughs beyond current model architectures and training methods to overcome existing limitations.
AI Trend 5: The Redefinition of Coding. Programming is evolving into “vibe coding,” where developers use natural language to express intent, and AI generates the underlying code. This democratizes software creation but shifts the required skill set. The focus moves from syntax and algorithms to high-level system design, problem decomposition, and the ability to effectively evaluate and debug AI-generated code. Judgment and domain expertise become the new primary assets.
AI Trend 6: Rethinking AI Training and Architectures. Karpathy highlights deep flaws in current AI training, particularly Reinforcement Learning, describing it as inefficient and noisy. He points out issues with pre-training on low-quality internet data, leading to models that over-memorize rather than generalize. This suggests a pending shift towards new training paradigms, possibly involving better data curation, process-based supervision, or entirely new learning mechanisms to overcome current cognitive deficits.

Karpathy’s 6 trends for AI. Note that “training” and “teaching” in the last trend refer to how we train AI models, not to the teaching of human students. (Nano Banana Pro)
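Karpathy’s Software 1.0 → 2.0 → 3.0 progression (Trend 1 above) becomes concrete in a deliberately tiny sentiment classifier. Everything here is my illustrative sketch: the word lists, the weights, and `call_llm` (a hypothetical stand-in for any hosted model API) are invented for the example.

```python
# Software 1.0: the programmer writes explicit logic.
def sentiment_v1(text: str) -> str:
    negative_words = {"bad", "broken", "slow", "confusing"}
    hits = sum(word in negative_words for word in text.lower().split())
    return "negative" if hits > 0 else "positive"

# Software 2.0: the "program" is learned parameters (sketched here as weights
# a training process would have produced, not hand-written rules).
WEIGHTS = {"bad": -1.0, "great": 1.0, "slow": -0.5, "love": 1.0}

def sentiment_v2(text: str) -> str:
    score = sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())
    return "negative" if score < 0 else "positive"

# Software 3.0: the "source code" is an English prompt; the computer is an LLM.
# call_llm is a hypothetical stand-in for any model API.
PROMPT = (
    "Classify the sentiment of the following review as exactly one word, "
    "'positive' or 'negative':\n{review}"
)

def sentiment_v3(text: str, call_llm) -> str:
    return call_llm(PROMPT.format(review=text))
```

In Software 3.0, the prompt string is the artifact you version, review, and debug, which is why prompt design and intent specification become core professional skills.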
Simple AI Reduces Doctors’ Burnout
A study with 100 physicians at Sutter Health in California found benefits from using a simple AI tool: an AI‑powered ambient scribe that automatically captured and summarized patient–physician conversations, generating draft notes directly in the electronic health record. Three highlight statistics from the report:
- Physicians who spent at most an hour per week on after‑hours notes rose dramatically from 14% to 54%. (Good, since we want doctors to spend most of their time on actually providing healthcare, not on documenting what they did.)
- Physicians who felt they could give patients their full attention during visits increased from 58% to 93%. (Likely leading to better results, but definitely better for the patient experience.)
- Burnout scores fell from 42% to 35%. (Good, since it’s expensive to train physicians, and we want to avoid having them retire too early.)

The percentage of doctors who devoted their full attention to the patients increased from 58% to 93% as a result of using a simple AI tool. (Nano Banana Pro)
An early pilot project in April 2024 had less positive outcomes. At the time the AI tool was not fully integrated into the electronic health record system (EHR). Physicians either had to copy and paste into the EHR or do an additional step to incorporate the AI writeups. That has changed since then, and the technology is now fully integrated into the EHR.
The project lead, Dr. Veena Jones, said that while she has previously often been in the position of “pushing” necessary but difficult IT changes, to the chagrin of doctors and other health professionals, that dynamic has reversed in the case of ambient AI. Doctors want the AI tool and are “pulling” to get access. These issues underscore the importance of designing AI systems that fit seamlessly into established work patterns, unless we’re able to completely redesign the full workflow in one go, which is rarely possible in legacy environments such as an existing health clinic.

A notable aspect of the AI trial at the clinic: the doctors actively wanted the AI. This is in stark contrast to management’s usual experience of introducing new computer tools, which are typically resisted by clinical staff. (Nano Banana Pro)
From a UX standpoint, this case exemplifies how well‑designed AI can reduce cognitive and administrative burdens. The AI scribe freed physicians from the dual task of listening to patients and typing notes, thereby enhancing presence and empathy during appointments.
For designers working in healthcare, the lesson is clear: AI should act as an unobtrusive assistant that fades into the background. Features such as ambient listening, automatic summarization, and easy corrections can support professionals without adding complexity. Moreover, the pilot highlights the value of flexible customization; clinicians have diverse note‑taking styles and informational needs, so AI systems must adapt rather than impose rigid formats.

The burnout reduction was perhaps the most generalizable finding from the study of introducing AI into a healthcare setting. The AI vastly reduced the late-night hours doctors spent keeping up with paperwork and allowed them to focus on the parts of the job they like. (Nano Banana Pro)
Fast Growth of Useful AI Services
Faithful readers will recall that I am a fan of HeyGen for avatar animation. While I have experimented with other avatar tools, such as Hedra, Humva, and Wan (for shorter clips), I keep coming back to HeyGen’s Avatar IV model as my go-to tool for animating the singers in my music videos and the presenters in my explainer videos.
One downside of HeyGen is its roots as a product to generate boring corporate HR and PR videos. It performs well when animating straightlaced, corporate-style, photorealistic spokes-avatars, but virtually always fails with animal characters. (Wan does a little better there.) I was also not fully satisfied with HeyGen’s rendition of Santa Claus in my Christmas song, because I had chosen a nostalgic, painted Christmas card style rather than photorealism. (I assume my readers are old enough to know that Santa is not real, so it’s highly appropriate to show him as a nostalgic painting.)
Anyway, despite the criticism, HeyGen is the best. The company recently announced its usage statistics:
2023: 4 million minutes of video generated
2024: 24 million minutes of video generated
2025: 101 million minutes of video generated
HeyGen usage grew by more than a factor of 4 this year. Very impressive! I did my share to rack up those minutes. While I did animate a few avatars in 2024, the results were so poor that I only made the bare minimum of AI videos to keep up with the evolving technology. I think I made around 10x more avatar video with HeyGen in 2025, now that it’s a satisfying creative experience to make AI videos. (Though still a frustrating one at times, due to the many limitations, especially regarding video length in most tools.)
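For readers who like to check the numbers, here is a quick back-of-the-envelope sketch of those growth factors, using only the minute counts from HeyGen’s own announcement (the code is illustrative arithmetic, not anything HeyGen published):

```python
# HeyGen's self-reported minutes of video generated per year,
# taken from the company's announcement quoted above.
minutes = {2023: 4_000_000, 2024: 24_000_000, 2025: 101_000_000}

# Year-over-year growth factors.
growth_2024 = minutes[2024] / minutes[2023]  # 24M / 4M = 6x
growth_2025 = minutes[2025] / minutes[2024]  # 101M / 24M, a bit over 4x

print(f"2023 to 2024: {growth_2024:.1f}x")
print(f"2024 to 2025: {growth_2025:.1f}x")
```

So growth “slowed” from 6x to roughly 4.2x year over year, which is still extraordinary for absolute volume at this scale.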
Another interesting AI company press release, from Manus: the Manus AI service reached $100 M in ARR (annual recurring revenue) in just 8 months after launching earlier this year. A rather nerdy statistic is that it has now processed 147 trillion AI tokens.
Even more impressive: Manus only has 105 employees, which means that each person generates almost a million dollars per year in revenue. This is a good example of the efficiency of small AI-powered superteams I discussed in my “year in review” newsletter.
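The per-employee figure is a one-line calculation from the two numbers in the press release (again, just illustrative arithmetic):

```python
# Numbers from Manus's announcement: $100M ARR, 105 employees.
arr_dollars = 100_000_000
employees = 105

revenue_per_employee = arr_dollars / employees
print(f"${revenue_per_employee:,.0f} in annual revenue per employee")
```

That works out to roughly $952,000 per employee per year, just shy of the million-dollar mark.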
Manus is a general AI agent that’s useful for tasks such as collecting financial data across all your competitors, aggregating the data in a spreadsheet, and creating an attractive slide presentation with the findings.
I have experimented with Manus a few times, both the original 1.0 release and the current version 1.5, but don’t find it very useful for my use cases. However, the statistics clearly show that Manus is indeed useful for many people since it’s been racking up so much use so fast, despite being priced at the higher end of AI tools.
Truly clever AI that performs an economically valuable task for businesses can get away with setting tokens on fire and charging accordingly. It’s no problem to pay $100 for an AI to do a job that would have cost $1,000 if a human professional had done it. (Say, digging through all those annual reports to extract the data and analyze it.)
Expensive AI is a sign of AI success. And don’t worry, AI prices have been dropping faster than the proverbial stone (defying Galileo’s famous experiment from the Leaning Tower of Pisa), so what’s expensive now will be cheap next year and almost free in two years. By then, we’ll have even more advanced AI capabilities to consume our money, so budget to spend more money on AI, not less, even as it gets cheaper for any given level of task performance.

Galileo proved that all objects fall at the same rate under gravity, regardless of their mass. But AI prices fall faster. (Nano Banana Pro)
