
UX Roundup: UX Leaders Need Profits | In-depth AI | Emotional Avatar Animation | Construction Time | User Needs

  • Writer: Jakob Nielsen
  • 9 minutes ago
  • 10 min read
Summary: UX should quest for profits, not a seat at the table | In-depth AI reviews of draft academic papers | Emotional lip-synching of avatars when singing | Time needed to build AI supercompute data centers | Discover user needs

UX Roundup for December 1, 2025. (Nano Banana Pro)


Abandon Your CEO Crush

Matthew Holloway published a great article deploring design leaders’ fixation on reporting directly to CEOs. He wrote that this resembles adolescent crushes: idealized fantasies about what we want rather than reality. Like celebrity crushes (think Jony Ive or Brian Chesky), “CEO crushes” are products of mythologized branding campaigns that reflect the admirer’s desires more than actual relationships.


The harsh truth? Your CEO crush likely doesn't know who you are: just that person in black talking about humanity-centered design and ethics. Many design organizations thrive without direct CEO reporting. The real question isn’t whether you sit beside the CEO, but why he or she would possibly want you there.


CEOs value design leaders who can lead effectively, understand business, align organizations around actionable plans, and define clear metrics proving design’s impact. If you can translate the CEO's vision into reality while finding new growth opportunities, they'll bring you closer, but only after you've proven trustworthy with the business.

Instead of chasing CEO access, ask yourself: What makes me indispensable to my current business? Stop demanding a seat at the table. Start delivering dramatic, game-changing value. Meet customers. Turn needs into revenue. Demand accountability for results. Stop waiting to be noticed: lean in, engage, and build tangible value.



To get a “seat at the table” (the metaphor for people actually listening to you), focus on profits and kill the CEO crush. (Seedream 4)


I recommend that you read Holloway’s full piece and redirect your “crush” accordingly, to be in love with profits rather than that infamous table. If you sit at the table but don’t make money for the other people around that table, they’ll stop listening fast.


Deep Review

You’ve heard of Deep Research (and hopefully use it almost daily): AI that “thinks harder” to write the answer to your question by iteratively pursuing leads for more pertinent information, which it evaluates and synthesizes.


Now enter what I’m calling “Deep Review”: AI that thinks harder to evaluate your manuscript and suggest improvements. Refine.ink is a new tool designed to review academic research papers in depth. The service charges $50 for one review (or $300 for a package of 10 reviews), which they claim is necessary to cover the amount of AI compute consumed in a review.



Deep review is a new class of AI service that goes deep in the analysis of a single academic manuscript. (Seedream 4)


Refine “combines the strengths of several leading AI models” (though they don’t tell us which ones), and devotes many hours of compute to examining every detail of a paper. It doesn’t give your manuscript a once-over; it gives it a “many-over” and combines the results into a single set of recommendations.



Currently, “Deep Review” services like Refine work by having AI review the manuscript many times. It’s the equivalent of asking all the other scientists in a research group for manuscript feedback, but having somebody collate and synthesize all the feedback into a single set of recommendations. In 10 years, equivalent services will spend thousands of times more AI tokens on a paper review, which will be the equivalent of having a workshop of the world’s leading researchers in a field devote a full week to discussing and analyzing a single paper. (Seedream 4)


I used Refine to review the draft of one of my recent articles, and I was not impressed. It did identify a few areas for improvement, but fewer than I got from Gemini 2.5 Pro Deep Think (which admittedly requires a $250-per-month Google Ultra subscription that also includes many other features). In particular, Refine only identified formal weaknesses in my arguments and didn’t provide suggestions for how to take my ideas further, which Gemini did.


I have seen many academics praise Refine for its ability to review academic papers, so my comparatively poor results likely come from Refine’s focus on such papers, as opposed to the popularizing, influencer-style articles I write.


If you are an academic researcher, I recommend you give Refine a try with your next draft paper. It’s unconscionable to subject other researchers in your group to reviewing a manuscript that contains weaknesses an AI could have spotted for $50. And it’s even more unethical to ask unpaid journal referees to identify these weaknesses for you. (Not to mention that a better manuscript improves your chances of getting the paper accepted.)


If you do try Refine, let me know how it went.


In general, I am bullish on super-scaled AI models that allocate many millions of thinking tokens to each user problem for a better solution than what you get from the standard thinking models included with chatbot subscriptions. The AI scaling laws imply that each time you 10x the thinking budget you only improve the answer by one step, meaning that 100x the budget (which may be what you get for $50) will only give you two steps up, and a 1,000x token spend (costing maybe $500) gives you three steps up in answer quality. But for some tasks, that’s peanuts, relative to the economic worth of a better answer.
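
To make that arithmetic concrete, here is a minimal sketch of the logarithmic relationship (the multipliers and dollar figures are the rough assumptions from the paragraph above, not measured data):

```python
import math

def quality_steps(compute_multiplier: float) -> float:
    """Scaling-law heuristic: each 10x of thinking budget buys ~1 quality step."""
    return math.log10(compute_multiplier)

# Assumed price points from the discussion above (illustrative only).
for price, multiplier in [(50, 100), (500, 1_000)]:
    print(f"${price}: {multiplier:,}x tokens -> {quality_steps(multiplier):.0f} steps up")
```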



We will have many more expensive AI services in the future, costing thousands of dollars per run and consuming billions of AI tokens, once they can solve truly high-value problems. (Seedream 4)


Most academic papers probably have zero economic value to society, but some are worth millions. For an average estimate, let’s say that a decent scientist costs $100,000 per year and writes 5 papers per year. (I wrote more back when I was a scientist, but most produce less research output.) That means each paper costs $20,000, plus the cost of any lab resources consumed during the research. Given those expenses, it should be worth $500 to improve the paper, which is usually the only tangible product to come from academia.
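
The same back-of-the-envelope in code, with every number being an assumption from the paragraph above:

```python
salary_per_year = 100_000    # annual cost of one scientist, USD (assumed)
papers_per_year = 5          # typical research output (assumed)

cost_per_paper = salary_per_year / papers_per_year   # lab costs excluded
review_spend = 500           # suggested deep-review budget, USD

print(f"Cost per paper: ${cost_per_paper:,.0f}")                  # $20,000
print(f"Review spend:   ${review_spend} = "
      f"{review_spend / cost_per_paper:.1%} of production cost")  # 2.5%
```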



As a rough estimate, each academic paper rolling off the research assembly line costs more than $20K. (Seedream 4)


Users Like AI Products

Olivia Moore (my favorite VC) has posted a set of slides about recent trends in AI use in the United States. At the most basic level, ChatGPT is still number one, with five times as many paid subscribers as Google’s Gemini. (This might change now that Gemini has released a much-improved model.)



While AI still has limitations and flaws, we should remember that so did the previous generation of user interfaces. Users are voting with their clicks and subscription dollars in favor of the new world. (Nano Banana Pro)


Over the last year, ChatGPT more than doubled its number of paid subscribers, from 6 million to 13 million. Incredible continued growth. However, Google did better, going from 1 million to 2.5 million, which is an even faster growth rate. As anybody who has studied exponential math will know, a faster growth rate will eventually overtake any initial lead. Of course, it’s possible that either Google will fumble (as they did big time in 2023 and early 2024) or that OpenAI will up its game.
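
As a purely illustrative piece of exponential math (assuming, unrealistically, that both one-year growth factors held steady forever), here is when the lines would cross:

```python
import math

# Paid subscribers (millions) and one-year growth factors quoted above.
chatgpt, chatgpt_growth = 13.0, 13 / 6      # ~2.17x per year
gemini, gemini_growth = 2.5, 2.5 / 1.0      # 2.5x per year

# Solve chatgpt * chatgpt_growth**t == gemini * gemini_growth**t for t (years).
t = math.log(chatgpt / gemini) / math.log(gemini_growth / chatgpt_growth)
print(f"Crossover in ~{t:.0f} years, if both rates somehow held")  # ~12 years
```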



Traditional user interfaces are increasingly being relegated to the Museum of Technologies Past, as users turn to AI to directly satisfy their needs instead of having to navigate awkward websites. (GPT Image 1)


User engagement with AI is increasing: ChatGPT’s DAU/MAU is now at 17%, whereas legacy Google search has decreasing engagement, with DAU/MAU down to 25%. (Still better than ChatGPT, but the lines may cross in 2026.) The DAU/MAU statistic indicates what percentage of a product’s monthly users show up on an average day.
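
For anyone who wants the formula spelled out, here is a minimal sketch with hypothetical user counts (only the 17% figure comes from the slides):

```python
def dau_mau(avg_daily_actives: float, monthly_actives: float) -> float:
    """Stickiness: the share of a month's users active on an average day."""
    return avg_daily_actives / monthly_actives

# Hypothetical product: 100M monthly users, 17M active on an average day.
print(f"DAU/MAU = {dau_mau(17e6, 100e6):.0%}")  # 17%, ChatGPT's reported level
```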



A peek inside OpenAI’s growth team war room? However, I think Google will strike back with better AI and abandon any long-term hope for legacy search. (Nano Banana Pro)



Now, let’s peek into OpenAI’s boardroom to consider the eventual fate of growth curves rising at different rates. (Nano Banana Pro)


Finally, as the ultimate indicator of whether users feel that they get value from AI, the retention curve is high, with 65% of paid ChatGPT users still subscribing after a year. (Before AI, it was considered great when a consumer software subscription renewed at about 35% after a year.) Other AI products are below ChatGPT in retention, while still getting numbers that would have been considered outstanding before 2023.

AI Helps Old Users Stay Creative

New music video: How old knowledge workers retain their productivity and creativity by using AI to compensate for declining fluid intelligence (YouTube, 3 min.)



AI revives decaying brains, like a virtual phoenix. (Nano Banana Pro)


Emotional Lip-Synching of Singing Avatars

I made a short demo video with a face-to-face (literally) comparison of two AI models for animating singing avatars: HeyGen Avatar IV and Lemon Slice v. 2.7 (YouTube, 1 min.).



Comparing two AI models animating the same avatar singing the same song. Who sings better? (Seedream 4)


Lemon Slice is said to better animate a singer's emotions, whereas HeyGen has traditionally focused on speaking (not singing) avatars that present dry corporate information rather than emotional songs. I used the latest version of Lemon Slice, which is currently limited to generating low-resolution videos, so try to ignore the resolution when comparing the two lip-synch models.


Which model do you prefer? Let me know in the comments! I used a clip from my recent music video on Forward Deployed Engineers (YouTube, 3 min.).



Can Lemon Slice cut HeyGen down to size when it comes to singing avatars? (Nano Banana Pro)


I think HeyGen went overboard in its animation of the avatar’s scarf, but that’s not a lip-synch issue.


Lip-sync quality is determined by how accurately the AI converts audio phonemes (sounds) into visual visemes (mouth shapes). Lemon Slice’s model appears to use the audio amplitude (volume) and pitch to drive not just the mouth, but the entire face. When a singer holds a note (e.g., “Lovvve”), the phoneme doesn’t change, but the energy does. Lemon Slice avatars will often add a slight vibrato to the jaw or close their eyes to simulate intensity. This “whole face” activation is what makes the performance feel “sung” rather than “spoken.”
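
To illustrate the idea (this is a toy sketch, not Lemon Slice’s or HeyGen’s actual implementation; the viseme table and thresholds are invented for the example), here the phoneme picks the mouth shape while audio energy drives the rest of the face:

```python
# Toy model: phoneme -> viseme for the mouth; audio energy animates the face.
PHONEME_TO_VISEME = {
    "AA": "open_jaw", "IY": "wide_smile", "UW": "rounded_lips",
    "M": "closed_lips", "F": "teeth_on_lip",
}

def animate_frame(phoneme: str, rms_energy: float) -> dict:
    """One video frame: mouth shape from the phoneme, intensity from energy."""
    return {
        "viseme": PHONEME_TO_VISEME.get(phoneme, "neutral"),
        "jaw_open": min(1.0, rms_energy * 1.5),   # louder note, wider jaw
        "brow_raise": 0.6 if rms_energy > 0.7 else 0.1,
        "eyes_closed": rms_energy > 0.85,         # peak-intensity cue
    }

# A held "Lovvve": the phoneme stays the same while the energy swells.
for energy in (0.4, 0.7, 0.9):
    print(animate_frame("AA", energy))
```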


The Lemon Slice architecture prioritizes the emotional mapping of audio amplitude to facial expression, allowing it to interpret the melismatic nature of singing (pitch variation, sustained vowels, vibrato) with greater naturalism than HeyGen. HeyGen’s algorithms are aggressively tuned for spoken phonetics, resulting in high precision for consonants but, according to some influencers, a “robotic” or “stiff” delivery when applied to singing avatars.



Singing with full emotion remains a challenge for AI avatar models, especially HeyGen, which was originally optimized for corporate spokes-avatars. (Nano Banana Pro)


Personally, I have noticed some issues with HeyGen’s avatars when singing, especially on drawn-out notes. This problem was particularly striking in the opera I recently created about Direct Manipulation. (Opera arias are notorious for going on and on with a syllable before getting to the point.)



The development history of an AI model shapes the impact its content has on viewers, often depending on the dominant media used for training and reinforcement learning. A model trained on stiff corporate speakers may not emote as well as we want for music videos. (Nano Banana Pro)


Theory aside, I thought HeyGen did best in my test, so I’ll stick with it for now. However, they are on notice to improve their performance in music videos.



My avatar recommendation remains HeyGen. (Nano Banana Pro)


Turning for a moment from avatar animation to cartooning: the examples in this news piece may indicate that Nano Banana Pro was indeed mostly trained on rather boring illustrations, because it doesn’t seem to draw great cartoons.


Construction Time to Build AI Supercompute Data Centers

Epoch AI did a bit of detective work with satellite images to locate OpenAI’s “Stargate” supercompute data center in Abu Dhabi. According to the press release, the center aims to scale to 1 GW of compute by 2027; the satellite images show that this is a stretch, but probably possible to reach by Q3 of 2027, with 200 MW possible by the end of 2026.


Epoch AI is a great source of many trends in AI, and they also collected information about other announced AI supercompute centers and their likely build time from initial construction to reaching 1 GW of compute. Here are the time estimates for some of the main AI contenders:


  • Microsoft (“Fairwater” center, Atlanta): 3 years

  • Amazon (New Carlisle): 2 years

  • OpenAI (both in Abu Dhabi and in the United States): 2 years

  • xAI (“Colossus 2” center): 1 year


Of the 5 AI supercompute centers tracked by Epoch AI, Amazon’s is likely to reach 1 GW first, in early 2026, and then xAI will be second, a few months later. However, xAI started building its center almost a year after Amazon did. Nobody accelerates as Elon Musk does!


I find it interesting that AI compute is now measured by power consumption rather than FLOPS. This makes sense for two reasons: First, electricity is likely to be the limiting factor that determines which countries reach superintelligence, with the associated lift in the standard of living for their citizens. Second, no normal person has any inkling of what FLOPS are or what they do, let alone what a gigaFLOPS is.


In case you care: FLOPS = Floating-Point Operations Per Second. It measures how many math calculations (with decimals) a computer can do each second. GigaFLOPS = 1 billion of those calculations per second. A modern smartphone GPU runs at around 1,000 GigaFLOPS (1 TeraFLOPS), while a high-end AI training cluster might hit hundreds of petaFLOPS (hundreds of millions of GigaFLOPS).
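
A quick unit-conversion sketch using those rough figures (the 500-petaFLOPS cluster value is an assumed point within “hundreds”):

```python
GIGA, TERA, PETA = 1e9, 1e12, 1e15   # the FLOPS unit ladder

smartphone_gpu = 1_000 * GIGA        # ~1 TeraFLOPS, per the figures above
ai_cluster = 500 * PETA              # assumed "hundreds of petaFLOPS" point

print(f"Phone GPU:  {smartphone_gpu / TERA:.0f} TFLOPS")
print(f"AI cluster: {ai_cluster / GIGA:,.0f} GigaFLOPS "
      f"({ai_cluster / smartphone_gpu:,.0f}x the phone)")
```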


Estimates suggest generating 1 second of high-quality AI video requires somewhere in the range of 1–100 petaFLOPs’ worth of compute (total operations, not a per-second rate).



The race to build AI supercompute centers is the physical part of the race to superintelligence. Pouring concrete, pulling data transmission wires, and installing electricity turbines is a big part of determining which countries will improve their citizens’ living standards with AI. (Nano Banana Pro)


To Meet User Goals, You Must Discover Them First!

In halls where engineers their visions weave,

Let not the builder’s pride the user grieve!

Go forth among the masses, learn their ways,

Observe their toils through anthropology’s gaze.

The Forward Deployed, with notebook keen in hand,

Shall dwell where users work, and understand

What hidden needs lie buried, unexpressed,

What secret pains afflict the human breast.

Before one pixel graces gleaming screen,

Divine the goal, the purpose, the unseen—

For interfaces forged in empathy’s flame

Shall win the user’s heart, and deathless fame.



Focus on user goals: use UX anthropology or Forward Deployed Engineers (preferably both) to discover users’ deep needs. (Seedream 4)


Welcome to December


(Nano Banana Pro)


One of the nice Christmas traditions is to have an Advent calendar, which has a numbered window for each day in December until Christmas: every day, you open the window for that date and see a small Christmas-related picture hidden behind it. Simpler times, when a drawing could make children happy. I still maintain this tradition and wanted to share it with you, if only as a picture.


This Advent calendar shows the Round Tower from 1642 in my hometown of Copenhagen, Denmark. Fun fact: when my Dad was a student at the University of Copenhagen, he lived rent-free next to this Tower under a royal ordinance (also from 1642), which granted free housing to the two smartest graduates from my Dad’s provincial high school so they could study in the capital.
