UX Roundup: Year of the Horse | Creative Workflow | AI Coding | Usability Scaling | Was I Right or Wrong? | 30,000 Citations | Winning AI Video
- Jakob Nielsen
Summary: Year of the Fire Horse | Reversing creative workflows | AI coding wins complete dominance | Usability scaling continues | 40 years of being right become 40 years of being wrong | Usability Engineering book passes 30,000 citations | Award-winning AI video

UX Roundup for February 16, 2026: Happy Year of the Horse!
Year of the Fire Horse
Happy Year of the Horse! But it’s not just any old horse this year: it’s the Fire Horse, a combination that only comes around every 60 years. 60 years is also the approximate lifetime of the previous user interface paradigm: command-driven interactions. (Initially, commands were written as words in command-line systems like Unix and DOS. Later, commands were issued by clicking icons and selecting from menus in graphical user interfaces like Mac and Windows.) We are now transitioning to the new UI paradigm of intent-driven interactions based on AI.
A Fire Horse year occurs only once every 60 years and combines the Horse’s speed and freedom with the Fire element’s intensity, volatility, and transformative potential. It is associated with rapid change, disruptive breakthroughs, and a tendency to burn through old structures at pace. Exactly what we need to bring AI to the world.
First new interaction paradigm in 60 years. First Year of the Fire Horse in 60 years. I need to celebrate this, so I made a Year of the Fire Horse music video (YouTube, 3 min.). For comparison, watch my song marking the beginning of the Year of the Snake, made only a year ago, and note how much AI video improved during the Year of the Snake.
Fire Horse years are traditionally seen as high-opportunity but high-risk: they reward bold movement and experimentation, yet demand wise guidance to avoid burnout, fragmentation, or chaos. That is an almost too-perfect metaphor for the current AI moment: explosive capability, thin guardrails, enormous upside if steered well.
Even better, in Chinese astrology, the Tiger (my symbol) and the Horse (this year’s zodiac animal) form one of the “Three Harmonies” (San He), together with the Dog (which I couldn’t fit into the song). The Horse brings speed, creativity, and restless forward motion, while the Tiger contributes strength, leadership, and protective focus. Classical descriptions already cast Tiger as the bold strategist and Horse as the free-spirited engine of action.
A Tiger that aligns with its Horse ally magnifies its own strengths and channels the Horse’s power toward shared, constructive goals. They are natural allies: UX Tigers (the fearless defender of usability) is the partner to the Fire Horse (the raw, chaotic energy of AI). The Tiger steers; the Horse powers, making UX the wise rider whose courage and judgment turn volatile innovation into meaningful progress.

Happy Year of the Horse! Watch my music video, “filmed” on the Great Wall of China. (Nano Banana Pro)
Reversing Creative Workflows
As a consequence of AI, I have reversed much of the traditional creative workflow in recent projects. For example, I am currently working on a comic strip set in the Viking Age, and to decide on the drawing style for the full comic, I simply created final artwork for a fully-designed mini-comic in each of the styles I considered.
This is the exact opposite of all traditional workflow recommendations, which have always started with rough sketches that were then gradually refined before completing a final story. Similarly, when writing the storybook for a long comic book, I don’t fiddle with the details. I just block out the storyline and convert it into a full draft write-up with AI. I then render this draft storybook as final artwork in my chosen style, and only then do I start on the review and editing stages. It’s much easier to envision the design details and the consequences of changing lines of dialogue when seeing everything in final format.

AI allows us to reverse the traditional creative workflow, to start with the end result, making it much easier for creators to visualize their work from an early stage. (Nano Banana Pro)
There’s a reason we were all taught in school to start with outlines and drafts before writing an essay, and a reason traditional comic book artists started with rough manuscripts and then blocked out the action on a storyboard: it would be too much work to meticulously draw 40 pages of comic strips before having decided precisely what should be in each frame. If you wanted to make even the smallest change, you would be throwing away weeks of the artist’s time. Now, at worst we have wasted 5 minutes of AI time, and it takes only a few seconds to redraw any page I decide to modify in the storybook.
For my Viking style experiments, I decided to make a throwaway one-page comic about the Viking jarl Rollo’s siege of Paris in the year 885. Here are a few of the styles I tried, all drawn with Nano Banana Pro:

Classic adventure strip style.

Belgian “ligne claire” (clear lines) style, similar to the Tintin stories.

Noir-style modern graphic novel.

Seinen Manga. (The speech bubbles in the bottom panel are garbled. The text could have been fixed in a minute or two in the Freepix editor, but this was not worth doing for a style exploration.)

Humor strip.
Which of these styles would you choose for a longer story set in the Viking Age?
Reversing the legacy creative process also works for other media forms, besides writing and comics. For example, I have a music video coming out next week where my original song lyrics ran for 7 minutes. I know this because my new workflow is to immediately generate a complete song with Suno as soon as I finish the draft lyrics. Once I hear how the lyrics sound in my chosen genre, I can then start editing. In this case, because the song was too long, I cut several verses and also edited many lines for improved singability. For other songs, I have changed the genre after hearing how it sounded in the genre I was originally targeting. Or, for smaller tweaks, I’ll add or delete instruments. (For next week, adding a saxophone made all the difference. In the old workflow, it would be much too expensive to hire an entire orchestra and bring in a sax player for one take while you’re still working on draft lyrics, just to hear how they would sound with various instrumentation.)

Comic books, music, articles, user interface designs: whatever your creative project, I suggest using AI to reverse the traditional workflow and start at the end, with the “final,” fully-refined product. Then iterate and refine some more. (Nano Banana Pro)
AI Coding Wins Complete Dominance
Three of the world’s top programmers have recently stated that AI coding is taking over: Andrej Karpathy, Linus Torvalds, and David Heinemeier Hansson (better known as DHH). DHH’s write-up of the use of AI agents for coding at 37signals is particularly interesting because 37signals has always been one of the most based software development companies, with an emphasis on excellence and small teams. One example he gave: they asked AI to critique some software; it pointed out a weakness; they agreed and asked the agent to fix the problem; and 20 minutes later it was done.
Karpathy mentioned that over the last few weeks, he has shifted from 20% of his code being written by AI (and 80% by himself) to 80% now written by AI, with the remaining 20% being human edits and touch-ups.
Linus Torvalds is probably one of the world’s most arrogant software engineers, and as the creator of Linux, he has earned that right. Even Linus has recently stated that he used AI for parts of his latest coding project. Linus’ admission that he used AI created what observers termed a “permission effect.” When a figure historically associated with purist, manual engineering openly uses AI for appropriate tasks, it legitimizes similar choices by others who previously felt guilty about using AI. The symbolic weight matters precisely because Linus Torvalds represents uncompromising technical standards.
February 2026 is a watershed moment in AI agent capabilities. As I have mentioned before, AI capabilities in software development are advancing faster than its abilities to do UX work. This is a classic example of the “jagged” capabilities of AI, and thus is not surprising as such. AI is still improving fast at UX, and I think it’s highly likely that there is a scaling law for AI’s UX skills that will make it several times better every year.
Today, the world’s best software developers acknowledge that AI is better than them at coding. By 2028 (or 2030 at the latest), the world’s best UX designers and usability researchers will have to acknowledge that AI is better at every aspect of UX design and research than they are.
This doesn’t mean that human software engineers are useless: the best of them can now produce 10x or 100x as much software as they used to make. (And they were already 10x to 100x better than average programmers). Similarly, each of the best UX folks will soon be able to do the design work that used to take a 100-person UX department. Pancaking the UX profession! Watch what happens to programmers for a preview of what will happen to UX staff over the next 2–4 years.

Over the past two weeks, the world’s best programmers have begun to acknowledge that AI is superior at coding. UX lags behind, but AI’s UX skills will catch up shortly. (Nano Banana Pro)
Usability Scaling Continues
My proposed “Usability Scaling Law” is the idea that AI’s ability to carry out UX activities such as user testing, user interviews, discovery research, heuristic evaluation, and UI design is likely to scale with more compute, with more UX-specific training data (such as recordings from user testing sessions), and as AI models in general get better.
Even though AI models are famously “jagged” (i.e., better than humans at some things, but still worse than humans at other things), the frontier advances at pace. This again means that, even though UX work is heavily dependent on judgment and context and thus harder for AI than most other tasks, AI will be better at it than even the world’s most talented human UX staff in a few years. My current guess is that this will happen around the time AI achieves superintelligence in 2030, meaning that current UX workers still have four years left to pivot their careers away from the manual execution of the legacy UX design process I advocated during my 40 years of teaching and consulting before AI.
Usability scaling remains a conjecture that has not been fully proven, unlike general AI scaling, which is extremely well supported. However, we now have one more data point that supports usability scaling.
The Baymard Institute announced that as of February 2026, their AI service is now capable of analyzing e-commerce website designs according to 209 of their 769 usability guidelines, or 27%.
This is up from January 2026, when their AI could handle 154 usability guidelines, as I discussed in a previous newsletter.

The number of Baymard Institute e-commerce usability guidelines that AI can apply with at least 95% accuracy, by date. Note that the data are plotted on a logarithmic scale, as is appropriate for a scaling law, which is usually exponential.
Anybody with the slightest scientific inclination will realize that this chart is still insufficient to prove the Usability Scaling Law: too few data points, and insufficient regularity in the distribution. However, the plot certainly doesn’t disprove the Law, and it does lend it a slight degree of support.
You might say that it’s not very impressive that AI can currently apply 27% of the usability guidelines when evaluating e-commerce websites. But that’s like saying that AI image generation models were unimpressive in 2024 when they often drew hands with 6 fingers. AI images improved dramatically in 2025, as we should expect from anything that follows an exponential scaling law.
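To make the exponential framing concrete, here is a rough back-of-the-envelope sketch using only the two data points cited above (154 guidelines in January 2026 and 209 in February 2026, out of 769 in total). It is an illustration, not a forecast: it simply compares the one observed month-over-month growth rate with the rate that would be needed to cover the full guideline set by the end of 2026.

```python
# Back-of-the-envelope look at the Baymard data points above.
# Caveat: two data points are far too few for a real fit; this only illustrates
# what sustained month-over-month (exponential) growth would imply.

jan_2026 = 154   # guidelines the AI handled in January 2026
feb_2026 = 209   # guidelines the AI handled in February 2026
total = 769      # full set of Baymard e-commerce usability guidelines

# Growth over the one month we can actually measure
observed_monthly_growth = feb_2026 / jan_2026             # ~1.36, i.e., +36% in a month

# Growth rate needed to cover all 769 guidelines by December 2026 (10 more months)
required_monthly_growth = (total / feb_2026) ** (1 / 10)  # ~1.14, i.e., +14% per month

print(f"Observed month-over-month growth: {observed_monthly_growth:.2f}x")
print(f"Needed to cover all {total} guidelines by Dec 2026: {required_monthly_growth:.2f}x per month")
```

The single observed monthly jump (about +36%) comfortably exceeds the roughly +14% per month that would be required, which is consistent with the expectation (discussed below) that the full guideline list could be covered by the end of the year. But one month of growth proves nothing by itself.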

Early AI is usually unimpressive, but exponential growth adds up. (Nano Banana Pro)
Exponential growth eats anything in the long run. Progress seems slow in the beginning, even if the percentage improvement each year is high: a large percentage of a small number is still a small number. But compound growth adds up, and soon enough we see that same percentage improvement applied to a mid-size number, and shortly afterwards to a large number. That’s why exponential growth is often referred to as a hockey-stick curve: it looks very flat in the beginning (unless drawn on a logarithmic scale), but then shoots up.

The classic hockey-stick curve doesn’t look like much in the beginning, but then takes over, as is always the case with exponential growth. (Nano Banana Pro)

Humans do not have good intuitions for exponential growth, because we evolved in a linear world. (Nano Banana Pro)
Baymard Institute’s AI covers so few usability guidelines in its automated design review because they require the AI to have achieved 95% accuracy in applying a guideline before adding it to the product. I simultaneously agree and disagree with this decision, depending on whether I view it from the company’s or the customers’ perspective.
From the company’s perspective, I understand why they demand extreme accuracy of their AI product. After all, if the Usability Scaling Law holds up, Baymard will be able to include the full list of e-commerce usability guidelines in their AI-driven design review product by the end of this year. If they were to target my preferred accuracy level of 80% instead of 95%, this goal would likely be met around August 2026.

Baymard wants 95% accuracy from its AI because that’s the level of its best human consultants. However, remember that most UX staff are not world-class. The average UX professional is, well, average. Heuristic evaluation is not a precise or infallible process, and the average evaluator usually performs well below the level of the best consultants. I have always accepted imperfect usability work, because a decent level of usability improvement is better for users than doing nothing while holding out for perfection. (Nano Banana Pro)
August or December for AI to take over design reviews? Doesn’t matter much. But from a selfish company perspective, there is much to be said for shipping only extremely accurate AI products: if customers try a less accurate AI and have a bad experience, the company could likely lose that customer for years to come, even after the AI product has achieved perfect accuracy. It is almost impossible to get customers to resample an offering once they have rejected it for being unsatisfactory.
Of course, in designing its product strategy, a company should be selfish and value long-term profitability. Improving the world always has to take second place.
But for improving the world, and the experience of billions of humans, it would be better to release AI with lower accuracy. I think 80% accuracy in applying usability guidelines in a design review is a better target. The reason is that lower accuracy means that the AI can apply more guidelines and therefore improve the usability of the design by a bigger margin.

If we accept AI with a lower accuracy in applying usability guidelines, we’ll get some howlers in the report. However, human review should usually be able to catch them fast. (Nano Banana Pro)
What about those 20% of errors, though? The AI would recommend several design changes that would either not improve usability (and thus waste money on implementing something useless) or even reduce it.
My point is that most of these AI errors would be caught by human review and thus never make it into the final website design. In two years, we can likely skip human oversight of AI design reviews, but doing so now would be premature.
What about the counterpoint that the human UX experts would be wasting their time reviewing those 20% of AI recommendations that turned out to be wrong? I think the cost–benefit analysis still favors accepting this “waste.” Remember that in my model (accepting lower accuracy), the AI has done twice as much work, which no longer needs to be done by humans. This time saved will vastly outweigh the time wasted in locating AI errors.
When conducting a heuristic evaluation, it takes substantial time to assess a full design from first principles when you don’t know where to look. You need to keep a completely open mind and assess everything: usability problems can hide everywhere, and a large website can easily have thousands of pages. In contrast, when you are given a list of specific (potential) usability problems that pinpoint where the design (supposedly) violates a specified usability guideline, it’s fairly quick to check whether you agree that the guideline was applied correctly and that the specified design element does in fact constitute a usability problem. (Well, it’s quick if you’re a UX expert, but if not, then you will be hard-pressed to conduct a good design review without AI help.)
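To see the shape of this trade-off, here is a minimal sketch in which every number is a hypothetical placeholder: the hours per check, the guideline counts, and the assumption that a lower accuracy bar doubles AI coverage are all made up for illustration. Only the structure of the argument (cheap verification of AI findings versus expensive first-principles evaluation) comes from the discussion above.

```python
# Illustrative cost-benefit sketch for AI design reviews at two accuracy thresholds.
# All numbers are hypothetical placeholders, not data from Baymard or anyone else.

hours_per_manual_check = 1.0   # hypothetical: expert evaluating one guideline from scratch
hours_per_verification = 0.1   # hypothetical: expert verifying one specific AI finding

# Hypothetical scenario: relaxing the accuracy bar lets the AI apply twice as many guidelines.
strict = {"name": "95% threshold", "guidelines_covered": 200}
relaxed = {"name": "80% threshold", "guidelines_covered": 400}

def expert_hours(guidelines_covered, total_guidelines=769):
    uncovered = total_guidelines - guidelines_covered
    # Humans verify every AI finding (cheap) and still evaluate uncovered guidelines manually.
    return guidelines_covered * hours_per_verification + uncovered * hours_per_manual_check

for scenario in (strict, relaxed):
    hours = expert_hours(scenario["guidelines_covered"])
    print(f'{scenario["name"]}: ~{hours:.0f} expert hours for a full review')
```

Under these made-up numbers, the relaxed threshold cuts total expert hours by roughly 30%, even though the humans must now sift through more AI findings, including the wrong ones. Verifying a pinpointed finding is far cheaper than evaluating an uncovered guideline from first principles, which is the crux of the argument.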
Thus, lower AI accuracy in applying usability guidelines during evaluation will result in a better outcome at a lower cost, improving ROI. That’s why I say that customers would benefit from an AI product that targets lower accuracy. Humanity would benefit as well, since a higher usability ROI means companies would use the method more often, resulting in products with a better user experience.
Conclusion: Customers and humanity would benefit from lower AI accuracy. The vendor will make more money in the long run from higher AI accuracy. I can’t blame them for valuing their own profitability over customers’ profitability. In any case, this is only a problem for the next one or two years, so not really a major issue in the big picture.

For the next several years, we will need human UX experts to oversee any heuristic evaluations conducted by AI. A downside of insisting on extremely high accuracy in the AI design reviews is that it will lull the humans into accepting the AI recommendations without a sufficiently critical assessment. There’s some benefit to be had from AI being wrong often enough to keep the humans on their toes. (Nano Banana Pro)
40 Years of Being Right Become 40 Years of Being Wrong

(Nano Banana Pro)
As we saw in the previous news item, AI will likely completely invalidate the manual UX design process I spent four decades evangelizing, from 1983 to 2023. Should I be ashamed of having urged hundreds of thousands of UX professionals to master a skillset now hurtling toward irrelevance? Did I lead people into doomed careers?
I feel no shame, because when I taught the old ways, they were the right ways. The legacy UX process was the best means we had of building usable products until 2023. It still functions today, though it grows more dated with each passing year and will likely vanish entirely by 2030, when superintelligent AI assumes the work.
My one regret is that I lacked greater foresight in the years leading up to the release of GPT-4 in March 2023. Yet even if I had predicted the future with perfect clarity, I believe it was still ethical to champion the traditional approach up until 2022. Companies needed to ship products during those years, and the only reliable path to good usability at the time ran through discount usability methods and the many other established design processes that were considered best practice.
My error was in assuming that what had worked for forty years would continue to define the standard. A reasonable assumption, perhaps, but one that has now been proven wrong.
I do not, however, believe UX is a doomed career, even if every task once performed by human practitioners is expected to pass to AI by 2030. New responsibilities will emerge at a higher level of abstraction, and pivoting toward them will be essential for anyone who intends to remain in the field.

Your legacy UX skills are fading in value, but the discipline of product design itself is not disappearing; it is ascending. So you don’t need to study cheesemaking. (Nano Banana Pro)
Usability Engineering Passes 30,000 Citations
My most cited publication, the book Usability Engineering, now has more than 30,000 citations on Google Scholar.
To make sense of this number, consider that the 100 most-cited papers from the roughly 11,000 published at the top outlet for HCI research, the ACM CHI conference, over the 43 years from 1982 to 2024, have a mean citation count of 1,577. Thus, my one publication has the same citation count as the sum of all citations for 19 of these 100 best-in-history papers.
The most-cited paper from everything published at the CHI conference had 6,019 citations at the time of the analysis I linked to above, meaning that my one book corresponds to 5 of the best-ever papers. (This best-ever HCI paper happens to be one of mine, from back when I was a university professor: Jakob Nielsen and Rolf Molich, Heuristic evaluation of user interfaces. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1990, 249–56. It now has 6,492 citations.)
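For anyone who wants to verify the arithmetic, the two comparisons reduce to simple division over the numbers quoted above:

```python
# Quick check of the citation arithmetic above.
book_citations = 30_000   # Usability Engineering on Google Scholar
mean_top100_chi = 1_577   # mean citations of the 100 most-cited CHI papers
top_chi_paper = 6_019     # most-cited CHI paper at the time of the linked analysis

print(book_citations / mean_top100_chi)  # ~19 "average" top-100 CHI papers
print(book_citations / top_chi_paper)    # ~5 copies of the single most-cited CHI paper
```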
To celebrate, I made a new version of the book cover, converting the original drawing into a photo with Nano Banana Pro:

As depicted on the cover, my Usability Engineering book taught the methods to bridge the gap between computer capabilities and user needs, overcoming the rapids of interface complexity.
Even though my book is now 32 years old, usability methods have not changed much, so I am not surprised that it's still being widely cited. The main change is wider use of remote usability testing, but the basic principles of user testing are the same, whether you run a session in person or remotely.
Best AI Video
The winner of the recent competition for a short video made with the new Grok Imagine 1.0 model is worth watching. Only 30 seconds. Do watch all the way to the end, past the closing titles.
The video is about Galileo’s famous troubles, which stemmed from standing up to the powers of the day with new evidence that the Earth orbits the Sun, not the other way around. Spoiler alert: In real life, if Galileo had pulled out a smartphone to prove his point, the Pope’s men would probably have burned him at the stake immediately as a sorcerer.

Does the Sun revolve around the Earth? Many people thought so, and in 1616, the authorities warned Galileo Galilei against promoting heliocentrism: the revolutionary theory, based on empirical observation rather than transmitted belief, that the Earth orbits the Sun. Despite the warnings, Galileo persisted in his research, and in 1633, the Inquisition sentenced him to life imprisonment and forced him to recant. (Nano Banana Pro)
