UX Roundup: AI Panic | Progress Indicators | Reversing Idea Flow | AI-Native Companies | Fake AI Research
- Jakob Nielsen
Summary: GUI panic vs. AI panic | Progress bars and percent-done indicators | Reversing the sequence between idea and result | AI-native companies rethink workflows | Fake research: MIT retracts AI creativity paper

UX Roundup for June 2, 2025. (ChatGPT)
GUI Panic Was Good Panic; AI Panic Has Allure
Back in the ancient days when Microsoft Windows 2.0 first hit the street (late 1987), some colleagues and I had great success selling courses on designing graphical user interfaces to enterprise IT departments that had been stuck in the mainframe era and character-based UI for too long. I coined the phrase “GUI Panic Is Good Panic” to describe the sudden sense of urgency for adopting GUI design in those IT departments.
In other words, a major shift in the technology landscape can motivate sluggish companies to finally get with the program and learn about recent user experience design patterns.
I hope we will see a similar panic in enterprise IT caused by the AI explosion, which is much more consequential than the transition from character-based UI to GUI in the 1980s. Let’s call this “AI Panic.” Right now, many designers are in denial and hope that AI will go away, or at least not invade their cozy little world. No, that’s not going to happen. AI is here to stay and will be 10–100 times bigger in a few years.

There are strong parallels between the slow introduction of graphical user interfaces in mainframe-dominated enterprise IT departments in the 1980s and the slow adoption of AI now. (ChatGPT)
Legacy companies and their UX teams will have to embrace AI in both the UI and in their design processes. The longer they defer this transition, the faster they will have to make it when it becomes unavoidable. This is very similar to the way enterprise companies ignored the introduction of the “toy” Macintosh computer and its “cute” graphical user interface in 1984. Once Microsoft and IBM got on the GUI bandwagon, the signal to modernize UI design became so blindingly strong that even the most conservative IT organization had to change within a year. Thus our many course sales.
When will AI Panic hit? My guess is 2027, when the next generation of AI products ships and becomes unavoidable. The allure of AI Panic is that at least it will push companies and UX designers out of their comfort zone and force them to change.

GUI Panic and AI Panic can both drive hesitant companies and UX staff to modernize their UI and design process. (ChatGPT)
Progress Bars and Percent-Done Indicators
New song: The Progress-Indicator Ragtime (YouTube, 2 min.).

From Result to Idea: Reversing the Ideation Flow
The AI creator Crom’s Overtures posted the insight that the directional flow between idea and image creation is often reversed with AI. For literally thousands of years, creators have first thought of an idea, and then executed that idea in their preferred artistic medium: painting, sculpture, Greek vases, etc. This can still be your workflow with AI: create your intent (what you want to show), and ask AI to produce it.
However, since AI often shows poor prompt adherence, you may get results that are not what you intended. You can also deliberately issue less-detailed prompts and see what the AI comes up with. In this creative flow, you take the media product made by the AI as your starting point and then construct a tale around it.

(ChatGPT)
I have done this many times, particularly with Midjourney, which makes beautiful pictures that I then have to figure out how to fit into my articles. My recent article on Reimagining Digital Services as Tangible Devices is another example: I asked AI to make images with minimal specification of the content (only that they should represent iconic features in a software product), after which I could ideate on what those images mean.
Here’s another example of representing software as a physical product, now for a popular consumer website:

TikTok visualized as an information appliance by ChatGPT.
First, we get this image from AI, and then we start thinking about what it says about TikTok and its users. For example, “TikTok is like a slot machine!” (Which it absolutely is, but I hadn’t thought of expressing the operant conditioning nature of TikTok use in those terms until I saw this image.)
AI-Native Companies
Bloomberg ran an interesting analysis of “AI-Native” startups, which are defined as “companies built from the ground up with artificial intelligence not just in the product, but at the heart of workflows and team structure.” The main point in the article is that these companies succeed in keeping staff numbers low due to AI productivity gains. (Something I discussed in my article on the coming pancaking of the UX profession, with smaller teams and fewer management levels.)
These AI-native startups deliberately cap headcount at ≈ 10 people, relying on foundation-model APIs for everything from code generation to HR-policy drafting. Founders say they view every new hire as a “CPU bottleneck,” not a growth lever — a reversal of the classic “hire to scale” mantra.
Most AI-native founders aim to keep their company at no more than 10 employees.

In an AI-Native company, 10 employees can accomplish more than 100 employees could do in a legacy company with traditional workflows. Execution speed is now measured in “dog years,” and executing faster doesn’t mean doing the same tasks in less time: it means embracing new workflows. (ChatGPT)
AI-native companies tend to rely more on generalist staff than specialists, since AI provides any required specialized expertise. With extensive AI help, it becomes more important that all staff understand the full business end-to-end. The article quotes one founder as saying, “Our moat is going to become speed.” (Meaning that the lean AI-first organization can out-execute anybody else and adapt to the expected immense changes in the economy over the coming decade.)

In a rapidly changing business environment, execution speed becomes its own competitive advantage. This is where AI-Native companies beat legacy firms big time. (ChatGPT)
This applies to the UX end of companies as well. Tasks that once required a dedicated researcher, visual designer, content strategist, or QA analyst can be offloaded to AI ≈ 80% of the time. As a result, AI-first org charts skew toward full-stack generalists who understand how design grows revenue for the business, not just how to produce a design artifact or a usability finding.

Generalists who understand the entire value flow from revenue to profitability are the key employees for an AI-Native company, since specialists are replaced by AI that performs the narrower, specialized tasks under the direction of these generalists. (ChatGPT)
As an example of the new workflows enabled by being AI-native, the article cites the founder of Daydream, who prototypes 15–20 different ideas in parallel, an entirely different approach from legacy product development, which proceeds linearly. When design, research, and implementation are all expensive, you can’t afford to test 19 ideas that will be discarded; when AI makes them cheap, you can, and doing so makes the one surviving idea much stronger.

Being AI Native requires redesigning entire workflows for the new opportunities, not simply performing existing tasks more efficiently. (ChatGPT)
What does this mean for legacy UX professionals who want to erase that musty “legacy” smell from their resumes and retain a career?
Become the “Polyglot of UX.” Upskill in adjacent domains so you can solo-own more of the end-to-end profitability funnel.
Architect AI-Ready Guardrails. Define consistency constraints, tone-of-voice rules, and safety nets that LLMs must obey (see the sketch after this list).
Design for the “One-Click Pivot.” Assume that business models and feature sets will mutate rapidly; build UI metaphors that stretch rather than snap. The only constant is change: from now on, the business will look drastically different every few years.
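To make the guardrails point concrete, here is a minimal sketch in TypeScript, assuming a hypothetical setup in which LLM-generated UI copy passes through a small rule layer before it ships. The rule names and the checkCopy function are illustrative inventions, not anything from the Bloomberg article or a specific product.

```typescript
// Minimal sketch of an "AI-ready guardrail" layer: LLM-generated UI copy is
// checked against explicit rules before it reaches the product.
// All names here (GuardrailRule, checkCopy, the example rules) are hypothetical.

interface GuardrailRule {
  id: string;
  description: string;
  violates: (text: string) => boolean; // true if the text breaks this rule
}

// Example rules covering tone of voice, terminology consistency, and a safety net.
const rules: GuardrailRule[] = [
  {
    id: "tone-no-exclamation",
    description: "Keep a calm tone: no exclamation marks in UI copy.",
    violates: (text) => text.includes("!"),
  },
  {
    id: "consistent-term-sign-in",
    description: 'Use "sign in", never "log in", for terminology consistency.',
    violates: (text) => /\blog ?in\b/i.test(text),
  },
  {
    id: "safety-no-guarantees",
    description: "Never promise outcomes the product cannot guarantee.",
    violates: (text) => /\bguarantee(d|s)?\b/i.test(text),
  },
];

// Return the ids of all rules that a piece of generated copy violates.
function checkCopy(text: string): string[] {
  return rules.filter((rule) => rule.violates(text)).map((rule) => rule.id);
}

// Usage: copy that violates any guardrail is sent back for regeneration.
const draft = "Log in now: we guarantee instant results!";
console.log(checkCopy(draft));
// -> ["tone-no-exclamation", "consistent-term-sign-in", "safety-no-guarantees"]
```

The design point is the shift in the designer’s job: instead of writing every string, the generalist defines the rules that AI-generated output must obey, and the guardrail layer enforces them automatically.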

UX professionals must prioritize the ability of design to pivot quickly to new business opportunities, since the AI rollout will dramatically accelerate the rate of change in the underlying business environment during the upcoming decade. (ChatGPT)
Fake Research: MIT Retracts AI Creativity Paper
To its credit, MIT has retracted a paper by one of its graduate students that seems to have relied on purely fabricated data and thus didn’t present real research or true findings.
The supposed findings (which are wrong!) were:
AI dramatically increased the creativity of materials scientists in an unnamed (and probably non-existent) firm, in terms of their ability to identify promising new materials.
AI helped the best scientists more than it helped the less stellar scientists.
I summarize these false findings so that you can recognize them if you see them cited elsewhere. Unfortunately, fake research tends to remain cited for years after it has been retracted.
(I never discussed this research paper in my newsletter, even though it was newsworthy. I can’t claim credit for recognizing the findings as fake, but I thought they were sufficiently odd that I wanted to triangulate them with more research before discussing them. Of course, such additional results never materialized, since the findings were fake.)
Plenty of valid research has found that AI strengthens creativity and ideation in many fields, from poetry to mathematics. (We have no such valid findings for materials science yet, pending new and proper research on AI use in this domain.)
Contrary to the fake paper, substantial research has found that AI narrows skill gaps, meaning that it helps poor performers more than it helps strong performers.
Some people have theorized that in particularly difficult domains, AI might help strong performers more than weak performers, because the strong performers are better at recognizing how to use AI well. However, since the one “study” that best supported this hypothesis has been proven false, we’re back to square one and the old saw that “more research is needed” on whether AI narrows or widens skill gaps for particularly difficult tasks.

MIT has retracted a paper with fake research on AI creativity, but unfortunately, people who are unaware of this retraction are likely to keep citing the erroneous findings for many years. (ChatGPT)