UX Roundup: AI Image Consistency | Speak the User’s Language | Business Use of AI | The Exam Is Not the Education | Caption Contest
- Jakob Nielsen
- 10 min read
Summary: Consistency vs. exploration in AI image design | Speak the user’s language | Strong growth in business AI use | The exam is not the education | The funniest cartoon captions by humans and AI

UX Roundup for May 19, 2025. (ChatGPT)
Consistency vs. Exploration in AI Image Design
To design the promotional infographic for the current edition of this newsletter (see above), I tried to reuse the design of a recent infographic: the left image in the illustration below.
I uploaded the old design to ChatGPT and asked it to “draw an infographic that duplicates the visual style of the attached file, except to replace the content.”

Three recent infographics I made with ChatGPT to promote my newsletters. The leftmost image was used as the basis for designing this week’s infographic, which is shown at the very top of this newsletter.
As you can see from these examples, I have not aimed at a consistent branding theme in my visuals. Instead, I like to experiment with drastically different styles. This certainly goes against all old-fashioned branding guidelines, but I use divergent styles anyway for two reasons:
Most importantly, I create for the joy of it. I retired from separating clients from their money years ago and have no ambitions to build UX Tigers into a money-making business. Thus, I’d rather have fun and enjoy experimenting than be stuck with a single visual style.
In the digital age, brand is experience, meaning that my brand (such as it is) is driven by my personality and opinions, not by a visual style or even my tiger logo (which I have blatantly morphed for fun). You might even say that experimentation is one of my brand values, to a much greater extent than legacy visual design. Thus, highly divergent visuals represent my brand better than a staid corporate design style where all visuals are by the book and look interchangeable.
The astute reader might now ask: “What about Jakob’s Law of the Internet User Experience, which dictates consistency?” Good catch — almost! Jakob’s Law is about usability, and it is indeed true that interaction design that complies with conventions is easier to use. Users do spend most of their time on other websites than yours, and that’s true for my newsletter as well: even my most faithful readers are unlikely to spend more than 15 minutes per week consuming my content. So their expectations are definitely set by everybody else.
However, infographic design is not interaction design. My visuals are absolutely part of the total user experience, but the usability of an infographic or a comic strip does not require it to look like the majority of other examples of those media forms. As an example, Tintin and Peanuts are very different types of comics:

Tintin uses color with a detailed and precise “ligne claire” style to tell long stories across many comic book pages. In contrast, the Peanuts strip is monochrome with a minimalist and loosely-drawn visual style and only runs to about 4 frames per comic to focus on a single humorous episode each day. Compare how the grass is drawn in the two parts of this image. (ChatGPT, with apologies to Hergé and Charles M. Schulz)
Despite the many visual differences, both comics comply with conventions such as using speech bubbles so that you can easily tell which character is saying what. They also both use standard English words and grammar (French in the original version of Tintin), so we can easily understand what’s being said. Those are the design choices that matter for comic strip usability and where both artists followed the equivalent of Jakob’s Law.
Speak the User’s Language, Take 3,000
The Baymard Institute published an interesting article about the usability of “mass merchant” websites, by which they mean sites that sell a very wide range of products. Think Walmart or Amazon. (Registration required.)
They provide a range of design guidelines for these websites, most of which may not be overly exciting if you don’t run that type of ecommerce site. But one guideline is so broadly applicable that it deserves wider readership than those who would click to read an article about mass merchandising.
The guideline is to use the “Delivery Date” rather than the “Shipping Speed” when predicting the arrival of any physical shipment.
As always, Baymard bases its guidelines on extensive usability testing. And even though I’m a strong believer in such research, this guideline is so gobsmackingly obvious that I am astounded that more than a third of the big websites in Baymard’s database violate it. You shouldn’t need to watch a bunch of test users to realize that it’s easier for customers to select the option to receive their purchase by Wednesday than “in 3-4 business days.”
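Implementing the guideline is not the hard part; deciding to follow it is. As a rough sketch of the mechanics (my own illustration, not code from the Baymard article, with the business-day logic deliberately simplified to skip weekends only), a checkout page could translate its internal shipping-speed estimate into concrete dates before rendering them:

```python
from datetime import date, timedelta

def estimated_delivery(order_date: date, business_days: int) -> date:
    """Turn a shipping-speed estimate ("N business days") into a concrete date.
    Simplified sketch: skips weekends only; ignores holidays, cutoff times, and carriers."""
    current, remaining = order_date, business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

# Show the two ends of a "3-4 business days" estimate as dates the customer can plan around.
earliest = estimated_delivery(date(2025, 5, 16), business_days=3)
latest = estimated_delivery(date(2025, 5, 16), business_days=4)
print(f"Arrives {earliest:%A, %B %d} to {latest:%A, %B %d}")
# Prints "Arrives Wednesday, May 21 to Thursday, May 22" rather than "in 3-4 business days."
```

The date arithmetic is trivial; the design decision is simply to surface the date the user cares about rather than the speed the warehouse cares about.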
So many usability findings come back to the basic principle of speaking the user’s language, or as I phrased it in usability heuristic number two: match between the system and the real world. (It’s number two, because it was one of the most highly loaded factors when I analyzed the predictive power of a bunch of alternative heuristics.)

New research confirms the old insight that it’s better to present information in ways that match the user’s needs. (ChatGPT)
Strong Growth in Business AI Use
Ramp, a provider of corporate credit cards in the United States, has analyzed how many of its business customers charge AI subscriptions to those cards. The results from January 2023 to April 2025 are shown below:

Percentage of Ramp corporate credit card customers that charge AI tools to their cards. (Source: Ramp.)

AI use in American business keeps growing, from 6% in January 2023 to 40% in April 2025. (ChatGPT)
Two caveats for this data:
Some companies may pay for their AI subscriptions in ways other than charging them to a company credit card, for example, through direct billing or by reimbursing employees’ private subscriptions.
The companies that use Ramp as their corporate credit card provider may not be fully representative of all U.S. companies.
However, any bias in the data would probably be fairly constant throughout the period covered by the chart, so the trends should hold up.

With 40% of companies now using AI, we are firmly into the early mainstream of adoption where the “pragmatists” start using an innovation. (ChatGPT)
Some observations:
Growth in AI use inside companies has been incredibly fast, increasing from 6% in January 2023 to 40% in April 2025 — a period of only 27 months.
The annualized growth rate was 129%. (A quick back-of-the-envelope calculation follows this list of observations.)
However, growth didn’t happen at a constant rate. Instead, there was an initial bump in March 2023 when GPT-4 was launched as the first “good” AI product, followed by a second bump in March and April 2025. (I don’t think any dramatic launches happened in those two months, but maybe it’s finally becoming obvious to company management that AI is here to stay and here to revolutionize business, so they had better get with the program.)
Despite the strong growth, we’re still only at 40% of companies using AI tools — or at least paying for AI tools. (Maybe a few cheapskates make do with the free versions, but I strongly recommend against this approach, which is a terrible way to save money because it doesn’t give you a realistic impression of the current abilities of AI and thus skews decision-making in favor of overly slow transitions to our AI-first future.)
Even though 40% corporate AI use is much too little, it does show that we have “crossed the chasm” (in Geoffrey Moore’s terms) and moved past the early adopters into mainstream use of AI. Standing above 40% adoption (as we are in May) means that we are firmly within the range of the “pragmatists,” or early majority. The late majority (the “conservatives”) will surely follow next year if they don’t get the message by late 2025. Converting the last roughly 16% of laggards may not happen until true AGI is achieved in 2027, or maybe not until we get superintelligence in 2030.
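For the curious, the annualized rate quoted above is just compound growth applied to the chart’s two endpoints. Here is a minimal sketch, assuming the 6% and 40% readings are exactly 27 months apart; slightly different date assumptions shift the result by a few percentage points, which likely accounts for the 129% figure:

```python
# Compound annual growth rate (CAGR) between the two endpoints of the Ramp chart.
start_share = 0.06  # share of Ramp customers paying for AI tools, January 2023
end_share = 0.40    # share of Ramp customers paying for AI tools, April 2025
months = 27         # January 2023 to April 2025

cagr = (end_share / start_share) ** (12 / months) - 1
print(f"Annualized growth rate: {cagr:.0%}")  # ~132% with these exact assumptions
```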

AI is now crossing the chasm and has reached almost all the early majority of American businesses. (ChatGPT)
The Exam Is Not the Education
There’s an old saying that “the map is not the territory,” and confusing the two may lead to trouble. Similarly, the exam is not the education, nor are the assignments.

The map is not the territory. If it says you are in a certain spot, this may not be true, for example, if the crew mounting the maps swapped which maps were placed where. (ChatGPT)
Currently, many people in the education sector are up in arms about widespread student cheating, with a growing percentage of students using AI to complete homework assignments. Calls are heard for banning AI in schools and universities, which would be a terrible solution to this problem.
It would be better to recognize that AI is here to stay and that AI use will be the main way any knowledge worker performs their job in the future. Thus, teaching students the best way to use AI in their field should be a key part of any education.
Using AI to learn is not cheating; it’s adapting to how work is done.
Using AI to get higher scores on an exam or homework assignments is another matter. I see two solutions:
The preferred solution is to create new forms of exams and assignments that assume AI use and score students on their ability to perform well with this tool. Long ago, we gave up banning other tools, from dictionaries to calculators, and outside spelling bees, we don’t score people on whether they have memorized the spelling of obscure words.
An alternative solution is to revert to exam formats where all technology use is prohibited (and this ban is strictly proctored): in-person exams with handwritten essays and/or oral presentations and project defenses. (If a student had AI complete a project without understanding what the AI did, he or she would fail the defense.)
Even though I prefer the first solution, the second is more likely to happen in most educational settings, which are profoundly conservative and will prefer reverting to the tried-and-true instead of innovating.

Exams and assignments must change to reflect the fact that students use AI. Denying the existence of AI is not the solution. (ChatGPT)
This same analysis applies to testing job applicants: it’s best to score candidates’ ability to do the job with AI help, since that’s how they will perform the job if hired. (I wrote about this in August 2023, and I stand by this analysis today.)
There is a twist on the story, which is that educational use of AI comes in two forms that have opposite effects on learning, depending on which of the 4 metaphors for working with AI is embraced:
Using AI as a teacher or coach: it helps the student with alternate explanations of complex topics and guides them along an individualized learning path without doing the assignments for them. Such use accelerates learning, according to many studies.
Using AI as a coworker, where it is tasked with completing assignments on behalf of the student. In this scenario, students learn nothing.

Switching metaphors for working with AI reverses its impact on learning. (ChatGPT)
This distinction can be confusing, because some AI use is good and some is bad. Confounding the matter further, the “coworker” use of AI is beneficial in the workplace where the goal is to get tasks done as quickly and cheaply as possible. However, education differs from jobs in this key attribute: the goal is not to be fast, but to learn.
“AI Startup School” June 16-17
Y Combinator (a leading startup investment firm; they would dispute my use of the article “a” and insist on being called “the” leader) is sponsoring an “AI Startup School” in San Francisco, June 16-17, for 2,500 AI students (any level from undergrad to Ph.D.). The event is free; they even provide a $500 travel stipend for students from out of town.
Speakers include Elon Musk (xAI), Satya Nadella (Microsoft), Sam Altman (OpenAI), Andrej Karpathy (who coined the term “vibe coding”), and Aravind Srinivas (Perplexity). Networking with the other 2,499 students should be worth the trip right there.
Attendance is by invitation only, but I encourage you to apply if you are a technical student on your way to a degree in AI. Sadly, the target audience doesn’t include UX students, which I consider a mistake. If the goal is to groom the next generation of AI startup founders, they need UX skills just as much as implementation skills.
Anyway, I do have geeks on my subscriber list, and if you’re one, apply to attend this event.

Caption Contest Winners
Last week, I asked readers for their ideas for better captions for the following cartoon I made with ChatGPT’s native image mode, because I was not happy with the original caption:

I received 122 caption ideas. The funniest human-written captions (in my sole judgment as the publisher of this newsletter!) were:
“That’s the last time I order from a website that doesn't list product dimensions.” (Aaron Shapiro, Director Project Management Office at BlueBolt)
“Did someone forget the ‘dog persona’ in the design sprint?” (Prameela Jeppu, Senior User Experience Consultant at Tata Consultancy Services)
“How do you expect me to squeeze through that tiny window?!” (Wayne Pulford, Principal UX Consultant at Atomise)
I admit I thought the first two were particularly funny because they relate to UX design. Number 3 might be better for a general audience, though.
I made Gemini Advanced 2.5 Pro Preview (what’s with those stupid AI version names?) perform a cluster analysis on the captions to discover the main groupings:
Pooch Practicalities & Protests (Dog's Usability Complaints & Refusals): Dog expresses concerns about the rocket's design, practicality from a canine perspective, or refuses to interact based on past experiences or discomfort. 21 captions.
Astro-Dog Chronicles (Space Travel, Laika, & Cosmic Concerns): Captions directly referencing space exploration, specific space-related figures (like Laika or Neil), celestial bodies, or the experience of being a space-dog. 18 captions.
Tech Woes & Topical Barks (Dog vs. Modernity, AI, Elon & Pop Culture): Dog comments on or struggles with technology, AI, current events, public figures (especially Elon Musk), or other modern life elements. 17 captions.
Alpha Dog Attitude (Canine Commands, Ownership & Assertiveness): Dog takes charge, issues commands, asserts ownership, expresses territoriality, or shows a generally assertive or grumpy demeanor towards the rocket or situation. 15 captions.
Rocket? Bone Appétit! (Canine Confusion & Misinterpretation): Dog misinterprets the rocket as food, a toy, a fire hydrant, or other familiar objects/concepts from its world. 12 captions.
A Dog's Life Musings (Everyday Canine Priorities, Observations & Emotions): Dog reflects on its basic desires (food, walks), human behavior, simple observations, or expresses common dog emotions and perspectives. 11 captions.
Punny Pup-arazzi (Wordplay & Canine Twists on Quotes): Captions primarily relying on puns, wordplay, or humorous rephrasing of well-known sayings from a dog's point of view. 8 captions.
Plus, of course, many individual suggestions that defied categorization.

The top 4 themes in the caption contest. (ChatGPT)
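Prompting Gemini with the raw list of captions is all it took to get these groupings, but if you want a reproducible, non-LLM way to do something similar, a minimal sketch using TF-IDF vectors and k-means from scikit-learn might look like this (the captions below are placeholders standing in for the 122 actual submissions, and the cluster count is arbitrary):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder captions; substitute the 122 actual contest submissions.
captions = [
    "That's the last time I order from a website that doesn't list product dimensions.",
    "Did someone forget the 'dog persona' in the design sprint?",
    "How do you expect me to squeeze through that tiny window?!",
    "One small step for a dog, one giant leap for dogkind.",
    "I asked for a fire hydrant, not a rocket.",
    "Fetch that, Elon.",
]

# Represent each caption as a TF-IDF vector, then group similar captions together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(captions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster, caption in sorted(zip(labels, captions)):
    print(cluster, caption)
```

Naming the clusters (“Astro-Dog Chronicles” and so on) is the part that plain k-means cannot do, which is where an LLM such as Gemini still earns its keep.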
I also made another dog cartoon with ChatGPT, which was so absurd that I need to show it to you:

This time, I asked Gemini 2.5 Pro to suggest UX-related captions, and this was the funniest in my opinion:
“This must be the MVP: Minimum Viable Prohibition.”
(I still don’t really know why a dog would look at a no-parking sign, but we can spin gold from anything if we ideate hard enough — I asked for 10 caption ideas. The one idea related to dogs was: “Someone forgot to map the user journey to the 'Wagging Tail' outcome.” Sadly, that’s too long and convoluted to be a good cartoon caption.)
I conclude that AI still needs to grow its funny bone. But we’re getting close.