UX Roundup: Corkboard Mockup | Loss-Making Promos | AI Movie | Video-to-Music | Mainstream AI | Young Staff Loves AI | Nano Banana | Emotional Speech Model
- Jakob Nielsen
Summary: Corkboard mockup of project history | Are promotions profitable or not? | Making a short AI film | Auto-creating music based on a video | AI is becoming the norm for online shoppers in the United States | High AI adoption among entry-level employees in poor and middle-income countries | Google’s new Nano Banana image editing model | Microsoft releases text-to-voice model that generates emotional speech

UX Roundup for September 1, 2025. (GPT Image-1)
Corkboard Mockup of Project History
Credit to Umesh for the following idea: create a mockup corkboard of a project as a way to visualize how it might turn out before embarking on the work.


Top: GPT Image-1. Bottom: Google Imagen 4 Ultra. In this small case study, I prefer Google’s image to OpenAI’s, even though I appreciate that GPT 5 Thinking thought to include a Polaroid from the stakeholder playback, which is an essential part of any research project.
Prompt (credit Umesh): Design an image of a UX user research project retold through a series of Polaroid photos pinned to a corkboard. Each photo captures a key moment, with simple captions below. Arrange the photos in a loosely chronological path across the board, using colored strings to connect events and characters. Light the scene warmly to evoke nostalgia. Include incidental details, coffee cup rings, paper clips, handwritten notes, for authenticity.
Obviously, you should replace “UX user research” with a description of your project.
Some Promotions Increase Sales but Decrease Profits
An old study, but I just heard of it from Ron Kohavi: Kusum Ailawadi and colleagues studied sales at CVS (a big American drugstore chain) and analyzed the impact of promotional offers on sales and profits.
As anybody who has ever heard the economics term “price elasticity” would expect, lowering prices increased sales. However, only 45% of those extra sales were a net increase: 46% were due to customers buying the discounted product instead of another product they had planned to buy (so no net sales increase for the store), and 10% were due to stockpiling (customers buying extra while the product was on sale, only to not buy it again for some time until their stockpile had run out).
Furthermore, for each extra unit sold of the discounted product, 0.16 extra units of other products were sold that were not discounted. (Maybe people felt like spending the money they had saved! Or maybe there was a halo effect from the cheap product, making customers feel that this store was a good place to shop.)
More sales. So far, so good. However, the store made less money: they obviously missed out on the discount amount and also lost sales of other products when shoppers switched from their original planned purchase to the cheaper product.
In total, the average promotion reduced profits by $0.33 per item per store per week.
This is not to say that you should never have a sale. The paper analyzed the impact of discounts across four product categories and found that promotions slightly increased profits for beauty items and general merchandise. However, profit losses were big for discounts on grocery items and health products.
Thus, the average masks essential differences between product categories.
If you don’t run a drugstore, why should you care about this study? Two big takeaways:
Most importantly, promotions may have second-order effects, so don’t simply analyze their impact on sales. Look one level deeper to see what impact the promotion has on profits, which are much more important than sales. (Sales in themselves are a vanity metric. Profits are what literally count on the bottom line.) The sketch after this list runs the numbers.
Second, only looking at average metrics may blind you to important differences that would be revealed by a finer-grained analysis.
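To make this concrete, here is a minimal back-of-the-envelope sketch in Python of how a promotion can grow sales while shrinking profit. All prices, margins, and volumes are invented for illustration; only the decomposition of extra sales into incremental, switched, and stockpiled purchases (plus the 0.16 halo effect) follows the study.

```python
# Hypothetical numbers; only the 45/46/10 decomposition and the 0.16
# halo effect come from the study.
regular_price = 10.00
promo_price = 8.00
unit_cost = 7.00

baseline_units = 200      # weekly units that would sell at full price anyway
extra_units = 100         # additional units sold during the promotion

incremental = 0.45 * extra_units   # genuinely new sales
switched = 0.46 * extra_units      # cannibalized from other full-margin products
stockpiled = 0.10 * extra_units    # pulled forward from future full-price sales
halo_units = 0.16 * extra_units    # extra full-margin purchases of other items

promo_margin = promo_price - unit_cost     # $1.00 per discounted unit
full_margin = regular_price - unit_cost    # $3.00 per full-price unit

# Every unit of the promoted item now earns only the discounted margin,
# and switched/stockpiled units displace full-margin sales elsewhere or later.
profit_with_promo = (
    (baseline_units + extra_units) * promo_margin
    - (switched + stockpiled) * full_margin
    + halo_units * full_margin
)
profit_without_promo = baseline_units * full_margin

print(f"Profit with promo:    ${profit_with_promo:,.2f}")    # $180.00
print(f"Profit without promo: ${profit_without_promo:,.2f}")  # $600.00
```

Under these made-up numbers, sales rose by 100 units, yet profit fell. Change the margins and the category mix, and the same promotion can flip to profitable, which is exactly why the averages mask the per-category differences.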

Promos can increase sales while simultaneously reducing profits. Make sure you track the right metrics. (GPT Image-1)
Making a Short AI Film: Shakespeare’s “Pericles”

I experimented with AI movie-making. (GPT Image-1)
I made a short AI film based on William Shakespeare’s play “Pericles, Prince of Tyre.” (YouTube, 3 min.)
After I cut 99% of Shakespeare’s word count, can you still follow the plot? I deliberately chose this play because Pericles is the least-performed of all Shakespeare’s plays, meaning that it is highly unlikely that you have ever seen it. (Note, the character “Pericles” in this story has nothing to do with the historical Athenian statesman who built the Acropolis. Reportedly, Shakespeare chose this name for branding purposes, to ensure audience recognition that the play was set in ancient Greece.)

New mini-film: Pericles, Prince of Tyre. (Thumbnail made with Imagen 4 Ultra)
Video made with Veo 3, except for the title sequence and interstitial, which were made with Kling 2.1 and 2.1 Master (for the interstitial, I employed an end frame which is not yet available in the “Master” version of Kling). Clearly, we are not yet at the point where AI movies will replace the legacy studios, but then none of them have ever made a movie based on this obscure Shakespeare play. In two years, a full-length feature film may be quite realistic.
Thumbnail images and base images for text-to-video made with Google Imagen 4 Ultra and Seedream.

Even a short 3-minute movie currently requires AI creators to combine several tools, each with its own strength. (GPT Image-1)
My main problem in this project was that Veo 3 refused to render several scenes as described, claiming that they violated its “community standards.” Come on, Google, William Shakespeare isn’t good enough for you? AI censorship is clearly going overboard, being based on narrow, prudishly Puritan values, rather than humanity’s shared heritage, which is much more wide-ranging and therefore more interesting and creatively satisfying.

Google even censors William Shakespeare. Clearly, AI censorship is stepping over the line. As AI enables individual creators from all the world’s cultures, their creations should not be limited by the narrow-thinking Puritanism of a few people in Silicon Valley. (GPT Image-1)
Even creating a short 3-minute movie involved a large number of editorial decisions. For most scenes, I made between 4 and 8 versions to choose from, sometimes iterating the prompt between generations, and sometimes simply rerolling in the hope of better outcomes. Sometimes I had to “satisfice” (i.e., settle) instead of working forever. For example, the tournament scene was impossible to get right, so I used a decent, but not great, take. (If this had been a commercial project, one would have needed to budget more editorial time and AI credits to wrangle resistant scenes into shape.)
I also simplified the plot as I worked through the movie and watched early cuts. For example, I removed a scene where Pericles delivers aid to a city struck by famine. In the full play, this is important for character development and to establish Pericles as a kind man. But it introduced a confusing element in the short film and got in the way of understanding the main plotline (him repeatedly losing everything and yet recovering).

I had to make many tough editorial decisions to cut 99% of Shakespeare’s play. Traditionalists will deplore such brutal cutting that loses many nuances, but AI video is a new media form and requires a new approach. Traditional movies also take extensive liberties when adapting a book or play into a film script. (GPT Image-1)
The only thing I am sure of is that storytelling will be very different once individual creators get the equivalent of a Hollywood (or Bollywood) studio at their fingertips. Consider how unboxing videos became popular on YouTube even though they were never a thing on network television, to the point where little kids gained millions of fans worldwide for unboxing toys.

Two models for making a film: Old school (top) involves human actors, elaborately tailored (and thus expensive) real costumes, and a small army of technicians to run the camera, lights, sound, and many other elements, all to realize the director’s vision. The new approach (bottom) utilizes a virtual film crew behind computer screens, enabling a single creator to bring his or her vision to life by combining a suite of AI tools. Currently, the production value from the new approach is still limited, making it mostly suited for short projects; however, this will not last. (GPT Image-1)
Currently, my mini-Shakespeare movie is not on par with what Steven Spielberg could create if he were to spend a $500 million budget on filming that bottom-ranked play (which no studio executive in their right mind would greenlight). 5 years from now? Not so sure. 10 years from now? Individual creators will beat the legacy media like the proverbial drum.

What would William Shakespeare have thought of my mini-film version of his play? Impossible to tell, absent a time machine, but I like to think that he would have been open to experimenting with new media forms. He probably still wouldn’t have liked the way I cut so much of his story. (GPT Image-1)

At least I left the basics of the plot alone and didn’t change it to bow to modern attitudes. I didn’t even have Pericles replace his galley with a sedan to go touring Route 66. (GPT Image-1)
Video-to-Music Feature from ElevenLabs
One of the advanced features of AI is its ability to transmute content between media formats. Simple versions of this include text-to-image tools (e.g., Midjourney), which generate an image based on a description, and image-to-video tools (e.g., Kling), which bring an image to life by transforming it into a video clip. Grok’s version of this latter feature has become very popular on X after a long-press option was added that turns any image posted to the network into a video.
The latest AI media transformation capability is video-to-music, launched by speech-synthesis company ElevenLabs, which has been moving into music creation lately.
You upload a video, and the AI analyzes its mood and action, then creates music to match. You can use this music either as the entire soundtrack for a scene without dialogue, or as background music for the clip.
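For creators who want to automate this, the workflow presumably reduces to an upload-and-download API call. The sketch below is my guess at what that looks like: the base URL and the xi-api-key header are how ElevenLabs’ public API authenticates, but the endpoint path and parameters are hypothetical, so check their current API documentation.

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"

# Hypothetical endpoint; ElevenLabs' real video-to-music route may differ.
url = "https://api.elevenlabs.io/v1/music/from-video"

with open("clip.mp4", "rb") as video:
    response = requests.post(
        url,
        headers={"xi-api-key": API_KEY},   # standard ElevenLabs auth header
        files={"video": video},
        data={"duration_seconds": 30},     # hypothetical parameter
        timeout=300,
    )
response.raise_for_status()

# Save the returned soundtrack, matched to the clip's mood and action.
with open("soundtrack.mp3", "wb") as out:
    out.write(response.content)
```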
Here is a short example I made. I posted this video to my Instagram channel instead of my usual YouTube channel because it doesn’t convey any action or information. It’s purely a mood scene, so it’s actually quite bad as a standalone video, even though I think it’s a great demonstration of this new AI capability. (Unfortunately, I have discovered that YouTube penalizes this type of video; hence the Instagram post.)
Although many AI creators rave about ElevenLabs’ music, I have been disappointed with every song I’ve made with them. Suno’s music is much more to my taste. However, based on my limited experimentation, I applaud the ElevenLabs music team for their video-to-music feature. Not only is it a true innovation in this space, but the music is actually good!

Another example of media transmutation through AI: making this news item into a comic strip with GPT Image-1.
AI‑Driven Shopping Goes Mainstream
Omnisend’s survey of 4,000 consumers in the USA, Canada, the UK, and Australia shows rapid adoption of AI tools for e‑commerce. 57% of Americans use AI to research products, find deals, and get personalized recommendations. Across all four countries, 46% of users name ChatGPT their go-to assistant (rising to 65% in the U.S.). A quarter of respondents believe ChatGPT offers better product recommendations than Google, and nearly a third say it makes shopping less overwhelming.

57% of American ecommerce shoppers now use AI when researching which products to buy. (GPT Image-1)

Why do users turn to AI for product research? Because you’d have to be an octopus to keep all the websites open that you would have to consult when doing product research the old-fashioned way. (“Old” here meaning since around 1995, so only the last 30 years.) Much simpler to ask AI tools like Deep Research to produce a single, unified overview. (GPT Image-1)
(Percentages are lower in other countries in the study: 33% in Canada, 34% in the UK, and 34% in Australia say that they use AI to research products before buying.)
E‑commerce used to be simple: search, click, buy. Now it’s about conversational product discovery, dynamic recommendations, and algorithmic transactions. Omnisend’s survey quantifies how far we’ve come. This is not just early‑adopter behavior; AI has gone mainstream, at least in the United States.

AI has gone mainstream, at least in the United States. We’ve grown far beyond the early adopters in Silicon Valley as the only AI users. (GPT Image-1)
That’s a seismic shift. As AI assistants replace search engines for commerce, UX designers must rethink information architecture. Instead of designing pages for scanning and clicking, we need interfaces that support iterative, conversational decision‑making. Brands should prioritize GEO (generative engine optimization) over SEO.

Which way, modern content strategist? We can’t completely abandon SEO yet, but most efforts should be targeting GEO to future-proof a brand’s digital presence. (GPT Image-1)
Users enjoy AI because it reduces cognitive overload, with 28% saying it makes shopping less overwhelming. Through progressive disclosure, AI can provide just enough information to make a decision and reveal more on request. AI can tailor the level of detail based on user queries. But this also shifts power away from users if not designed carefully. A conversational AI might hide alternatives or overemphasize certain brands. Thus, transparency is key. AI should disclose how recommendations are generated and allow users to ask, “Why this product?” Without this, we risk replicating dark patterns in AI form.
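One way to build this in (a sketch of a possible data shape, not any vendor’s actual schema): make each recommendation carry tiered detail for progressive disclosure, an explicit rationale to answer “Why this product?”, and the alternatives it was chosen over.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """Illustrative shape for a self-explaining recommendation."""
    product_id: str
    summary: str          # tier 1: just enough information to decide
    details: str          # tier 2: revealed only on request
    rationale: list[str] = field(default_factory=list)     # "Why this product?"
    alternatives: list[str] = field(default_factory=list)  # don't hide other options

rec = Recommendation(
    product_id="sku-123",
    summary="Lightweight trail shoe within your stated budget",
    details="240 g, 6 mm drop, free 30-day returns, sizes 36-47.",
    rationale=[
        "You asked for trail shoes under $100.",
        "Top-rated by runners with narrow feet, which you mentioned.",
    ],
    alternatives=["sku-456", "sku-789"],
)

print(rec.summary)                 # progressive disclosure starts here
print("\n".join(rec.rationale))    # shown when the user asks why
```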

AI can simplify tasks by compiling and synthesizing disparate sources, thus lowering users’ cognitive overhead. (GPT Image-1)
Trust remains a major barrier. 85% of respondents worry about privacy, misinterpreted preferences, or irrelevant recommendations. This echoes a study on agentic AI: people will not fully embrace AI until it proves trustworthy. For retailers, the remedy is to put users in control. Provide clear opt‑in mechanisms, allow customers to see and edit the data the AI uses, and offer non‑AI alternatives. The report’s advice to “keep human support available” is pragmatic. When AI goes wrong, or when a user simply prefers a human conversation, a human agent should be a tap away.

AI tools must be able to explain how they derived their recommendations. And avoid advertising and “sponsored answer placement” if they want to retain any credibility. (GPT Image-1)
Finally, we should pay attention to transactional AI. The Omnisend report notes that the percentage of people reluctant to let AI handle transactions was cut in half since February 2025 (from 66% to 32%). This hints at the rise of agentic commerce, where the AI buys on your behalf. UX designers must consider safeguards: confirmation steps, budget limits, and easy cancellation. In essence, we need to design AI shopping assistants that are competent, transparent, and accountable.
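As a sketch of what those safeguards might look like inside an agent’s purchasing loop (the class name, thresholds, and callback are all invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PurchaseGuard:
    """Illustrative safeguards for an AI shopping agent, not a real API."""
    budget_limit: float
    spent: float = 0.0

    def request_purchase(self, item: str, price: float,
                         confirm: Callable[[str], bool]) -> bool:
        # Safeguard 1: a hard budget limit the agent cannot exceed.
        if self.spent + price > self.budget_limit:
            print(f"Blocked: {item} (${price:.2f}) would exceed the budget.")
            return False
        # Safeguard 2: an explicit confirmation step before money moves;
        # answering "no" doubles as easy cancellation.
        if not confirm(f"Buy {item} for ${price:.2f}?"):
            print(f"Cancelled by user: {item}")
            return False
        self.spent += price
        print(f"Purchased {item}; ${self.budget_limit - self.spent:.2f} remaining.")
        return True

# Usage: the confirm callback is the UI hook for the confirmation step.
guard = PurchaseGuard(budget_limit=100.00)
guard.request_purchase("USB-C cable", 12.99,
                       confirm=lambda msg: input(f"{msg} [y/n] ").lower() == "y")
```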

Ecommerce is entering a new era, fueled by AI on both the consumer and the vendor side. (GPT Image-1)
Young Staff Embrace AI, but Training Gaps Hold Back 35%
According to a recent study by the non-profit Generation, 65% of surveyed entry-level employees use Artificial Intelligence (AI) tools at work, with most being self-taught power users. The report, titled “AI at Work: A Global Entry-Level Perspective,” surveyed over 5,500 Generation alumni in entry-level roles across 17 countries, mostly representing lower- or middle-income economies.
Key findings from the survey:
High AI adoption: 65% of entry-level workers reported using AI tools for their jobs. Many are proactively adopting the technology on their own initiative.
Positive impact: 94% of AI users reported that it has improved their ability to do their job. 91% said AI has made their work more enjoyable, echoing past findings that people like AI once they get experience using it. (59% of respondents said that AI has increased how much they enjoy their job by a lot, with 32% saying that AI has increased their enjoyment a little.) Only 1% said that AI has decreased their job enjoyment by a lot.

91% of AI users among young employees in poor and middle-income countries say that AI made their job more enjoyable. (GPT Image-1)
Significant skills gap: For the 35% not yet using AI, the primary barriers are a lack of training and uncertainty about how AI can be applied to their specific roles.

Lack of training was the main reason some young employees still don’t use AI. (GPT Image-1)
Gender divide: There is a significant gender gap in AI use, with 81% of men reporting use compared to 59% of women. At least the gap is narrower within the tech sector.
Daily use is common: Among those who use AI, 79% use it at least weekly, and 37% use it daily.

Of young people using AI, 37% use it daily. The optimistic view is that we have the potential for a 3x growth in AI use, even with no advances in AI models (an unrealistic assumption) and no new users (also an unrealistic assumption), as existing users discover more use cases. The pessimistic view is that AI still has poor usability, preventing many users from fully exploiting it. (GPT Image-1)
95% of the respondents in this survey work in poor (Ghana, India, and Kenya) or middle-income (Brazil, Colombia, Mexico, and Thailand) countries. Although 5% of respondents come from affluent countries, the fact that the overwhelming majority of respondents are from poor and middle-income countries means that these statistics can be considered representative of the global majority, rather than the few affluent countries that dominate most AI discussions. The fact that workers in poor and middle-income countries are heavy AI users and love it is one more indication of the mainstreaming of AI.
Pearl-clutching about AI seems limited to media outlets in the United States and Europe. Most of the world welcomes the improvements AI is making in their lives.
Google’s New Nano Banana Image Editing Model
Google has launched the image-editing model that had been taking the AI creator community by storm while still in trial mode under the code name Nano Banana. The official name is the much less memorable “Gemini 2.5 Flash Image.” What idiots work in AI marketing departments? OpenAI finally shipped some slightly less terrible product names, but now Google is regressing.

Google’s “Nano Banana” image-editing model brings some magic to our images. (Midjourney)
The new model excels at editing images through prompts: you upload an image, say what you want changed, and it supposedly changes only those parts of the image. However, in my testing, repeated editing does degrade even the parts of an image that are not edited. Here is the base image I used for my recent video “No More User Interface.” I was never happy with the robot band, so I took this opportunity to add some instruments.
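For reference, prompt-based iterative editing of this kind looks roughly like the following with Google’s genai Python SDK. This is a sketch: the model id below is the preview name and may have changed since launch, and the prompts are condensed from my actual edits.

```python
from io import BytesIO
from google import genai
from PIL import Image

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

# Assumed model id for "Nano Banana"; check Google's docs for the current name.
MODEL = "gemini-2.5-flash-image-preview"

image = Image.open("robot_band.png")
edits = [
    "Move one robot to sit in front of the piano and play it.",
    "Add an upright bass and a drummer.",
    "Improve the sharpness and resolution of the singer.",
]

for step, prompt in enumerate(edits, start=1):
    response = client.models.generate_content(model=MODEL, contents=[prompt, image])
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:  # the edited image comes back inline
            image = Image.open(BytesIO(part.inline_data.data))
            image.save(f"edit_{step}.png")

# Caveat from my testing: every round trip re-encodes the whole frame, so even
# untouched regions (like the singer's face) slowly degrade across edits.
```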

Original image (upper left) made with Midjourney. I edited this image in 9 steps with Nano Banana, but I’m only showing 5 of these. Upper right: moved a robot to sit in front of the piano to play it. Middle left: added an upright bass and a drummer. Middle right: I lost the guitar player in the previous edit, so I added that robot back in. Lower left: when adding the guitar player back, I lost the robot behind the piano, so I added a trumpet-playing robot in that spot. Note that I had never specified any edits for the singer, but her face was slowly degrading in quality through these edits. Lower right: I asked the model to “improve the sharpness and resolution of the singer.”
Here are close-ups of the singer’s face through these editing steps, clearly showing the gradually degrading image quality:

Asking for the singer to be improved in sharpness and resolution (between the lower left and lower right) did work, but didn’t return the image quality to Midjourney’s original (upper left). (Google Gemini 2.5 Flash Image)
Emotional Speech Model
Microsoft has released a new text-to-voice AI model called MAI-Voice 1. (MAI = Microsoft AI Lab.)
I conducted a small experiment to compare the AI voices generated by two models (YouTube, 1 min.): ElevenLabs (which I usually use for my avatars) and Microsoft's new model.
MAI-Voice 1 didn’t just produce text-to-speech audio for the provided script. It took it upon itself to rewrite the script to be hilariously emotional. This is unacceptable to me, particularly because the rewritten script doesn’t represent my tone of voice at all.
However, I admit that MAI-Voice 1 produces much better emotional voices than even ElevenLabs’ new v3 alpha.
In any case, whether you like Microsoft's approach with this model or not, it’s great to see more competition in speech synthesis and different approaches to AI. It’s certainly promising for the future to see Microsoft release its own models, rather than relying solely on OpenAI.

New voice model that emphasizes the generation of highly expressive speech: Microsoft MAI-Voice 1. (GPT Image-1)