Jakob Nielsen

UX News: Discount Envy | AI for All | GUI Toggle Design | Tradeshow User Research | Lightweight UI

Summary: Coupon-code fields make ecommerce shoppers green with discount envy | Wipro invests $1B in AI training for its 250k workforce | Evidence-backed guidelines for toggle switch design | Conduct user research at tradeshows | Lightweight UI for half a billion global users | ChatGPT’s naming blunder and the road to redemption | Time to resume in-person research | Specialized AI app boasts minimalist interfaces and heightened usability

A curated blend of UX news you won't want to miss, without a separate email for each item.

Beat Discount Envy: Keep Coupon Codes Under Wraps

Ronny Kohavi shares many examples and A/B testing outcomes for something I have said many times: visible coupon code fields in your checkout process are cart-abandonment magnets. Users without a code abandon the site to go hunting for a discount code. If they don’t find one, they resent being forced to overpay, which Kohavi dubs “discount envy.” Across case studies, an open text field for a coupon code reduced sales by between 2.6% and 7.8%.

Best practice is to do one of the following:

  • Avoid coupon codes altogether.

  • Show coupon fields only to users arriving via promotional links, for example, from an email offering that discount. (Provide a special link with the promotion: following the special link can also improve usability by including the discount on the checkout page without requiring the user to enter anything.)

  • Show the coupon code option as a link, not as an open text field, which attracts the eye much more. (People who want to use the coupon code will hunt around your checkout page for that link, whereas people who aren’t primed to look for a discount will likely miss the less-prominent placement as they rush through the checkout form.)
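The second option above can be sketched in code. This is a minimal, hypothetical TypeScript example, assuming the promotional link carries the code in a query parameter (here named `promo`); the function and field names are illustrative, not from any real ecommerce platform:

```typescript
// Decide how the checkout page should treat the coupon field, based on
// whether the visitor arrived via a promotional link (e.g. ?promo=SAVE10).
// The "promo" parameter name is an assumption for illustration.
function couponFieldState(url: string): { visible: boolean; prefill: string | null } {
  const promo = new URL(url).searchParams.get("promo");
  if (promo) {
    // Promotional arrival: show the field and pre-apply the code,
    // so the user never has to type it.
    return { visible: true, prefill: promo };
  }
  // Everyone else: hide the open text field by default and show only
  // a low-prominence "Have a coupon?" link that reveals it on click.
  return { visible: false, prefill: null };
}
```

Hiding the field behind a link keeps it discoverable for primed users while avoiding the eye-catching empty text box that triggers discount envy in everyone else.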

Online shoppers go green with discount envy when they see an open text field for entering a coupon code they don’t have. (Image by Leonardo)

AI for All: Wipro's Billion-Dollar Gamble

Software outsourcing powerhouse Wipro has decided to provide AI training to all its 250,000 employees. This will reportedly cost around USD $1 billion over the next 3 years. Everybody will get basic AI training, with more advanced training for staff in AI-specialized jobs. (My comment: an incredibly wide range of knowledge-worker jobs will have a heavy dose of AI in 3 years.)

Bravo to Wipro for recognizing the future and refusing to become dinosaurs. That said, my main recommendation for getting with the program is not AI training but AI experience. Start using the tools for your daily work. Initially, only a few tasks will lend themselves to AI assistance, so you’ll only realize a few percent in productivity gains relative to your entire workload. But if you have the smallest grain of creativity in your brain, that initial experience will spark ideas for further AI use with the better tools we’ll have next year.

One big reason to give everybody hands-on experience with AI is that such experience makes employees’ thinking about AI more realistic than when their impressions are fed only by the scare-mongering press.

Remember the First Law of AI: Today’s AI is the worst we’ll ever have.

(Glad you asked. The Second Law of AI is: You won't lose your job to AI, but to someone who uses AI better than you do.)

Do you work in a company that doesn’t provide AI training and doesn’t reimburse subscriptions to leading tools like ChatGPT as a business expense? Welcome to the Jurassic, is all I can say. Hopefully, you will get another job before the meteor strikes in a year or two. You know what happened to our friends, the dinos, and the same fate awaits anybody who doesn’t heed the Second Law of AI (stated above).

GUI Guidelines for Toggles

Alyaa Al-Jasim and Pietro Murano from Oslo City Education Administration and Oslo Metropolitan University in Norway conducted user testing on 18 variants of GUI toggle designs, mainly looking at design tweaks like colors and labels. The resulting 11 research-validated usability guidelines serve as a helpful checklist. Never design a toggle again without conferring with the Al-Jasim/Murano guidelines (I am only listing the top 6 here to make sure you read the full paper if you design toggles 😏):

  1. Provide an immediate response with UI toggles.

  2. Avoid using red color to indicate an on-state.

  3. Use green or blue colors to indicate an on-state.

  4. On-states of toggles should always be on the right-hand side.

  5. Use a gray color to indicate an off-state.

  6. UI labels should be clear, concise, and not contain negative words.

(My only potential worry about these guidelines is that I am unsure whether #4 also holds in languages that read right-to-left, such as Arabic. Additional research with such target audiences is recommended.)
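The color and placement guidelines (2 through 5) are concrete enough to encode in a styling helper. Here is a hypothetical TypeScript sketch; the function name, the specific hex colors, and the RTL mirroring are my own assumptions, not from the Al-Jasim/Murano paper (whether #4 should mirror in right-to-left layouts is, as noted above, an open question):

```typescript
type ToggleStyle = { trackColor: string; knobSide: "left" | "right" };

// Compute a toggle's visual state per guidelines 2-5:
// green (never red) for on, gray for off, knob on the right when on.
function toggleStyle(isOn: boolean, dir: "ltr" | "rtl" = "ltr"): ToggleStyle {
  // Guidelines 2, 3, 5: green for on-state, gray for off-state.
  const trackColor = isOn ? "#2e7d32" : "#9e9e9e";
  // Guideline 4: on-state knob on the right in LTR layouts.
  // Mirroring for RTL languages is a guess pending further research.
  const onSide: "left" | "right" = dir === "ltr" ? "right" : "left";
  const knobSide: "left" | "right" = isOn
    ? onSide
    : onSide === "right" ? "left" : "right";
  return { trackColor, knobSide };
}
```

A rendering layer would then apply `trackColor` to the toggle track and position the knob on `knobSide`, keeping guideline logic in one testable place.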

Rapid User Research at Tradeshows

Have a customer conference or tradeshow? Use it to collect fast user feedback: rope in people for 5 minutes as they visit your booth and have them try a UI prototype. This is incredibly potent for B2B or specialized apps where target users are scarce but do attend such events. See an excellent example of VMware using this method, with photos.

Lightweight UI to Reach the World

Interesting analysis by Amar Srivastava of a lightweight application designed to consume minimal data traffic. This is crucial for serving billions of users in poor countries where data is not free and can consume a significant share of a person’s daily income. I am ashamed to admit that even though I was probably the world’s most forceful fighter against bloated design during the dot-com bubble, I haven’t given this problem much thought in recent years.

ChatGPT Revamps Misguided "Code Interpreter" Feature Name

Kudos to ChatGPT for finally ditching its cringe-worthy “Code Interpreter” feature name. This moniker misled users into thinking it was designed solely for developers debugging code, while in reality, it serves as a potent tool for data analysis. I used to say that the original name was such an abomination of a feature name, with zero discoverability for the actual functionality, that I assumed that the Abominable Snowman was in charge of OpenAI’s naming strategy.

The clown show that’s ChatGPT naming continues, though now with a slightly better name for that feature: “Advanced Data Analysis.” How many users will be deterred from exploring the feature by the scary adjective “advanced”? But at least the new name does describe what the feature does, so it’s lightyears ahead of the old name. Of course, a few hours of user testing would have revealed the problems with the old name without imposing it on millions of users.

One more usability problem caused by OpenAI’s apparent ignorance of decades of UX knowledge: the switch from one feature name to another just happened one day without any warning or user assistance. I found myself scrambling to locate the “Code Interpreter” for a specific task, only to discover its absence. Thankfully, the new name popped up, and its improved discoverability led me to try it out. But really, OpenAI, would it kill you to add some transitional UI cues?

ChatGPT changes the name of a useful feature from the completely misleading “Code Interpreter” to the slightly scary “Advanced Data Analysis.” Clown by Ideogram.

The Undying Value of In-Person User Research

An impassioned plea from Nick Fine to conduct some of your user research in person instead of doing everything remotely. He has a point, especially about field studies, but even about old-school user testing, where you sometimes pick up more from being in the same room as the user. Even being in the observation room, behind the famous one-way mirror, sometimes feels a little too remote for my taste.

That said, remote research has immense benefits, not least in easier scheduling, less travel and other costs, and the ability to include international participants. It has to be a balance, but don’t do all your research remotely.

Easy Natural Language Translation

Mida is a new AI-driven app for natural language translation. You speak into it in any of 100 languages, and it performs speech recognition, followed by translating the input into well-written English, which can then be rendered as audio by speech synthesis. Very simple user interface and very high usability. It is gratifying to see an AI product with good UX!

In my limited experience, Mida worked well when I spoke to it in Danish. The team behind the product claims that one can mix multiple languages in a single input paragraph. Still, when I tried combining fluent Danish and heavily accented German, it didn’t pick up on the language change but interpreted my entire utterance as (nonsense) Danish. I am not sure how often it’s useful to change language in the middle of saying something you want to have translated, so maybe it doesn’t matter if this is a weak feature.

Much more interesting is that this is an example of a product that uses AI to do one thing and do it well. Following my old slogan, Keep it Simple, is what allows the app to have superb usability. I expect we’ll see many more applications soon along those lines.

Keep it Simple: That’s the road ahead. (Image by Ideogram.)

UI Deep Dive Alert: Tuesday at 1pm PT

Tune in for an illuminating look at the evolution of user interfaces this Tuesday, September 12, 2023, at 1pm US Pacific Time (4pm ET = 9pm UK = 22:00 CET). Catch it live on Zoom from MIT:

Bill Buxton, a certified genius in UI technologies with a hint of mad scientist flair, will draw modern insights from the groundbreaking Put-That-There project. You'll gain a new understanding of human-computer interaction through the lens of this innovative 1970s research. Highly recommended — I'll be there!

The “Put-That-There” project from MIT was an early user interface built on natural language and gesture recognition. Utilizing a voice command system and a two-dimensional graphic interface, users could interact with and manipulate objects on a screen by pointing and issuing spoken commands. Standing in front of a huge projection screen, the user would first say, “Put that” (while pointing to an object on the screen) and then say, “There” (while pointing to a new location on the screen).

Obviously, the actual technology is now irrelevant, but the interaction style was interesting and what Buxton promises to analyze in light of more recent developments.

Moving colored objects around on a screen by gesturing to them and saying out loud what you want to happen. This image is by Midjourney, not from the venerable Put-That-There, but you get the idea.

Jakob’s New Articles

In case you missed them, these are the articles I published in the last two weeks:

  • Jakob’s Law of the Internet User Experience: Users spend most of their time on other websites, so they expect your site to work like all the other sites they already know. When a design deviates from users’ expectations, usability suffers. Don’t be arrogant and assume that your new design idea is so brilliant that it can overrule decades of user habituation.

  • AI’s Role in Human-AI Symbiosis: Originator or Refiner. It is cheaper to let AI take the initiative in human–AI collaborations where it generates a wide range of initial ideas that are then winnowed by human judgment. But higher quality can result from the opposite workflow, starting with a human draft and fine-tuning it through AI criticism and variations.

  • Classic Usability Important for AI: AI products are fraught with basic usability errors, violating decades-old UX findings. Simple fixes will save AI users much pain. Still, AI companies should also invest in fundamental user research and integrate UX with development to address new issues like managing ideation abundance.

  • Search vs. AI: What’s Faster? User productivity was 158% higher when answering questions with ChatGPT than with Google. Satisfaction scores were also much higher for AI users than for search users. As with previous research, AI use narrowed the skill gap between users at different education levels.

Top Past Articles