By Jakob Nielsen

UX Roundup: How UX Researchers Use AI | Integrate AI With Existing App | Short Videos | New Remote Research Tool | Business School AI

Summary:  Study of how user researchers use AI | Integrating AI within an existing software product | Short highlights from Jakob’s recent video appearances | New service for remote user research | Business schools integrate AI throughout the curriculum

UX Roundup for April 8, 2024. (Leonardo)

How User Researchers Use AI: Empirical Evidence

I am frankly disgusted by the UX field's poor performance in discovering evidence about AI's usability. We’re in Year 2 of the biggest technology revolution since 1760 (when the Industrial Revolution took off), and UX researchers are sitting on their hands and refusing to come out to play. (This is despite my pointing out a year ago that UX needs a sense of urgency about AI and should take charge of the design of these new tools.)

All the best research I know about how AI is used and how it’s transforming business has been done by economists and management consultancies, not by HCI professors or UX consultancies, who are missing the boat. (Honorable exceptions like the UX Design Institute integrating AI in its UX certification curriculum are just that: exceptions.)

Finally, some good UX research on AI use is starting to trickle out. Kuldeep Kelkar from UX consultancy UXReactor recently published the results of user research on how user researchers use AI in their work. (Yes, this is meta-UX research, but useful nevertheless. We need to improve the efficiency of UX work by at least an order of magnitude to fulfill my goals for the impact of UX on the world. Researching how UX researchers use AI and how they can get better at their jobs moves us toward this goal.) The study participants were user researchers studying the usability of health insurance websites, but the findings seem to generalize to researchers working on design problems in other industries.

The new findings echo some existing research, but provide additional nuance, like one always gets from qualitative observational research.

Similar to what I found in my study on the top AI tools used by UX professionals, the new study found that user researchers use generative AI to expedite the creation of many documents for research projects, such as “research plans, statements of work, screeners, and moderator guides.” The participants also used AI for “data analysis, theme identification from transcripts, and research report generation.”

Three interesting examples of that additional nuance coming from qualitative research:

  • Senior staff were better than junior staff at creating prompts for the AI to help them in drafting these documents. (While you might have hoped that expensive senior staff would outperform cheaper junior staff, it’s not a given that senior people would actually be better than juniors at using a completely new technology. Their extra experience might have been useless.) The reason seniors were better is that the best results came from feeding the AI more detailed and descriptive context in the prompts.

  • Junior staff very quickly improved their prompting skills. This echoes past findings that AI is a seniority accelerant.

  • AI use changed minds: UX people who used to be skeptical about AI “discovered unforeseen benefits and capabilities, turning apprehension into endorsement” after they got hands-on experience with using AI for their jobs. (This is similar to the finding that the percentage of business professionals who were optimistic about the future of AI almost doubled with personal AI experience, whereas concerns about the potential negative implications of AI were almost cut in half for respondents with hands-on experience with AI use.)

An interesting outcome of this new research is a recommendation for prompt development in UX work along the lines of the famous double-diamond model for UX design. The best results in the study came from alternating two phases of prompting:

  1. Explorative prompts: suggest elements for a research plan, give me ideas for test tasks.

  2. Detail-refining prompts that build on user-curated top hits from the previous stage: elaborate on task ideas 3 and 5. (This could be considered an advanced version of accordion editing, except that the detail phase might be in a later session.)
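The two-phase pattern above can be sketched in code. This is a minimal illustration, not anything from the study itself: the function names (`explorative_prompt`, `refining_prompt`) and the prompt wording are my own hypothetical examples of how a researcher might template the two phases before sending them to whatever AI chat service they use.

```python
def explorative_prompt(topic: str) -> str:
    """Phase 1: cast a wide net -- ask the AI for many candidate ideas."""
    return (
        f"Suggest 10 candidate test tasks for a usability study of {topic}. "
        "Give each a one-line description; do not elaborate yet."
    )


def refining_prompt(topic: str, chosen: list[int]) -> str:
    """Phase 2: build on the user-curated top hits from Phase 1."""
    ids = " and ".join(str(i) for i in chosen)
    return (
        f"For the usability study of {topic}, elaborate on task ideas {ids}: "
        "add a scenario, success criteria, and follow-up questions."
    )


# The researcher reviews the Phase 1 output, then picks (say) tasks 3 and 5:
phase1 = explorative_prompt("a health insurance claims portal")
phase2 = refining_prompt("a health insurance claims portal", [3, 5])
```

The key design point is the human curation step between the two calls: Phase 2 does not run automatically on everything Phase 1 produced, only on the ideas the researcher selected.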

Phase 1 (left): A hiker is exploring the woods to discover a good place to camp. Phase 2 (right): The hiker is sorting twigs and branches to build the best campfire at his chosen place for the night. (Midjourney) Guess what new Midjourney feature I used to create this matching pair of images.

Integrating AI Within an Existing Software Product

Interesting case study of how Duolingo integrated new AI features with their existing educational products for teaching foreign languages. This is a 38-minute video of a presentation at Figma’s Config conference in June 2023, which only came to my attention recently. Even though June seems ages ago, with the quick advances in AI, the general lessons from this project designing a UX to incorporate AI features are still valuable.

A major point made by the Duolingo UX team is that pure AI chat is rarely a good way of adding intelligent features to an existing product. I made this point myself in my newsletter for December 26, 2023, inspired by a webcast earlier that month. I now feel bad for not crediting Duolingo for making that same point half a year earlier, but it’s impossible to watch all AI videos immediately when they’re released. (And I did publish some rather funny cartoons in that December newsletter about the folly of bolting AI chat on the side of existing software.)

The Duolingo UX speakers argued that simply presenting users with an open-ended interface, such as an “ask me anything” chatbot, often leads to underwhelming user engagement. Users might initially ask a few questions but quickly run out of ideas or motivation to engage further. This is akin to a tennis coach observing from the sidelines without providing active guidance. Just as effective coaches proactively intervene to correct techniques or suggest improvements, the video suggested that AI in educational tools should actively direct the learning process. By having an “opinion” or structured approach to interactions, AI can more effectively guide users through their learning journey, making suggestions, highlighting areas that need improvement, and offering tailored advice. This proactive stance by AI ensures that the technology serves as a more effective teaching assistant, rather than a passive repository of information awaiting user prompts.

Specific to their language-learning application, Duolingo emphasized the use of AI for grammar explanations, conversation practice, and feedback on progress. For other applications, other goals will be more important. The key point is to have a directed use of AI with a UI that focuses on users’ goals with that app.

Duolingo’s UX team mentioned that they initially followed the traditional recommended UX design process and created mockups of the screens for suggested designs that could then be subjected to user testing. (Certainly, this is what I have always recommended myself.)

However, for integrating AI features, this was putting the cart before the horse. The first step recommended by the speakers is to prototype and test the interaction with the AI to see how it will react in various situations. This step is not needed in traditional UI design, because traditional software is deterministic: if the user issues command A, the computer will always activate Feature A, so our main design problem is how to represent A in the UI and how to show the results from running the feature.

With AI, let’s say that Duolingo has a conversation practice scenario for language-learning where the user “walks” into a coffee shop and places an order. But let’s say the user orders a coffee with ketchup, which could easily happen for somebody new to learning Japanese. If the AI doesn’t recognize that the user placed a meaningless order but just continues with the scenario, then the entire interaction will fail, no matter how well the commands and visuals are designed.

Will she add a dollop of ketchup to the coffee you just ordered? If so, your Japanese is not as good as you thought it was. (Midjourney)

Short Videos with Jakob

I have been a guest on several long video interviews in recent months. I recommend them all, but if you just want a quick taste of the longer shows, here are some short clips:

Time passes very fast when watching online videos. 33 seconds seems to be a sweet spot for videos that people want to watch, based on this ludicrously small sample. (Midjourney)

New Remote Research Platform: Crowd

Crowd is a new platform for remote user research. I don’t know enough about it to fully recommend it, but I have heard enough positive comments to mention it here and encourage you to check it out.

Crowd offers a reasonably large number of research options (all remote):

  • Moderated user testing (i.e., you observe while a remote user attempts to perform tasks with your design)

  • Unmoderated user testing (the test participant reads tasks you provided, but no human facilitator is present in real-time to interact with the user — you can watch a recording later, or make do with an automated summary)

  • Live site or prototype design testing

  • Surveys, e.g., for feedback on your website after somebody has used it.

  • Card sorting

  • 5-second tests where a screenshot is displayed for 5 seconds, after which the user has to give his or her first impressions. (I’m not a huge fan of these tests, because even a user who leaves your website in a hurry will usually spend more time — often up to 30 seconds — before leaving the site. But it’s a neat method for quickly gathering some first impressions of a visual design.)

  • Collaborative documents for your team to analyze the research findings

  • AI analysis of user comments

Crowd offers a rather generous free plan for running up to 3 unmoderated tests with up to 20 users each month. Sadly, for many companies, this may be as much research as they do. Higher-priced plans of $80 or $200 per month will cover most companies with higher UX maturity, except for the somewhat low limit of 1,000 responses per survey or feedback widget which seems too little for even medium-traffic websites. Particularly now that qualitative user comments can be organized by AI and have themes and trends extracted cheaply, many companies might want to collect more verbatims. (There’s an “enterprise” plan at unspecified cost that may work in these cases.)

If you try out Crowd, please let me know how it goes.

In any case, I’m thrilled to see more competition in platforms for remote user research. I was somewhat sad when UserTesting and UserZoom merged, since that reduced competition in the market.

Integrating many research methods in a single platform may encourage more triangulation (the use of several different methods to illuminate a problem from multiple angles). (Midjourney.)

Business Schools Going All In on AI

The Wall Street Journal had an article with this title last week (subscription required). The dean of the American University business school, David Marchick, is quoted as saying that “understanding and using AI is now a foundational concept, much like learning to write or reason.”

More surprising is that “40% of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022.” To me, this is a surprisingly small jump, now that it’s become obvious that AI is for real and not hype. I feel like this survey should be reported as “60% of business school students are out of touch with reality.”

However, the bottom line is that many business schools are now integrating AI throughout the curriculum as a tool students are expected to use for more or less everything. If you are contemplating an MBA, I recommend only considering one of these schools. Otherwise, your degree will be completely misaligned with the way business is done when you graduate. New York University’s Stern School is starting a major program in AI, so that may soon be a good choice.

This advice obviously generalizes to other topics, such as UX:

  • Don’t go back to graduate school for a UX master’s degree unless AI is fully integrated as a leading topic in the curriculum. (In general, I don’t think there’s much value to graduate degrees in the UX field, since you’ll learn much more by working in the field than by attending lectures about the field. But if you do want to study more, at least study something that’ll be useful in the future.)

  • Don’t take any UX courses unless they lead with how to integrate AI with whatever subtopic that course is about.

Back to school? Only for AI-First courses. (Midjourney)
