Summary: Optimizing AI answers for a specific domain and perspective | Participate in user-research studies yourself | Heuristic evaluation template and guidance | Jakob live session February 14, hold the date
UX Roundup for January 22, 2024. Could you use a trip to Miami Beach right about now? (Leonardo)
Luke Wroblewski, who is one of the world’s best designers, has long had an “AskLuke” feature on his website, with answers driven by AI. This is not a traditional search, nor is it a traditional AI. Rather, it’s an optimized AI that integrates and synthesizes his voluminous content from many years past.
Luke has a short video (2 min.) explaining how this works and contrasting his AI answers with those from ChatGPT.
As I pointed out in my article, SEO Is Dead, Long Live AI-Summarized Answers, the beauty of AI is that it doesn’t just list sources where you might be able to find information relevant to your query. Rather, AI summarizes the best information across multiple sources and hands you a single answer that’s optimized for your needs.
This is true in general, and some AI services, such as Perplexity AI, deliver extremely well on this promise. But they remain general-purpose services that try to answer any question, and their answers suffer accordingly compared with an AI optimized for a single domain.
Also, all the big, commercial AI services suffer from a distinct lack of a strong voice and perspective. They don’t take a stand, other than maybe a general obeisance to prevailing orthodoxy. The AskLuke AI does the opposite, for better or worse: it gives you Luke Wroblewski’s take on the hot topics in design. If you don’t like him, don’t use his AI. I happen to like him, so I’m very happy to get answers that take a strong stand.
Generalizing to the future: I hope we will see many such domain-optimized AI services with a distinct voice and perspective. Use the ones you like, and stay far away from the ones that conflict with your needs or values.
Participate in Research Studies Yourself
I was recently on the other side of the one-way mirror and participated in a research study. (Metaphorically: it was a remote study.) I can’t tell you what the study was about, only that it was a topic where it was appropriate to have a UX expert as a participant. (Usually, our recruiting screeners eliminate people who work in UX or even product development, because they don’t behave like normal users.)
Because we usually don’t want UX people as study participants, I have not been in that many studies during my 41 years in UX, but I have done a few, including two I can tell you about:
In the early 1980s, I was a participant in a study of how users would manipulate 3D objects in a 3D user interface. This study was brilliantly conceived as a “paper prototyping” session, except that instead of paper, the participants manipulated various toys. One of the toys used as an experimental stimulus was Mr. Potato Head, and since I had not played with this American toy as a kid in Denmark, I performed very poorly. Great for the researchers to have a klutz in their study, and educational for me to personally experience how participants can feel bad when they are unable to accomplish a simple task with the UI.
In the mid 1990s, I participated in a study of a handwriting-recognition system that required users to enter characters with a specialized stroke set. It was very complicated to learn how to write characters correctly, which is why the product never succeeded on the market. (Handwriting-based systems only reached mainstream status after it became possible to recognize the normal letter shapes that people know how to write.) While the product was destined to fail, as the user, I felt bad because I didn’t gain rapid proficiency in this oddball way of writing.
We always tell study participants, “we’re not testing you, we’re testing the system.” And yet people feel bad when they fail, even when it’s the fault of a bad design. Trying to be a test user yourself a few times is valuable for recognizing the need to treat participants gently and follow guidelines such as aiming to have at least one task easy enough that most people will feel successful.
You should have experienced both roles in this classic scenario of a usability study. (Midjourney)
Jakob Live: Hold the Date
I am doing a live Q&A broadcast, sponsored by UX Greece, on February 14. It's open to a worldwide audience. We will start at 10:00am USA Pacific time (8:00pm Athens time). See a time zone converter for the corresponding time in your time zone.
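If you'd rather compute the start time yourself than use a converter site, standard time-zone tooling handles it; here's a minimal Python sketch (the list of zones is just an example, pick your own IANA zone name):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Event start: February 14, 2024, 10:00am US Pacific time
start_pacific = datetime(2024, 2, 14, 10, 0,
                         tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert to a few example time zones
for zone in ["Europe/Athens", "Europe/Copenhagen", "Asia/Tokyo"]:
    local = start_pacific.astimezone(ZoneInfo(zone))
    print(f"{zone}: {local:%b %d, %H:%M %Z}")
```

Athens comes out at 20:00 EET, matching the announced 8:00pm; note that for zones far enough east (e.g., Tokyo), the session falls on February 15 local time.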
Heuristic Evaluation Template and Guidance
Jason Ogle has created a very useful Excel template for recording an evaluation of a user interface according to the 10 usability heuristics. (My 10 heuristics are celebrating their 30th anniversary this year: I compiled the current list in 1994. See the infographic below for a summary of the 10 usability heuristics.)
The word “template” does this (free) product an injustice, because it’s a complete set of instructions and guidance on using the 10 heuristics, including references to backup information.
Jason recently released a short overview video on how to use his template (9 min.) as well as a longer video (24 min.) where he conducts a live heuristic evaluation of a sample website and uses the template to visualize the results. You will end up with a summary chart like the following, though the detailed findings and the process of applying the heuristics are where the main value lies.
Sample chart produced after conducting a heuristic evaluation with Jason Ogle’s template. The pie chart shows the distribution of observations by severity from catastrophic (luckily none in this example) to cosmetic or not problematic.
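Under the hood, such a summary chart is just a tally of findings by severity. A minimal sketch of that aggregation, using my standard 0–4 severity scale (the sample findings below are invented for illustration; they are not taken from Jason’s template):

```python
from collections import Counter

# Standard severity ratings for usability problems
# (0 = not a problem ... 4 = usability catastrophe)
SEVERITY_LABELS = {
    0: "Not a problem",
    1: "Cosmetic",
    2: "Minor",
    3: "Major",
    4: "Catastrophic",
}

# Hypothetical findings: (heuristic number, severity rating)
findings = [
    (1, 2),  # Visibility of system status: minor problem
    (4, 1),  # Consistency and standards: cosmetic
    (5, 3),  # Error prevention: major problem
    (9, 2),  # Recognize and recover from errors: minor
]

# Distribution of observations by severity, as in the pie chart
tally = Counter(severity for _, severity in findings)
total = len(findings)
for severity in sorted(tally, reverse=True):
    share = tally[severity] / total * 100
    print(f"{SEVERITY_LABELS[severity]}: {tally[severity]} ({share:.0f}%)")
```

The real value, as noted above, is in the written findings themselves; the chart is only the executive summary of this count.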
Feel free to copy or reuse this infographic, provided you give this URL as the source: https://www.uxtigers.com/post/10-heuristics-reimagined