
UX Roundup: Titan of Human Factors | Long UX Career | AI Articulation Barrier | Design Challenge | Leonardo Text | UX Podcast

Summary: Jakob Nielsen named “Titan of Human Factors” | Retrospective on a long UX career | The articulation barrier to using AI is similar to that of using DOS | Design challenge: how to tell checkout machine that you placed a bag? | Rendering text in Leonardo | UX Podcast: From Heuristics to AI

UX Roundup for January 8, 2024. Winter image by Leonardo.


Jakob Nielsen Named “Titan of Human Factors”

I am honored that the Human Factors and Ergonomics Society has named me a “Titan of Human Factors.” My acceptance speech is online (8 min. video). I will present a more extensive speech analyzing the evolution of UX and usability at the HFES symposium on February 5.


One of the original Titans from Greek mythology, as imagined with Midjourney. Lucky for me, the HFES Titans award is given for intellectual achievement, not for musculature.


Retrospective on a Long UX Career

We very rarely get to consider how a UX career can shape up over the long term. It’s more common to hear from people who have just landed a new job and reflect in public on the differences between it and their previous one.

But a career is not one year, or even ten. It’s at least forty years. (I almost wrote forty tears. Well, it’s that, too.)


Here’s a nice article by Chooake Wongwattanasilpa, covering the 23 years from late 2000 to 2023. Still not enough of a longitudinal perspective, so I hope he writes a new version in 17 years. He went from designing CD-ROMs for Morningstar in Chicago to being the Chief Experience Officer for the Bank of Singapore. A success story that may not model realistic career progression for most readers. However, he provides several keen insights for each step of his career.


When you’re at the foot of the career ladder, it’s helpful to hear about somebody who reached the top of the ladder, but maybe more immediately useful to hear from somebody on the middle steps. In any case, we have too few UX career retrospectives, so the more, the merrier. Please publish yours! (Career ladders by Dall-E.)


The main downside of truly deep career retrospectives is that they lose relevance for the young UX pros of today. Anybody like myself who started in UX 40 years ago had their first job when the UX field was 3,000 times smaller than it is today — and very different in almost all ways. So, a 23-year retrospective may be more valuable after all.


Can young UX professionals learn from old UXers? I say yes since experience is of extreme value to UX research and design. On the other hand, the UX job market has undergone seismic shifts over the past four decades, changing beyond recognition. So career retrospectives across truly long timelines may be more interesting than valuable as advice for newbies. (Dall-E)


The Articulation Barrier When Using AI Is Similar to That of Using DOS

Using old-school second-generation user interfaces through a command line was notoriously difficult. DOS and Unix were the worst offenders. The articulation barrier of constructing syntactically correct commands was high, and many ordinary people couldn’t use those early computers. Even people with the tech chops to use DOS didn’t get as much out of those difficult computers as they did after switching to graphical user interfaces (GUIs) with Macintosh or Windows.



Today, AI-driven conversational user interfaces like ChatGPT raise a similar articulation barrier. It’s hard for most users to fully describe what they need in prose, as required by these systems. The following infographic summarizes the problem, as discussed further in the article I just linked:



Feel free to copy or reuse this infographic, provided you give https://jakobnielsenphd.substack.com/p/prompt-driven-ai-ux-hurts-usability as the source.

Design Challenge: Tell the Checkout Machine that You Placed a Bag

David Hamill posted an interesting design challenge: The Tesco supermarkets in the UK have self-service checkout machines. You can place a bag in “the bagging area” of the machine and then tell the machine to proceed to ring up products. The problem is that the dialog box used for this purpose reads: “Place your bag in the bagging area. OK / Cancel.” David (and many other shoppers) frequently read the imperative form of the verb “to place” as an instruction to place the bag, as opposed to a statement that the bag has already been placed.


Thus, many users click OK before placing the bag, even though the system design assumes that people don’t click OK until after they’re done placing the bag. This messes up the system state big time.


How to design this better? That’s the challenge. Think about your own solution, then follow the above link to read the comments with other people’s solutions.

One obvious approach is to demand a better machine that can sense or see bag placement without needing further user input. Wrong answer. We must design for the current machines, which are installed in large numbers and would be expensive to replace.


(Of course, a better machine is the correct long-term answer. Sadly, self-service checkout machines have existed for several decades, and they have not gotten much better. This has to change. Still, the design challenge is how to improve the UX of the current machine through software-only changes.)
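One software-only direction is to replace the ambiguous OK/Cancel labels with buttons that state what the user has done, so the dialog records a fact instead of issuing an instruction. Here is a minimal sketch of that idea; the class, prompt wording, and button labels are all hypothetical, not Tesco’s actual design:

```python
# Hypothetical sketch of a clearer bagging dialog. The button labels are
# past-tense statements of fact, so they cannot be misread as instructions
# the way "Place your bag in the bagging area. OK / Cancel" can.

class BaggingDialog:
    PROMPT = "Are you using your own bag?"
    BUTTONS = ("I placed my bag", "No bag today")  # instead of OK / Cancel

    def __init__(self):
        # Whether the machine should expect extra weight on the scale.
        self.expect_bag_weight = False

    def press(self, button):
        if button == "I placed my bag":
            self.expect_bag_weight = True
        elif button == "No bag today":
            self.expect_bag_weight = False
        else:
            raise ValueError("unknown button")
        return self.expect_bag_weight


dialog = BaggingDialog()
print(dialog.press("I placed my bag"))  # machine now expects bag weight
```

The point of the sketch is the wording, not the logic: each button describes a state of the world, so clicking before placing the bag becomes a lie the user is unlikely to tell, rather than an easy misreading.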


Confusion reigns in British supermarkets due to bad UI design of self-service checkout machines. (Image by Dall-E.)


Rendering Text in Leonardo

Now that Midjourney has acquired the spelling skills of a 2nd grader, Leonardo wants to join the fray. So far, Leonardo takes a different approach than Ideogram, Dall-E, and Midjourney, where you include the desired text within quotes as part of a regular prompt.

In Leonardo, you create an image file with your desired text in white on a black background. You upload this file and use the “Image Guidance” feature with the “Text Image Input” option. You can vary the strength of the image guidance from 0 to 2; this parameter determines how much the final image is guided by your uploaded typography versus the textual prompt.


Here’s an example where I uploaded the text of one of my favorite UX slogans, “Make It Easy,” and blended it with the prompt “a whiteboard filled with colorful UI UX wireframes. color pencil line art.” I varied the image strength from 2 (which simply replicated my uploaded typography image) to 0.19. This latter value didn’t retain my original typeface, placement, or size, but all the higher values did.



As far as I can tell, the main value of using this feature in Leonardo, as opposed to adding the text later in a non-AI graphics tool, is that Leonardo draws around your text.


UX Podcast: From Heuristics to AI

I was interviewed on the Nodes of Design podcast: episode #108, Heuristics to AI: The Future of UX. (38 minutes.) Here’s a summary of some of my comments:


🎓💡 From Basics to Brilliance: Charting a Course in UX

To get into UX, the path differs depending on whether you are already in the workforce or still a student. If you have a job, infuse UX into your current role, leveraging company resources for practical experience. Students should embrace formal design education and supplement it with courses in communication and statistics. This comprehensive approach reflects the multifaceted nature of design, emphasizing cross-disciplinary skills and the evolving demands of the UX profession.


🤖🔮 From Clicks to AI Predicts: The Web Usability Transformation

Despite rapid changes, fundamental web usability principles have remained largely constant over the past 25 years, with a strong emphasis on clarity, conciseness, and user-centric design. The transition from a conventional, step-by-step approach in web design to AI-driven interfaces will prioritize intent-based interaction. This shift towards AI will streamline user experiences by delivering summarized, relevant content, thereby replacing traditional search engines.


From Clicks to Predicts! Web usability used to be about making clicks easy, through understandable buttons, menus, and links. Often, users were overwhelmed by option overload. New AI-based interaction styles are intent-based and benefit from AI’s ability to summarize and individualize information across multiple sources, predicting what you will want: the AI simply hands you what you need on one sheet. (Midjourney)


🔬📋 The Power of Simplicity: Unpacking Heuristic Evaluation

The 10 usability heuristics originated from a need for a cost-effective, rapid usability analysis. The approach revolutionized usability by distilling thousands of guidelines into ten general principles, simplifying the evaluation process. These heuristics, while not exhaustive, provide a framework for assessing a wide range of design scenarios, including emerging technologies. Heuristic evaluation's strength lies in its simplicity and adaptability. By focusing on fundamental design principles, it offers a quick and effective method for identifying usability issues.


📱🖥️ Mobile and Beyond: Cross-Platform Design Needed

A comprehensive approach, considering both mobile and desktop environments, is key to addressing the diverse needs of users. This approach is particularly relevant for business users who often represent high-value customers. In addition to the typical Android and iOS dichotomy, designers must also consider the nuances of different screen sizes, voice-only interfaces, and the emerging realm of AI-driven content interpretation. My advice is to embrace flexibility in design. Avoid locking down designs to specific formats or appearances or pursuing a narrow “mobile first” approach. Instead, aim for adaptability across different devices and platforms. This inclusive design philosophy not only caters to a broader audience but also addresses accessibility concerns, ensuring that everyone, regardless of their device or ability, has a seamless user experience. Flexibility will become increasingly vital as we continue to see diverse and innovative ways of interacting with digital content.


With ever-increasing device diversity, a “mobile-first” design approach is insufficient. Flexibility is paramount. (Dall-E)
