By Jakob Nielsen

UX Roundup: Time Estimates for Quant Projects | LinkedIn Rant | Benchmarking UX | Kazakhstan/Uzbekistan UX | Free AI Courses

Summary: Estimating the duration of quantitative user research projects | LinkedIn’s anti-link algorithm is bad | Benchmarking research | UX flourishing in Kazakhstan & Uzbekistan | 181 LinkedIn AI courses free for a month

UX Roundup for March 11, 2024. (Midjourney)


Estimating Project Duration for Quant Research Projects

Chris Chapman wrote an interesting article on estimating the duration of a quantitative user research project. I always like a shortcut, and Chapman nicely starts the article with the conclusion that the default duration is one month. Very rarely will a quant project be possible in less time, and complex projects will take multiple months. But a “standard,” fairly simple quant research project will take a month. (This is why I prefer qual research, which is faster. But quant has its place.)


Estimating the duration of a quantitative user research project: the default schedule is one month. (Midjourney)


The one-month default quant project schedule breaks into four weekly chunks:

  1. Planning, including review of existing data.

  2. The actual data collection.

  3. Analysis, including review of tentative conclusions with colleagues.

  4. Final results, deliverables, and presentations.
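As a purely illustrative sketch (not from Chapman’s article), the one-month default can be turned into a simple phase calendar: given a kickoff date, compute the start and end of each weekly phase.

```python
from datetime import date, timedelta

# The four default phases of a "standard" quant project, one week each.
PHASES = [
    "Planning (incl. review of existing data)",
    "Data collection",
    "Analysis (incl. review of tentative conclusions)",
    "Final results, deliverables, and presentations",
]

def quant_project_schedule(kickoff: date) -> list[tuple[str, date, date]]:
    """Return (phase, start, end) triples, one week per phase."""
    schedule = []
    for i, phase in enumerate(PHASES):
        start = kickoff + timedelta(weeks=i)
        end = start + timedelta(days=6)
        schedule.append((phase, start, end))
    return schedule

for phase, start, end in quant_project_schedule(date(2024, 3, 11)):
    print(f"{start:%b %d} to {end:%b %d}: {phase}")
```

Of course, real projects rarely divide this evenly; the point of the default is only to set expectations before the inevitable adjustments.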


Here’s an infographic illustrating the point that the default quant project schedule is one month:



To create this comic strip, I first used Gemini Advanced to convert Chris Chapman’s theoretical article into a story with a lead character:


Kate braced herself, a touch of sweat prickling her brow despite the cool office air. “One month,” she told the VP, striving for calm amidst his impatience. This wasn’t her first rodeo, but every quantitative UX research project was a balancing act — expectations versus reality, ambition versus data integrity. She’d seen those glossy Star Trek engineering promises crash and burn, and protecting the value of her work was worth the awkwardness.


Her mind raced like code parsing a dataset. One month: a battle-tested unit of time. Week one was intel gathering – clarifying goals, planning the data hunt. Week two was the foray, messy and unpredictable. Abandoned spreadsheets and scrapped models were common in this stage. Week three meant regrouping, sifting signal from noise, and the dreaded possibility of starting over.  Week four, that was finalization.  Not just reports, but the diplomacy of presenting insights, ensuring they landed for impact.


Experience was her shield. Once, Kate had naively promised a two-week user segmentation project. Execs heard ‘quick study,’ not the complex dance of finding patterns, testing, and validating them. That project had devoured five tense months, mostly spent calming panicky stakeholders when the initial “segments” fell apart.


The VP shifted in his chair. “Now Kate,” he rumbled, “you're our data wizard. Surely that simple breakdown request...” His voice trailed off. He knew better, she could see it in his eyes. That’s why he came to her, not some dashboard-obsessed intern with illusions of instant answers.


“One month,” she repeated, gently but firmly. This wasn't about being difficult. It was about time to read the data, time for things to go wrong and be fixed, time for the insights to become answers.  A flicker of understanding crossed the VP's face. This was trust earned, the hard way. And Kate knew, deep down, that one-month project might just be her next miracle.


Second, I ran this short story through the new Story Illustrator GPT by Umesh, with a prompt to use a 1950s comic illustration style to create a 6-panel comic strip. I had to ask it to redo the last panel because the GPT lost character cohesion and made the researcher blonde. (In fact, if you look closely, this short strip still has a few small continuity glitches. But impressive, considering the state of character persistence just three months ago.)

Interestingly, neither AI included the actual data collection in the story. You’ll have to imagine this step.


A last interesting point is that panel 5, reflecting on the finished project to glean lessons for the next project, was not included in Chris Chapman’s original article. Story Illustrator GPT added this step on its own initiative.


Reflection is a step that I recommend. This project was not our research expert’s first rodeo, and it won’t be her last, either. After every project, investing a little time to formalize lessons learned will pay off handsomely in future projects. As a meta comment, the fact that the AI added a useful step that was not in its instructions is a great example of human-AI co-creation and the benefits of working with AI.


LinkedIn Anti-Link Rant

Despite its name, LinkedIn doesn’t link. Or, rather, it’s common knowledge that the LinkedIn algorithm downplays any posting that includes a link to an outside website and shows it to fewer users. (This is apparently true of X as well, so my comments here apply equally to X.) Thus, authors who include a link to a recommended resource are punished with a much-reduced number of impressions and reach across the social media service. As a result, many authors have resorted to workarounds such as adding their links in comments below the actual post, meaning that users have to hunt for the link if a posting sparks their interest.


LinkedIn’s rationale is the old canard of wanting to keep users trapped within its own site. Newsflash: web users aren’t like a poor rabbit caught in a trap. They have a back button, and they will go to other websites any time they choose.


A canard (also known as a French duck) is a misconception, such as believing that you can trap users on your website by denying them links to other sites. (Midjourney) Apologies to my French friends for the stereotypes. French wine is the best, so you can’t blame my duck for liking it.


Clicking a link right at the spot where a resource is mentioned has so much higher usability that I find it absolutely obnoxious of LinkedIn to disincentivize authors from including links in their postings. Have they learned nothing from Google vs. Yahoo back in the day?


For people with fewer than 25 years’ experience using the web: Originally, Yahoo was designed as a portal that encouraged users to stay within the Yahoo service (it also had search, but search wasn’t prioritized in the UI). In contrast, Google prioritized showing users clickable links (later including targeted ads) to other, useful websites. Users would leave Google in about 10 seconds. Still, the high-usability design that got users quickly to the information they wanted meant that Google got the credit for leading people to that info. This in turn made users return to Google many times per day and (as we all know) made Google worth many, many times as much as Yahoo.


I bet that LinkedIn would be worth many times more if it encouraged links to other sites because it would make the service so much more valuable to users that they would return more and engage more.


Benchmarking Your Design Quality

Mani Pande and Nishchala Singhal published a useful short overview of benchmarking studies. The main goal is to track usability metrics over time to see how much you improve (or get worse, which can happen — but if you know, you can correct the situation before you lose too much money).


As a secondary benefit, having UX metrics is a way to socialize the importance of UX in the many organizations that value numbers disproportionately. (I mostly prefer qualitative studies, but I recognize that benchmarking has its place.)
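To make the benchmarking idea concrete, here is a minimal sketch (the metric and all numbers are invented for illustration) that tracks a usability metric, such as task success rate, across quarterly benchmark studies and labels each change:

```python
# Hypothetical quarterly benchmark results: task success rate (0-1).
benchmarks = {
    "2023-Q1": 0.62,
    "2023-Q2": 0.66,
    "2023-Q3": 0.71,
    "2023-Q4": 0.69,
}

def trend_report(results: dict[str, float]) -> list[str]:
    """Compare each benchmark study to the previous one and label the change."""
    report = []
    periods = sorted(results)
    for prev, curr in zip(periods, periods[1:]):
        delta = results[curr] - results[prev]
        label = "improved" if delta > 0 else "regressed" if delta < 0 else "flat"
        report.append(f"{curr}: {results[curr]:.0%} ({label} {delta:+.1%} vs {prev})")
    return report

for line in trend_report(benchmarks):
    print(line)
```

A real benchmarking program would also report sample sizes and confidence intervals, so that a small quarter-to-quarter dip isn’t mistaken for a genuine regression.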


Before you buy something for your house, measure if it’ll fit. Similarly, you can measure your user experience quality and track the benchmark over time. (Dall-E)


UX Flourishing in Kazakhstan

I’m bullish on the future of UX: I expect the number of people doing UX to triple over the next decade. Most of this growth will come outside the “usual suspects” of North America, Western Europe, and a handful of super-rich countries in Asia/Australia.


Kazakhstan is a good example of the worldwide growth of UX. It’s only a few years since I noted the first Kazakh attendee at one of my speeches. And now there are flourishing UX companies in the country, such as UsabilityLab KZ.


While the company is based in Astana (formerly Nur-Sultan), Kazakhstan, their “Who We Are” section notes that they have more than 30 UX specialists on staff and have done projects in Kazakhstan, Uzbekistan, Kyrgyzstan, Tajikistan, Turkmenistan, the UAE, Transcaucasia, and Europe.


In general, I don’t think there’s a future for UX companies with more than 10 people on staff — and even 10-person UX firms will need an extremely niche/boutique focus. We’ve already seen some of the biggest UX firms cut their staff in half, and any UX firm that still has more than 10 people will surely shrink dramatically over the next few years. There’s a strong trend for companies to convert from UX clients to UX actors by building up internal UX staff with an internal talent-development pipeline. However, this change will happen first in rich countries. In contrast, countries that are still new to UX are likely to follow the historical trend where UX is the job of specialized firms for the first 20-30 years until a wider set of companies accept the mantle of owning their own UX. (Since brand is experience in the digital age, every company must eventually own its UX fate rather than outsource UX.)


Kazakhstan business then and now: camel herding 100 years ago, UX design for ecommerce websites today. (Midjourney)


Uzbekistan UX

While we’re in that corner of the world, I want to share a heartwarming photo of the recent ADPList meetup in Tashkent, Uzbekistan. It’s great to see so many UX fans in this country as well. UX has absolutely gone worldwide — in a big way.


Participants in the ADPList UX meetup in Tashkent, Uzbekistan. Photo courtesy of Maftuna Lutfilaeva.


181 LinkedIn AI Courses Free for a Month

LinkedIn has made 181 short online courses about AI free for a month. Some cover very basic topics, such as “What Is Generative AI?”, while others address more specialized technical topics, such as “Introduction to AI-Native Vector Databases.” Typically, each short course lasts about an hour, so you can easily take a few to learn about topics you’re curious about. I spotted a few courses about UX:



I can’t vouch for any of these, but I did listen to 15 minutes of the first course about AI tools, and it seemed a nice enough survey of a bunch of useful tools. I was rather disappointed, though, that a course released on February 20, 2024, used Dall-E 2 to demonstrate how to make icons with AI, considering that version 3 was released to the public in October 2023 and is much better.


Specifically about AI and UX, if you’re willing to pay a modest fee you’re probably better off with AI for Designers with Ioana Teleanu (Senior AI Product Designer for Miro — to get a feel for Ioana, listen to the discussion she recorded with me). But in truth, the best way to learn AI is to use it, so please see the articles on Getting Started with AI for UX and What AI Can and Cannot Do for UX. (And stay tuned for my own article on the top AI tools for UX, scheduled for my newsletter on March 27.)


The best use of the free LinkedIn courses — given how quick and introductory they are — is probably not to learn something that will be core to your job, but to understand the most important aspects of related topics. For example, take AI Challenges and Opportunities for Leadership (with a great business school professor) even if you’re not a corporate executive.


Sonauto: Not Good Enough Yet at Song Creation

A new generative AI service for music creation launched last week: Sonauto. Just as with Suno, you can get a song from a simple prompt specifying the style and theme, or you can upload your own lyrics and have them set to music.



The AI-written lyrics are not too bad as an artistic interpretation of the Error Prevention heuristic, but this is not a great song about it.


[Verse 1]

Cut the errors before they start, that's the way,

Keeping the trouble far away, every day.

Designing with the user in mind,

Making sure no one's left behind.

[Chorus]

Error prevention, oh, it’s our mission.

No confusion, just clear vision.

Before the mistake, let's put on the brake,

That’s the way, for a smoother operation.

[Verse 2]

A little warning can go a long way,

Guiding users, so they don’t stray.

Feedback instant, keeping it consistent,

Aiding navigation, making it persistent.

[Chorus repeats]


If you listen to the song, you will find that Sonauto mispronounces several words, making it hard to follow the song if you don’t have the printed lyrics for reference. On the other hand, I like Sonauto’s tool for regenerating specific snippets of a song if there’s only a small thing you want changed. (This is similar to Midjourney’s vary region feature.)


Compare with the songs I made about the same heuristic with Suno: Danger No More and Safe and Sound.


I prefer Suno’s songs. What do you think? Let me know in the comments.


(Sonauto is still only in release 0.9, so presumably it will be better in release 1.0. On the other hand, I used Suno v. 2 and not the prerelease v. 3 that’s reportedly much better even in beta.)


AI-generated music is heating up, with more services becoming available. (Midjourney)
