Jakob Nielsen

UX Roundup: TikTok Participation Inequality | GPT-5 UX | 2 UXers > 1 | Jakob Live

Summary: A version of the 90-9-1 rule applies to TikTok video postings | Expected UX advances from GPT-5 | Debrief with a colleague after watching UX talks or podcasts | Jakob live next week on IterateUX


UX Roundup for April 1, 2024. Happy April Fools’ Day. (Midjourney)


April Fools’ Day

In recognition of April Fools’ Day, one item in today’s newsletter is a hoax. You have been warned! My challenge to you: how many paragraphs do you need to read before you recognize which of these items can’t be true?


Of course, just maybe this very first newsletter item is the hoax, and everything else in this newsletter is the honest truth. That would be playing a joke on my readers worthy of a master jester.


TikTok Participation Inequality: The 90-9-1 Rule Rides Again  

Participation inequality characterizes virtually all large-scale online communities: a minute fraction of users generates the majority of content, while the vast majority remain passive observers (usually called “lurkers”). This disparity was first comprehensively studied by my colleague Will Hill at Bell Communications Research in the early 1990s, research that introduced the foundational 90-9-1 rule, which posits that:


  • 90% of users predominantly lurk and make almost no contributions

  • 9% contribute intermittently and account for about 10% of total contributions

  • 1% are heavy contributors who are responsible for about 90% of total contributions


This principle, where the number of contributions per user follows a Zipf curve distribution, remains a crucial consideration for UX designers in understanding and addressing user engagement dynamics in any participatory multiuser service.
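
To make the Zipf-curve claim concrete, here is a minimal simulation sketch (my own Python illustration; the exponent, seed, and population size are assumptions, not figures from any study):

import numpy as np

rng = np.random.default_rng(seed=1)
users = 100_000

# Draw each user's contribution count from a Zipf-like power law.
# The exponent a=2.0 is an assumed parameter, chosen only for illustration.
contributions = rng.zipf(a=2.0, size=users)

# What share of all content comes from the most active 1% of users?
sorted_desc = np.sort(contributions)[::-1]
top_share = sorted_desc[: users // 100].sum() / sorted_desc.sum()
print(f"Top 1% of users produced {top_share:.0%} of all content")

Depending on the exponent you assume, the top 1% can easily account for half or more of everything posted, which is the qualitative shape the 90-9-1 rule describes.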


Inequality persists across digital platforms, from early Usenet newsgroups to contemporary services such as Wikipedia, Amazon product reviews, and social networks, with participation often skewing even more drastically than the 90-9-1 rule suggests. Such inequality not only challenges the representativeness of online communities but also has practical implications for user feedback, reviews, and political discourse, potentially skewing perceptions and decisions.


Efforts to broaden participation include simplifying the contribution process, making engagement a byproduct of user actions, and recognizing valuable contributions to motivate wider participation. Such efforts can, at best, reduce participation inequality somewhat; the inequality itself is inherent to online community dynamics.


Participation inequality exists in all social media and has now been documented on TikTok as well. Even though many people use such services, you hear almost exclusively from a few big contributors, while the “little people” lurk and stay quiet. Sadly, heavy contributors may have big mouths, but they often don’t have matching big brains. (Midjourney)


Participation inequality has now been documented for TikTok videos, thanks to a new study by the Pew Research Center. Pew analyzed the accounts of 869 American adult TikTok users and used a weighting technique to reduce the effects of sampling bias.

Unfortunately, Pew only reports data on a simplified model of participation inequality that divides users into just two groups (heavy vs. light posters) instead of the three groups used in the original research (heavy, intermittent, lurkers). The findings are still clear:


  • The 25% most active users account for 98% of public videos posted to TikTok

  • The remaining 75% of the users only account for 2% of TikTok videos


Given that there are 3 times as many users in the low-posting group, we conclude that each heavy poster contributes 147 times as many TikTok videos on average as a light poster.
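
To spell out the arithmetic behind that 147 figure (a quick check in Python, using only the two percentages Pew reports):

heavy_videos, heavy_users = 0.98, 0.25  # heavy posters: 25% of users, 98% of videos
light_videos, light_users = 0.02, 0.75  # light posters: 75% of users, 2% of videos
ratio = (heavy_videos / heavy_users) / (light_videos / light_users)
print(ratio)  # 147.0 (up to floating-point rounding)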

According to all earlier research, the group of “heavy posters” probably consists of two subgroups: a small set of super-heavy users who are at least a thousand times as active as the light users and a bigger group of medium-active users who may only be about 10-20 times as active as the light users.
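
A back-of-the-envelope consistency check on that guess (my own arithmetic with assumed multipliers; only the 147x group average comes from the data): if super-heavy users post 1,000 times as much as light users, and medium-active users post 15 times as much, then only about 13% of the heavy posters would need to be super-heavy to produce the observed group average:

# Assumed multipliers relative to a light user; only the 147 average is data-derived.
super_heavy, medium, group_average = 1000, 15, 147
p = (group_average - medium) / (super_heavy - medium)
print(f"{p:.1%} of heavy posters would be super-heavy")  # 13.4%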


The Pew data concerns only the posting of videos; the watching of videos is another matter, for which Pew does not report data. However, a small amount of data in the report suggests that users who post more videos tend to have more followers and likes. This probably means that each video posted by a heavy user receives more views, on average, than each video posted by a light user.


If true, this implies that the inequality in video watching is much more skewed than the reported data on video posting. It is almost certainly the case that most people who watch videos on TikTok predominantly see videos produced by a minute percentage (likely 1%) of the total user population.


GPT-5 Will Do User Testing

[This piece is the April Fools’ Day hoax. All the other topics in this newsletter are genuine, but I don’t believe that AI can substitute for humans in user testing. Please see the article I published two days later for my true analysis of humans vs. AI in user research.]


I think OpenAI’s marketing department must have tired of me speculating that the Abominable Snowman is responsible for their many confusing product names.


OpenAI’s head of naming strategy leads a meeting in the OpenAI marketing department. “Can we change to a catchier name than ‘GPT’ for version 5?” he asks the team, but the Marketing Director vetoes the radical idea of product names that customers can understand. (Midjourney)


OpenAI marketing recently invited me to preview their upcoming GPT-5 release to get me on their good side. While the release date remains shrouded in secrecy, it is clear that GPT-5 will create a leap forward in machine IQ.


Of most interest to me is that this improved AI capability will finally allow the ultimate in discount usability: the complete removal of expensive humans from all user research. It’s common to pay study participants around $100 just so that we can watch them stumble through our product for an hour. And even though UX salaries dropped by 11% in 2023, they are still too high for it to be economical to pay a human usability specialist to spend an hour watching that user and then write a report about how users misused our design (after wasting 4 more expensive staff-hours watching 4 more users).


Unfortunately, as I explained in my recent article, “What AI Can and Cannot Do for UX,” current AI (GPT-4 level) is not smart enough to simulate users. We can’t simply ask an AI to use our software and have another AI analyze the results, even though this would cost a pittance.


Today, I can report that the forthcoming GPT-5 clocks in as being smarter than a user, which is admittedly not a very high bar.


Once OpenAI ships GPT-5, its increased intelligence will allow us to replace costly humans in usability testing. Both the test user and the test facilitator give better results when they are AI. (Ideogram)


Even better, we can also employ GPT-5 to replace human study facilitators. This brings many benefits:


  • All user test sessions can run simultaneously, in parallel, through multithreading. (See the sketch after this list.)

  • Since AI is so much faster than humans, a one-hour study will take about 10 minutes with the current slow AI response times. Once the software has been tuned and GPUs are replaced with AI-specific chips optimized for inference compute, I expect the time to simulate a one-hour study to drop to about 1 minute.

  • The combination of parallelism and time compression means that we will receive the completed report with usability test findings 1 minute after we have specified the UI we want tested. This fast turnaround will be a boon to iterative design, allowing UX designers to crank through maybe 50 design iterations in a day. (Remember that each design version will be produced by Generative UI in a minute or so, not by slow hand-tweaking of Figma prototypes.)

  • Since machines have infinite vigilance, you will avoid the downside of human test facilitators who may miss a user action while thinking about and writing notes about the previous user action.

  • While I’m describing qualitative user research here, an added investment in AI compute will also allow us to complete quantitative studies reasonably fast, with less than an hour needed to run about 100 users through a measurement study.

  • A final advantage is that we can program the AI facilitator to treat all the AI users precisely the same way, which removes a source of bias (and associated noise in the data) when a human facilitator attempts (but fails) to behave identically with a hundred different people coming into the lab.
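
Playing along with the hoax for a moment: the first bullet above is the only one that maps onto real code today, since running many sessions concurrently is just ordinary async programming. Here is a toy sketch in Python (everything AI-related is a hypothetical stand-in; there is no real GPT-5 API to call):

import asyncio

async def run_session(session_id: int) -> str:
    # Hypothetical stand-in for an AI-user-plus-AI-facilitator session;
    # we just sleep instead of calling any model.
    await asyncio.sleep(0.1)
    return f"session {session_id}: findings report"

async def main() -> None:
    # Launch all sessions at once instead of scheduling one user per hour.
    reports = await asyncio.gather(*(run_session(i) for i in range(5)))
    for report in reports:
        print(report)

asyncio.run(main())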


All these benefits to UX will accrue as soon as GPT-5 is released. We should expect hugely improved computer usability as soon as the following day, since about 50 design iterations can be tested daily.


Many UX experts recommend against having a designer perform user testing on his or her own design, because it’s hard to remain objective when watching users mistreat your beloved creation. I somewhat disagree with this common recommendation because of the advantages of employing UX “unicorns” who can keep all the information about both the design and the research findings within one human head, thus avoiding the communication overhead of writing reports and holding meetings. However, I agree about the risk of bias, which means we need to take extra care when interpreting unicorn study findings.


Using AI for all aspects of user testing eliminates this conundrum. One AI can be the user, and a completely different AI from another vendor can be the facilitator, retaining full objectivity. However, this advance must await the release of GPT-5-level AI from competing vendors. Experience from the GPT-4 era suggests that this catch-up may take almost a year, which will be an insufferable wait once we are accustomed to lightspeed AI-driven UI advances.


As more high-end AI models are released, you should experiment with using different AI products to serve as users for different persona segments in your market. For example, if you target teenage users, you could use xAI’s Grok, with its infamously snarky and irreverent personality.


For even better results from AI-run user testing, use different language models for the two roles in a usability study. If (as illustrated here) the user is GPT-5, then the facilitator could be Mistral for a more independent and less biased analysis of the user’s actions. However, this improved research method must await the release of a new version of Mistral with level-5 AI capability. (Ideogram)


Debrief With a Colleague After Watching UX Talks or Podcasts

Olga Perfilieva wrote one of the best summaries I have seen of the recent fireside chat I did with Sarah Gibbons for ADPList. Her writeup was so insightful because she produced it after discussing our session with a good colleague, Larry Thacker.


This is a simple, useful trick: to get more out of a UX talk or course, plan to discuss it with a colleague soon after the event. Some companies organize formal brown-bag sessions for this purpose after staff return from a conference, but as demonstrated by Perfilieva and Thacker, debriefs can also be done with less overhead: two friends simply agree to get together (perhaps over Zoom these days) to discuss a shorter presentation.


When you compare notes and takeaways, you invariably discover that you have different interpretations of some of what was said in that presentation, and talking through such disagreements will deepen your understanding.


Make an appointment in advance to debrief with a colleague after you both attend a UX talk, podcast, or course. Discussing one of my one-hour fireside chats can easily turn into a 4-coffee-cups meeting. (Midjourney)


Jakob Live April 10 on IterateUX

I will do a live webcast on April 10 at 12pm US Pacific time (3pm Eastern time, 8pm London, 9pm Paris), hosted by IterateUX. (With that name, how could I refuse the invitation? If you have followed me, you know how much I believe in iterative design as the main driver of UX quality.)


Iterative design is the key to UX success. The more times around that wheel, the better your design quality. (Midjourney)
