
UX Roundup: AI Job Losses for Young Staff | Founder Mode | Learn AI-First Design | Secondary Research | Comparing Avatar Tools

By Jakob Nielsen · 13 min read
Summary: AI’s impact on job prospects for junior vs. senior employees | Founder Mode | Learn AI-First design | Secondary user research | Comparing avatars animated with HeyGen and Chinese AI tool Wan

 


UX Roundup for September 8, 2025. (GPT Image-1)


AI Crosses the Chasm

I recently published a lengthy article on the technology adoption phases for AI. Since I know that many people don’t like to read that much, I have now made a short explainer video about how AI is “crossing the chasm” and the UX implications of this change (YouTube, 8 min.).



AI has moved from being used only by the early adopters to having the early majority as users for many use cases. (GPT Image-1)


For a more entertaining spin on that same article, listen to my music video about AI going mainstream (YouTube, 4 min.).

Who Needs Young Employees?

Two new papers, one from Stanford and one from Harvard, draw on different data sources but argue the same thing: AI is reducing the job opportunities for junior staff, whereas senior staff are doing well.


Two papers are much more than twice as good as a single paper, in my view: When different authors, from different institutions, using different methodologies, and relying on different data sets, arrive at (roughly) the same conclusion, we can drastically increase our belief that they might be right. Any single paper, no matter how good, or how prestigious and famous the authors, can easily be wrong, especially in economics or other behavioral fields. (As you probably know, these fields suffer from a reproducibility crisis, where a disconcertingly large percentage of research findings fail to reproduce when somebody else runs a similar study, meaning that they are likely false.)


Both studies consistently find that since the widespread adoption of generative AI (starting in early 2023), junior workers have experienced a relative decline in employment compared to senior workers:


  • The Stanford paper states that “early-career workers (ages 22–25) in the most AI-exposed occupations have experienced a 13% relative decline in employment even after controlling for firm-level shocks.”

  • The Harvard paper, using U.S. résumé and job posting data, confirms that “junior employment in [AI] adopting firms declined sharply relative to non-adopters, while senior employment continued to rise,” and that “The junior decline is driven primarily by slower hiring rather than increased separations.” AI-adopting firms reduced their junior hiring relative to non-adopters by approximately 22% of their average hiring volume before 2023Q1.


The difference between the two estimates of hiring drops stems from the different methodologies: the Stanford study classified hiring by occupation, whereas the Harvard study classified companies by AI adoption. I believe the Harvard estimate (the higher of the two numbers) is a more realistic measure of the impact of AI on employment opportunities for young people, because the Stanford figure would include hiring in occupations that are prime candidates to be handled by AI but at companies that have not yet adopted AI.


Why the drop in hiring of young people, given that the two studies find steady or increasing employment of old people?


In the past, companies gained 3 benefits from hiring young employees:


  1. Young people have higher fluid intelligence (the capacity for inductive reasoning, including getting fresh ideas from raw data) than older people, because fluid intelligence peaks at age 20 and then declines as the brain decays with age. This makes them better at ideation and fresh insights.

  2. Young people are more likely to break with “the way things have been done in the past,” because they have no allegiance to the traditional ways. This makes them more suitable for AI-native or AI-first companies that aim for revolutionary change in business processes.

  3. Young people have updated theoretical knowledge, since they are fresh from university.


Given these advantages of junior staff, why hire any old people? Because they have higher crystallized intelligence (supporting deductive reasoning based on having seen things before) and practical knowledge about how to make things work. This means that you can put them to work and expect to get results, whereas young staff need on-the-job training to become productive.


Old and young employees both have their advantages, which is why companies used to hire both junior and senior staff.


Why is AI changing this picture? AI is reducing the value of the young people’s strengths while increasing the value of the old people’s strengths:


  • Fluid intelligence and updated theoretical knowledge are now on tap for free from the leading AI models. Ideation is free with AI, which generates new ideas faster than anyone can judge them.

  • Rather than conferring cutting-edge knowledge, recent university degrees are close to worthless, because universities persist in teaching old material rather than becoming AI-First education programs.

  • Old people can use AI to compensate for their reduced fluid intelligence.

  • The 3 main job skills for the AI age (agency, judgment, and persuasion) all function better with experience, practical knowledge, and an understanding of the organization.


Right now, the conclusion is clear: the value balance has tipped in favor of senior staff, which is why fewer young people are being hired.



Two detailed studies show the same: AI shifts the value of younger and older staff in favor of more experienced employees, resulting in fewer job opportunities for new graduates. (GPT Image-1)


What about the future, though? Three of the four bullet items will only become more powerful as AI advances. It’s theoretically possible that universities will modernize, but they have incredibly heavy institutional inertia, so they may not be able to do so quickly enough to restore the value of college degrees.


Thus, over the next 10 years, I expect even worse employment prospects for young staff, except for the very best ones and people who have secured AI degrees.


What about the longer term? Senior staff consists exclusively of individuals who were formerly junior staff. I used the phrase “old people” above, but in truth, chronological age by itself is worthless: somebody who passes the years without gaining real business experience doesn’t become an attractive senior professional to hire.


Thus, if we don’t bring in fresh junior staff, we will eventually run out of senior staff. Is there any solution to this conundrum? It’s hard to imagine companies hiring less suitable staff to grow talent that will work at other companies in the future. Maybe junior staff will have to pay companies the equivalent of a tuition fee (instead of receiving a salary) while they essentially serve in an apprenticeship.


Currently, the best estimate is that AI is reducing job opportunities for new graduates by approximately 20%. I would not be surprised if this percentage were to double over the next two years and reach 40% for the 2027 graduates who will hit the job market at the same time as AGI. Then the rate will likely double again over the subsequent three years, meaning 80% fewer jobs for anyone graduating in 2030 when we achieve superintelligence.


What should we advise a high school senior who, following the traditional track, would graduate in 2030? I honestly don’t know. The best scenario would be to abandon the legacy education establishment immediately and start gaining work experience, so that the person would be a senior staff member by 2030, rather than vying for one of the few entry-level positions that will remain by then. Unfortunately, this is not realistic in most cases, as it is currently almost impossible for a high school dropout to secure the type of job that will provide valuable experience over the next five years.


Founder Mode

I wrote an article, “Design Leaders Should Go Founder Mode,” about, well, what it says in the title. (The highest-usability titles for online content simply state in compressed form what the piece is about.)


A summary in case you don’t have time to read the full article 😊


Founder Mode vs. Manager Mode:

  • Founder Mode: Leaders maintain direct involvement across all levels, own the product vision, make fast decisions, and engage directly with teams regardless of hierarchy. They focus on product-market fit and maintain intimate connection with operational details.

  • Manager Mode: Leaders operate through formal hierarchies, delegate extensively, interact primarily with direct reports, and maintain professional distance, separating them from the product and the market. Decision-making is slower, flowing through chains of command.



Realistic view of “Manager Mode” vs. “Founder Mode,” as well as an allegory of the two leadership modes. Founder mode is becoming more realistic for more organizations as AI increases efficiency and reduces the number of management levels. (GPT Image-1)


I have long argued that AI is “pancaking” the UX profession, because when 10 AI-enabled designers can do the work of 100, large hierarchies become obsolete. Design leaders must adapt by:


  1. Owning the design vision directly rather than delegating through layers.

  2. Redefining product-market fit where design is the “product,” and the rest of the organization is the “market.”

  3. Avoiding the trap of being excellent at one design component but failing at holistic leadership.


Whether or not you agree with me about the need for design leaders specifically to embrace “founder mode,” it’s an important concept that drives much success in modern business. “The Social Radars” podcast recently published 4 interviews that shed more light on founder mode with case studies.



I recommend watching all four: a little more than an hour to gain great insights into one of the most important business developments. Hearing these stories may give you a newfound appreciation for my recommendation and help you pivot your career before it’s too late.


However, if you only have time for one video, Jake Heller’s is the most interesting from a product design perspective, because he discusses how he drove the complete reconceptualization of a legal software product by gaining confidential prerelease access to GPT-4 and realizing that AI would fundamentally change the legal profession. (He was a lawyer himself before starting the company.)


AI-First Designer School

ADPList has launched the AI-First Designer School, comprising a 6-week course series, a private community, and a training hub built from over 100 hours of ADPList keynotes, webinars, and real-world playbooks to help individuals transition into an AI-first career.

In classic ADPList fashion, this mix of training and community is dirt-cheap, with an introductory price of US $79. (Even the later price of $199 will still be a good bargain for pivoting your career for the AI age.)



New cheap resource for becoming an AI-First Designer. (GPT Image-1)


Secondary Research: Standing on the Shoulders of Giants Without Getting a Crick in Your Neck

“Secondary research” is a fancy name for mining existing studies, reports, and published findings rather than conducting your own primary research. It is one of the most underutilized weapons in the UX arsenal. While countless design teams rush headlong into expensive primary research studies, the savvy practitioner knows that somebody, somewhere, has probably already answered half your questions. The trick lies in finding their work and extracting the nuggets of applicable wisdom without drowning in irrelevant academia.



Before initiating expensive user research, embark on a fishing expedition to see if relevant research findings already exist. The leading AI models’ Deep Research tools are ideal for this kind of deep dive, to mix metaphors. (GPT Image-1)


If you want to know if users understand iconography, how they read on the web, or the acceptable limits of response times, you don’t need to recruit 15 new participants and fire up the eyetracker. The answers, or at least most of them, already exist. Leveraging this existing knowledge is called secondary research, and it is the most frugal first step in any intelligent design process.


Secondary research isn’t second-rate research. It is simply the utilization of data, insights, and reports from studies somebody else has already conducted and published. It is the art of insight-inheritance.


Let’s be clear about definitions. Primary research involves you directly observing users, running usability tests, conducting interviews, or deploying surveys. You control the methodology, recruit the participants, and own the resulting headaches. Secondary research, by contrast, involves findings from past research, leveraging the blood, sweat, and statistical tears that others have already shed. You’re doing research recycling instead of blank-slate blundering: the mistaken belief that every project must start from zero.


The fundamental advantage of secondary research is its spectacular ROI. While a proper usability study can cost tens of thousands of dollars and weeks of effort, a competent literature review can be accomplished in days for the price of a few database subscriptions and copious amounts of coffee. (Or a few minutes using Deep Research, if you have an upgraded AI subscription.) You’re not just saving money; you’re saving yourself from the seventh circle of recruitment hell and the inevitable participant no-shows that plague primary research.


Moreover, secondary research provides something primary research rarely can: breadth. When you conduct your own study with a few participants, you’re seeing a narrow slice of human behavior: enough to debug your design in preparation for the next iteration, but that’s all. When you synthesize findings from twenty different studies, each with its own participants (often from different countries), you’re suddenly working with insights from hundreds of users: a sample size that would bankrupt most design departments faster than you can say “incentive payments.”



Secondary research draws upon insights across many different methodologies and participant samples, allowing for more generalizability (but less specificity) than any study you could afford to run yourself. (GPT Image-1)


Being a scholarship scavenger works particularly well for understanding established patterns and general human behaviors. When designing an e-commerce checkout flow, avoid wasting resources by rediscovering that users often dislike creating accounts before making a purchase. This finding has been documented, validated, and beaten to death in countless studies since the dawn of online shopping. Accept the received wisdom and move on to more pressing mysteries.


Finding quality secondary research requires developing what we might call “academic archaeology” skills. Start with Google Scholar, not regular Google: this makes the difference between finding solid research and finding someone’s Medium post titled “10 UX Trends That Will Blow Your Mind” (spoiler: they won't). Industry associations, government databases, and university repositories represent goldmines of data, though you’ll need to develop a strong relevance radar to separate applicable findings from academic exercises in statistical gymnastics.



Glitzy blog postings with nice illustrations (like mine) may attract more attention, but they are rarely as useful as true research studies when collecting sources for secondary research. (GPT Image-1)



I caution against treating peer-reviewed papers as a gold standard. When I was a scientist, I wrote numerous peer reviews and benefited from the wise comments of many anonymous referees, who helped me improve my papers. But the process often privileges overly narrow research that can impress highly specialized referees even when it is useless. As long as a study comes from a credible source, I would not make peer review the deciding factor in how much weight to give it in your secondary research review. (GPT Image-1)


Nikki Anderson (a UXR consultant) recently posted a very useful list of the 25 sources she uses the most for secondary user research, complete with one-click links to each resource.


The key to utilizing secondary research effectively lies in critical evaluation. Not all studies are created equal, since most suffer from methodology myopia, where researchers become so enamored with their clever experimental design that they forget to ask useful questions. Look for studies with clear participant demographics, reasonable sample sizes, and methodologies that don’t require a PhD in statistics to understand. If a study’s limitations section is longer than its findings, proceed with caution.



The goal of secondary research is not to collect as many useless facts as possible in one report. Making sense of useful studies should be your goal. (GPT Image-1)


The chief drawback of secondary data is its potential lack of specific relevance. The data wasn’t collected specifically for your project. It might be slightly dated, the participant pool might not perfectly match your target demographic, or it might not answer the highly specific questions you have about your unique interface design.


The tradeoff is clear: Secondary research is broad, cheap, and fast, but potentially less relevant. Primary research is narrow, expensive, and slow, but highly relevant.


The debate isn’t Primary versus Secondary. It is the sequence that matters. You should almost always use both, but the order is crucial for efficiency.


The intelligent approach is to exhaust secondary sources first. Use secondary research to understand the knowns. What are the established best practices for e-commerce checkout? What are the accessibility challenges inherent in dropdown menus?


Smart teams practice research reconnaissance: they start with secondary research to map the terrain, then conduct targeted primary research to explore the specific valleys and peaks relevant to their project. This approach transforms primary research from a fishing expedition into a surgical strike.


Synthesize, don’t just summarize: The goal is not to collect disparate facts. The goal is to synthesize the information from secondary research into actionable insights and coherent design principles relevant to your project. The better AI tools, such as Gemini Deep Think or GPT 5-Pro, are great at this step: upload all the papers, reports, and web articles you deem relevant and ask AI to synthesize across them. (Once you have a shortlist of synthesized insights, do check back with the original sources, but you can save immense time by having AI review them and write the first draft of the synthesis.)
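If you prefer to script this synthesis step rather than paste sources into a chat interface, the sketch below shows one way it could look. It is a minimal illustration only, assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the file names, model name, and prompt wording are hypothetical placeholders, not recommendations from this article.

```python
# Minimal sketch: synthesize several secondary-research sources into a short list of insights.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
# File names and the model name below are hypothetical placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Plain-text versions of the sources collected during secondary research.
source_files = [
    "checkout_abandonment_study.txt",
    "mobile_banking_report.txt",
    "security_ux_review.txt",
]
sources = [f"SOURCE {i + 1}:\n{Path(name).read_text()}" for i, name in enumerate(source_files)]

prompt = (
    "Synthesize the following UX research sources into 5-10 actionable insights for our project. "
    "For each insight, note which source(s) support it, and flag any points where the sources disagree.\n\n"
    + "\n\n---\n\n".join(sources)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your subscription provides
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Either way, treat the AI output as a first draft of the synthesis and check the shortlisted insights against the original sources before acting on them.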



Synthesize across the findings from the individual secondary research studies to focus on the generalized insights of value to your project. Don’t just list all the findings, since most will be irrelevant. (GPT Image-1)


Consider this scenario: you’re redesigning a mobile banking app. Secondary research quickly reveals universal truths about financial anxiety, security concerns, and the cognitive load of financial decision-making. Armed with these insights, your primary research can focus on the specific quirks of your user base: perhaps how small business owners in rural areas use mobile banking differently than urban millennials. You’re not wasting precious user research sessions rediscovering that people care about security; you’re uncovering the nuanced ways your specific users manifest that universal concern. Knowing what to look out for saves a great deal of time in your own studies.


The secondary-first, primary-second sequence may seem a numerical mismatch, but it is the way to go.


Secondary research also provides invaluable ammunition for stakeholder conversations. When executives question your design decisions, you can cite both broad industry research and specific findings from your targeted studies, forming a one-two punch of credibility that beats “I think users will like it” every time.


The goal isn’t to eliminate primary research but to make it count. By standing on the shoulders of previous researchers, you can see further while spending less. That’s not laziness; it’s efficiency. And in the resource-constrained reality of modern UX practice, efficiency isn’t just smart; it’s survival.


Avatar Animation: HeyGen Avatar IV vs. Wan 2.2

I made a short video comparing two leading AI video tools, animating the same avatar lip-syncing the same song. (YouTube, 1 min.)


I think HeyGen did better than Wan in this small case study, but then HeyGen’s avatar animation is fairly expensive, whereas the Chinese Wan model is open source and can be used for free on the Wan website for small projects.


The demo uses a short clip from my full music video on AI Going Mainstream and Crossing the Chasm. (YouTube, 4 min.)




Avatars are becoming increasingly realistic every month, particularly in their movements and voice acting. Still, they are ultimately only actors, with the agency behind the video coming from a human director. (Ideogram)
