
Transformative AI Changes the Future of Work and Firms

  • Writer: Jakob Nielsen

Summary: Transformative AI (TAI) is AI that fundamentally changes the economy, increasing productivity at an unprecedented rate and likely completely changing (if not eliminating) human work and companies as we know them.

 

According to the world’s leading expert on the impact of information technology on economic growth, Stanford professor Erik Brynjolfsson, Transformative AI (TAI) is AI that fundamentally changes the economy, quantified as increasing productivity at least 5x faster than the pre-AI economy. This transformation is expected to begin between 2028 and 2033.


For a quick summary of this article, watch my 8-minute explainer video on Transformative AI.


The National Bureau of Economic Research (NBER) recently hosted a workshop on Transformative Artificial Intelligence. This was not merely an academic exercise; it was a preemptive autopsy of the 21st-century knowledge worker. The consensus among these esteemed economists is chillingly simple: the world of work, the fundamental structure of the firm, and the very fabric of our shared reality (information) are all slated for radical restructuring.



Next-generation AI will completely transform the economy, including the nature of work and firms. You ain’t seen nothing yet. (GPT Image-1)


Most discussions of AI relate to its current performance: for example, can I make avatar music videos that appeal more to the audience than, say, Taylor Swift’s music videos? (Currently, the answer is no. But 10 years from now, maybe individual creators will be able to equal big-budget mainstream productions.)


More generally, the question of Transformative AI is how the economy will change after we get superintelligence (around 2030) and companies finally redesign their workflows to take full advantage of these new AI capabilities. The TAI workshop considered several scenarios for AI, once it moves beyond the current incrementalism to become truly transformative for the economy.



Don’t just do the same tasks more efficiently with AI. That’ll limit you to around 2x productivity gains. Redesign your workflow to be fully AI-native. That’s how you get 10x gains. (GPT Image-1)


Professor Luis Garicano from the London School of Economics posted an extensive summary (5,225 words!) of the workshop, and many of the full papers are available at the link provided in the previous paragraph. My discussion of transformative AI draws from these sources, but as always, don’t blame Prof. Garicano or the workshop speakers if my interpretation is off.


As a UX professional, your job is, by necessity, dealing with fuzzy edges, incomplete preference sets, and the deep, messy complexity of human behavior. You may feel safe from AI exceeding you in these skills. Get ready, because the economists have handed us a set of new blueprints for reality, drawn up with terrifying precision. To navigate this coming epoch, we must first understand the new language of work and value. TAI introduces economic dynamics that treat human effort like a resource perpetually nearing obsolescence.


Part I: Genius on Demand: The New Calculus of Labor

For years, we discussed automation sweeping away manual, routine labor. TAI, however, aims squarely at the intellectual elite: the analysts, the diagnosticians, and the creators. That is to say, you, Dear Reader. The paper presented by Agrawal, Gans, and Goldfarb introduces the concept of The Genii Shift, examining how the sudden appearance of unlimited, machine-driven “genius on demand” changes the allocation of cognitive capacity.



We’ll all have a genie at our command with higher intelligence than any human who ever lived. The question in this article is how those genies will change the economy, work, and companies. (GPT Image-1)


Defining the Intellectual Hierarchy

To analyze this transformation, the economists neatly segment knowledge workers into two camps:


  1. Routine Workers: These folks apply existing knowledge. They are skilled practitioners; think paralegals applying precedent or software developers implementing known algorithms. Their value decreases linearly as problems move away from established knowledge, and they cannot function where uncertainty exceeds a set threshold.

  2. Genius Workers: These are the innovators, the experts, or the creative specialists who generate new knowledge. They can solve any problem, but the cost of generating that novel insight increases quadratically with the problem’s distance from known knowledge.


Before TAI crashed the party, human geniuses (less than one percent of the population) were allocated by managers to the questions where they provided the greatest additional value relative to routine workers. This is known in economics as comparative advantage. Crucially, the optimal strategy was to assign them not to the easiest, most known problems, but to the domain boundaries: the places where uncertainty was so high that routine workers would simply abstain. It’s a bit like allocating your single master plumber to the most bizarre pipe arrangement in the city, rather than the simple sink leak that any junior apprentice could handle.
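This allocation logic can be sketched in a few lines of code. The cost functions below are my own illustrative assumptions (the Agrawal, Gans, and Goldfarb paper uses its own functional forms), but they capture the stated shapes: routine cost rises linearly until an abstention threshold, genius cost rises quadratically but never hits a wall.

```python
# Toy model of pre-TAI labor allocation (illustrative assumptions only;
# the actual paper specifies its own functional forms).

def routine_cost(distance, threshold=0.5):
    """Routine workers apply existing knowledge: cost rises linearly with a
    problem's distance from known knowledge, and they abstain entirely
    beyond an uncertainty threshold."""
    if distance > threshold:
        return float("inf")  # too uncertain: the routine worker abstains
    return 1.0 + distance

def genius_cost(distance):
    """Geniuses can solve anything, but generating novel insight costs
    quadratically more with distance from known knowledge."""
    return 2.0 + 10.0 * distance ** 2

# Allocate each problem to whoever solves it more cheaply.
problems = [0.1, 0.4, 0.6, 0.9]  # distance from established knowledge
for d in problems:
    worker = "routine" if routine_cost(d) <= genius_cost(d) else "genius"
    print(f"distance {d:.1f} -> {worker}")
```

Running this assigns the two near-core problems to routine workers and pushes the scarce geniuses out past the abstention threshold, exactly the boundary allocation the paragraph describes.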


Short-Run Specialization

When AI geniuses (modeled as having similar capabilities but differing in efficiency) suddenly arrive in an effectively unlimited supply, the labor market experiences rapid specialization.


In the short run, before organizations can fully rewire their allocation rules (a concept economists call organizational rigidity), human geniuses are immediately pushed to the questions furthest from existing knowledge. This happens because the human geniuses maintain a comparative advantage (the relative benefit they offer compared to AI) that increases the more novel and challenging the problem is. The human genius becomes the Ultra-Specialist, tackling the truly unprecedented; the AI genius handles the moderately difficult questions; and the routine worker, at least initially, clings to their established patch near the core of known knowledge.



The last human knowledge workers will be ultra-specialists who handle edge cases in unexplored territory where the map stops working. (GPT Image-1)


Long-Run Displacement: Enter the “Cognitive Fugue”

The real shockwave hits in the long run, maybe 5 to 10 years from now, when managers fully reoptimize the system. The research shows that if AI genius efficiency is high enough (sufficiently close to human efficiency), routine workers will likely be completely displaced. Why? Because the highly capable AI can perform routine work (applying knowledge near the center) and handle difficult work (generating new knowledge at the edges). Once abundant AI is deployed, routine work vanishes, or perhaps survives only in a “thin outer ring” of tasks where the AI is comparatively inefficient for the time being. (Of course, in 20 years, which is only half of the hoped-for career duration of today’s fresh graduates, AI will have moved far beyond the simpler superintelligence we’ll get in 2030, and so by 2045, it’s hard to envision any jobs that humans will be able to do better than AI.)


This leads us to the grim prognosis from Pascual Restrepo’s work: The Cognitive Fugue.

Restrepo’s model addresses an economy where Artificial General Intelligence (AGI) makes it feasible to perform all economically valuable work using compute. He introduces a crucial dichotomy that defines the future of human purpose:


  1. Bottleneck Work: Work essential for unhindered growth. If this work doesn’t scale, the entire economy stalls or its price inflates infinitely.

  2. Accessory Work: Work that is non-essential to growth. Output can expand indefinitely, even if this work is discarded or limited. Think arts, hospitality, or perhaps even (ouch) academic economics.


Focus on eliminating the slowest-moving actor that forms a bottleneck, preventing a process from scaling or becoming more efficient. (GPT Image-1)


Restrepo’s core finding is that all bottleneck work is eventually automated. Output growth becomes linear in compute. Human labor, while still employed (perhaps doing that accessory work), sees its value fixed. Why? Because the human worker is paid only the value of the scarce computational resources they save by performing a task. (And compute is getting cheaper very fast, as evidenced by the recent announcement of Grok 4 Fast on September 19, 2025: This new model is about as good as Grok 4 but only costs $0.20 per million tokens: a 15x improvement over Grok 4, which was launched on July 9, 2025 and initially cost $3.00 per million tokens.)

 

The consequence is dire for the traditional wage structure: The share of labor income in GDP converges to zero. The economic contribution of the human worker becomes “vanishingly small.” In the era of AGI, humans may still perform a bit of low-value accessory work, and we will maintain positive wages (because compute remains scarce for now), but, economically, “we won’t be missed.” It is a world where work exists, but its meaning is completely divorced from economic necessity.


The Adaptive Buffer: Resilience and Vulnerability

The shift toward the Cognitive Fugue is deeply unfair in its distribution. Manning and Aguirre introduced an Adaptive Buffer index to measure workers' capacity to handle displacement, incorporating factors such as:


  • Net Liquid Savings: Financial buffers to weather income shocks.

  • Skill Transferability: How easily a worker’s current skills can be used in other, potentially growing, occupations.

  • Geographic Density: Concentration of employment opportunities.

  • Age: Older workers often face higher costs if displaced.


Surprisingly, the data shows a positive correlation: many highly exposed occupations (like programmers and specialized analysts) also contain workers with higher Adaptive Buffer scores. They have the savings and transferable skills to navigate the Genii Shift.


However, the economists found a crucial and vulnerable segment: 7.2 million workers (5.3% of the U.S. workforce) are in occupations with high AI exposure and low adaptive capacity. These are concentrated in administrative support, clerical, and assistance roles. These workers face a double whammy: technological replacement combined with limited personal resources to cushion the fall.


Enterprise Firms Under Siege

If the nature of work is changing, the structure designed to organize that work (the firm) must change too. The workshop focused heavily on two complementary forces reshaping the modern corporation: the drive toward centralization and the total collapse of transaction costs.


The Centralization Conundrum: Hayek’s Hammer

For decades, organizational theory has rested on the wisdom of Friedrich Hayek, who argued for decentralized decision-making. Hayek’s argument was essentially bound by two limitations on central planning:


  1. Tacit Knowledge is Inalienable: Local, contextual knowledge cannot be codified and transferred to a central authority.

  2. Processing Capacity is Bounded: Centralized headquarters (HQ) simply cannot process the vast flow of information necessary to make optimal global decisions.


Brynjolfsson and Hitzig argue that TAI wields Hayek’s Hammer, destroying both constraints. AI makes local knowledge alienable through the digitization of explicit knowledge, the codification of tacit knowledge, and the discovery of machine-native knowledge. Simultaneously, TAI increases centralized information processing capacity, acting as an “expansion of working memory” for corporate HQ.


Under economic models of incomplete contracting (where contracts can never specify all possible future scenarios), this information control becomes decision-making power. If HQ can centrally capture all the dispersed knowledge, from the tacit wisdom of the local sales associate to the context of the regional manager, the organization will naturally shift toward centralization because it becomes the optimal decision-making architecture.

The counter-argument is that AI might instead democratize specialized knowledge. If TAI gives every local entrepreneur access to the centralized expertise (marketing, finance, strategic planning) once reserved for HQ, the result might be radical decentralization. The current empirical evidence, however, leans toward rising concentration across sectors like retail, finance, and utilities.



It is currently unknown whether transformative AI will serve as a centralizing or decentralizing force. My guess is that AI foundation models themselves will be centralized and only offered by a handful of firms, since each will cost close to a trillion dollars in training compute. However, the applications of TAI (which is what we’re discussing here) will serve as a decentralizing force, since the utility from those trillion-dollar investments will be available to anyone with a $100 subscription. (GPT Image-1)


For existing firms, the real economic gains that dramatically increase Total Factor Productivity (TFP) come not from applying AI to existing processes, but from completely reconfiguring the entire process. As one workshop discussant put it, successful but rigid firms face an existential threat: their past success reinforces an inertia that prevents the radical redesign necessary to exploit TAI.



To gain full value from AI, redesign the full process to be AI Native. Don’t just patch individual steps or features. Radical redesign wins the day, but it is hard for legacy companies to embrace because their existing processes are what made them successful. (GPT Image-1)


The Coasean Singularity: Agents as Market Movers

Perhaps the most visceral shift for designers concerns the nature of economic interaction itself, driven by AI Agents. An AI agent is an autonomous software system that can perceive, reason, and act in digital environments to achieve goals on behalf of a human principal. They are capable of executing complex tasks over long time horizons with little direct oversight.



Negotiating the contract with an external vendor adds substantial overhead to any outsourcing solution. In the future, each party’s AI agent will handle negotiations instantaneously, eliminating friction and transaction costs associated with using a market-based solution instead of an in-house solution for any given problem, no matter how small. (GPT Image-1)


Shahidi, Rusak, Manning, Fradkin, and Horton call this dramatic decrease in friction The Coasean Singularity. The foundation of the theory of the firm, posited by Ronald Coase in 1937, suggests that firms exist because the transaction costs of using the external market (finding prices, negotiating, writing contracts, monitoring compliance) are sometimes higher than the internal costs of running a bureaucracy.



Ronald Coase’s theory of the firm says that companies exist to the extent that the transaction costs of using the market (finding solutions, bargaining for the deal, monitoring delivery, and enforcing the agreement) are greater than the organizational costs of doing things internally with employees and the associated management overhead. Overhead grows nonlinearly, placing limits on how big companies can grow. (GPT Image-1)


AI agents attack these very transaction costs. An agent can perform search, negotiation, and monitoring at a near-zero marginal cost, making the market the cheaper, more efficient way to organize most activities. This fundamentally threatens the established “make-or-buy” boundaries of the firm. Possible implication: enterprise IT departments will be cut to 10% of their current size over the next 10 years.
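Coase’s make-or-buy condition boils down to a single comparison. A minimal sketch with hypothetical numbers shows how driving transaction costs toward zero flips the same decision from "make" to "buy":

```python
# Coase's make-or-buy condition as a sketch (hypothetical numbers):
# use the market when the all-in market cost undercuts the internal cost.

def make_or_buy(internal_cost, market_price, transaction_cost):
    """Return 'buy' (use the market) when external price plus transaction
    costs beat the cost of doing the work in-house; else 'make'."""
    return "buy" if market_price + transaction_cost < internal_cost else "make"

# Today: finding, negotiating with, and monitoring a vendor is expensive.
print(make_or_buy(internal_cost=100, market_price=80, transaction_cost=40))

# Coasean Singularity: AI agents push transaction costs toward zero,
# flipping the identical decision toward the market.
print(make_or_buy(internal_cost=100, market_price=80, transaction_cost=2))
```

The first call prints "make" (80 + 40 > 100); the second prints "buy" (80 + 2 < 100). Nothing about the underlying work changed, only the friction of using the market.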



Make vs. buy has always been a strategic decision for enterprise IT bosses. Often they get a warm fuzzy feeling from having their in-house staff make a solution, but once it becomes virtually free to outsource, the balance will change for these decisions, even though some old-school CIO gorillas may have to be removed from office first. (GPT Image-1)



Microcontracts become feasible once transaction costs for establishing and monitoring such contracts drop to zero after they are handled by AI agents. Anything you need done that’s not a strategic core function will be outsourced to the best provider for that microtask. (GPT Image-1)


The demand for agents is derived demand: humans deploy them either to optimize decisions they would otherwise botch due to cognitive constraints, or to make similar-quality decisions at dramatically reduced cost and effort. This is the efficiency paradox in action: you don’t care about the agent compiling price lists for gas grills; you care about the outcome.


Agent Archetypes and Alignment. The supply of agents is categorized along two dimensions:


  1. Ownership:

    • Bring-Your-Own Agent: User-controlled, portable, better aligned with user preferences, but may be outperformed by platform-specific alternatives.

    • Bowling-Shoe Agent: Platform-operated, integrated with proprietary data and tools, offering convenience and speed, but posing risks of steering (directing users to platform-preferred options) and lock-in.

  2. Specialization: Horizontal (generalists) versus Vertical (narrow domain specialists).


The critical economic design challenge remains the alignment problem: ensuring the agent honors the principal’s preferences, particularly when the principal cannot fully or consistently articulate those preferences. Agents are already demonstrating the ability to infer and even facilitate preference discovery.


The Risk of “Robot Rip-Off Hell.” Widespread adoption, however, does not guarantee efficiency. The low cost of action leads to severe congestion (we’re already seeing job markets flooded by countless AI-generated applications). Furthermore, the anonymity and superior negotiation capacity of agents create a race to the bottom, where identity fraud, spam, and strategic obfuscation may make markets worse for human participants. This necessitates new legal infrastructure around liability, identity verification (such as “proof-of-personhood” systems), and agent-first APIs (Application Programming Interfaces) to manage traffic and consent.



Proof of personhood (or here, proof of parrothood) will become important. (GPT Image-1)


The Blurring of Reality and the Information Ecosystem

The third major theme is perhaps the most fundamental: TAI is fracturing our information ecosystem, making it increasingly difficult to distinguish truth from profitable fiction, and blurring the lines between economic value and time savings.


The Four Horsemen of Information Decay

The research by Stiglitz and Ventura-Bolet begins by acknowledging that the information economy was already sub-optimal because information is a classic public good. A public good is defined by non-rivalry (my watching of a video doesn’t prevent you from also consuming it) and non-excludability (it’s hard to stop people from using it once produced). Producers of valuable information (like investigative journalism) cannot fully capture the social value they create, leading to a tendency toward undersupply of truthful information.

AI exacerbates this precarious structure through four adverse channels:


  1. Efficiency Paradox: AI improves the efficiency of processing and transmitting information, but this efficiency benefits low-quality content just as much as valuable content.

  2. Business Model Erosion: AI endangers the creators’ business model by reducing direct visits. By synthesizing information without clear attribution, AI reduces the incentive for private producers (like legacy media) to acquire and process accurate, timely, and reliable content, leading to a diminution in supply.

  3. Cheaper Lies: AI drastically lowers the cost of producing untruthful information, making it very cheap to flood the ecosystem with misleading content.

  4. Impaired Screening: AI alters the ability of consumers to screen information from disinformation. AI can be used both to generate untruthful information that is harder to detect and to improve detection. The authors term the conflict between malicious content creation and detection The Drone War, expressing the concern that the former force may dominate. (I am more of an optimist: I think AI will be able to find valuable information for users if we place economic value on such a feature.)


The Research Bottleneck: The Limits of Genius

The discussion on information flows naturally transitions into the highest-stakes information pursuit of all: Research and Development (R&D). Benjamin Jones’ work applies the task framework to R&D, asking how massive increases in machine intelligence translate into accelerated progress.


Jones focuses on three determinants of AI’s impact on R&D acceleration:


  1. Task Share: The fraction of research tasks that AI can perform.

  2. AI Quality: The average productivity of AI at those tasks.

  3. Bottlenecks: The strength of complementarities between tasks.


The core concept here is the Bottleneck Parameter. If tasks in a research process are highly complementary, progress in the entire system is held back by the weakest link. This is like saying human health is not the simple average of all your organs; a single failing organ can kill you. Extreme gains in one area, such as a burst of Concentrated Genius, will be severely muted by the remaining constraints.
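The weakest-link logic can be illustrated with a CES-style aggregator (a toy formulation of my own, not necessarily the paper’s model). With substitutable tasks, a 100x "Concentrated Genius" burst in one task lifts the aggregate dramatically; with strongly complementary tasks, it barely moves it:

```python
# Bottleneck illustration: under strong complementarity between R&D tasks,
# overall progress tracks the weakest task, not the average.
# (A CES aggregator of my own choosing; the workshop paper's model may differ.)

def progress(task_rates, rho):
    """CES aggregate of per-task productivity. rho near 1 behaves like an
    average (tasks are substitutes); rho strongly negative behaves like a
    minimum (tasks are complements)."""
    n = len(task_rates)
    return (sum(x ** rho for x in task_rates) / n) ** (1.0 / rho)

# One task gets a 100x burst of machine genius; the other three do not.
tasks = [1.0, 1.0, 1.0, 100.0]

print(round(progress(tasks, rho=1.0), 2))    # substitutes: 25.75, a big boost
print(round(progress(tasks, rho=-10.0), 2))  # complements: 1.03, barely moves
```

This is the Moore’s Law story in miniature: a massive gain in one task (computation) left overall discovery roughly as fast as its slowest remaining steps.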


We have seen this before: Moore’s Law produced a massive increase in computational productivity for tasks like statistical work, but it did not cause a similar explosion in overall scientific discovery. Even Nobel Prize-winning tools like AlphaFold, which solved the protein folding problem, face upstream and downstream bottlenecks in the broader process of drug development.


Jones concludes that for TAI (defined quantitatively as a 10x acceleration in the rate of progress), the automation share is more critical than raw intelligence. We don’t just need highly intelligent AI; we need AI that can automate a substantial percentage (not necessarily all, but definitely most) of the human and non-AI tasks in the R&D pipeline to break the bottlenecks.


The Measurement Nightmare: Quantifying the Unquantifiable

Finally, the workshop addressed the inconvenient truth that our existing tools for measuring economic activity are spectacularly ill-suited to capture TAI’s value. Coyle and Poquiz describe this as the Measurement Gap.


The central difficulty stems from two factors:


  1. Zero-Price Output: Many of AI’s most widely used services are provided at zero monetary cost (e.g., the free tiers of large language models), meaning they are not recorded as market transactions in official statistics like GDP. This obscures their enormous consumer benefit, as evidenced by the unprecedentedly fast consumer adoption of AI. A personal example: I was always terrible at drawing, so I used to write boring, text-only articles. Now, however, AI allows me to include funny cartoons in my articles, pulling readers in. What’s the value of my 25,000 subscribers receiving more entertaining newsletters? Probably much more than the $300 per year I pay for my Freepik subscription. (And many people stick with its free level because they only need a couple of cartoons: these people still get value from being able to do something they could never do before, even if they don’t pay.)



AI drives down the price of many products and services, which looks bad in traditional GDP statistics, even though users enjoy a substantial consumer surplus from improved AI. (GPT Image-1)


  2. Intangible Quality Jumps: TAI is rapidly embedded into existing software (like Microsoft Office) or services (like healthcare diagnostics). These represent massive quality improvements in accuracy, speed, and customization, but traditional price deflators and output measures fail to capture this value. Since AI is often measured using input cost or revenue data, advances due to TAI are systematically underestimated. For example, one of the most beautiful advantages of clinical AI is the ability for patients in rural areas of poor countries to suddenly receive diagnoses from expert medical specialists, in the form of an AI that’s (a) many times cheaper than a non-specialized human doctor and (b) available where they live without requiring a multi-day trek to the capital for an appointment with a specialist they couldn’t afford in the first place. Millions of people will be cured, but they will pay less for healthcare than before, making AI healthcare look bad in GDP numbers.


The economists stress that AI changes processes more than inputs. This is especially true for knowledge workers, where AI frees up time from routine cognitive tasks (like data cleaning or drafting). Quantifying this effect requires new metrics focused on:


  • Time Savings: Capturing the time freed up from routine tasks.

  • Task-Based Metrics: Moving beyond job titles to measure time devoted to tasks.

  • Household Productivity: Valuing the impact of household robots and AI-enabled home production, where capital returns (as opposed to labor wages) will increasingly define value.


Ultimately, the transition may lead to the paradox of efficiency, where measured GDP declines even as economic welfare increases because AI eliminates previously necessary, inefficient, labor-intensive activities (like scheduling or administrative bureaucracy). The real impact of TAI, therefore, will be felt and measured in our time and the quality of our outcomes, rather than in traditional economic aggregates.


The 1,000x Company

Combining the insights from the economics workshop discussed here with some of my previous analyses leads me to believe that we are likely to see companies that are a thousand times more efficient in twenty years. This means that a 10-person company in 2045 will be able to accomplish what currently takes a 10,000-employee firm.



A 10-person company in 2045 will outweigh today’s 10,000-employee behemoths. (GPT Image-1)


I do think this change will take twenty years, mostly because it requires substantial organizational change (which is always slow) and also more progress in AI agents than we’re likely to see when “regular” AI (that only has to think, not act) reaches superintelligence around 2030. Note that even though 20 years seems a long time, it’s less than half of the expected career lifespan of today’s entry-level staff, so they’ll spend the second half of their careers in this scenario.


The 1,000x improvement in company performance stems from three individual advances, each likely to contribute about a 10x gain. Note that these three changes are multiplicative, not additive: 10×10×10 = 1,000.


  1. Workflow productivity from using AI: The first 10x. This is beyond the productivity increases at the task level, which may range from 100% to 200%, or 2–3x of current performance. Only when workflows are designed from the bottom up to be AI-native will we get the full 10x performance gain for doing that work.



AI-native workflow will achieve the first 10x improvement in staff productivity. (GPT Image-1)


  2. Organization efficiency from pancaking (eliminating hierarchies) and founder mode management: The second 10x. A 10-person company has very little communications overhead, since everybody knows what everybody is doing, and almost zero management overhead, except for the one leader who drives fast product-market fit and product innovation through a small team, being in perpetual “Founder Mode.” In contrast, almost all resources in a 10,000-employee enterprise are wasted on communications and management overhead, as well as the internal politics that inevitably arise in big organizations.



Pancaking and founder mode get us the second 10x improvement in company productivity. Note that aggressive organizational pancaking requires the productivity improvements from the other two factors (individual productivity and offloading of non-core functions to the market) (GPT Image-1)


  3. The Coasean Singularity will cause non-strategic work to be offloaded to the market instead of requiring internal staff, giving us the third 10x gain. This means that there may actually be 100 humans needed in 2045 to achieve the same results as the 10,000-staff current enterprise delivers, but 90 of those people will be employed by other small companies. “Our” company will only be 10 employees strong, meaning that the profits created by each of these 10 people will be almost unimaginable from our current perspective. (However, since AI will make the world economy grow to many times its current size, that money will exist and mostly be created by those few people who work in super-efficient AI-Native firms.)



Once we get much-improved AI agents, this scenario will play out differently: instead of sending human managers to a tradeshow to scope out vendors, AI agents will bring on great outside vendors for any non-strategic task instantaneously and without the current overhead costs of contracting and managing vendors. (GPT Image-1)


From the perspective of overall human employment, we’ll need 1% of the current workforce to achieve the same results as the entire current economy, but since the world economy will probably 10x, we’ll actually need 10% of the current workforce. However, individual companies will only require 0.1% of their current workforce (10 people creating the same value that currently requires 10,000 employees), meaning that this model predicts 100x more companies in 20 years. This will be possible due to the more efficient coordination between companies facilitated by market-based AI agents.
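The arithmetic of this scenario can be laid out explicitly as a sanity check (a sketch using the numbers above, not a forecast):

```python
# Sanity-check arithmetic for the 1,000x scenario, using the article's numbers.

workflow_gain = 10   # AI-native workflow redesign
org_gain = 10        # pancaking + founder mode
coase_gain = 10      # non-strategic work offloaded to outside vendors

# Per-company gains are multiplicative: 10 people match 10,000 employees.
per_company_gain = workflow_gain * org_gain * coase_gain
print(per_company_gain)  # 1000

# The Coasean 10x only shifts workers to vendor firms, so the economy-wide
# labor saving is 100x: today's output would need 1% of today's workforce.
economy_gain = workflow_gain * org_gain
print(1 / economy_gain)  # 0.01

# If the world economy grows 10x, employment is 10% of today's workforce...
employment_multiple = 10 / economy_gain
print(employment_multiple)  # 0.1

# ...while each firm's headcount shrinks 1,000x, implying 100x more firms.
firm_headcount_multiple = 1 / per_company_gain
print(round(employment_multiple / firm_headcount_multiple))  # 100
```

The key step is that the third 10x is a reallocation rather than a saving at the economy level, which is why total employment falls 100x while per-firm headcount falls 1,000x, and the gap between the two is the 100x growth in the number of companies.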



The AI-induced productivity improvements are multiplicative, not additive, meaning that in 20 years, they will enable a 10-person team to deliver what currently requires 10,000. (GPT Image-1)


Why do I expect the best future companies to be “pizza teams” of around 10 people, rather than one or even zero people? Because different humans have different skills and strengths, allowing even the most hardcore founder-mode founder to achieve more with a few extra staff who possess complementary skillsets.



Given the 1,000x improvement in productivity when a 10-person company can achieve the same as a 10,000-staff enterprise can achieve today, why not go all the way to full automation and have a zero-human company? Some functions will be fully automated, but I expect that small superteams of top talent will thrive, adding value beyond the autopilot through agency, judgment, and persuasion.


Conclusion: Designing for the New Reality

We stand at the precipice of a radical economic transformation, one that simultaneously threatens the white-collar labor market through the Genii Shift and dismantles the fundamental architecture of commerce via the Coasean Singularity.


For UX professionals, the workshop provides an urgent mandate:


  1. Mitigate the Cognitive Fugue: As human value decouples from GDP growth, designers must build platforms that support meaningful accessory work (arts, civic engagement, care) and create new sources of non-economic purpose.

  2. Win the Drone War: We must engineer robust identity and verification systems, including proof-of-personhood architectures, agent-first APIs, and content provenance methods, to restore trust and economic signal value against the tide of cheap misinformation.



We will need the ability to audit the information provided by AI: where did it come from? (GPT Image-1)


  3. Resist Hayek’s Hammer: If TAI trends toward centralization (which is not a given), our design efforts must focus on broadening knowledge and augmenting the small companies and individual creators and consumers. We must design transparent agent models that minimize steering and platform lock-in, favoring user autonomy and multi-platform portability over “walled garden” profits.



Small and new companies add immense value, and if Transformative AI turns out to have a centralizing effect, explicit design steps must be taken to mitigate this. (GPT Image-1)


The challenge is no longer about maximizing efficiency, but about designing robust systems that manage the inevitable paradoxes: a world of unprecedented intellectual capacity alongside a fracturing reality; and economic superfluity paired with a deepening human need for meaning. The future of human wellbeing depends on it. When work stops being the defining measure of human value, we must design new systems that give humans value (even if not GDP-measured value) from AI-fueled creation and other AI-enabled and AI-augmented aspects of life.


Action Steps, By Age

If you’re about 60 years old or older, don’t worry: you’ll probably be retired before TAI hits after 2030. Until then, do almost anything to stay with your current firm. It may go under after 2030, but current AI cannot replicate the tacit knowledge you have built up from years of working there, so you offer immense value in your current job until 2030. If you change firms, you lose much of that tacit knowledge and should realistically expect your salary to be cut in half.



Tacit knowledge is a huge value booster for senior staff, but mostly as long as they stay within the same company, where they are familiar with all the ins and outs of accomplishing tasks despite formal rules. (GPT Image-1)


If you’re younger than about 35 years: strongly consider founding an AI startup. You’ll have a significant advantage over most non-UX founders, who don’t understand human factors methodology and thus have to work harder to achieve product-market fit (PMF). If you’re considering the startup route, watch this interview with Aaron Levie, CEO of Box, a company that has gone AI-first in a big way. Levie points out that foundational technology shifts are rare but create irresistible opportunities for startups that can define completely new markets, rather than fighting over the scraps left by big companies in established markets. The main problem with the startup route is that the window of opportunity is only 3–4 years, which is why the Silicon Valley startup ethos has embraced the “996” work style: working from 9 AM to 9 PM, 6 days per week, or 72 hours per week. I was personally able to maintain roughly a 70-hour workweek until I turned 50, but it’s tough on your health after about the age of 35, and I wouldn’t recommend it past that point. Doing 3 or 4 years of 996 while you’re in your 20s is great, though, and such an intensive creative experience is exhilarating.



The “996” lifestyle originated in China, but has been embraced by the most aggressive AI-Native startups in Silicon Valley. You work from 9 AM to 9 PM 6 days a week, with one day off to recharge. Exciting while you’re young, but draining in the long run. (GPT Image-1)


If you’re between 35 and 60: you’re doomed. UX professionals who are too established in their careers and family life to found a startup, yet not old enough to retire before superintelligence arrives, face a gloomy future, as most of their old skills become worthless in a world without traditional user interfaces. The same short window of opportunity that makes San Francisco startup founders work crazy hours also applies to the UX career pivot: your legacy skills still have value, and it’s clear where the world is going, so you can leverage your current position to get into place for the future. However, most UX professionals remain in denial and will likely be unemployed after 2030, because they won’t pivot until it’s too late. The main benefit of the pivot is that you can do it at a pleasant number of hours per week, if you start now.


