
Building an AI-Positive Work Culture

  • Writer: Jakob Nielsen
  • 4 days ago
  • 13 min read
Summary: Employer encouragement predicts workplace AI adoption nearly 5x more strongly than training or tool provision alone. This 10-step playbook shows design leaders how to build a culture where AI adoption happens naturally, without mandates, surveillance, or wasted training budgets, while avoiding the cybersecurity pitfalls of “Shadow AI.”

 

The research is unequivocal: employer encouragement is the strongest predictor of worker AI adoption, far exceeding the effect of training programs or even tool provision. In a recent study by the National Bureau of Economic Research, 47% of workers who received no AI training adopted AI when their employer encouraged it, versus just 10% when their employer did not. That almost 5x difference is a cultural signal doing the work that most managers mistakenly think training should do.


A surprising finding from the NBER research was that formal AI training did not predict higher adoption. Read my extensive analysis of this important study of real-world AI use.

The implication cuts against prevailing HR orthodoxy. If you are a design leader who has been waiting for the “right” curriculum, vendor partnership, or certification program before green-lighting AI use on your team, you are losing months of compounding practical experience that your competitors are gaining right now.


The 10 steps I recommend for building an AI-positive culture in your company. (All images in this article made with GPT-Images-2.)


If training isn’t the primary driver, what is? Building an organically AI-positive culture. Here is my 10-step actionable plan for embedding AI safely and strategically into your organization.


Step 1: Permission, Not Training

The most important thing management can do is remove ambiguity about whether AI use is welcome. In practical terms, this means making an explicit, public, and repeated statement to your team that using generative AI in their design work is not merely tolerated but actively encouraged.



Don’t assume people already know this. Many designers operate under genuine uncertainty: Is AI use “allowed”? Will it be perceived as cheating or laziness? Will using AI-generated outputs reflect poorly on their skills at review time? In the absence of clear signals, your silence is interpreted as discouragement.


There is also a severe and often-ignored consequence to managerial silence: the rise of “Shadow AI.” When leadership fails to explicitly sanction and provide secure AI tools, employees do not simply revert to manual labor to hit their productivity targets. Instead, many turn to unauthorized, consumer-grade models on their personal devices, covertly pasting highly sensitive corporate data or client IP into unvetted systems. Providing clear permission is not just about driving adoption metrics. It is a cybersecurity imperative to prevent data leakage.


Concrete actions:

  • Deliver the message personally, not through a company-wide policy memo that people may or may not read. Say it in a team meeting. Say it in your one-on-ones. Post it in the Slack channel. Then say it again three months later. Cultural signals require repetition to take hold.

  • Name specific use cases you endorse: “I encourage you to use Claude or ChatGPT to draft your user research syntheses, rewrite feedback for clarity, or explore design directions.” (While also encouraging exploration of new use cases.)

  • Name the sanctioned tools explicitly. Ambiguity about which tools are approved drives employees back toward unvetted consumer products.



Ambiguity breeds anxiety. Clear permission breeds innovation.


Step 2: Lead by Example


Permission-granting only works when it is credible. If you tell your team to use AI but they never see you using it, they will read your encouragement as performative. Your visible AI use is the single most powerful demonstration that adoption is safe and valued.



This does not mean announcing “I used AI for this” after every artifact. It means naturally weaving AI into your observable workflow. Share a prompt that worked. Demo a useful output in a team meeting. When you open a document, occasionally let people see you pull it into an AI tool to summarize or critique. If you delegate something, mention that the first draft came from an AI and you iterated from there.


Leaders who refuse to use the tools themselves while exhorting their teams to adopt them are engaged in a form of hypocrisy that is immediately visible to subordinates. The fastest way to accelerate cultural adoption is for the most senior person in the room to become a visibly competent, unselfconscious AI user.



True cultural transformation happens top-down in visibility, but bottom-up in application. You need both.


Step 3: Provide Tools Before You Provide Curriculum

Once you’ve established encouragement, the next priority is access. The research shows that tool provision is the second strongest predictor of adoption, adding meaningful lift even after controlling for encouragement. Training, by contrast, shows no additional predictive power once encouragement and tools are accounted for.



Your budget allocation should reflect this evidence. Before investing in workshops, courses, or AI bootcamps, make sure every member of your team has paid subscriptions to the AI tools most relevant to their work. For most design teams today, that means at minimum a “Pro”-level Big-3 frontier model subscription for text-based tasks, plus access to relevant image-generation and prototyping tools appropriate to your domain.


(Big-3 = Google, OpenAI, Anthropic. This may become Big-5 later this year if xAI and Meta catch up. It does not really matter which top model your employees choose, and there may be benefits to having different staff gain fluency with different vendors, since each has distinct strengths.)


All employees should have a high-tier subscription to at least one provider, such as Google “Ultra” (US $250/month), OpenAI “Pro” ($200/month), or Claude “Max” ($200/month). The cheaper $20/month plans are fine for personal use but not for professional use in an organization serious about AI adoption. Rate limits and model access on the lower tiers will silently cap your team's productivity in ways you will never see in a dashboard.


Eliminate procurement friction. Do not make people request access through a three-week process that requires a business justification form. The friction of obtaining tools is itself a powerful signal about how much the organization values their use. If an employee can expense a $40 book without approval but needs to submit a ticket for a $20 AI subscription, you have communicated something about organizational priorities, whether you intended to or not.



Activate what you have already bought. Where enterprise-grade tools are available, such as Microsoft Copilot integrated into existing workflows, or Figma’s AI features, ensure they are activated, licensed to the right people, and configured for your team. Many organizations have purchased enterprise AI licenses that sit unused because no one ensured they were deployed to the people who would actually benefit from them.


A rough budgeting guideline. A company that aims to be AI-Forward should allocate at least 15% of each employee's salary toward that person’s AI tool stack (for non-developer staff), and 33% for developer staff. For an AI-First company, double these percentages; for AI-Native companies, triple them. You don’t need to be tokenmaxxing, but if senior developers in particular cannot spend on AI with abandon, they are not using AI to its fullest potential, and the opportunity cost of constrained usage almost always dwarfs the marginal tool spend.


(The reason I suggest higher token budgets for developers is not that programming is more important than design or user research. It’s simply the case that right now, AI tools for software development are far ahead of AI’s capabilities for other business workflows. And AI developer tools are hungry for tokens. In the future, as AI’s “jagged” frontier keeps shifting, more powerful, more hungry, AI capabilities will likely become available for other functions, and token budgets should be adjusted up accordingly.)
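The budgeting guideline above can be reduced to a small calculation. The sketch below is illustrative only: the multipliers come from the article (15% of salary for non-developers and 33% for developers at an AI-Forward company, doubled for AI-First, tripled for AI-Native), while the function and parameter names are my own.

```python
# Rough annual AI tool budget per employee, per the article's guideline.
# Multipliers: AI-Forward = 1x, AI-First = 2x, AI-Native = 3x.
POSTURE_MULTIPLIER = {"ai_forward": 1, "ai_first": 2, "ai_native": 3}

def annual_ai_budget(salary: float, is_developer: bool, posture: str) -> float:
    """Annual AI tool spend for one employee, in the same currency as salary."""
    base_rate = 0.33 if is_developer else 0.15
    return salary * base_rate * POSTURE_MULTIPLIER[posture]

# Example: a $150,000 developer at an AI-Forward company
print(annual_ai_budget(150_000, True, "ai_forward"))   # 49500.0
# The same developer at an AI-Native company
print(annual_ai_budget(150_000, True, "ai_native"))    # 148500.0
```

Even the AI-Forward figure dwarfs a typical per-seat subscription, which is the article's point: the constraint should be the employee's imagination, not the invoice.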


Step 4: Create Space for Experimentation Without Mandating Specific Workflows

Encouragement and tool access set the conditions for adoption, but the specific ways designers integrate AI into their work should emerge organically from practice, not from top-down workflow mandates. The research shows that AI users apply the technology across an average of four different task types, with the most common being writing communications, searching for information, interpreting and summarizing content, and generating new ideas. This diversity suggests there is no single “right way” to use AI in design work, and trying to impose one will suppress the creative adaptation you need.



New applications will also emerge every few months as more capable models are released, unlocking tasks that were previously beyond AI’s reach or achievable only with prohibitively high hallucination rates. A workflow you mandated in Q1 may be obsolete by Q3.


Create structured experimentation time. Dedicate explicit time in your team's schedule (perhaps an hour per week or a day per month) to exploring AI tools in the context of current projects. Make this time legitimate and protected, not something people squeeze in between deadlines. If it isn’t on the calendar, it isn’t real.


Add a lightweight sharing mechanism. A standing 5-minute agenda item in your regular team meeting, where one person shares an AI experiment (what they tried, what worked, what did not), normalizes the practice and spreads practical knowledge far more effectively than formal training. Rotate the presenter so that every team member eventually has to articulate something they have learned. The act of teaching compounds the learning.


Insist that experiments use real work. A designer who discovers that AI dramatically speeds up their competitive audit process will adopt that practice permanently. A designer who completes a generic prompt-engineering exercise will likely forget it within a week. Abstract training evaporates; lived experience sticks.



Mandating a rigid “Standard Operating Procedure” for AI is a fool’s errand because the technology moves faster than corporate governance. For the same reason, hiring even good strategy consultants to import best practices from their other clients won’t work: those practices may indeed have been “best” last quarter, but they are second-best now. You must discover what works in your company through experimentation, because nobody knows yet.

Localized, tacit knowledge is your most valuable asset in the AI transition.


Step 5: Address Quality and Ethics Directly, Not Through Prohibition

Many design managers hesitate to encourage AI use because they worry about quality control, intellectual property, confidentiality, and ethical concerns. These are legitimate. But addressing them through prohibition or silence is worse than addressing them through clear, practical guidelines.



Establish explicit guardrails in plain language. Specify what types of information should never be entered into external AI tools: client confidential data, personally identifiable information, proprietary research findings, unreleased product specifications, whatever applies in your context. Specify review expectations for AI-assisted outputs: all AI-generated copy must be reviewed by a content specialist before inclusion in deliverables; AI-generated visual assets must be evaluated against brand guidelines by a visual designer before use; AI-drafted research summaries must be verified against the source materials before being shared with stakeholders.


Treat AI like any other professional tool. You do not prohibit designers from using stock photography because some of it is bad. You establish quality standards and trust professionals to meet them. Apply the same logic to AI. Define the boundaries clearly, then trust your team to operate within them.



Be honest that ethics are evolving. Tell your team directly that reasonable people disagree about some questions (training data provenance, attribution, disclosure to clients) and that the standards are still being worked out across the industry. Encourage critical thinking about when AI use is appropriate and when it is not, rather than issuing blanket rules that will become outdated as the technology and its social context evolve.


Ethical and security boundaries provide freedom. When employees know the red lines, they can innovate fearlessly within the safe zones.


Step 6: Address Job Security Anxiety Head-On

Underneath most resistance to AI adoption lies a question employees rarely ask out loud: Am I training my own replacement? If you do not address this directly, it will quietly throttle every other step in this plan.



Silence on the question is read as confirmation of the worst case. Vague reassurances (“AI will augment, not replace”) are read as corporate-speak. What works is specificity.


Be honest about what you can and cannot promise. You probably cannot guarantee that headcount will never change. What you can commit to is that people who develop strong AI skills will be more valuable, not less, and that the team’s mandate is to do better work (and likely more work), not to produce the same work with fewer people. If layoffs are genuinely on the table, say so; pretending otherwise destroys trust far more than the layoff itself would.


Redirect time savings toward visibly valuable work. If AI saves a researcher five hours a week, and those five hours are invisibly absorbed into more Slack or more meetings, adoption will stall. If they are visibly redirected into deeper user interviews, a new project, or strategic work that raises the team’s profile, adoption compounds. People adopt tools that make their jobs better. They resist tools that make their jobs disappear.



Create a “skills runway.” Explicitly commit team budget and time to helping everyone develop the next layer of skills that AI does not yet do well: systems thinking, stakeholder facilitation, strategic framing, ethical judgment. Framing AI adoption as part of a career-development arc is far more motivating than framing it as a productivity mandate.


Step 7: Reward Outcomes, Not Methods

The research linking AI adoption to performance-oriented management practices is striking. Countries and firms where performance is rewarded, promotions are based on merit, and poor performance is addressed also have dramatically higher AI adoption rates. The mechanism appears to be that performance-focused cultures create natural incentives to seek out productivity-enhancing tools: people who have skin in the game of their own output find AI on their own.



For design managers, this means evaluating your team on the quality, speed, and impact of their work, not on the specific methods they used to produce it. If a usability specialist delivers an excellent research synthesis in half the usual time because he or she used AI to process interview transcripts, that is a success to be recognized, not an asterisk to be noted.


Conversely, be alert to the possibility that some team members resist AI not because they have thoughtfully evaluated its limitations but because adopting new tools feels threatening to their professional identity. The most common reason workers give for not using AI is that they believe it cannot help with their job, a belief that, given the breadth of current AI capabilities, is already incorrect for most knowledge workers and will become even more so. A performance-oriented culture naturally surfaces this disconnect: when peers using AI are producing better work faster, the incentive to reconsider one’s assumptions becomes tangible.



Guard against performative AI use. The goal is better design outcomes, not more prompts entered. Some tasks genuinely do not benefit from AI, and experienced designers should be trusted to make that judgment. If you start celebrating AI use itself rather than the work it produces, you will get a lot of theater and very little improvement. (Again, don’t tokenmaxx; outcomemaxx.)


Step 8: Account for Uneven Adoption Within Your Team

The data shows AI adoption varies significantly by age, education, and individual disposition. Within any team, you will likely have enthusiastic early adopters, cautious experimenters, and outright skeptics. A one-size-fits-all approach will either bore the first group or overwhelm the third.



Leverage early adopters as internal champions. Pair them with more hesitant colleagues on specific projects where AI might add value, and let practical demonstration do the persuading that no amount of managerial exhortation can achieve. Peer influence is powerful. Seeing a respected colleague use AI effectively to solve a real problem is more persuasive than any keynote about AI’s theoretical capabilities.


Create optional “AI office hours” staffed by your most fluent users, where colleagues can bring specific problems (“I have 200 pages of interview notes, what should I do?”) and leave with working solutions. This outperforms centralized training because it is demand-driven, specific, and immediately useful.



Respect persistent skepticism, but set clear expectations. For team members who remain skeptical after genuine exposure, respect their professional judgment while continuing to hold them to the same performance standards as everyone else. Some resistance will dissolve naturally as tools improve and use cases become obvious. Some may reflect legitimate concerns about quality, reliability, or appropriateness that you should listen to carefully. Your skeptics may be seeing real problems your enthusiasts are overlooking.


Step 9: Track Adoption and Impact, Even Informally

You cannot manage what you do not measure. But you also should not create surveillance mechanisms that undermine the trust you are trying to build. A reasonable middle ground is to incorporate AI-related questions into your existing team rituals.



In project retrospectives, ask whether AI tools were used, where they helped, and where they fell short. Always assume that current AI is imperfect: that is fine, and not to be punished. But do learn from the mistakes, and remember that what failed this year could very well succeed next year as models improve.


In quarterly planning, ask team members to identify one or two areas where they would like to experiment with AI in the coming quarter.


In performance conversations, discuss how each employee's toolkit has evolved and whether they are staying current with available technologies.


Over time, you should be able to observe whether your team’s AI adoption is increasing, whether the tools are generating real resource savings or quality improvements, and whether certain use cases are proving more valuable than others. Feed this information back into your tool-provisioning decisions, your team's knowledge-sharing, and your own understanding of where AI adds genuine value in the design process versus where it remains more hype than substance.



A simple dashboard of three numbers (monthly AI tool spend per head, experiments shared in team meetings per quarter, and self-reported estimated hours saved) is more than sufficient for most teams. Resist the temptation to build elaborate measurement infrastructure; it is almost always a substitute for doing the actual cultural work.
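The three-number dashboard described above fits in a few lines of code. This is a minimal sketch: the field names, units, and report format are my own illustrative assumptions, not a prescribed schema.

```python
# Minimal three-metric AI adoption dashboard, per the article's suggestion.
from dataclasses import dataclass

@dataclass
class AIAdoptionDashboard:
    monthly_tool_spend_per_head: float   # dollars, from expense reports
    experiments_shared_per_quarter: int  # counted from team-meeting agenda items
    hours_saved_per_week: float          # self-reported, so treat as a trend signal

    def report(self) -> str:
        """One-line summary suitable for a monthly leadership update."""
        return (
            f"AI spend/head: ${self.monthly_tool_spend_per_head:,.0f}/mo | "
            f"experiments shared: {self.experiments_shared_per_quarter}/qtr | "
            f"est. hours saved: {self.hours_saved_per_week:.1f}/wk"
        )

print(AIAdoptionDashboard(220, 9, 4.5).report())
```

The point is the ceiling, not the floor: if your tracking needs more structure than this, you are probably measuring instead of leading.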


Step 10: Connect AI Adoption to Strategic Design Goals

Finally, frame AI adoption not as a technology initiative but as a means to better design outcomes and higher business profitability. The research shows that AI users already report meaningful time savings: about 5.8% of work hours on average. (This will grow substantially every year for the next decade.) For a design team, that recovered time can be redirected toward activities AI currently handles poorly, such as deep user empathy work, complex systems thinking, cross-functional alignment, and the kind of creative synthesis that emerges from sustained human attention to ambiguous problems.
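The 5.8% figure cited above is easy to translate into concrete hours. The quick arithmetic below uses a 40-hour week, a 48-week working year, and an 8-person team as illustrative assumptions; only the 5.8% savings rate comes from the cited research.

```python
# Back-of-envelope: what 5.8% time savings means for a design team.
SAVINGS_RATE = 0.058    # share of work hours saved (from the cited research)
HOURS_PER_WEEK = 40     # assumed working week
WEEKS_PER_YEAR = 48     # assumed working year
TEAM_SIZE = 8           # assumed team size

hours_per_person_week = SAVINGS_RATE * HOURS_PER_WEEK               # 2.32 hours
team_hours_per_year = hours_per_person_week * TEAM_SIZE * WEEKS_PER_YEAR  # 890.88 hours

print(f"{hours_per_person_week:.1f} h/person/week, "
      f"{team_hours_per_year:.0f} team hours/year")
```

Roughly 890 recovered team hours per year is the equivalent of several additional projects' worth of capacity, provided (per Step 6) those hours are visibly redirected rather than silently absorbed.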



The strategic case for AI in design is not that AI will do the design work for you. It is that AI can absorb a meaningful share of the routine cognitive labor (drafting communications, summarizing research, generating initial variations, searching for precedents) that currently competes for the same hours you would rather spend on higher-order design thinking.

Position AI adoption within this frame for your team, and you accomplish two things simultaneously. You give people a compelling reason to experiment with the technology, and you reassure them that their human skills remain central to their value. Both messages are important, and both are true.



Redesign the workflow, don’t just automate the task.


Conclusion

The companies that will win the next decade are not the ones with the most elaborate AI training programs or the largest enterprise licenses. They are the ones whose leaders understand that AI adoption is, fundamentally, a cultural problem dressed up as a technology problem.


The NBER data is clear:


  • Permission beats pedagogy,

  • Tools beat training, and

  • Culture beats curriculum.


Build a positive AI culture in your company.


My 10 steps are not a rigid sequence; they are a reinforcing system. Permission without tools is empty. Tools without permission are unused. Tools and permission without outcome-based rewards produce performative adoption. Outcome-based rewards without attention to job security produce quiet sabotage.


If you are a design leader wondering where to start, start with steps 1 and 2 this week. Tell your team, out loud and in writing, that you want them using AI. Then use it yourself, visibly, tomorrow morning. Everything else in this plan becomes easier once those two signals are in the air.


The cost of getting this wrong is not that your team will be a little less productive. It is that your best people will safeguard their career future by leaving for companies where their AI fluency is welcomed, while the colleagues who stay will be doing their AI work on unauthorized personal accounts, pasting your client data into systems you cannot audit. The choice is not whether your team uses AI. It is whether they use it in the open, with your support and guardrails, or in the shadows, without them.


Summary of the 10 steps I recommend in this article.

 
