AI growth acceleration versus distributional fairness
This background briefing guide was distributed to participants ahead of the Forum for Cooperation on AI dialogue on March 31, 2026. A short summary of the conversation is included at the bottom of the piece. FCAI dialogues follow Chatham House rule, so remarks are anonymized.
The global push toward artificial intelligence (AI) adoption, reflected in many government plans and the February 2026 India AI Impact Summit, highlights links between AI and economic growth. AI-driven growth acceleration is plausible but not guaranteed: it hinges on AI delivering sustained productivity gains, which have yet to materialize at an aggregate level, and on how those gains are distributed. Will they be spread broadly across firms and employees or concentrated among a few dominant companies and their shareholders? The tension between growth acceleration and distributional fairness asks whether AI can raise aggregate productivity fast enough to lift living standards while distributing gains broadly.
The tension is sharper for AI because frontier capacity is scaling faster than organizational change, and because key inputs are concentrated. Competition regulators have warned that foundation model systems can reinforce control over critical inputs like compute, cloud services, data, and distribution channels, which could limit contestability and the pass-through of productivity gains.
It is useful to separate diffusion into the extensive margin (how many firms and workers adopt AI) and the intensive margin (how deeply AI changes workflows and production). Broad-based growth requires both, consistent with OECD cross-country firm evidence that diffusion differs substantially across sectors and firm sizes and that “enabling” technologies and capabilities (for example, broader digitalization, data systems, and skills) are prerequisites for “advanced” adoption. Broad productivity growth also depends upon innovation in firms and industries using the technology.
Microeconomic evidence to date shows that targeted deployments can deliver sizable productivity gains in certain settings, but diffusion is uneven and aggregate effects remain uncertain. In one field study, access to a generative AI tool increased customer support productivity by an average of 15%, with much larger gains for novices and smaller gains for top performers. In contrast, a trial using AI tools available as of early 2025 found that completion time increased by 19% among experienced open-source developers, showing how “AI help” can also create cognitive overhead for complex, context-heavy tasks. Distributional outcomes therefore are not determined by “AI” as a single force, but instead depend on whether technology displaces labor tasks, creates new tasks, or reshapes complements.
The dialogue will examine competing and sometimes overlapping definitions of “success in AI”: frontier capability, broad diffusion, and distributional outcomes. We will discuss which policy levers (skills and retraining, management capability, procurement and public-sector adoption, competition policy, and measurement) shift incentives toward broad and equitable diffusion without suppressing beneficial use, and whether policies designed to accelerate such diffusion complement frontier progress or create trade-offs between the two objectives.
Popular narratives in AI diffusion:
These narratives are broad outlooks on AI diffusion whose mechanisms sometimes overlap. All of them may hold in different sectors or time horizons; their purpose here is to surface differences in causal pathways and policy priorities.
- Uneven adoption: AI can accelerate economic growth materially only if diffusion reaches beyond early adopters and is paired with organizational redesign, skills, sector-sensitive measures, and competitive input markets; otherwise, AI is more likely to raise within-firm productivity in pockets while widening gaps between firms, regions, and capital versus labor.
- Broad-based augmentation: Generative AI primarily augments workers, raising quality and speed for common tasks, compressing performance distributions, and lifting wages at the middle and bottom as capabilities spread. Field evidence in customer support and professional writing is consistent with large gains for less-experienced or lower-ability participants.
- Bottlenecked diffusion: AI-driven productivity gains do not spread evenly across firms and benefit organizations that already have the right supporting conditions in place, such as strong digital skills, high-quality data, effective management capacity, and reliable cloud access. At the same time, key parts of the AI supply chain, including foundation models and cloud infrastructure, are relatively concentrated, and switching costs make it hard for firms to change providers or negotiate better terms. That means the main barrier to lasting adoption is whether firms have the organizational capacity and complementary investments needed to use AI models well, not just access to them. OECD work using cross-country firm evidence finds that AI use is more prevalent in certain sectors and among larger firms, and that these complementary capabilities play a critical role in shaping productivity gains. This pattern is consistent with uneven diffusion across the economy. Competition authorities’ scrutiny of partnerships and control over critical inputs also supports concerns that these gains may not be widely shared.
- High-friction transition: In the medium term, AI adoption is also likely to be costly and uneven. Adoption can create substantial coordination problems inside firms, require new error-checking and oversight work, increase legal/compliance burdens, and lead to inefficient deployment. As a result, economy-wide productivity gains may remain modest for some time even as tensions over who benefits and who bears the costs intensify. Those tensions may be especially visible in entry-level labor markets. Evidence showing that AI tools can slow experienced developers’ project completion time suggests that, for some complex high-skill tasks, adoption may initially reduce efficiency rather than improve it.
AI productivity measurement limits
AI productivity discussion can conflate three distinct metrics: (1) model capability (benchmarks, scale, capabilities), (2) micro productivity (task- or worker-level outcomes in specific settings), and (3) macro productivity (sector- or economy-wide statistics). Even when AI model capabilities improve quickly, micro and macro productivity can lag because implementation is slow, complements are missing, and measurement is imperfect. One challenge for high-level economic assessment is that AI’s visible technical progress is not yet tightly coupled to measured productivity statistics.
Recent benchmark revisions underscore how uncertain near-term macroeconomic inference can be. With the January 2026 release, the U.S. Bureau of Labor Statistics’ (BLS) benchmarked employment level for March 2025 was revised down by 898,000 jobs (seasonally adjusted), which changes the labor-input path used in previous productivity calculations. This uncertainty is consistent with a “productivity J-curve” view of general-purpose technologies (GPTs): measured productivity can be lower during a buildout period when firms are investing in intangibles, and then improve as those complements mature and diffuse.
Even if AI is generating surplus, some portion may not appear quickly (or at all) in conventional productivity measures due to how AI software and services are represented in GDP frameworks. More optimistic projections from financial institutions estimate much larger gains (for example, Goldman Sachs Research has projected a sizable long-run GDP level effect and higher productivity growth under faster task automation and adoption). Table 1 presents a range of these projections.
Micro evidence is increasingly rigorous but heterogeneous. In customer support, a generative AI tool increased issues resolved per hour by 15% on average, with larger gains for less-experienced workers. In a preregistered writing-task experiment, access to ChatGPT reduced time and improved quality on average while compressing the productivity distribution. Yet a randomized trial by Model Evaluation and Threat Research (METR) found that allowing experienced open-source developers working in their own repositories to use early 2025 AI tools slowed completion time by 19%. And one survey of U.S. full-time workers found that time spent reviewing and fixing low-quality AI-generated work offset much of the time saved, leaving measured productivity little changed. The distributional consequences of AI are likely to depend on task type, worker skill, sector, and firm size. Across studies, AI exposure and usage measures do not align well; many occupations rank high on one measure and low on the other.
Firm-level data can help clarify the adoption lag. A February 2026 National Bureau of Economic Research (NBER) working paper surveying nearly 6,000 executives across the United States, United Kingdom, Germany, and Australia reports that around 70% of firms “actively use AI,” but executives’ time spent using AI is low on average (about 1.5 hours per week), and about 90% of firms report no impact on employment or productivity over the prior three years despite expecting nontrivial effects over the next three years. Reported adoption rates also vary materially across surveys because many AI enterprise surveys are not directly representative at the national level and differ meaningfully in definitions and sampling across industries, firms, and countries.
By contrast, frontier capability indicators show rapid improvement and scaling. Stanford’s 2025 AI Index shows that training compute for notable models doubles roughly every five months and that the capital intensity of frontier development is escalating. Diffusion research finds wide cross-firm variation in adoption and complements, implying that measured aggregate gains may lag even as frontier capability advances. Realized productivity depends not just on capability, but on reliability over time and on deployment conditions. The distributional outcome then depends on who has complements (data, skills, management) and market power.
Macro evidence for AI-driven productivity gains remains limited and uncertain in the near term; some recent work relies on scenario-based simulations rather than direct measurement. One task-based macro exercise estimates an upper bound of about a 0.66% total increase in total factor productivity over 10 years, with a central estimate closer to 0.53% once harder-to-automate tasks are accounted for. OECD modeling suggests plausible aggregate productivity growth contributions from AI on the order of fractions of a percentage point per year over a decade, with ranges depending on adoption and complementary changes. The International Monetary Fund’s (IMF) Europe-focused simulations similarly emphasize that macro evidence is scarce and relies on assumptions about task automation and adoption paths.
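The arithmetic behind such task-based estimates can be sketched as a simple product of shares. The sketch below is a back-of-envelope illustration in the spirit of these exercises; every input number (exposure share, adoption share, cost savings, labor share) is a hypothetical placeholder, not a figure from the studies cited.

```python
# Illustrative task-based TFP arithmetic in the spirit of task-model exercises.
# All input numbers are hypothetical placeholders, not the studies' actual values.

def tfp_gain(task_exposure_share, adoption_share, avg_cost_savings, labor_share):
    """Aggregate TFP level gain ~ share of tasks exposed to AI
    x share of those tasks where adoption is actually profitable
    x average cost savings on adopted tasks
    x labor's share of total costs."""
    return task_exposure_share * adoption_share * avg_cost_savings * labor_share

# Hypothetical inputs: 20% of tasks exposed, 23% of those profitably adopted,
# 14.4% average cost savings on adopted tasks, labor cost share of 0.57.
gain = tfp_gain(0.20, 0.23, 0.144, 0.57)
print(f"Implied 10-year TFP level gain: {gain:.2%}")  # -> 0.38%
```

The point of the exercise is that each multiplicand is a fraction well below one, so even optimistic per-task savings compress into an aggregate effect of fractions of a percent, which is why headline estimates diverge mainly on assumed exposure and adoption shares.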
Sectoral adoption patterns and bottlenecks to diffusion
While AI adoption was first concentrated in information-intensive and professional service activities, whose tasks are more language and data based, it has since diffused into sectors where production is more constrained by physical capital and safety regulations.
Canadian official survey analysis finds that businesses in information and cultural industries; professional, scientific, and technical services; and finance and insurance were much more likely than other industries to report using AI in producing goods or delivering services, while sectors like agriculture and accommodation/food services were among the least likely. Canadian businesses using AI reported organizational changes including training current staff and developing new workflows, while hiring AI-trained staff was less common.
The U.K.’s official statistics similarly show strong variation by sector and size: In 2023, large service-sector firms were the most likely to adopt AI (while large manufacturing firms were most likely to adopt robotics). The U.K. also identified organizational barriers in addition to technical ones, including difficulty identifying suitable activities and business use cases, cost, and limited AI expertise or skills. Across countries, management practices can also explain productivity differences. U.K. firms in the top decile of management practice scores were far more likely to adopt advanced technologies than those in the bottom decile, and higher management scores predicted follow-through on planned adoption.
A further adoption barrier is risk and compliance uncertainty. Firms often hesitate to scale AI where errors carry legal, safety, or reputational risk; the OECD highlights that firms care about accountability and governance when deploying AI, and public authorities increasingly emphasize responsible procurement and risk frameworks. The World Economic Forum’s January 2026 report similarly argues that scaling AI is “as much an organizational feat as a technical one,” recommending that companies build higher data quality, governance, workflow integration, and organizational redesign before scaling AI systems. Data infrastructure, skills, and governance are often prerequisites for inclusive adoption, particularly in low- and middle-income contexts.
Governance as an accelerant to adoption
Shared governance frameworks that create trust can function as accelerators of adoption by clarifying expectations and reducing buyer and vendor due-diligence burdens. The U.S. National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is designed for voluntary use by organizations to incorporate trustworthiness into AI design, development, use, and evaluation. Similarly, ISO/IEC 42001 creates an auditable AI management system approach that can reduce buyer uncertainty. Newer OECD responsible AI due diligence guidance is a practical tool based on the OECD AI principles to help firms demonstrate trustworthy practices. Evidence that governance accelerates adoption remains limited, however; most support to date is conceptual or standards-based rather than drawn from measured adoption outcomes.
Geographical divergence operates in the frontier, where core models, compute infrastructure, and research and development are produced, and in the deployment layer, where firms and workers adopt AI systems and translate the tools into productivity gains.
Diffusion indicators also show divergence, though cross-country comparisons are complicated with inconsistent definitions of “AI adoption.”
- Eurostat reports that 19.95% of EU firms with 10 or more employees used at least one AI technology in 2025, with significant variation across member states.
- U.K. survey data (with a different adoption definition) report about 9% of businesses with 10 or more employees using at least one AI application in 2023, although private-sector survey results for 2025 estimate the number has grown to 39%.
- Canada’s government reported 6.1% of businesses using AI in producing goods or delivering services from Q2 2023–Q2 2024, again with strong sectoral concentration.
- U.S. Census Bureau data from February 2026 suggest roughly 17.5% of U.S. businesses used AI in at least one business function in the prior two weeks. The question’s wording changed in November 2025; it previously asked whether firms “used AI in producing goods or services.”
- Nordic countries and Belgium have consolidated their lead, Korea has emerged as a top adopter, and divides have deepened across sectors, firm sizes, and regions.
- OECD analysis of 2023–2024 uptake finds that diffusion has been driven more by leaders pulling ahead than laggards catching up, with gaps in business adoption widening across the OECD area.
Many leading large language models (LLMs) remain English-centric in both training data and evaluation, which means countries and communities using lower-resource languages may face lower model quality and slower diffusion. OECD analysis notes lower availability of AI training data in languages other than English. For example, English accounted for 88% of models with language tags on Hugging Face in 2025.
Recent global evidence suggests that divergence is widening not only within advanced economies but also between the Global North and Global South. Microsoft’s 2025 global adoption report finds that uptake in the Global North grew nearly twice as fast as in the Global South, with substantially higher shares of working-age populations using AI tools in advanced economies. Countries that invested early in digital infrastructure, AI skilling, and public-sector adoption, including the United Arab Emirates, Singapore, Norway, Ireland, France, and Spain, continue to lead, while South Korea recorded one of the fastest improvements in global rankings, reflecting coordinated policy support and strong domestic ecosystem capabilities.
Platform-level data reinforce these divides. The Anthropic Usage Index shows AI usage per capita strongly correlated with income levels: High-income countries such as Singapore and Canada record usage rates far above population share, while emerging economies lag significantly. Within the United States, usage intensity varies across states in line with local economic specialization, with higher activity in regions concentrated in IT, finance, or knowledge services. High-adoption countries also exhibit more diversified and augmentation-oriented usage patterns, while lower-adoption countries show greater concentration in coding tasks and relatively higher shares of automation-oriented use.
Across these countries and analyses, there is not a consistent definition of “AI adoption.” Some count any AI use and others specify AI used to produce goods or services, so the safest inference is directional: Adoption is rising, uneven across countries and regions, and often faster for larger firms and digitally mature sectors.
The split between frontier and diffusion also maps onto a broader political economy argument about technology and power. Jeffrey Ding distinguishes the institutional advantages that come from monopolizing innovation in fast-growing or leading sectors from an alternative pathway in which effective diffusion of a GPT drives broad-based productivity and, ultimately, economic power. If AI systems function as GPTs, it is unsurprising that measured productivity effects remain limited in the near term: GPTs typically raise aggregate productivity through gradual, protracted diffusion into widespread use, mediated by complementary investments and institutional adaptation. Under this hypothesis, widening the skills base toward applied R&D for commercializing and scaling up process innovations could strengthen the technological leadership of countries and regions that were not early movers in foundational AI breakthroughs.
SME access gaps and credential infrastructure
Small and medium-sized enterprises (SMEs) matter disproportionately for inclusive growth because they account for the majority of firms and a substantial share of jobs worldwide, so SME diffusion is a large determinant of whether AI-driven growth is broad based.
OECD work prepared for G7 discussions highlights persistent diffusion gaps between SMEs and large firms and suggests distinguishing SME adopters by digital maturity and by the complexity and scope of AI use instead of a one-size-fits-all SME policy. OECD diffusion analysis similarly finds that the likelihood of adoption rises monotonically with firm size across most countries.
Recent U.S. evidence from the Small Business Administration’s Office of Advocacy, using Census Bureau Business Trends and Outlook Survey (BTOS) data, shows a measurable (though narrowing) adoption gap between small firms (less than 250 employees) and large firms (250 or more). Many small firms report that AI is “not applicable” to their business. U.K. Office for National Statistics findings show a similar “not applicable” pattern for specified AI applications, pointing to a larger pattern that diffusion is constrained by use case discovery, not only by model capability or access. Government has a potential role to play in funding public-good AI infrastructure and capability building in ways that are helpful for small businesses.
In digitally advanced, high-capacity economies, structured programs can accelerate SME uptake by making adoption more “off the shelf.” Singapore’s 2025 Digital Economy Report reports a tripling in SME AI adoption, from 4.2% in 2023 to 14.5% in 2024, largely due to accessible generative AI tools.
Australia’s National AI Centre adoption tracker (focused on SMEs) reports substantially higher SME adoption rates (40% adopting AI in mid-2024), with variation by industry and between metro and regional areas. Materially higher adoption in metro areas than in regional areas in some states implies that regional SME diffusion is likely to be a distributional fairness issue even inside advanced economies, and that supports such as regional advisory networks, local demonstration sites, and connectivity/security assistance may be needed. The tracker frames adoption constraints in terms of skills gaps, funding constraints, and uneven readiness.
Skills and credential infrastructure are the other half of the SME access story because small firms often cannot hire scarce AI specialists and must rely on upskilling generalists. Evidence on AI labor markets points to shortages at both ends: foundational AI literacy (baseline ability to use and supervise tools safely) and advanced AI engineering capability.
Micro-credentials are becoming institutionalized as a lifelong learning tool; the EU has adopted a Council recommendation establishing a European approach designed to improve portability and recognition. At the same time, the OECD cautions that while micro-credentials can enable flexible upskilling, evidence on their labor-market value and quality assurance remains limited, implying that governments and intermediaries need standards, verification, and interoperability to make credentials credible and portable.
Factors cutting against inclusion
AI’s exposure potential, or which tasks could be affected, differs from its realized displacement or firm effects. The International Labour Organization’s refined global index of occupational exposure provides a structured way to assess occupational exposure to generative AI using task-level data, expert input, and AI model output. The Yale Budget Lab’s tracker similarly stresses that current measures of exposure, automation, and augmentation show no sign of being systematically related to changes in employment or unemployment. The IMF’s staff discussion note argues that AI exposure is widespread in advanced economies and that effects will vary based on occupation and country structure. Task-based estimates also reinforce that a substantial share of tasks in many occupations could be affected, but this is not the same as forecasting net job loss.
If AI removes simpler, routine components, the remaining work becomes more complex; that can raise returns to expertise while shrinking entry points and winnowing less-expert workers. If AI instead removes an occupation’s most complex components, the remaining work can become less specialized and more contestable, potentially compressing wages. This framing helps explain why the same technology can simultaneously generate productivity gains, entry-level disruption, and shifting wage premia across occupations.
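A toy calculation can make the composition effect above concrete; the task “complexity” scores and the removal rule below are assumptions chosen only to illustrate the mechanism, not estimates from any study.

```python
# Toy illustration of the task-composition argument: the same technology can
# raise or lower the skill intensity of remaining work depending on which
# tasks it absorbs. Complexity scores are hypothetical.

def remaining_complexity(tasks, remove="simple", k=2):
    """Remove the k simplest (or k most complex) tasks from an occupation's
    task bundle and return the average complexity of what remains."""
    ranked = sorted(tasks)
    kept = ranked[k:] if remove == "simple" else ranked[:-k]
    return sum(kept) / len(kept)

occupation = [1, 2, 3, 4, 5]  # hypothetical task complexity scores

# AI absorbs routine components -> remaining work is more complex
# (higher returns to expertise, fewer entry points).
print(remaining_complexity(occupation, remove="simple"))   # -> 4.0

# AI absorbs the hardest components -> remaining work is simpler
# (more contestable, potential wage compression).
print(remaining_complexity(occupation, remove="complex"))  # -> 2.0
```

The two prints diverge from the same starting bundle, which is the crux of the argument: distributional effects hinge on which end of the task distribution AI takes over, not on adoption alone.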
In Canada, most firms using AI in producing goods reported no change in employment levels after implementing AI, while many reported some reduction in tasks previously performed by employees. Similarly, the 2026 NBER executive survey reports limited realized employment/productivity impacts over the prior three years even at high-stated adoption rates, with expectations of greater forward impacts.
Early distributional risk may concentrate at entry-level pathways. High-frequency administrative payroll analyses have reported relative employment declines for early-career workers in more AI-exposed occupations alongside more stable outcomes for experienced workers.
Transition and inclusion tools
The OECD’s employment protection indicators and labor-market metrics document cross-country differences in dismissal protections and in public spending on labor market programs. While there is wide variation across member states, in broad terms, many EU labor markets rely more heavily on negotiated adjustment and job retention capacity, while the United States relies more on job-to-job reallocation and decentralized training access.
In Europe, adjustment can draw on job-retention instruments such as short-time work schemes and EU-level mechanisms including the European Globalisation Adjustment Fund for Displaced Workers and crisis job protection support such as the European instrument for temporary Support to mitigate Unemployment Risks in an Emergency (SURE). OECD analysis of job retention schemes and the IMF discussion of Germany’s Kurzarbeit illustrate how retention tools can stabilize employment during shocks. In the United States, federal workforce development is anchored by the Workforce Innovation and Opportunity Act, and trade-related displacement has historically been handled via Trade Adjustment Assistance mechanisms. Research exploiting eligibility discontinuities finds that wage insurance can increase employment probabilities and raise long-run cumulative earnings for displaced workers, largely by shortening nonemployment spells.
Stakeholder dialogues also suggest that reskilling is necessary but not sufficient, and that regional priorities differ. Asian stakeholders emphasized AI literacy for irregular workers, public financial support for SMEs, and job redesign; European stakeholders prioritized robust safety nets, intellectual property protections for creative workers, social dialogue, and job quality; and Latin American stakeholders prioritized comprehensive social protection and experimentation via regulatory sandboxes. Across stakeholders, active labor market policies were underemphasized, as was the use of industrial policy and public procurement to shape adoption toward better job quality through labor and social conditionalities.
For AI-driven transitions, the evidence points toward combining upskilling pathways (including micro-credentials with quality assurance), improved job matching capacity, support for geographic and occupational mobility, and targeted income stabilization for displaced workers, while noting that the right mix will depend heavily on which sectors and worker populations are most affected. AI literacy initiatives and job redesign guidance can reduce distributional downside risk while sustaining diffusion capacity. OECD work points to the need for transition tools to be informed by a strong understanding of the local impact of AI on employment and skills pools, covering exposure, complementarity, and mismatches, so that mitigation efforts are calibrated to local needs. In a changing international context, the localization and embeddedness of AI assets should reflect both local and global considerations to avoid legal and geopolitical vacuums.
Addressing AI-related displacement and transition may require stronger employer participation to identify current and emerging skill needs. Labor unions and other worker organizations can be valuable stakeholders in this process, though their capacity and effectiveness vary across sectors and institutions.
Governments shape AI diffusion not only through regulation but also by acting as lead customers. Public procurement can create demand for trustworthy AI systems, standardize documentation and evaluation expectations, reduce vendor lock-in through interoperability requirements, and lower adoption barriers through shared infrastructure. The OECD provides frameworks for using procurement to stimulate innovation while managing risk and highlights how AI is increasingly used in government functions, including procurement and service delivery. Public procurement often represents a low-to-mid-teens share of GDP, so procurement design decisions have macroeconomic relevance.
U.K. AI procurement guidance developed with the World Economic Forum and the EU’s emerging model contractual clauses for trustworthy AI procurement illustrate this model by operationalizing responsible AI expectations through standardized contract terms and evaluation criteria. Japan’s Digital Agency issued a guideline specifically on procurement and utilization of generative AI in government, framing government adoption as a way to boost safe uses and strengthen competitiveness.
In the EU, “innovation procurement” mechanisms such as pre-commercial procurement (PCP), designed to let public buyers procure research and development services and share the risks and benefits with suppliers, can support SME participation. PCP-style structures in AI diffusion can fund pathways to adoption in public services while building evaluation standards that can later spill over into private markets.
In the United States, federal guidance emphasizes responsible acquisition and governance of AI in procurement with expectations for model documentation and risk management. OMB Memorandum M-25-22 (April 2025) replaces the earlier M-24-18 memo and emphasizes sourcing practices, data portability, and interoperability to avoid single-vendor dependency, alongside cross-functional governance and performance/risk tracking.
Procurement can also serve as a diffusion tool to SMEs when it lowers barriers to selling to the government. Some examples of inclusion features include published evaluation rubrics, modular contracting, accessible testing and sandboxes, bid support for SMEs, and interoperability requirements that keep downstream switching costs manageable. Canada’s innovation procurement program, Innovative Solutions Canada, is designed to help small businesses develop and test solutions with pathways to purchase, while CanadaBuys positions itself as the platform for doing business with the Canadian public sector and highlights policies prioritizing Canadian suppliers and improving access for Canadian SMEs.
Other jurisdictions use procurement marketplaces and digital sourcing frameworks to make it easier for smaller vendors to compete for public contracts, such as the U.K.’s Digital Marketplace and G-Cloud guidance, Australia’s whole-of-government digital procurement arrangements, and Singapore’s GeBIZ portal.
Distributional fairness and early market signals
Distributional fairness can be assessed across workers (wages, job quality, mobility, and entry-level access), firms (across firm sizes, productivity dispersion), places, and market power. Early labor-market signals are mixed. Large-scale job posting analyses suggest that AI skills are associated with wage premiums and faster-changing skill requirements. This could point to a near-term advantage for workers with scarce legible AI competencies and for firms that can credential and hire them efficiently. Tasks may also shift to supervision and error correction, as low-quality generative AI output increases rework.
That said, while hiring of young workers in AI-exposed occupations such as computer programming and customer support has slowed, the downturn began in the spring of 2022, before the release of ChatGPT in late 2022. Other factors, such as economic turbulence from tariffs and rising Federal Reserve interest rates, may offer a more plausible explanation for these job losses.
At the same time, the most salient inclusion risk may be at the entry level: Using high-frequency payroll data, Stanford Digital Economy Lab researchers report relative employment declines for early-career workers in AI-exposed occupations, while more experienced workers in the same occupations remained more stable. Even if aggregate effects remain modest or contested, this is a fairness concern because entry-level roles are pipelines for skill formation and upward mobility. In contrast, the Yale Budget Lab’s tracker finds no clear relationship so far between exposure measures and aggregate employment or unemployment outcomes.
Distributional risk also differs by who is exposed to which technology. Generative AI tends to affect clerical and professional cognitive tasks, while robotics and earlier automation pressures fall more heavily on routine manual work. That implies different inclusion profiles: Some evidence suggests women, higher-skilled, and urban workers are more exposed to generative AI task change, while men, lower-skilled, and more rural/industrial workers face relatively higher exposure to robotics-driven displacement.
Fairness is also not only about wages; it includes who captures surplus and whether AI strengthens market concentration. Competition authorities have warned that foundation model ecosystems can concentrate control over compute, cloud, and distribution through strategic partnerships and control of key inputs. Frontier indicators point to strong concentration: The Stanford AI Index reports that private industry produced nearly 90% of notable models in 2024. A fair-growth approach therefore requires pairing productivity measurement with distributional measurement. The updated OECD AI principles broaden the emphasis on safety, privacy, labor rights, and information integrity, areas where harms can be borne disproportionately and thus can slow adoption.
If AI eventually substitutes for a much larger share of human labor than current evidence suggests, distributional outcomes may depend on ownership of productive assets. In that scenario, universal basic income transfers may help but may not address governance concerns tied to concentrated capital ownership. An alternative policy is broader-based asset ownership (e.g., universal basic capital variants such as sovereign wealth mechanisms, citizen share funds, or structured equity-sharing approaches) designed to provide meaningful dividend streams.
There is a growing—and politically unusual—convergence around capital-side distribution policies (i.e., widening ownership of productive assets), not just wage supports. This includes mainstream labor economists (e.g., David Autor in a New York Times roundtable with Anton Korinek and Natasha Sarin) alongside parts of the tech sector (including Sam Altman’s long-running public interest in basic income) and right-wing proposals for experimenting with child “asset” accounts.
Questions for discussion
- What should be the evidence standard for claims of “AI productivity”? Should we prioritize measurement investments over direct subsidies for adoption? How should distributional effects be measured alongside productivity (e.g., wages, job quality, returns to owners of capital/shareholders)?
- To what extent will highly regulated sectors diffuse AI slower but with more equitable gains due to governance requirements?
- How can governments best aid the development of complementary intangible assets that speed the diffusion of AI?
- Is the effective adoption of AI achieved more easily by new entrants or by established incumbents?
- When should governments buy systems, buy services, or build shared components?
- What are best-practice procurement and diffusion principles?
- What minimum worker protections should accompany public support for firm AI adoption?
- Should we treat diffusion as a market outcome or as a central element of AI growth strategies?
- Which sectors should be diffusion priorities given equity stakes (health, education, public sector)?
- Should sector regulators coordinate on common audit/documentation to reduce SME compliance costs?
- Who can adopt AI, and who has the complementary skills and assets to use it well?
- Who holds pricing and bargaining power in concentrated input markets like cloud computing, compute, and foundation models?
- Which policy levers will move incentives toward broad diffusion?
- Who can sell AI products to the government and under what conditions?
References
- Brynjolfsson, Erik, Bharat Chandar, and Ruyu Chen. 2025. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.” Stanford Digital Economy Lab, November 13. https://digitaleconomy.stanford.edu/publication/canaries-in-the-coal-mine-six-facts-about-the-recent-employment-effects-of-artificial-intelligence/.
- Calvino, Flavio, Hélder Costa, and Daniel Haerle. 2026. Digital Technology Diffusion in the Age of AI: Cross-Country Evidence from Microdata. OECD Science, Technology and Industry Working Papers. https://doi.org/10.1787/ebc2debe-en.
- Gimbel, Martha, Molly Kinder, Joshua Kendell, and Maddie Lee. 2025. “Evaluating the Impact of AI on the Labor Market: Current State of Affairs.” The Yale Budget Lab, October 1. https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs.
- Kergroach, Sandrine, and Julien Héritier. 2025. “Emerging Divides in the Transition to Artificial Intelligence.” OECD Regional Development Papers, ahead of print, June 25. https://doi.org/10.1787/7376c776-en.
- Muro, Mark, Shriya Methkupally, and Molly Kinder. 2025. “The Geography of Generative AI’s Workforce Impacts Will Likely Differ from Those of Previous Technologies.” Brookings Institution, February 19. https://www.brookings.edu/articles/the-geography-of-generative-ais-workforce-impacts-will-likely-differ-from-those-of-previous-technologies/.
- Baily, Martin Neil, David M. Byrne, Aidan T. Kane, and Paul E. Soto. 2025. “Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope?” Brookings Institution, September 9. https://www.brookings.edu/articles/generative-ai-at-the-crossroads-light-bulb-dynamo-or-microscope/.
- Nurski, Laura, and Davide Monaco. 2025. “Preparedness in the Labour Market: A Toolkit for Anticipating the Future of Work.” CEPS, September 24. https://www.ceps.eu/ceps-publications/preparedness-in-the-labour-market-a-toolkit-for-anticipating-the-future-of-work/.
- OECD. 2025. “AI Adoption by Small and Medium‑sized Enterprises: OECD Discussion Paper for the G7.” OECD, OECD Publishing, December 9. https://doi.org/10.1787/426399c1-en.
- Pal, Siddhi, Catherine Schneider, and Laura Nurski. 2025. “Solving Europe’s AI Talent Equation.” CEPS, July 9. https://www.ceps.eu/ceps-publications/solving-europes-ai-talent-equation/.
- Renda, Andrea. 2026. “What We Should Expect from the EU’s New Apply AI Strategy.” Substack newsletter. Thinking Ahead for Europe, February 23. https://cepseu.substack.com/p/what-we-should-expect-from-the-eus.
- Sajadieh, Sha, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Lapo Santarlasci, Juan Pava, Nestor Maslej, Russ Altman, Erik Brynjolfsson, Carla Brodley, Jack Clark, Virginia Dignum, Vipin Kumar, James Landay, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Elham Tabassi, Russell Wald, Toby Walsh, and Dan Weld. 2026. “The AI Index 2026 Annual Report.” Institute for Human-Centered AI, Stanford University, April 13. https://hai.stanford.edu/ai-index/2026-ai-index-report.
Dialogue summary
This FCAI dialogue focused on two linked questions: How is AI diffusion impacting firms, sectors, and countries? And how should policymakers measure the resulting economic and social effects? The discussion repeatedly returned to a distinction between technical capability and real-world adoption. Speakers agreed that frontier AI progress is moving quickly, but organizational changes, productivity gains, and policy adaptation are proceeding more slowly.
AI diffusion
One participant argued that the main economic value of AI will come from users rather than producers and cautioned against overconfidence in predicting which sectors or occupations will be most affected. That intervention emphasized that governments should avoid building policy around current anxieties or favored industries because the most important use cases often emerge unpredictably.
Another participant described China’s policy approach as broad support for AI adoption through strategic planning, pilot zones, public-sector deployment, infrastructure development, and support for firms and open-source ecosystems. At the same time, that intervention stressed a sizable gap between policy ambition and commercial reality. Consumer use of LLMs may be widespread, but deeper industrial applications remain uneven and regionally concentrated.
A further contribution argued that exposure to AI does not automatically translate into real uptake. That participant highlighted wide variation across countries, sectors, firms, and demographic groups. Diffusion appears strongest in digitally mature sectors such as finance and professional services, and weaker in agriculture, hospitality, and many lower-income settings. The discussion also highlighted inequalities linked to gender, geography, firm size, and language, including the possibility that English-centric AI systems may limit adoption in many regions.
Another intervention outlined the EU approach to AI policymaking through the AI Act, the Apply AI Strategy, and the AI Continent Action Plan. The goals of these documents are to facilitate links between supply and demand sides in strategic sectors and support the creation of an ecosystem of trust and excellence. That contribution also stressed the need for better observatories, more granular sector-level monitoring, and shared methodologies for measuring uptake and impact.
During the discussion, several participants questioned whether governments should target “strategic sectors” at all, given how difficult it is to forecast where AI will matter most. Others argued that sector targeting can still be justified as a way to induce spillovers, especially in areas where governments see competitive advantage or public value. Agriculture, manufacturing, and finance were recurring examples, but there was no consensus that governments can reliably identify the highest-value use cases in advance.
On labor markets, participants raised the possibility that AI may initially affect white-collar and entry-level knowledge work more than expected, especially coding and administrative tasks, but repeatedly noted that the evidence remains incomplete. Some comments suggested that women may be disproportionately exposed because of occupational segregation rather than anything intrinsic to AI itself. Others warned against allowing early narratives to harden into policy assumptions before stronger evidence is available.
Outcomes measurement
One participant argued that weak productivity growth remains a central macroeconomic problem across advanced economies, and that AI could matter if it meaningfully raises productivity. That intervention suggested monitoring aggregate indicators such as productivity growth, labor-market disruption, business formation, and investment, while acknowledging that it is still too early to attribute broad economic changes cleanly to AI.
Another participant presented survey-based evidence suggesting that AI adoption is already substantial in some economies, though cross-country comparisons depend heavily on definitions and question wording. That contribution argued that part of the gap between economies can be explained by workforce composition and management practices. Better-managed firms appear more likely to encourage and train workers to use AI. It also argued that higher AI adoption is correlated with stronger recent productivity growth, while acknowledging the limits of current causal evidence.
A further intervention added a global measurement perspective, estimating that large numbers of people have already used generative AI but stressing that many more remain excluded by lack of electricity, connectivity, data-center access, digital skills, or language support. Adoption was described as highly uneven, with some countries far ahead and a widening gap between higher-income and lower-income regions. This reinforced the broader point that AI diffusion depends on complementary infrastructure and capabilities, not just model availability.
Participants agreed that current metrics are inadequate. Counting users or firms that have “adopted AI” is too shallow if it does not distinguish between light use of chatbots and deeper integration into production processes. Several interventions called for better indicators that capture diffusion depth, organizational change, productivity effects, and sector-specific outcomes rather than generic usage alone. The need for common definitions and interoperable statistical methods across countries came up repeatedly.
The meeting closed with a clear takeaway: Meaningful, broad-based AI diffusion should be treated as a policy challenge in its own right, not as an automatic downstream effect of technical innovation. Participants broadly agreed on the need for better data, stronger complementary investments, and more realistic theories of change connecting AI adoption to productivity, welfare, and inclusion.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).