TL;DR
- Theory vs. Reality: Anthropic found AI could theoretically speed up 94 percent of computer tasks, but observed Claude usage covers only 33 percent.
- Who Is Affected: AI exposure falls hardest on higher-paid, college-educated workers like programmers, inverting expectations about low-wage job displacement.
- No Unemployment Spike: No significant unemployment increase has been detected among AI-exposed occupations since ChatGPT launched in late 2022.
- Early Warning: Hiring for workers aged 22 to 25 in AI-exposed fields dropped 14 percent since 2024, a possible early indicator of structural change.
Anthropic this week released a study finding that AI bears down hardest on workers who are better educated, better paid, more often female, and more often white, not the low-wage service workers many expected. And even for those workers, the disruption hasn’t fully arrived.
The new labor market study finds that workers highly exposed to AI hold jobs paying 47 percent more on average than their unexposed counterparts, with a share of college graduates nearly four times higher in the exposed group. That demographic profile inverts the conventional narrative of AI threatening manual or service labor. Yet the study’s central finding offers a counterweight to alarm: a wide gap exists between what AI theoretically could automate and what it is doing in practice.
Measuring What AI Actually Does
To quantify that gap, Anthropic developed a metric it calls “observed exposure.” The approach combines three data sources: the US occupational database O*NET, theoretical exposure scores from the “GPTs are GPTs” paper by Eloundou et al. (2023), and real-world usage data from the Anthropic Economic Index, a dataset drawn from actual Claude conversations. The combination lets Anthropic compare what AI theoretically could do against what users are actually asking it to do.
The scoring scale assigns tasks a value of 1 if a language model alone can complete them at double the speed of unassisted work; 0.5 if additional tools are required; and 0 if there is no meaningful AI speed advantage. The methodology weights fully automated API use more heavily than human-assisted use, and work-related contexts more heavily than personal ones, a design intended to measure displacement risk rather than casual adoption.
In practice, those choices produce a deliberately conservative measure. By requiring a doubling of speed and privileging work contexts over personal use, the metric functions as a floor estimate: actual AI influence on work is likely higher than it reports, which makes the theory-practice gap all the more striking at its measured width.
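The scoring rules described above can be sketched as a toy function. The 1 / 0.5 / 0 scale comes from the study; the specific weight multipliers for automated versus assisted use and work versus personal contexts are not published, so the values below are illustrative placeholders only.

```python
# Toy sketch of the "observed exposure" scoring rules.
# The 1 / 0.5 / 0 task scale is from the study; the weight
# multipliers below are assumed placeholders, not published values.

def task_score(llm_alone_doubles_speed: bool, needs_extra_tools: bool) -> float:
    """1 if an LLM alone doubles speed, 0.5 if extra tools are
    required, 0 if there is no meaningful speed advantage."""
    if llm_alone_doubles_speed:
        return 1.0
    if needs_extra_tools:
        return 0.5
    return 0.0

def usage_weight(fully_automated_api: bool, work_context: bool) -> float:
    """Weight an observed conversation: automated API use and work
    contexts count more heavily (multipliers are assumptions)."""
    w = 1.0
    if fully_automated_api:
        w *= 2.0   # assumed multiplier
    if work_context:
        w *= 1.5   # assumed multiplier
    return w

# Aggregate three hypothetical observed conversations into a single
# weighted exposure figure in [0, 1].
scores = [task_score(True, False), task_score(False, True), task_score(False, False)]
weights = [usage_weight(True, True), usage_weight(False, True), usage_weight(False, False)]
observed_exposure = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```

The point of the weighting is visible in the sketch: a fully automated, work-context task pulls the average up far more than a casual personal query, which is what makes the metric a displacement-risk gauge rather than an adoption counter.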
The Theoretical Baseline
That measurement requires a theoretical ceiling to push against. Anthropic draws it from the paper by Eloundou et al. That foundational study scored occupational tasks for potential exposure to GPT-class language models, assessing which could be completed or meaningfully accelerated by LLMs, establishing a benchmark for theoretical reach across hundreds of US occupations.
Anthropic’s new metric sits beside it as a reality check: how much of that theoretical potential has translated into observed Claude use in work contexts. The gap between the two is the study’s central contribution.
Theory vs. Reality: The Numbers
With that baseline established, the headline divergence is stark. Large language models could theoretically speed up 94 percent of all computer and math tasks; observed Claude usage covers only 33 percent. Some 68 percent of that observed usage falls on fully exposed tasks, where a language model alone provides a doubling of speed.
Yet 97 percent of observed Claude tasks fall within the theoretically feasible set, so the gap runs in one direction: nearly everything Claude is asked to do could in principle be automated, while a large share of automatable tasks are simply not being handled through Claude.
For example, Anthropic reports that authorizing medication refills and transmitting prescription information to pharmacies, a task scored as fully automatable under the theoretical framework, has not been observed among actual Claude usage. The capability exists in theory, but the use has not materialized.
Who AI Is Actually Affecting
Those aggregate figures conceal sharp occupational differences.
The study places computer programmers at the top with 74.5 percent observed task coverage, followed by customer service representatives at 70.1 percent and data entry specialists at 67.1 percent. Beyond those three, the study identifies roles including legal assistants, technical writers, and financial analysts as carrying significant observed exposure. The pattern runs across knowledge-work occupations involving text processing, analysis, and code generation, precisely the tasks where AI translates written inputs to written outputs without requiring physical action.
In contrast, roughly 30 percent of workers show no observed AI coverage: cooks, motorcycle mechanics, lifeguards, and bartenders among them. A cook’s response to a burning pan, a lifeguard’s split-second water assessment, a mechanic’s tactile diagnosis of engine sound: each requires embodied judgment and physical presence that language models cannot replicate.
In short, the resistance to automation in these roles is not a gap waiting to be closed. It reflects a fundamental mismatch between what LLMs do and what these jobs require.
Labor Market Signals: No Spike Yet
Despite high theoretical exposure in some professions, employment data reveals no alarm signal. The study finds no systematic increase in unemployment among AI-exposed workers since ChatGPT’s release in late 2022. The researchers note their methodology is sensitive enough to detect a doubling of the unemployment rate in exposed occupations from three to six percent, and that has not occurred.
Indeed, according to the Anthropic labor market study, “programmers top the list of most exposed professions at 74.5 percent coverage, followed by customer service representatives and data entry specialists,” yet none of those groups shows statistically significant unemployment increases in the data.
A weaker but real signal comes from BLS employment projections: for every ten percentage point increase in observed AI exposure, the US Bureau of Labor Statistics’ 2024-2034 employment growth forecast drops by 0.6 percentage points. That correlation is modest, suggesting AI is already shaping long-term hiring expectations even if near-term displacement remains invisible in unemployment statistics.
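As a back-of-the-envelope illustration of that correlation, the reported figure implies roughly a 0.6 percentage-point drop in projected growth per 10-point rise in observed exposure. The linear form and the example numbers below are assumptions for illustration, not figures from the study.

```python
# Back-of-the-envelope reading of the reported correlation:
# -0.6 percentage points of projected 2024-2034 employment growth
# per +10 points of observed AI exposure. Linearity is an assumption.

SLOPE = -0.6 / 10  # growth points per exposure point

def projected_growth_shift(exposure_increase_pp: float) -> float:
    """Implied change in the BLS growth forecast, in percentage points."""
    return SLOPE * exposure_increase_pp

# e.g. an occupation whose observed exposure is 30 points higher
# would carry a forecast roughly 1.8 points lower, all else equal.
```

The effect is small per occupation but, as the article notes, it shows AI exposure already feeding into long-horizon hiring expectations.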
The Young Worker Exception
However, one segment does show an early warning. Among workers aged 22 to 25, hiring in AI-exposed occupations has declined since 2024. The job-finding rate dropped by roughly half a percentage point, amounting to a 14 percent decline over the post-ChatGPT period, though the authors flag it as statistically borderline.
In contrast, no comparable hiring decline was observed for workers over age 25.
The age specificity is telling. Experienced workers in AI-exposed fields appear insulated for now, while entry-level positions may be absorbing the first wave of AI-driven efficiency gains: companies hiring fewer junior staff while expecting the same output from existing headcount. The study frames the young-worker signal as a possible leading indicator of structural change, not confirmed displacement.
How Anthropic’s Findings Fit the Wider Research
Those early entry-level signals gain credibility when set against independent research reaching similar conclusions. Anthropic is both the funder of this research and the source of its core dataset, a conflict the study acknowledges but does not resolve; its findings nonetheless align with independent work from several research teams.
A Danish study of 25,000 workers, published in 2025, found no measurable changes in wages or working hours despite documented high AI usage. A Microsoft Copilot study of 200,000 conversations likewise identified knowledge workers as AI-exposed while cautioning against equating AI capability with automation, the same central argument Anthropic advances.
A Stanford study by MIT economist David Autor examined whether automation exposure translates directly into job loss, concluding that outcomes depend on whether routine tasks are removed alongside expert ones or in their place. In his review of occupational data from 1977 to 2018, Autor traced the removal of routine tasks and the addition of abstract ones, showing that the difference determines whether a role gains or loses wages and workers.
“These are exposed occupations, but the exposure has completely different meanings for how that work is going to change,” Autor said. An exposure-only framework treats all workers in a high-exposure occupation as equally at risk, when a senior programmer may benefit from AI tools while a junior one faces replacement.
Dallas Fed research adds a further distinction. Early wage data suggests AI is simultaneously aiding some workers and displacing others, with the divide falling between codified knowledge (structured, textbook-based information that AI handles well) and tacit knowledge rooted in experience, which resists automation.
Across these four independent research programs, a consistent baseline emerges: AI adoption is shaping long-run labor market projections and compressing entry-level hiring, but not yet generating the unemployment spikes that would confirm structural displacement. For policymakers and employers, the response timeline remains flexible, but the signals are real.
Prior Coverage and Context
As WinBuzzer reported in February 2025, Anthropic’s Economic Index findings had already shown a more complex picture of AI’s workplace role than popular narratives suggested. That earlier report included a downward revision of AI productivity forecasts by roughly half, derived from analyzing Claude’s error rates on complex tasks. The new study extends that methodology directly into labor market analysis.
Furthermore, the “observed exposure” metric is designed as a monitoring instrument, not a forecast. Its value grows as AI adoption either widens or stalls: if the gap between theoretical ceiling and observed reach narrows, the signal will be unmistakable. If it persists, it will point to barriers (regulatory, organizational, or human) that capability analysis alone cannot identify.
The authors are explicit: this is a measuring tool, built to track whether the distance between AI’s potential and its footprint in the labor market narrows or persists.

