A new economic impact report from Anthropic has put hard numbers to something many workers suspected but could not easily quantify: artificial intelligence is not replacing jobs at scale yet, but it is widening the gap between workers who use it well and those who are still learning. The report, which draws on Claude usage data from February 2026, finds that experienced AI users outperform newcomers on measurable task outcomes by approximately 10 percentage points. That gap, modest-sounding on paper, is already reshaping how employers evaluate talent and how workers must think about their own skill development.
The findings arrive at a moment when the national conversation about AI in the workplace has moved past breathless predictions and into something more complicated. Anthropic's data does not describe a future of mass unemployment. It describes a present of growing inequality, one where the same technology is simultaneously a productivity multiplier for skilled users and a source of frustration and irrelevance for those who have not yet learned to use it effectively. As AI tools demonstrate growing capabilities across knowledge-intensive fields, the workforce implications are becoming harder to ignore.
What the Anthropic Data Actually Shows
The report is careful to distinguish between adoption and proficiency. Adoption of AI tools in the workplace has been rapid: AI assistants are now present in the workflow of millions of American workers, from software developers using code completion tools to marketers using generative writing assistants. But adoption is not the same as skill. The February 2026 Claude usage data shows a consistent pattern: workers who have used AI tools extensively over time complete complex tasks at measurably higher rates, produce outputs that require less correction, and are able to use AI for more sophisticated applications than workers who are newer to the tools.
The 10 percentage point success rate gap between experienced and inexperienced users may seem small in isolation. But across a workforce of millions, it represents a structural difference in productivity, and potentially in compensation. Employers who are paying attention to output quality, not just effort, are beginning to see that difference in performance reviews, project assignments, and hiring decisions. The concern, which Anthropic raises explicitly in the report, is that this gap will compound over time rather than close naturally.
"The data we are seeing is not about AI replacing humans. It is about AI amplifying the capabilities of the people who have learned to use it well, and that amplification effect is significant enough that it is going to show up in labor markets within the next two to three years."
Dario Amodei, Chief Executive, Anthropic
Amodei's framing is worth parsing carefully. The phrase "amplification effect" is doing a lot of work. What it describes, in practical terms, is a scenario where two workers with comparable education and experience who are doing similar jobs diverge in output quality and speed based primarily on their facility with AI tools. That is not a technology problem. It is a training problem, and a fast-moving one.
The DataCamp Evidence: Training Is Not Keeping Up
Anthropic's findings are reinforced by independent research from DataCamp, one of the largest online platforms for data science and AI skills training. DataCamp's 2026 AI skills report, released earlier this year, found that the AI skills gap has persisted and in some cases widened even as organizations have invested in training programs. The headline number is striking: approximately 80 percent of the global workforce will need some form of AI upskilling by 2027 if they are to remain productive in their current roles.
That figure comes with important context. It does not mean 80 percent of workers need to become AI engineers or data scientists. The upskilling required ranges from basic prompt literacy (knowing how to communicate effectively with AI tools to get useful results) to more advanced skills like interpreting AI outputs critically, integrating AI into complex workflows, and understanding the limitations and failure modes of different AI systems. The spectrum is wide, but the common thread is that passive exposure to AI tools is not sufficient. Workers need structured practice.
DataCamp also tracks the job market directly, and its analysis finds that 1 in 10 job postings now explicitly require AI skills. That number has increased substantially over the past 18 months. More significantly, the postings requiring AI skills tend to cluster in higher-compensation roles. The implication is not that workers without AI skills are immediately unemployable. It is that workers with AI skills are gaining access to a different, better-compensated tier of the labor market, one that workers without those skills may find increasingly difficult to enter.
Who Is Being Left Behind
The skills gap is not evenly distributed. Several overlapping factors determine who is likely to fall on the wrong side of it, and they map onto existing inequalities in the American workforce with uncomfortable precision.
Workers in lower-wage roles have, on average, less access to employer-provided AI training. Organizations that invest heavily in upskilling tend to focus those investments on their highest-value employees, the people whose productivity gains will produce the largest returns. Workers in administrative, service, and manufacturing-adjacent roles often receive less training investment even though AI tools are increasingly present in those environments too.
Workers who are older, on average, report lower confidence with AI tools, though the relationship between age and AI proficiency is more complicated than simple age bias suggests. DataCamp's research indicates that structured training programs close the confidence and competence gap across age groups effectively. The barrier for many older workers is not inability. It is access to high-quality instruction and time to practice.
Workers without four-year degrees face a structural disadvantage that is particularly acute. College graduates are more likely to be employed in organizations that offer formal AI training, more likely to have existing relationships with learning platforms, and more likely to have the foundational digital literacy that makes AI tool adoption faster. Workers who entered the labor market through vocational paths or who have built careers in fields that were historically non-digital face a steeper on-ramp.
Geography adds another layer. Workers in major metropolitan areas have more access to in-person training, networking, and organizations that have already integrated AI deeply into their operations. Rural workers and workers in smaller cities often have fewer options for structured upskilling, even as remote work has theoretically equalized access to some digital tools.
The "Power User" Phenomenon and What It Signals
One of the more nuanced findings in the Anthropic report is the description of what the company calls "power users," workers who have moved beyond basic AI adoption to genuinely sophisticated integration of AI tools into complex workflows. These are not simply early adopters. They are people who have invested significant time in learning how AI systems work, what they are good at, where they fail, and how to structure problems so that AI assistance produces reliable results.
Power users are pulling ahead in measurable ways. They complete projects faster, produce outputs of higher quality, and are able to take on work that would previously have required additional team members or more senior expertise. In organizational terms, power users are becoming a form of force multiplier: one skilled AI user can effectively expand the productive capacity of a small team in ways that were not possible two years ago.
The concern raised in the Anthropic report is about what happens to the workers who are not power users. Anthropic is careful to avoid the language of displacement. But the underlying arithmetic is clear: if a power user can produce outputs that previously required two people, organizations are going to make staffing decisions accordingly. That is not a prediction about automation replacing entire job categories. It is a more prosaic and potentially more immediate process: fewer people hired for certain roles, slower promotion tracks for workers whose productivity growth is not keeping pace with the most advanced users, and over time, a labor market that structurally favors people who have invested in AI proficiency.
The technology industry's own AI investments are accelerating this dynamic. As covered in reporting on big tech AI spending under scrutiny in 2026, major technology companies are deploying AI at a pace that is outrunning workforce preparation across the broader economy.
Employer Responses: The Gap Between Intention and Action
The response from employers has been a mix of genuine investment and performative commitment. Some organizations, particularly in technology, finance, and consulting, have built serious internal training programs. They have identified which roles most need AI upskilling, designed structured learning pathways, and created dedicated time for employees to practice with AI tools in supervised settings. These programs are expensive, and the organizations running them tend to be the ones that can afford to do so.
A larger number of organizations have responded by purchasing enterprise licenses for AI tools and announcing that employees are encouraged to use them. This approach confuses access with education. Making AI tools available is not the same as teaching people to use them effectively. Workers left to figure out AI tools on their own generally develop slower, shallower skills than workers who receive structured instruction. The DataCamp research is unambiguous on this point: self-directed AI learning produces much slower proficiency gains than structured training programs.
Some organizations have addressed the gap through hiring rather than training. They have recruited workers who already have strong AI skills, often at premium salaries, and positioned them as internal resources or team leads. This strategy has produced results for the organizations doing it, but it does not address the broader workforce development problem. It simply redistributes the existing pool of skilled workers rather than expanding it.
The federal government has been largely absent from this picture. Unlike the significant investments in digital literacy and broadband infrastructure that characterized previous technology transitions, there has been no comparable national initiative to address the AI skills gap. Some states have launched programs, and community colleges have begun integrating AI literacy into curricula, but the scale is modest relative to the stated need.
The Next Two Years: What the Projections Say
DataCamp's projection that 80 percent of the workforce needs AI upskilling by 2027 is, in effect, a countdown. The window between now and that threshold is where the gap either narrows or widens. The Anthropic report offers a more granular view of how that timeline might unfold: AI capabilities are improving faster than workforce training is scaling, which means the productive gap between skilled and unskilled AI users is likely to widen before any large-scale training initiative can close it.
The 1 in 10 job postings requiring AI skills figure from DataCamp is also a leading indicator rather than a final state. Based on current trends, that proportion is likely to be 1 in 5 or higher by 2028. For workers in technology, finance, healthcare administration, marketing, and a range of other white-collar fields, AI skill proficiency is moving from a competitive advantage to a baseline requirement. The workers who treat the current moment as an opportunity to build those skills will be better positioned. The workers who wait for employer-mandated training or assume the gap will close on its own face real risks.
The Anthropic report stops short of prescriptive recommendations, but the implication of its findings is that the workforce development response needs to be faster, broader, and better resourced than anything currently underway. The skills gap it describes is not a prediction. It is a measurement of something already happening, and the question is whether institutions, employers, and individuals will respond at the speed the data suggests is necessary. For those tracking the policy dimension of this issue, the Colorado AI Act framework debate illustrates how states are beginning to grapple with AI governance questions that touch workforce concerns directly.
What Workers Can Do Now
The practical implications of the Anthropic and DataCamp findings for individual workers are specific. The most important step is to move from passive exposure to deliberate practice. Using an AI tool occasionally for simple tasks builds familiarity but not proficiency. Workers who make measurable gains in AI skill tend to be those who consistently push the tools to do more complex work, experiment with different approaches to the same problem, and invest time in understanding why AI outputs succeed or fail in particular contexts.
Structured training resources have expanded significantly. DataCamp, Coursera, and a number of industry-specific platforms now offer learning pathways that go well beyond introductory AI literacy. Certifications from these platforms are increasingly recognized by employers as credible signals of practical AI competence, particularly for workers in roles that have not historically required formal credentialing. Online learning platforms offer some of the most accessible routes to these skills, a topic examined in depth in ANewsTime's coverage of Newsweek's top online colleges and learning platforms for 2026.
The Anthropic report's core message, read between the lines, is that the AI transition is already underway. The workers and organizations that engage with it seriously now will be positioned differently from those that do not. The 10 percentage point gap the data shows today is not a ceiling. It is a baseline measurement, taken at the beginning of a transition that is likely to accelerate.