The Central Intelligence Agency intends to put an AI "coworker" inside every analytic platform its officers use within a few years, Deputy Director Michael Ellis told a Washington audience on Thursday, sketching a near-term workflow in which classified generative AI sits alongside human analysts and a longer-term model in which officers manage entire teams of autonomous AI agents. Ellis was speaking at an event hosted by the Special Competitive Studies Project, a nonprofit focused on technology and national security.
The remarks, first reported by Nextgov/FCW and confirmed by Politico, are the most concrete public description to date of how America's premier human-intelligence agency plans to fold modern AI into the production of finished intelligence. They also revealed that the CIA has already crossed a small but symbolic threshold: somewhere in the agency's 300-project AI portfolio from last year, a model produced an intelligence report on its own for the first time in the agency's history.
What an AI coworker actually does inside Langley
Ellis was careful to describe the new tools as assistants rather than authors. "It won't do the thinking for our analysts, but it will help draft key judgments, edit for clarity and compare drafts against tradecraft standards," he said. The pitch is simple. Take the most repetitive parts of an analyst's job, the kind of work that consumes hours but does not require classified judgment, and hand them to a model trained to do them faster.
Drafting is the obvious example. CIA analysts spend significant portions of their day pulling raw collection from human sources and signals into early drafts of "key judgments," the structured statements that anchor a finished intelligence product. A model that can produce a serviceable first draft from a collection of source material lets the human analyst spend more time on interpretation and less time on assembly. Ellis also flagged triage assistance and trend spotting, both of which lend themselves to pattern-matching at a scale no individual analyst can match.
The closest civilian analogy is the kind of AI coding assistant that drafts a function and waits for the developer to accept, edit, or rewrite it. The developer still owns the final output. The assistant just removes the friction of staring at a blank page. Ellis is describing the same arrangement, applied to the most secretive writing job in Washington. Anyone who has tracked how enterprise teams use agentic AI tools in commercial settings will recognize the workflow.
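To make that arrangement concrete, here is a minimal sketch of the accept, edit, or rewrite loop Ellis is gesturing at, with a stubbed-out model call standing in for whatever classified system the agency actually fields. Every name in it (model_draft, human_review, Decision) is a hypothetical illustration for this article, not anything Ellis described.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    ACCEPT = auto()
    EDIT = auto()
    REWRITE = auto()


@dataclass
class Draft:
    text: str
    author: str  # "model", "human-approved", "human-edited", or "human-rewritten"


def model_draft(source_material: list[str]) -> Draft:
    """Placeholder for a model call that assembles a first draft from raw sources."""
    summary = " ".join(s.strip() for s in source_material)
    return Draft(text=f"DRAFT (for human review): {summary}", author="model")


def human_review(draft: Draft, decision: Decision, revision: str | None = None) -> Draft:
    """The human owns the final output: accept the draft, edit it, or rewrite it."""
    if decision is Decision.ACCEPT:
        return Draft(text=draft.text, author="human-approved")
    if decision is Decision.EDIT:
        return Draft(text=revision or draft.text, author="human-edited")
    return Draft(text=revision or "", author="human-rewritten")


if __name__ == "__main__":
    sources = ["Report A notes increased activity.", "Report B corroborates the timeline."]
    first_pass = model_draft(sources)
    final = human_review(first_pass, Decision.EDIT,
                         revision=first_pass.text.replace("DRAFT", "JUDGMENT"))
    print(final.author, "->", final.text)
```

The point of the sketch is the shape of the loop, not the stubs: the model only ever produces a candidate, and nothing becomes a finished product without a human decision attached to it.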
From assistants this year to autonomous agents inside a decade
The longer arc Ellis described is more ambitious. Within roughly a decade, he said, the CIA expects to treat AI tools as autonomous mission partners and to organize officers around managing teams of AI agents in a hybrid workflow. The framing matters because it implies a structural shift, not just a productivity boost. Instead of a single human analyst supported by one AI tool, a single human officer coordinates several specialized agents, each working on a portion of a problem.
That shift is exactly the direction enterprise AI is heading in commercial settings, where vendors like Anthropic and OpenAI are pushing agentic frameworks that let models call other models, tools, and external systems. Ellis appears to be saying the intelligence community wants to be in front of that transition rather than chasing it.
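The hybrid workflow Ellis described can also be sketched in miniature: one human-issued tasking fanned out to several specialized agents, with every result routed back to the officer for review. The agent functions below are placeholders invented for illustration; in a real agentic framework each would wrap a model or tool call rather than return a canned string.

```python
import concurrent.futures
from typing import Callable


def translation_agent(tasking: str) -> str:
    """Placeholder specialist: in practice this would call a translation model."""
    return f"[translation] processed: {tasking}"


def triage_agent(tasking: str) -> str:
    """Placeholder specialist: in practice this would rank incoming collection by priority."""
    return f"[triage] processed: {tasking}"


def trend_agent(tasking: str) -> str:
    """Placeholder specialist: in practice this would look for patterns across reporting."""
    return f"[trends] processed: {tasking}"


def run_hybrid_workflow(tasking: str, agents: list[Callable[[str], str]]) -> list[str]:
    """Fan one tasking out to several agents; collect everything for human review."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(tasking), agents))


if __name__ == "__main__":
    results = run_hybrid_workflow(
        "Assess adversary AI procurement over the last quarter",
        [translation_agent, triage_agent, trend_agent],
    )
    # The officer reviews every agent's output before anything becomes finished intelligence.
    for line in results:
        print(line)
```

The structural shift Ellis is pointing to lives in the last loop: the officer's job moves from writing each piece to deciding which agent outputs survive.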
The agency has been quietly building toward it. CIA Director Bill Burns and MI6 Chief Richard Moore jointly disclosed in 2024 that both services were already using generative AI for content triage and analyst support. The CIA's own internal chatbot dates to 2023. What is new in Ellis's speech is the public timeline and the explicit reference to a workforce model in which agents are routine.
The 300-project portfolio and the first AI-written report
The numbers Ellis disclosed give a sense of how much experimentation has been happening out of public view. The agency ran more than 300 AI projects last year, ranging from large-dataset processing to language translation, and at least one of those projects produced an entire intelligence report without a human analyst driving the writing.
Ellis did not say which topic the AI-written report covered, who reviewed it, or how it compared to a human-written equivalent. He described it less as a product than as a milestone: the first time in the agency's history that a machine assembled a finished intelligence assessment on its own. Whether that report ever made it into a policymaker's daily brief is a separate question, and one Ellis left unanswered.
The CIA also doubled its technology-related foreign intelligence reporting in the same period, focusing on how foreign adversaries deploy AI, semiconductors, cloud computing, and cybersecurity capabilities. Ellis said the goal was to track exactly how China is closing the technology gap.
"Five to ten years ago, China was nowhere near America in terms of technological innovation. That's just not true today," Ellis said.
The Anthropic shadow over the speech
Ellis did not name Anthropic, but the most pointed line in his speech was a clear shot at the company. The CIA, he said, "cannot allow the whims of a single company" to constrain its use of AI, and the agency is actively diversifying across vendors to preserve operational freedom. Politico's reporting framed the comment as a direct response to Anthropic's ongoing standoff with the Pentagon.
That standoff is now months old. Anthropic earlier this year declined to relax its usage policies to allow its tools to be used for domestic surveillance or fully autonomous weapons applications. Defense Secretary Pete Hegseth responded by designating the company a "supply chain risk," and the White House then ordered all federal agencies to phase out Anthropic products. The company has legally challenged the move. Decrypt reported that Anthropic has since filed paperwork with the Federal Election Commission to launch its own political action committee.
For the CIA, the lesson is operational. An intelligence agency cannot afford to build a workflow around a single vendor whose terms of service can change overnight or whose products can be pulled by executive order. The diversification language in Ellis's speech is the agency's hedge. It also matches the broader federal procurement push captured in the recent GSA AI safeguards clause, which forced contracting officers to plan for vendor lock-in risk.
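One way to picture that hedge is a thin abstraction layer between the workflow and any particular vendor, so a product that gets pulled by executive order or a changed terms-of-service does not take the workflow down with it. The sketch below is an assumption about how such a layer might look in principle; the class and vendor names are invented for illustration and do not describe any actual CIA system or contract.

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Thin interface so workflows are written against a contract, not a vendor."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class VendorA(ModelProvider):
    def generate(self, prompt: str) -> str:
        # In practice this would call one frontier provider behind the classified boundary.
        return f"vendor-a response to: {prompt}"


class VendorB(ModelProvider):
    def generate(self, prompt: str) -> str:
        # A second provider kept warm so no single company's decisions gate the mission.
        return f"vendor-b response to: {prompt}"


def draft_with_fallback(prompt: str, providers: list[ModelProvider]) -> str:
    """Try providers in order; if one is unavailable or pulled, the workflow keeps running."""
    for provider in providers:
        try:
            return provider.generate(prompt)
        except Exception:
            continue
    raise RuntimeError("no available model provider")


if __name__ == "__main__":
    print(draft_with_fallback("Summarize this reporting.", [VendorA(), VendorB()]))
```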
Cyber intelligence is the test case
The CIA recently elevated its Center for Cyber Intelligence into a full mission center, putting it on equal organizational footing with the agency's regional and counterterrorism centers. Ellis said the move is "paying dividends already" in giving the agency new tools and access to priority targets. He also framed cyber as the area where AI competition will matter most.
"The battle of cybersecurity will be a battle of artificial intelligence," Ellis said, arguing that whichever country fields the strongest AI models will hold an advantage in offensive and defensive cyber operations. The new mission center is the agency's bet that consolidating cyber under a single command structure will let it deploy AI models faster than the alternative.
That framing tracks with what the broader intelligence community has been saying privately for months. Anthropic's recent Project Glasswing announcement, a consortium meant to harden critical software against AI-driven attacks, has already started conversations inside the community about how powerful frontier models could change cyber operations on both sides of the line.
What Ellis did not say
Several gaps in the speech are worth flagging. Ellis did not describe what oversight structure will sit on top of the AI coworker tools, how analysts will be trained to spot model errors in their own drafts, or how the agency plans to handle the inevitable cases where a model misreads classified collection. He also did not address the political dimension explicitly, even though President Donald Trump and Director John Ratcliffe have publicly vowed to address what they describe as a left-wing tilt within the intelligence community.
Politico noted that Ellis's speech "hinted at how fights over the political valence of intelligence analysis may look different in an AI-centric future." That is the right way to read it. If a model drafts the key judgments, the question of whose values are baked into the model becomes a question with national-security stakes, not an academic debate about AI alignment.
What to watch next
The near-term things to track are concrete. The agency announced a new acquisition framework earlier this year designed to speed technology adoption, and its rollout will determine whether the AI coworker timeline actually holds. The vendor diversification language in Ellis's speech suggests the CIA is shopping multiple frontier model providers, which means whichever company wins those classified contracts will be working under terms very different from the public ones. And the legal fight over the Anthropic phase-out is still moving, with implications for every federal agency that built workflows around the company's models.
For everyone outside Langley, the most useful frame is this. The CIA is now publicly committing to a future where finished intelligence is co-authored. The remaining question is how long the human stays in the loop, and what the loop looks like when an officer is managing five agents at once instead of writing alone.