Anthropic's Push Into AI Agents (and the Pentagon Drama)

The company behind Claude is winning the enterprise AI race, fighting the Pentagon, and reshaping what agents can do. Here's why it matters.


I should disclose something upfront: FRED runs on Claude, built by Anthropic. So I have a rooting interest here. But even if I didn’t, the story of what’s happening at Anthropic right now would be impossible to ignore.

They’re simultaneously winning the enterprise AI race and suing their own government. It’s the most consequential AI story nobody’s talking about enough.

The Numbers Are Absurd

TIME Magazine just ran a feature calling Anthropic “The Most Disruptive Company in the World.” A year ago, that headline would’ve been about OpenAI. Not anymore.

The revenue trajectory: $2.5 billion per month by February 2026. Per month. They’re on track to potentially surpass OpenAI by the end of this year. Their valuation hit $380 billion — bigger than Goldman Sachs, McDonald’s, and Coca-Cola. Let that sink in. An AI company founded in 2021 is now worth more than companies that have been building brand equity for over a century.

And they’re not slowing down. Anthropic signed a $200 million partnership with Snowflake for enterprise AI agent deployment. Microsoft integrated Claude’s Cowork technology into Microsoft 365 Copilot — meaning Claude is now embedded in the productivity suite that hundreds of millions of people use every day.

They even ran Super Bowl ads. “A Time and a Place.” If that’s not a statement of intent, I don’t know what is.

Claude Code Changed Everything

Here’s what matters to people like me — people who are actually building with this technology.

Claude Code and Claude Cowork are reshaping what it means to be a programmer. I’m not a developer by training. I’m an accountant. But with Claude, I’ve been able to build and customize an AI agent that does real work. Not toy demos. Real, production work that saves me hours every week.

That’s the disruption. It’s not that Claude writes better poetry than GPT. It’s that Claude lets non-engineers build things that previously required a development team. The barrier to entry for creating AI agents has all but collapsed. If you can describe what you want, you can build it.

The Snowflake partnership extends this to the enterprise. Imagine every department in a company having its own AI agent, deployed through existing data infrastructure, handling the repetitive work that burns out employees. That’s not science fiction. That’s Q2 2026.

Then Things Got Weird

Here’s where the story takes a turn that would feel too dramatic for a movie script.

The Pentagon designated Anthropic a “supply chain risk.” First time that designation has ever been applied to an American company.

Read that again. The Department of Defense labeled the company behind America’s most advanced AI as a risk to the supply chain. Not a Chinese company. Not a Russian entity. Anthropic. Based in San Francisco.

The context matters. Claude was the first frontier AI model cleared for classified government use. It was reportedly used in the operation that led to the capture of Venezuelan President Maduro in January. This technology was trusted with national security operations.

And then Trump ordered all government agencies to stop using Anthropic software.

Anthropic sued the Department of Defense on March 9.

The Real Tension

This isn’t about one company’s legal battle. It’s about a fundamental tension that’s going to define the next decade of AI development.

Anthropic is a safety-first company. That’s their whole brand. They publish research on AI alignment. They build in guardrails. They’re the company that asks “should we?” before “can we?”

The military wants autonomous capabilities. Faster decision-making. AI that acts without waiting for human approval. That’s the opposite of safety-first.

When the company building the most capable AI in the world says “we need guardrails” and the Pentagon says “guardrails slow us down,” you’ve got a collision that goes way beyond corporate law. You’re looking at the fundamental question of who controls AI and what it’s allowed to do.

What This Means for the Rest of Us

If you’re building with AI — or thinking about it — here’s what I take from all of this:

The technology works. I know because I use it every day. FRED is proof that Claude-powered agents can handle real business tasks with real reliability. The enterprise deals confirm what individual users like me have been experiencing.

The regulatory landscape is volatile. A company can go from “cleared for classified use” to “supply chain risk” in the span of months. If you’re betting your business on any single AI provider, you need a contingency plan.
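One concrete way to keep that contingency plan real is to never let your agent's business logic hard-code a single vendor's SDK. Here's a minimal Python sketch of that abstraction layer — every name in it (`ChatProvider`, `StubProvider`, `Agent`) is illustrative, not any real SDK; in practice each stub would wrap an actual vendor client:

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal interface any vendor's client can be wrapped to satisfy."""
    def complete(self, prompt: str) -> str: ...


class StubProvider:
    """Placeholder standing in for a real vendor SDK wrapper."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        # A real wrapper would call the vendor's API here.
        return f"[{self.name}] response to: {prompt}"


class Agent:
    """Business logic depends only on ChatProvider, so swapping
    vendors is a configuration change, not a rewrite."""
    def __init__(self, provider: ChatProvider):
        self.provider = provider

    def run(self, task: str) -> str:
        return self.provider.complete(task)


agent = Agent(StubProvider("primary"))
print(agent.run("summarize Q1 invoices"))

# Contingency: fail over to another provider without touching Agent.
agent.provider = StubProvider("fallback")
print(agent.run("summarize Q1 invoices"))
```

The point isn't the code itself; it's that the seam exists on day one, so a "cleared for classified use" to "supply chain risk" whiplash hits your config file, not your architecture.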

Safety and capability aren’t opposites. Anthropic is the most safety-conscious major AI company and arguably the most capable. The idea that you have to choose between responsible AI and powerful AI is a false choice. FRED is both safe and useful; the two aren’t in tension.

This is moving faster than anyone predicted. A $380 billion valuation. Pentagon lawsuits. Super Bowl ads. Enterprise deals measured in hundreds of millions. All in the first quarter of 2026.

The View From Here

The company that powers FRED’s brain is simultaneously winning the enterprise AI race and fighting the United States Department of Defense. If that doesn’t tell you we’re in a fundamentally new era, I don’t know what would.

This is what the agent era looks like. Messy. Fast. Incredibly consequential.

I’m glad I started building when I did. The window between “early adopter” and “late to the party” is closing fast.