Pentagon Insider: What's Next For Anthropic and The Department of War — With Michael Horowitz

Channel: Alex Kantrowitz

Published at: 2026-03-06

YouTube video id: vDSolODuKRQ

Source: https://www.youtube.com/watch?v=vDSolODuKRQ

Where do Anthropic and the Department of War go from here now that their relationship exploded? Let's talk about it with an actual expert who's designed AI policy for the Pentagon, especially regarding weapon systems. That's coming up right after this.

Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. Well, many of you have asked for an expert who's worked intricately on matters that might involve the Anthropic-Pentagon dust-up, and we definitely have the right person for you today. Professor Michael Horowitz is here with us. He's a professor of political science and economics at the University of Pennsylvania. He's also a senior fellow for technology and innovation at the Council on Foreign Relations. And importantly, he was the Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities at the Department of Defense. As I said in the intro, he worked on policy at the Pentagon, especially on weapon systems. So this is going to be a discussion that takes you deep inside what might actually be the mindset of the Pentagon and where we will end up with this dust-up with Anthropic. Professor, great to see you. Welcome to the show.
Thank you so much for having me. Looking
forward to the conversation.
Okay, great. So we have been surmising what might actually be the meat of the matter between Anthropic and the Pentagon, and I've gone back and forth. On Friday, I thought maybe it was a marketing move by Anthropic. Then it became clear that it's a little more serious than that, now that they've been deemed a supply chain risk. And our audience has basically centered around three different potential scenarios. I want to throw them at you and see which one you think is closest to the truth.

And by the way, here's what happened, for those who are just tuning in, although I'm sure many of you are caught up. Anthropic and the Department of War had a contract under which the Department of War would use Anthropic's technology, and Anthropic was looking for a carve-out saying it didn't want its technology used for mass surveillance or autonomous weapons. Then that blew up. The Pentagon not only canceled the contract but declared Anthropic a supply chain risk, which we'll get into. So here are my three options for what's going on in this conflict.
One is that maybe it's just a culture clash over really inconsequential details, and it's just an ego blow-up. The second is that it's Anthropic CEO Dario Amodei valiantly standing up against mass surveillance, and the potential of mass surveillance through AI. Or third, is what's really happening the Department of War valiantly pushing back against a private company dictating to it how to run wars? What do you think is closest to the truth?
I mean, there's probably a little column A, a little column B, and a little column C going on. But fundamentally, to me this is about personalities and politics masquerading as a policy dispute, although it raises really important policy issues. Let me tell you what I mean by that. If you look at the relationship between Anthropic and the Pentagon, Anthropic was the first frontier AI lab willing to do classified work to support American national security. So starting right there, Anthropic was ready to be behind the scenes with the Pentagon in a way that other frontier AI labs weren't ready to be yet.
And there was no dispute between Anthropic and the Pentagon about any current projects Anthropic was doing. It wasn't as if the Pentagon asked Anthropic to do something and Anthropic said no or had hesitations. It also seems as though there were no upcoming projects the Pentagon was going to ask Anthropic to do that Anthropic had questions or concerns about. It seems like this kind of started after the Maduro operation, when the United States plucked the leader of Venezuela from that nation and brought him back to the United States. Somebody from Anthropic basically called somebody from Palantir and said, "Hey, was our tech involved there?" And that's because the way Anthropic's technology is often integrated within the Pentagon is through a Palantir product called Maven Smart System. So Anthropic calls up Palantir and asks, "Hey, was our tech used?" They weren't even saying it was bad. And the Pentagon finds out and is offended that Anthropic even asked. That was essentially the trigger behind this. So that, combined with the fact that there was no actual current thing under dispute, makes me think this is at least as much about personalities and politics as it is about substantive disagreements.

So how do you get from there to this dispute over the language around surveillance? I mean, it was really one word, right? The Department of War wanted Anthropic to agree to language in the contract saying they wouldn't use the technology for mass surveillance "consistent with" some laws that are already on the books, and Anthropic wanted that to be "pursuant to" those laws. Some people say that's a very big difference; others say it's not a big difference at all. But how do you get from point A to point B, where Anthropic asks, "How's our technology being used?" and all of a sudden you're litigating a single word in a contract that's not even related to the Maduro thing?
Totally not related at all. I think it may reflect the fact that the Pentagon updated its artificial intelligence policy about a month or so ago. One of the things that update did was say that all future contracts it signed with any AI vendor, not even necessarily just a frontier AI lab, would have to follow a quote "all lawful uses" provision, meaning the vendor would have to be comfortable with its technology being used for, wait for it, all lawful uses. Now, meanwhile, last summer Anthropic and the Department of War signed a deal the Department was happy to sign, one that contained provisions that made Anthropic comfortable surrounding the use of its technology. So the Pentagon updates its policy and essentially starts talking about renegotiating this contract, more or less. Then this Maduro trigger happens, and what you end up with, I think, is fundamentally a breakdown in trust between Anthropic and the Pentagon, where the Pentagon decided it didn't trust Anthropic to be there for important national security use cases (side note: we can talk about Iran in a couple of minutes), and Anthropic didn't trust that the Pentagon would use its technology responsibly. The mass surveillance debate, in some ways, is a good illustration of this. The Pentagon's been very clear that it follows the law and that mass surveillance, not surprisingly, violates the Fourth Amendment. That's not a thing the Pentagon thinks anybody should be worried about the Pentagon doing.
How much you trust the Pentagon in general might reflect your views about that. And so they think Anthropic's provision on that point is unnecessary, because it's already covered, essentially as a lesser included case, in the obligations the Pentagon already has. Anthropic wants these assurances because they're worried about the way advances in artificial intelligence could lead to things like de-anonymization of anonymized data and create real mass surveillance issues, including for American citizens. And so you have a conflict there. The crux of that conflict, in some ways, is that the Pentagon is thinking about buying AI vendors and services the same way it thinks about buying weapons. When, say, Lockheed sells an F-35 aircraft or a missile to the Pentagon, Lockheed doesn't get to tell the Pentagon, "Oh, you can only use it against this country but not that country." So from the Pentagon's perspective, what Anthropic is asking for is unprecedented. How could they even? From Anthropic's perspective, AI as a service is a constantly updating technology that they need to be involved in. It's not like just selling a missile to the Pentagon. And that's a bit of what I think is going on behind the scenes.

So I just want to clarify here, and this is important. When we're talking about this dispute, we're not talking about Anthropic being used, let's say, to pinpoint autonomous strikes on Iran. And we're not talking about the Department of War wanting to start building a surveillance database from now on, right? This is simply language that surfaced after the Maduro thing, and it's almost a dispute that seems to have, I don't want to say, come from nowhere. But it's not like critical war-fighting capabilities are being discussed now, nor are these programs in the works.
I think there are a couple of different ways to think about this. I'm not sure the dispute necessarily came from nowhere. Anthropic's been very public in its criticism of some other Trump administration activities unrelated to defense, such as easing up on AI export controls with regard to China. So one wonders, although who knows, whether in some ways there were some bad feelings between Anthropic and the White House that could have played a role here. But shifting back to the defense side of the house: from my personal perspective, I think there are reasons why people may want to worry about the way advances in AI could enable mass surveillance. I'm just not sure the Pentagon is the right locus for that concern. Fundamentally, I might worry about other departments and agencies first in that context.
And the interesting thing about Anthropic's other objection, surrounding autonomous weapon systems, is the statement Anthropic's leadership made on Thursday evening suggesting they actually don't have a problem with autonomous weapon systems; they just think their tech isn't ready for it yet. And let me tell you, as the person who drafted the Pentagon's policy on autonomous weapon systems: Anthropic is not wrong there. If you were going to train an autonomous weapon system, the kind of thing you would want that weapon system to do is generally not what people fear the most, which is, can this algorithm tell whether an individual is a legal combatant on the battlefield? That would be super hard; we can talk about that more if you want. What you're generally going to be doing is training an algorithm to do something like, say, target Russian tanks or Chinese fighters. Something very specific, on very bespoke data. And often the kinds of algorithms you're most likely to use in that context are much more deterministic than, say, Claude, trained on the slop of the internet. So Anthropic's not wrong that their tech isn't ready for prime time for autonomous weapon systems. And they even offered to help the Pentagon get their tech ready for that kind of use case in the future, which makes it all the more puzzling how this escalated.
Okay. And by the way, you're bringing up an interesting perspective here, and this is one of the reasons I was so thrilled to have you on the show: you have actual knowledge of how this technology is being used, which, up until this point, at least for me, has been sort of a big cloud, because we don't fully know exactly what's going on inside the Pentagon. And there's been talk about how, despite this dispute, the Pentagon still used Anthropic's tools in the Iran strike. Well, does that mean, as some people have implied, that Claude is out there targeting combatants on the Iranian side? Or are they just querying some databases and then going to triple-check after Claude makes some assumption there? That distinction could be significant. So I'd love to turn it to you and get your perspective: how are Anthropic's tools being used inside the Department of War?
Great question. Anthropic's tools are being used in a bunch of different ways inside the Department of War, and what we're focused on most now, in some ways, are the uses in the context of the Iran operation, because that, or something like it, is probably most illustrative for thinking this through. On the classified side, a tool like Anthropic's is going to be, as I mentioned before, plugged into another tool called Maven Smart System. Imagine essentially a dashboard designed to help a combatant commander, the person in charge of all US military forces in the Middle East or all US military forces in the Indo-Pacific, understand what's going on in the region and all the different kinds of things happening: processing unclassified data feeds and classified data feeds, putting all that information together, and trying to help that commander make good decisions with regard to American forces. And Claude is one of many different inputs into that system.
And there's been reporting suggesting a couple of different ways something like Claude could be used in this context. One is just querying public databases and public information: what are the most important news services in Iran, what is the chatter in Iranian media right now, all of those kinds of things. Claude could also be helping with simulation, more rapidly generating simulations of what might happen in the context of an attack. A thing that Claude is definitively not doing today, at least as far as I know, is autonomous targeting on the battlefield. I would be genuinely astounded if that were a Claude-specific task, again for reasons that have to do with technological readiness as much as anything else.

And here, I think, is important context. There's often a lot of concern that the Pentagon is going to take new tools like AI and use them inappropriately, be overly aggressive with their implementation. And don't get me wrong, accidents will happen when you integrate new technologies; that happens all the time, and it's happened for hundreds of years. But nobody wants America's military systems to work effectively more than the warfighter, because systems that aren't reliable don't work, and systems that don't work get you killed. So nobody wants our tools to be effective more than the warfighters. The US military has actually been very conservative, in some ways, when it comes to the integration of AI in general, let alone a tool like Claude. So I have no doubt that any information coming out of Claude in this context is going through layers of review by humans before it influences anything happening close to the battlefield.
How much of a leg up do you think using Claude would give a military? This goes to its importance in the battle. Summarizing media clips from Iran seems like something technology has been able to do for a long time, but maybe I'm wrong; I'm curious to hear your perspective. Here's one example: it's been reported that the agencies had traffic cameras throughout Tehran hacked and were able to see movements. Is that something you would use a large language model for, or just a more traditional computer vision system?
Well, I guess you could, but you could do it with computer vision, as you said, and the military's often pretty ruthless about using the best tool for the job. In this case you have tools, especially computer vision tools, less sophisticated AI tools in some ways, that have been proven out over years and are able to do a bunch of these tasks. So might you throw Claude at that in some ways? Maybe, but you wouldn't throw Claude at that instead of using computer vision. You might throw Claude at it to see how the two compare to each other, perhaps, and what the assessment looks like. But honestly, this is all speculation in some ways. One thing I think it's important for people to keep in mind is that this is filtered through a platform like Maven Smart System, and all of these tools, whether Maven Smart System or anything else, are always, on the back end, more user-intensive than it looks in the movies and on television for the military. They're always a little clunkier, always a little more user-intensive. So it's not like humans are being cut out of this process.

And note that the use of Claude we're talking about in this context is, in military parlance, more operational: looking at what's happening on the battlefield. It's a decision aid, essentially, for a commander on the battlefield, which is neither the mass surveillance objection Anthropic had nor anything involving an autonomous weapon system.
Right. Yeah, just knowing what I know about these LLMs, my guess was always, and maybe it was an educated guess, that this was tangential. Maybe useful, but largely tangential rather than core to what the military is doing today. Seems like you...
>> I think that's correct. I mostly agree with that.
>> Yeah, 100%.
>> I mean, it wouldn't even surprise me if Claude's being used in a way that's a little more experimental. One of the other things behind the scenes here is that because this conflict is with Iran, it's US Central Command that's running the show for the United States military. And US Central Command, of the various US combatant commands around the world, has been arguably the most forward-leaning when it comes to experimenting and prototyping and innovation. They've been the most excited, in some ways, to see what they can do with emerging capabilities. I worked with them a lot with my old hat on in the Pentagon, and I have no doubt they are taking lots of things, including but not limited to Claude, out for a test drive, so to speak, even while they're keeping it on the straight and narrow and using the more proven capabilities to make the big decisions.
Right. And I think Dario, I mean, you referenced it, Dario said, "We don't believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons." That seems very reasonable to me. We were talking on the show about whether you'd let the LLM take the shot. And for anyone who's in these tools: Claude Code is an amazing tool. You can build software with it without knowing how to code. But the amount of time you spend debugging is almost certainly longer than the amount of time you spend giving prompts. So it seems like a reasonable objection from Dario there.
All right, public service announcement. The phrase "fully autonomous weapons": if there's anything I wish Anthropic would stop doing, it's using the phrase "fully autonomous weapons." Here's why. It's not a term of art from the perspective of the Pentagon. So when Dario says, you know, we don't want to do fully autonomous weapons, it can frankly be confusing in some ways for some of the defense community, because the terminology in US policy is "autonomous weapon systems." And there's a difference between those. Here's what it is. The US military has been using autonomous weapon systems for more than 40 years. I think people really underestimate, in some ways, the degree of autonomy built into modern weapon systems, even in a world before what we would call AI today, a good old-fashioned-AI kind of world. Let me give you two examples.
One is something like a homing munition or a radar-guided munition, where somebody may believe there's a radar over the horizon and fires a missile at that radar. There's no human supervision of that missile after it's launched. It just turns on a seeker and goes and hits the radar. What if that radar is on top of a school? What if that radar is on top of a hospital? You don't know; it's gone.

The second example is something called the close-in weapon system, which protects ships and some military bases from essentially massed attacks. If there are, say, 10 missiles coming in and you couldn't even point and click at all of them as an operator, you can flip on essentially an algorithm that automatically detects and shoots at them. The US military has been using that system since around 1980, as have dozens of militaries around the world. So we need to be careful when we talk about autonomous weapon systems, and be clear about what the thing is that we're worried about, and what the thing is that we think the technology is or isn't ready for.
As I said before, I think Anthropic is absolutely right that their tech isn't ready for prime time and for incorporation at the edge in an autonomous weapon system. Also, if you think about the compute at the edge, how would you even fit that into a missile? I don't know. And if you want an autonomous weapon system, there are so many other ways you would do that that don't involve LLMs. But the public service announcement is that the phrase "autonomous weapon system" is the appropriate term of art. An autonomous weapon system is a weapon system that, after activation, selects and engages targets without further human intervention. Period. Different people have different definitions, but that is the way the Pentagon, at least, defines what an autonomous weapon system is.
Can I tell you where I think so much of the confusion is coming from, now that you've explained this?
>> Sure.
>> All right. So I've worked a couple of years in the government, and you talked about the technology. We both know that government technology tends to lag behind commercial use cases by a good amount.
>> Just a little bit.
>> The AI industry has gone through two phases over the past year and a half. There was a chatbot phase of AI, right? And that also includes content synthesis, summarization, these types of things. And now it's moving into an agentic moment, where the technology makes its own decisions. I think there's a misconception that the government is already on agentic. But really, what I think I'm hearing from you is that it's in the chatbot phase. It's still a year or two behind commercial, and this worry about the technology getting to agentic is sort of misplaced because of where the government is.
I think that's probably broadly right. Although, frankly, part of what Anthropic was trying to do in doing classified work with the Pentagon in the first place was fix that: getting in behind the scenes and ensuring that America's warfighters had access to things closer to the cutting edge. But another thing to keep in mind here is the way that testing and evaluation standards, or what the military calls T&E standards, differ from what you would need to, say, toss a piece of technology out into the commercial market. Imagine you're releasing either a last-gen chatbot kind of system or a current-gen agentic system into the marketplace as a company. If there are errors and problems and whatever, those are embarrassing, but you fix them on the fly, and frankly, getting there first can win you market share. There are all sorts of economic reasons why a for-profit company might do that. When you release stuff that doesn't work well in the military, people die. So the incentive structure is very different, and the testing and evaluation of these systems is thus very different in a military context. The level of reliability and cybersecurity and so on that you need to hit for something to be fieldable is very different. So people should, at least in theory, if the system's working properly, be reassured on that front.
>> Exactly. Okay, I want to talk now about
the government's perspective and what
this supply chain risk designation might
do to Anthropic. Let's do that right
after this.
And we're back here on Big Technology
Podcast with Professor Michael Horowitz
of the University of Pennsylvania, also
the former Deputy Assistant Secretary of
Defense for Force Development and
Emerging Capabilities.
All right, let's talk a little bit about the government's perspective. Is there validity in the government's view when it tells Anthropic: you might have these thoughts about how to use your technology, but you don't tell us what to do; we should be trusted to be the ones who determine that, not you?
I think the government has a point in some elements here, and let me tell you what I mean.
And you know, I hinted at this before. The government's used to buying technology as hardware. Think about when the government buys a fighter jet or a submarine or a missile or something: the companies that build those technologies don't tell the government how to use them. The assumption is that the government will follow the law when it uses those technologies, since otherwise, kind of, what are we doing here? And so the government viewed these requests from Anthropic, and Anthropic's refusal to yield on them, as essentially challenging the Pentagon's authority. And this, I think, is part of where the culture and personality clash we were talking about before comes from, because the Pentagon's saying: hey, we follow the rules. That is a thing we definitively do. You don't need to worry that we won't follow US law. You don't need to worry that we will go do crazy things the technology isn't ready for. We have law and policy and process designed to ensure that that doesn't happen. We don't let other vendors tell us we can use their tech in scenario X but not scenario Y. So what you're asking for is unreasonable. And I understand, from the government's perspective, why they might say something like that.
That's also why, as I suggested before, and to start us off in this part of the conversation, I think what we're really seeing here in some ways is a breakdown in trust.

Exactly. And so the question is what happens next. In some ways, I do believe that if you're the government and you think you can't trust your technology vendor, you should probably swap them out. But that's not where the government stopped here. What they did was deem Anthropic a supply chain risk, which means the company cannot work with US government agencies. And Defense, or rather War, Secretary Hegseth went further. He said that, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. That includes Amazon, by the way, which is a US government contractor and also hosts Anthropic models. I have this from a source with knowledge of the department's thinking: the feeling inside the Department of War right now is that they want to destroy Anthropic. What do you think about this reaction?

I have a lot of thoughts about this. Let me start with the bottom line, which is that crushing one of the most innovative companies in the world and salting the earth is not good for American innovation or the American economy, so, dear God, let's hope they work it out. But backing up a little bit: one can think the Pentagon's view of this is reasonable or unreasonable, but it is what it is, and in a normal market view of this, the Pentagon would do one of two things. Either it would say, we'll work with Anthropic on these use cases but not on the ones they don't want to do, and if we want to do those in the future (and reminder, they're not doing them right now, so there was no dispute about a current or planned future use), then we'd find another AI vendor, whether it's xAI or OpenAI or somebody else, to do that. Or the government could have said, you know what, it's not worth it for us to do new business with Anthropic; let's cancel the contract, off-ramp them, and bring xAI or OpenAI or Meta or somebody else on to address this.
That's obviously not what happened. It's not just that the government has labeled Anthropic a supply chain risk; in some ways it's even more baffling than that. The supply chain risk designation is for companies believed to present a clear danger to US national security. An example of a company labeled a supply chain risk is Huawei: Chinese companies where the fear is that if a US government agency worked with them, they might insert backdoors or vulnerabilities that could place US national security at risk. That's not really what we're talking about here. So I think a lot of people have wondered whether that designation would hold up in court. Also, it's not clear the supply chain designation has actually been delivered to Anthropic yet. It hadn't as of about a day ago, although it had still been threatened. I'm sure Anthropic will be in court as soon as they get the letter and the actual designation. And it was striking, no pun intended, that less than 24 hours after the supply chain designation, the US government was using Anthropic's technology in the context of Operation Epic Fury against Iran. How could they really be a supply chain risk if you're using them in the context of ongoing military operations?
But the government's gone further. On the one hand, they've said they could label Anthropic as a supply chain risk, or are labeling it as one. On the other, they've said they're considering using the Defense Production Act to compel Anthropic to work on use cases with the government that Anthropic might not want to. The Defense Production Act, or DPA, was designed to ensure that, say, the government was first in line with vehicle manufacturers if there was a war going on and you needed more tanks or something like that. It was not designed for this kind of environment. But the fact that the government is weighing these two different things, both the Defense Production Act and the supply chain designation, and that they point in opposite directions (one says you can't work with the government, and one says you have to work with the government) points to some of the confusion here.
Now, you've worked within government agencies. You've worked within the Department of Defense. This is from Reuters: "State Department switches to OpenAI as US agencies start phasing out Anthropic's." The article says, "Leaders not only at the Department of State but Treasury and Health and Human Services have directed their employees to abandon Anthropic's language trained chatbot platform Claude on orders from President Trump. They join the US military in dropping use of the platform."
I'd love to get your perspective on the speed at which governments move, and how governments evaluate certain technologies, because you've been inside one. What sort of damage do you think this has already done to Anthropic, now that we're seeing so many agencies move off?
There are a couple of different pieces here. I would say, and again, I'm not a lawyer, but a lot of people seem to think that the designation won't stand up in court.
>> Right, but even so.
Oh, yeah, absolutely. But it matters insofar as it's not that Anthropic couldn't work with AWS; it would mean that Anthropic couldn't work with the government side of AWS.
In theory, it's not a death blow to working with AWS or something like that. But from a government agency's side, what this implies to me is that LLM integration in US government departments and agencies is still behind the power curve, and behind where, frankly, somebody like me would want it to be. It was much announced over the course of the last year that all the frontier AI labs made their technologies available to the federal government, either for free or for a penny or a dollar or something like that, trying to ramp up adoption. And so government employees at these agencies have, in theory, had access to multiples of these for a while and have been choosing whichever ones they want to use for various tasks. It sounds to me like on the unclassified side, people are getting instructions like: don't use Claude, use something else instead. That's pretty fast-moving, frankly,
for the government. But it was notable, in both the Trump announcement and the Hegseth announcement, that they laid out this six-month off-ramp period for real national security use cases, in part because those rely on Anthropic's technology right now, because Anthropic is the only vendor operating behind the curtain in a classified environment. So I think what we're seeing is a real bifurcation. For the unclassified use cases, it's essentially flip the switch: use ChatGPT instead, or use Grok instead, or something like that. And frankly, if there's a deal in the future, they'll just flip back to using Claude if they want. On the classified side, it's going to be a much harder slog, because of the integration of Claude and the fact that it was the first mover.
Because Anthropic was the first company willing to do that kind of work with the defense establishment. Then the question is also what this means for companies thinking about working with the government: that you could potentially be declared a supply chain risk. This is from Dean Ball, who I think worked on some AI policy with the Trump administration. He writes: "Even in the narrowest supply chain risk designation, the government has still said that they will treat you like a foreign adversary. Indeed, they will treat you in some ways worse than a foreign adversary, simply for refusing to capitulate to their terms of business, simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not to deploy one's property. Each one of these is fundamental to our republic, and each was assaulted by the Department of War last week." And basically, the worry is that companies will be wary of working with the Department of War if this is what could happen to you.
I'm less worried about that, but I would love to hear your perspective as someone who's been on the inside.
I mean, this is a rough look for a Pentagon that has worked really hard, across multiple administrations and in a bipartisan way, to build ties with Silicon Valley across the board.
And obviously this administration, the Trump administration, has some deep ties with Silicon Valley in some places, and less deep ties in other places. But certainly the notion that you can sign a contract with the government, they might ask you to change that contract, and if you don't agree, they might attempt to destroy you, is very different in terms of the risk for a company getting involved with the Pentagon in the first place. Because, going back to something we were talking about before, when it comes to the use cases that Anthropic may be concerned about in different kinds of ways, the thing to remember is that if you do business with the Pentagon, the business of the Pentagon is war. So you shouldn't be surprised that the Pentagon wants to do all the war things with your technology, because that's the thing the Pentagon does.
But the idea that if you have a contract issue with the Pentagon they might attempt to annihilate your entire business, not just cancel the contract, I do think could in some cases lead to questions for companies making a kind of marginal choice about whether they wish to work with the government or not.
That being said, some of the other frontier AI labs, like xAI and OpenAI, are already willing to work on the classified side. And Sam Altman is attempting to broker a peace, essentially, and create a deal that perhaps Anthropic could join as well. Now, even if he succeeds at that, will Anthropic then walk through that door? Obviously there's beef between OpenAI and Anthropic. But while there are other vendors that clearly wish to do these things, it's also true that America's warfighters have said very clearly, through what we see in Operation Epic Fury, that they think Anthropic is delivering a good product and they wish to use it.
Right. I think, and I'm curious to hear your perspective on this, that this does do long-term damage to Anthropic. Because even if the supply chain risk designation never makes it to them, or is overruled, public sector companies and contractors will just, in the back of their minds, think twice before rolling out Anthropic technology in the future.
I don't know; it kind of depends. I could imagine that scenario. Even if the supply chain designation gets struck down, if all of the contracts are canceled and after six months the Pentagon is using other things and Anthropic never gets back into that business, then one could imagine that occurring. Although, in the context of what we end up seeing in the midterm elections or a future presidential election, the politics could change in a way that re-jiggers this.
But it's also possible, and maybe this is just wishful thinking, frankly, from a national security perspective, that this six-month off-ramp period could allow for some bargaining to occur.
>> Maybe like we've seen with TikTok, where the six months never happened.
Yeah, exactly. And the fact that the supply chain letter wasn't delivered on day one made me wonder: maybe there's an opportunity for bargaining here. Who knows.
Another challenge here is that if there is any organization in the US government that is full send, all offense, all the time, it is Secretary Hegseth's Pentagon. And so it would be challenging, I think, to figure out what the win-win looks like for both Anthropic and the Pentagon from a public perspective.
But there's probably a lot of utility in that, and it wouldn't surprise me at all if there are negotiations, whether they take a couple of weeks to start or are happening right now, that eventually lead to some kind of deal.
Okay, last question for you. You're someone who's thought a lot about autonomous warfare, and I don't want to end this episode without asking: how do you think AI is going to change warfare? I know it's not just a couple-minute answer, but...
>> Yeah, how much time have you got?
As long as we have. I'm just curious to hear your perspective on where things go from here.
So I think about AI as a general-purpose technology. It's not a widget, it's not a weapon; it's a general-purpose technology. Which means the analogies, if we want to imagine the impact of AI on militaries, or on the balance of power more broadly, are other general-purpose technologies: electricity, the combustion engine, the airplane, computing, those kinds of things. And there are three different buckets I would put the impact of AI in. One is a bucket analogous to the commercial world, which is the military's use of AI for payroll processing, logistics, acquisition paperwork. Lord knows the military could be more efficient from that perspective, having spent a couple of years recently in the Pentagon bureaucracy myself. So there are potentially massive opportunities there, just at the bare minimum.
The second bucket is more in that intelligence, surveillance, and reconnaissance category, bleeding into something like the decision support we were talking about before. You already had things like computer vision algorithms helping the military and intelligence agencies process all the data they get about the world and separate the signal from the noise. But there's a real opportunity with some of these LLMs, if their reliability can be improved, to make that happen much faster and much more accurately. Because while people worry about errors from AI in this context (and it's often the AI industry, frankly, speculating about potential errors and accidents from AI), humans are definitely error-prone too, which we've seen all the time. Think about 1999, for example, in the context of the Kosovo bombing campaign, where the US by accident bombed the Chinese Embassy. I don't know, maybe a computer vision algorithm or an LLM might have caught that. So there's lots of opportunity in that second bucket for more effectiveness, and essentially for buying decision-makers time. Because we tend to think in the military context, and this is a behavioral science insight, not a military insight, that the more time people have to make decisions, generally, the better the decisions they're going to make. So that's another way that AI can be helpful. Then the third bucket is close to or on the battlefield.
Autonomous weapon systems, frankly, could be hugely important for militaries, especially if you imagine future conflicts with great-power adversaries, say a US-China conflict or something. One thing people worry about in that kind of conflict is losing access to satellites, losing access to space. And in what the military would call a degraded or denied communications environment, something like an autonomous weapon system will be essential for lots of different kinds of weapons to be able to operate. And algorithmic operational planning to help commanders may then be part of the way that a military like the United States can still compete and win in the worst-case kind of scenario. So there's a range of different uses of artificial intelligence. What I would leave you with is this: at the macro level, I think we're talking about enormous consequences for militaries. This is one dimension of that macro US-China AI competition, certainly not the only dimension. But when we get into it, I would encourage people to think about AI in the military in the context of specific use cases rather than as a monolithic technology, because the kinds of AI you would use, and what you would use them for, will vary a bunch depending on the use case.
So autonomous robot wars: not exactly around the corner.
I mean, I'm ready for our robot overlords; I have been for years. Just not in the short term.
Okay. All right, Michael. Thank you so
much for coming on. This was so
illuminating and definitely gave me a
deeper understanding of what's going on
than any conversation that I've had
previously. So thank you so much for
coming on the show.
Thanks for having me. I'm happy to chat
anytime.
Awesome. All right, we'll take you up on
it. All right, everybody, thank you for
listening and watching. We'll be back on
Friday breaking down the week's news.
Until then, we'll see you next time on
Big Technology Podcast.