The Pentagon's AI Plan + Behind the Anthropic Fight — With Under Secretary of War Emil Michael

Channel: Alex Kantrowitz

Published at: 2026-04-15

YouTube video id: cf8kz_4eRRs

Source: https://www.youtube.com/watch?v=cf8kz_4eRRs

I worry about other countries using AI to take humans out of the decision-making process. They don't trust their generals. If you were so close to being willing to work with them, then how could they end up being a supply chain risk?
>> It's just that we don't want them in our supply chain. We don't want to use them. The President decided that no one in the government should use them. If I went back to my office right now, it's like, how would I order a pizza from outside to be delivered in? I'd have no idea.
>> So, you're not a believer in the Pentagon pizza index.
>> I'm not a believer in the Pentagon pizza index.
We're here at the Pentagon because the AI story that we talk about on this show has escalated quickly, very quickly, into a core national security issue. And you saw that, of course, when the Pentagon banned Anthropic earlier this year. So, let's talk about it with Under Secretary of War Emil Michael and speak with him about how AI might change the future of warfare, and how it might already be doing so. Mr. Under Secretary, welcome to the show.
>> Thanks for having me.
>> So, AI's capabilities are increasing exceptionally fast, and you're the man tasked with implementing them at the Pentagon. So, I want to know from you: how is AI going to change war? How do you hope it will change war?
>> One of the analogies I like to draw, having been at Uber, is autonomous vehicles. People were scared of the change from taxis to Uber, and then they were scared of the change from Uber to autonomous vehicles. But in reality, if you look at FSD from Tesla, or even Waymo, the safety statistics are amazing.
>> Self-driving?
>> Yeah. People are afraid of the change, but the change is better than what we had. Same thing with Uber: people were afraid of the change from taxis, but it made service more reliable; there was less drinking and driving, more availability, more reliability. If you apply that to the war context, you could do much more, be more precise, be more specific about what you're going after and what you're defending. The precision is really what's interesting to me.
Because if you can use AI to detect and discriminate, and by discriminate I mean discern a decoy from a non-decoy, you could be more precise. The example I always give is a drone swarm coming at a military base, and you're trying to determine: are they armed? Are they not armed? What are these things? How do I deal with them? Some of the visual analysis these models can do helps you do a better job of taking them down, or not taking them down because they're not a threat, where one human can't really absorb multiple hundreds of inputs at the same time and make a reaction that's as precise.
>> Yeah, I want to make this concrete for folks. Recently the public has been lucky, because in a world where sometimes we don't get the most transparency into how this technology works, we did get a demo. And this came from Cameron Stanley, the Department of War's Chief Digital and AI Officer, who showed what a program called Maven Smart System, the Pentagon's core tech platform, looks like. And I'm pretty sure it was called Target Workbench. This is where they select targets; they seem to use the word "action" for them. My understanding is they end up sending the attacks to these targets through this system. The way he described it, it's a single unified visualization that allows you to look at live images and then be able to select targets.
>> Well, and then imagine the context around that. Where are my assets? Where are my planes, my boats? What might happen if you took that action? What might be the reaction? Subsuming all that information, but still having a human make the decision at the end, means that you're increasing the human context window, is one way to think about it, right? You talk about context windows in AI. Well, think about a human that's trying to absorb all this information and make the best decision they can. If you can synthesize that information so they can make that decision using more sources, then almost by definition the data and the choices are going to be better.
>> Yeah, and he showed it in action. You're seeing this data overlaid on a map, and then he says when you find something that you want to target, you'll see the information. He says it's very interesting: "Left click, right click, left click," and then it ends up in a targeting workflow.
>> Yeah. Well, what's happening in those clicks? Without knowing exactly, it could be everything from, let's say there's an F-35: what's the weather, what's the drag, what do I have on board the airplane, what am I going after, what are the potential collateral effects? Again, it's less about one example of how that might work, and more about imagining how much input you could have into that decision when you have a computer basically able to gather that information and help you synthesize it so that you can make the right choice.
>> Yeah, it's interesting. He shows that there are toggles the military can select, whether it's an optimization for how much fuel you want to burn, what munitions you want to use, the distance that you need to travel to hit the target.
>> Yeah, distance, fuel, weather, what other assets the adversary might have, and where and how they might react. The amount of information that you could absorb is almost infinite. So, the idea of taking one person and giving them the power of 10 people makes them better at what they do, potentially by an order of magnitude.
>> Yeah, and then within there, once that's in the workflow, the last step is whoever's looking at it, assuming they have the permissions, can action on that target, which means send the assets to the target.
>> Right. You have people whose job it is to do this. That exists already without any computers; it could be with paper and pen, it could be with whiteboards, it could be with PowerPoints. Now you're accelerating that and giving this person the power of more tools, so that when they do the right click, left click, right click, some of those clicks are way more informed. And then that's going to lead to better outcomes.
>> Right. And it is interesting to see what's happened with this digitalization. Whereas before, and this is from Pirate Wires, they say that by the start of the conflict with Iran this year, targeting processes were connected with PowerPoint, email, and Excel files. I'm paraphrasing: target lists were relayed in spreadsheets, sequenced maneuvers sat in Gantt charts and PowerPoint.
>> I mean, that's probably the case historically, because when did AI models start to become generally available? First to consumers, right? We had the ChatGPT moment in '22. And then you say, when was it available to enterprises? And then, when was it available to government, on the networks that government uses for war fighting? You're talking about a fairly recent phenomenon where these tools were even available. And then we had to go through protocols, safety testing, the modeling and simulation for how you would use this in a conflict. There's a lot that leads up to actually using it in a way that we feel is responsible.
>> What's interesting is I don't see an LLM in there. Are large language models, or today's generative AI layers, baked into that system?
>> Yeah. I think the genesis of what Palantir does is an orchestration layer on top of data streams, data that we put in it and say, "Here's the data we would normally use for any battlefield operation, plus an AI to help you synthesize it." So, all those things are combined, and they provide the visualization. But there's not, like, a chatbot on the side window.
>> Which is like: lay out a list of targets that I want to hit. My objective is to win this war, what are my targets?
>> It's not a Skynet thing. No. It is a tool, like any other tool that you might have on your computer or in your war room or with your team, except it's on your computer, visualized. But you still have checks and balances. You still have to get all the authorities you need to do anything. It just surfaces the choices in a way that's more consumable, if that makes sense.
>> Right. And it's good to have this discussion, because this is a fast-moving technology, and it's good to be able to talk about it so everybody understands how this works, and to go through what's actually happening versus the misconceptions.
There's been some talk, and we're going to get into the Anthropic situation in a bit, but just to talk specifically about what an LLM can do in this process: there's been talk that the LLM was involved in the kill chain. But that is not exactly what the LLM is doing.
>> Let's talk about the extremes.
I talk about this in terms of the way we're deploying AI in the department. There's the enterprise, corporate level. Tons of PowerPoints are generated in this building, memos, like nothing you've seen in the corporate world. That could all be made more efficient, and that's the mundane work people would prefer to do less of so they can get to more interesting work. Then there's the intelligence layer. Imagine all the intelligence we gather from satellite imagery all over the world. How do you synthesize that? Right now you have to have a human analyst look at everything and make a judgement. Imagine you had the historical data of all satellite imagery; then you could look at it and say, this is an anomaly, and it could learn what it was, so it could tell you what the anomaly might be, which is a totally different paradigm for intel analysis, if you will. And then third is war fighting, where it could take all the paperwork and modeling and simulation and not only let you react faster, but react in a more precise way. Those are some more tangible uses of AI, and that's why I think if people understood that better, particularly in Silicon Valley, they'd all go, "Okay, that makes sense." It's what any big company or any big organization would do: efficiency, being strategic about what you're doing, allowing more analysis, and then using it to execute on whatever operation you have in front of you.
>> Yeah, and a big reason why we're here is I wanted to speak with you, because I read so many stories that didn't comport with what I was hearing from people close to what was happening, and I thought, let's clear the air.
>> Absolutely. That's what I was saying here in the department.
>> So, just to confirm, what the LLMs are doing is summarizing different reports.
>> Synthesizing, interpreting, taking in different forms of data and giving you alternatives. And most of these are very mundane, because, again, you have to imagine that every single thing the military does has to be audited, has to have the right command and control structure, like who authorized this and that; has it been checked through the legal system; does it comply with all our internal memos about ethics and the laws that we follow in conflict. And that doesn't change. It's just that the tools to do it make it better and easier, if that makes sense.
>> Now, there's an argument
among those who watch this tech in action that sometimes a little friction is better, right? That was the one thing that made me feel somewhat uneasy when I looked at this Maven Smart System demo. Maybe we want the Excel spreadsheets and the Word docs and the PowerPoints when it comes to something as serious as making a decision to attack a target. Maybe you don't want to make it that easy, because the easier you make it, the easier it is to just hit "action" and send it away.
>> Well, the friction's there regardless. And this is the key point: you have the same rules of engagement, the same approval system. What you now have is better aggregation and synthesis of the data that you would already use to make that decision. So, it's partially about speed, but it's more about more data points, right? Think about it as taking as many data points as we can to make a better decision. Yes, it's going to be faster than if you had to go hunt and peck for all those data points, but there's no military in the world that doesn't believe in speed. Speed wins the game. Look what happened in Venezuela. The speed at which that operation was executed meant that we didn't have any casualties on our side. That's amazing. If you had to spend way more time, if you weren't able to synthesize information as well as one could, maybe you had to be there for 48 hours instead of 3 hours, right? So, speed has to be one of our prerogatives, but better information is the goal, so that the decisions are more precise and more consistent with the operational objective we've got.
>> Is there a limit to what this can do for you? I'm thinking in the context of the war with Iran. Obviously, there have been many air strikes, lots of them quite precise. An entire echelon of Iranian leadership taken out, but the IRGC is still in control. There's a new Ayatollah with the same last name. So, isn't there a limit?
>> Yeah, there's a limit. I don't believe there is some all-seeing, all-knowing answer to human conflict, which has been happening since humans existed, right? I think that ultimately what you want is clear objectives; you need the manpower and machinery to do it; and you want to do it at the least cost, with the least amount of damage, in the quickest time. That's the goal. And I don't think AI, or really any technology, becomes the answer. It's just one of the tools.
>> Yeah, and that's one of the fundamental questions here:
does AI just become a speed-up, a friction remover, or can it fundamentally change war?
>> I don't worry about that from our side, because I believe in the way the United States has structured our command and control: you have a commander in chief in the Constitution. He appoints a secretary of war who's confirmed by the Senate, and you have the generals and all their ranks. So, all the procedures ensure that the decisions we're making are the result of a democratically elected leader and a Congress that finances these things.
I worry about other countries who don't have that, using AI to take humans out of the decision-making process. They don't trust their generals, because of graft, or because they don't have the expertise, and they start to use machines in place of humans, as opposed to using machines to augment humans. So, that's more of a worry for me, and one of the things I've tried to explain to some of these companies is: think about the alternative. What would an adversary want to do with AI that we wouldn't, because it's not consistent with our values? We have a chain of command, a constitutional government. If another government doesn't, and wants to use AI to eliminate human risk, while we're looking to augment human capability, it's a totally different way of thinking.
>> Which governments are you referencing?
>> I mean, if you think about the biggest military build-up in world history, in China, and the purge you've seen of the generals and the military hierarchy there, you start to wonder: well, how do you replace all these people? What is the command and control? What would your AI strategy be if you were running that country, relative to ours? It's just a different mindset.
>> And so, the uses we've talked about so far, when we talk about LLMs in this world, are largely chatbot uses, or I put them in the chatbot bucket, right? You have information, you synthesize information, you get something that saves you time to make a decision. But now the AI industry is moving towards agents, right? As the word connotes, letting the AI take some action for you. Do you have a plan for agents here? Is that where this goes?
>> Not for things that require human judgement, no. I mean, again, you have to have an endpoint where it ends with human oversight and human discretion on the most consequential decisions, right? But you could imagine scenarios like I described, with a drone swarm coming in at a military base at night: how do you deal with that? But again, that's not an agent use case per se; that's a visual discrimination or discernment use case. And maybe you have a directed-energy laser that could take them down, and it's a lot cheaper than the alternative, a lot safer, with a lot less collateral damage. In terms of agents, we have some agent pilots at our enterprise level, remember I was talking about the enterprise, corporate level, just to do the mundane things we have to do every day. But those things are not where we're at at the war fighting level.
>> Okay, so if I'm hearing you right, basically the plan here is not to automate warfare.
>> No.
>> But the question is, if you have an adversary who's doing that, let's say you're in a direct conflict, and maybe it won't be China: can you really afford to sit still and do it by the book? Because that's the worry, right? These capabilities are out there, they're integrated, and it becomes tempting to go into, let's say, a Maven Smart System and say, the LLM is getting me 99% of the way there, just finish it off.
>> No, and I'll tell you why.
>> Not that I'm advocating for that.
>> I'll tell you why. Number one, that's the reason the US has to be AI dominant, so we're never in a position where the counterforce AI is better than our AI and we have to face those choices at all.
Secondarily, people confuse automation with some sort of automated army, right? Take automation just as I described it in the drone example. What about an automated mine-sweeping or mine-detection operation? There's no human underwater where you want to find the mines; there's no human involved at all, but there's an action you want to take. Everyone would say, "Yeah, we don't want mines on our shores." Sounds like a good idea. Or there's a missile coming at you and you want to take it down from space, like Golden Dome, like we talked about. How do you do that? You have to do it within 90 seconds of when it's launched. So, for those kinds of things, in the most extreme circumstances, you want humans to be able to rely on some automation capabilities. But in terms of mobilizing a whole army or a whole fleet of jets or a whole fleet of ships, that's not in anyone's mind. And we've written a 35-page DOD directive that talks about human oversight and how we manage these systems. We're constantly updating that and making sure we have the right controls on it.
>> Yeah, one more thing
about LLMs. One thing I heard is that they could potentially be useful as another layer of data on top of strikes before they happen. For instance, the school in Minab, Iran, where there were markings outside, a playground and hopscotch. Maybe an LLM in the future, if something like that becomes a target, can basically flag it and say, "Hey, maybe don't shoot here."
>> Yeah, this is the point I was trying to make with the driverless cars. If a driverless car ends up detecting a jaywalker better than a human, isn't that a better option? So when I say it's there to augment human decisions, it could be on the front end or on the back end: check and make sure this is something that we want to go after, or raise warning signs. It works both ways. But ultimately humans have to make the decision. That's the end state. As for how that decision is contributed to, I think LLMs, especially the ones that are trained on visual data, you know, Google has your Nest cams, it has YouTube, it has a lot of human movement. All these things have different data sets that they're trained with, to some degree proprietary, that could be very valuable. So, that's why I think LLM is going to go away as a term, because they're not large language models only. They're visual, they're going to be used for robotics, they're going to be used for a lot of things.
>> Yeah, and that's the general AI side of the whole thing.
>> That's right.
>> Let's talk
about drones briefly. You brought them up a few times, and I feel like it's worth discussing.
>> Yeah.
>> Since it's part of your remit. There are very interesting uses of drones in Ukraine right now, and we saw, I think, unprecedented uses of drones in the Iran war. Different use cases: one is an air war, one is a ground war. What are the main things that you've learned watching this in action? And how do you think it changes, again, the way fighting might happen?
>> You're right to point at two different scenarios. In Russia-Ukraine, it's a battle over territory. And that battle over territory, where the lines are drawn, means that with drone warfare, the robots are on the front line and the humans are back. The idea is, why risk a human going in front if you can send a machine first and see if you can fight it that way? There's still a lot of destruction that's obviously sad and unnecessary, but I don't know how much more there'd be if you had a civil-war-style thing where you have humans on humans.
In Iran, I think the lesson from the drones is the imbalance of costs, right? You have a cheap drone going against very expensive targets.
>> Right. And it costs millions of dollars to shoot one of those things down.
>> And to protect your exquisite targets on your side against a very cheap drone, you have to use expensive countermeasures. So, the lesson there is how do you turn the dial: maybe we should have more mass attritable weapons, or counter-drones that are affordable, so the cost ratios are similar, as opposed to a country that can afford cheaper stuff being able to threaten expensive assets on our side. For me, that's been a big push in this department: how do I bring in what we call mass attritable weapons that are not exquisite, that can be delivered quickly, that are designed for manufacturability, that are cheap, that you can afford to lose, as opposed to the big stuff that we build that takes 10 years and costs billions of dollars.
>> Yeah. So,
let's tackle both of the ways the US is working to head off these threats. We'll start with the bigger drones, shall we say. There's a program here, Lucas, right? Low cost, $30,000 a pop. You can send them out and they crash into other drones. What's the point, or do they do the same thing that those drones do?
>> The idea is that the Shahed drones the Iranians had are what we call a one-way attack drone: long distance, can go fast, but cheap to manufacture. These are sort of that. And they can do a lot of things. They can be defensive, take out other drones, or they can be offensive. They're designed to be cheap to manufacture. If you lose a couple, you're okay, right? Just from a financial standpoint. And they're used in the same way, in theory.
>> Are we working with the Ukrainians on this project? There were some headlines that the Ukrainians offered to help and we turned them down. What's the story there?
>> There are two levels of this. There's the grand United States-Ukraine relationship. But we just launched our drone dominance program, and I think there were two Ukrainian companies in it. They were going to be onshore, manufacturing here, and bring some of their learnings with them to help us with our kind of smaller-drone scenario. We're sort of agnostic to that, but we want to divest our supply chains from adversaries. So, one of the requirements is that the drones we use in the drone dominance program don't have a dependency on adversaries.
>> Okay, and that touches on the smaller drones, the ones being used in the land war, the DJI-style drones.
>> Yeah, first-person view.
>> Right, exactly. You know, China has been putting on these displays, epic displays of drone art in the sky.
>> Or swarms.
>> Drone swarms, right, to call them what they are. And at first look, it's like, man, China's really innovating on fireworks. But then you realize this is completely a military simulation.
>> Could be. I mean, I think that's the scenario I've tried to explain, and I do think it's something these AI companies understand once it's explained to them. You could say: you see that drone art display? Imagine those were armed drones. Imagine they were communicating with each other and could therefore form and re-form in ways against your defenses. How do you defend against those? And depending on where you are, you may have a fully defendable garrison, let's say. But let's say it's a small military base. Let's say it's over the border. How do you deal with these things? That's a new challenge that wasn't present, or at least that we weren't thinking about, before the Ukraine-Russia war.
>> What is the answer? Is the US working on the defense side of that and on the offensive side?
>> Both, all the time, right? Drone dominance has both elements to it. We have a counter-UAS, counter unmanned systems, task force that's looking at everything from lasers and directed energy, which is one of my critical priority areas, to how you do electronic warfare on these things to take them down. They're all run in some way. So, there are lots of different measures and countermeasures. That's what makes this technology, at this time in the department, so interesting.
>> Yep.
>> The nature of warfare is changing. Technology is getting more capable. The actual ability to access this technology is becoming cheaper.
The need to have these systems interoperate has never been greater, because what is a drone swarm? It's a set of interoperable drones that work together, and you could see them in the sky like you're talking about, and you could imagine what their military utility might be. So, the tech problems there are super interesting, right? They're hard, but they're interesting.
>> Now, briefly, on the cyber warfare side of things, I imagine AI could really impact that side of warfare.
>> It seems so. It seems that models trained on code can learn vulnerabilities in code, which is what these companies are saying. And that presents risk and opportunity. I mean, obviously, from what we've heard in the news and what's been released about the cyber capabilities that are almost here: they're certainly going to come from every frontier model company at some point, the adversaries are certainly going to try to distill them, and they're going to be the next wave of innovation from these companies.
>> Okay. So, it's clear, I think,
from the beginning of our conversation that AI is becoming critical to what the Pentagon does. It's helping synthesize information in some areas; it's helping with targeting. Clearly, you need it for drones. And you also need it if this is going to be a new cybersecurity front.
>> Yeah.
>> So, I want to talk about how you pick the AI vendors. We're going to talk about the situation with Anthropic, and then a few other topics, when we come back right after this.
And we're back here on Big Technology podcast with Emil Michael, the Under Secretary of War for research and engineering. Emil, I appreciate you being here with us. Let's talk about Anthropic. I just want to hear it from your perspective: describe the culture of Anthropic versus the administration.
>> Well, I would say this: they were the first to aggressively try to provide service to the government after the Biden administration's executive order about AI. And you see this in the marketplace, too. OpenAI was more focused on the consumer, with ChatGPT and the subscriptions; its enterprise push hadn't really started until 18 to 24 months ago. And Google was also focused more on the consumer. Anthropic was focused more on the enterprise, and when I say enterprise, I mean enterprise at large: an enterprise like the Department of War, or an enterprise like a big company. So they naturally started sooner here. And I think there's a certain portion of people at all these companies, which all now have a government division, who are all going to start understanding the vernacular a little bit, and we can have conversations like you and I have been having about the meaning of some of these advancing capabilities.
But yeah, from a culture standpoint, we live in the bureaucracy of what we have to do every day to innovate and to reform. And I think the image they might have, and this is not unique to them, of the Department of War or the administration, is that we don't have safeguards, that we're not paying attention to the risks. We are, if not more so than most Americans would understand, because of the procedures that have built up over decades and decades of being careful and smart about what we do. So the culture clash, to the extent there is one, is what you'd call a lack of understanding, a lack of confidence, a lack of trust in us and our ability to do things in a way that's consistent with our values as a country and the laws that are passed. That's, I guess, how I'd describe the difference.
>> Okay, and just to recap
what's happened between the Pentagon and Anthropic recently: they were in the Maven Smart System, like we discussed. There were provisions in the contract that the team here didn't like, so there was a renegotiation. It almost came together, but there were two things Anthropic wanted to include in the contract, a provision against mass surveillance and a provision against autonomous warfare, and ultimately there was not an agreement there.
Right, although I would say the following, just to be clear. The provisions that were in the contracts that ultimately served the Department of War said you can't use it for planning kinetic actions, and you can't use it to develop weapon systems. So all the science, engineering, aerodynamics, all of it.
>> Those were the original stipulations.
They agreed to throw that out, but it took three months of hand-holding and examples: "Well, what about this example?" You can't run a department of 3 million people by exception. You have to have that, especially if you think about AI as an intelligence layer that can apply to many things, from aerodynamics and physics and math to synthesizing information to anomaly detection, whatever.
We run hospitals, we run schools, we run weapon systems, defensive systems to protect against all these kinds of things. So to go by exception, to keep asking, "Well, how about this scenario? How about this scenario?" became not tenable and took a long time. That's where you start to ask: are they aligned with our mission here?
And the idea that autonomous weapons were an issue was, I think, more marketing than anything, because we had our own policy, before they showed up, that talks about that. And we affirmed that we will have human oversight on all military decisions made using their AI. What else can you do? We affirm human oversight. We have these directives already. We have the laws. And eventually they agreed there was no problem there, but they marketed it as an issue that we were disputing at the end, which is odd. On domestic surveillance: we are not a domestic law enforcement agency. We do not have the authority to do domestic mass surveillance.
So you have Congress passing laws, the National Security Act of 1947, FISA, all the civil liberties that are enshrined in law and in the Constitution. And we affirmed that we will follow all those laws and all future laws, within all the authorities that were granted and not granted. We're not the FBI; we're not Homeland Security. But again, that wasn't enough. They wanted us to rewrite the law, because they thought Congress was just behind, that it wasn't understanding that new tech allowed new capabilities. But again, it's not our mission. We don't have the authority to do it.
>> Okay, but here's the thing. And so eventually, the contract was ripped up.
They called it off, yeah.
Right. And I think deciding not to work together makes complete sense if you have a values misalignment. But then the Pentagon took it a step further and deemed Anthropic a supply chain risk. That one I'm a little puzzled by, because if you were so close to being willing to work with them, if they had agreed to all lawful uses of the technology by the Pentagon, then how could they end up being a supply chain risk? That basically means the Pentagon won't work with them, government contractors can't work with them, and the administration took it a step further and said no government agency should work with them.
>> Well, I'll speak to what the Department of War cares about in our supply chain.
If Lockheed Martin builds a weapon for me, and they're using a technology to help them do some of these science-oriented things, physics, aerodynamics, and so on, and the vendor has expressed an unwillingness for that to be part of the use case, well, then what am I getting in that system that's eventually going to come upstream to our warfighters? I don't know. What if they decide to change their red lines? What if the model hallucinates because its values say, we don't want this to be used in a kinetic way? Those were the things in the contract. So you worry about the downstream implications of that on everything that leads to protecting the warfighter and defending the country. And it is a legitimate worry if their alignment with our mission is not real.
But then you also limit yourself, in a way, to some of the capabilities they might have. I mean, think about Mythos. We talked about cyber warfare. Mythos is their new model; it's in preview. There's a project called Glass Wing where a bunch of entities have come together to try it. And one of the things about Glass Wing and this Mythos model is that it is convincingly good at cybersecurity and cyber attacks. This is from the AI Security Institute: this week, we conducted cyber evaluations of Claude Mythos preview and found that it is the first model to complete an AISI cyber range end-to-end, which means a 32-step corporate network attack from initial reconnaissance to full network takeover. We estimate it would take human experts 20 hours to complete.
>> An AI cyber weapon, automated?
>> An autonomous cyber weapon. Here's the thing. I'm not encouraging its use, but you talked about the drones that are meant to hit other drones. Wouldn't you want this tool at your disposal? There's an argument to be made, and I'm curious to hear what you think about it, that you put yourself in a corner when you're not taking these capabilities and using the ones you want.
>> The original sin was the past administration choosing one AI provider and having no options, because it is a gargantuan effort to get these software systems onto classified networks. There's a lot of complexity to do that, because it's a secure network. This isn't AWS cloud for consumers. So the original sin was not having more than one provider, so that you had more options. But I also believe, if you talk to every other frontier AI company, they're going to have similar capabilities.
>> But they don't yet.
Yeah, but they will soon. If you look at the distillation attacks that our adversaries are running on our models, how long do those capabilities take to show up in deepfakes or any of these other things? Just a couple of months. So if you think about those timelines, we will never sacrifice capability when it comes to national security. I think we're cognizant of what's happening, we're working with every model company, and we feel good about our posture there.
>> The other thing that people say about this, and I'd be curious to get your thoughts on it, is that you can look at the history of companies that have been deemed a supply chain risk to the Pentagon. It's very rare, if not unprecedented, for a company like Anthropic to be banned that way. So why do you think it rose to that level? And do you think it merits this fairly unprecedented action?
>> Well, on the one hand, you can't say that they have this cyber nuclear bomb
and yet say we shouldn't be worried about how those capabilities enter and remain in our supply chain. Those two things are inconsistent, right? And I'm not blaming you, I'm just saying that if you believe they're going to cause 40% unemployment, if you believe these things have a capability such that you put 50,000 geniuses in a data center and they're going to coerce the world, that they could create bio and chem weapons, then of course the Department of War is going to want to understand and constrain those things so that they don't do something unintended on our side. These companies are talking about their products in apocalyptic terms, which makes it necessary for us to judge the management teams, judge their actions, look at the terms of service, and understand how they fit in our supply chain. This technology is like nothing we've ever seen, so you can't compare it to, say, a chip from a foreign manufacturer that gets put in the supply chain. It's a whole different thing because of just what you said: the power of what they're saying it's going to do, the disruption it might cause in American life. If someone developed a nuclear bomb in their garage, you don't think we'd have anything to say about it? Of course we would. Or a biological weapon, or any of these things. So those are things that heighten our awareness of what these models could do and where they're going.
>> Okay, I just want to take this one more level, a practical level. You've mentioned in interviews before that Anthropic's models were hosted on Amazon's government cloud. They upload the model weights, and you use it through Amazon. So let's take the Lockheed example. If Lockheed is designing some systems and Claude is baked in there, Anthropic wouldn't have the capability to turn that off if it's hosted somewhere else. They could decline to upgrade it, but to turn it off, they don't really have that capability.
>> No, I mean, you understand how this technology works better than most. The upgrade cycles for these things are now compressing to roughly three months.
So every three months you have a new set of model weights, a new set of guardrails, a new set of bugs in how the model behaves: the way it hallucinates or doesn't, the way it does refusals, where it refuses to answer certain questions. And there's an important anecdote, which was written about. Anthropic is also serving the Centers for Disease Control. So you have scientists going there saying, "I'm learning about pathogens," and the model assumed that was a bad actor. And they refused to undo that refusal.
>> But that was the off-the-shelf model.
Or was it? Sure, it's the off-the-shelf model, but what's to stop them from making the next model the same way? We don't know.
So the point is, to have a reliable partner, you have to have alignment on these issues. We have a national security mission; we want to use it for all lawful use cases. In HHS's case, it will be all lawful use cases, and it's lawful for HHS to be doing pathogen research.
>> We would hope that's what they're doing.
We hope that's what they're doing. So for someone to have made the judgment to turn that off, and then say, "Oh, well, it was an old model," that's not how it can work in the future. If you are truly an American company that's trying to protect Americans and do good things for Americans, the government has to be able to use this powerful tool to succeed in its mission. And if I'm hamstrung by their choices, that gets into the command and control structure.
>> But you solve this to a degree with multiple vendors. If you just have Claude, then I totally see it. But if you have Claude and Grok and OpenAI in there, then maybe if Claude makes an update you don't like, you let OpenAI run with it.
>> That's my point: if we hadn't made the original sin, you'd have had them competing for the government business. Had they been competing for the government business, like in any non-monopolistic scenario, power would be balanced between customer and vendor. And eventually we'll have that. But we didn't have that, so they could make those choices on their own.
>> So, I asked about the culture side of things in the beginning, and there's also this perception. I have a decent read on the government and a decent read on Anthropic; they're definitely different cultures. And the other read on this is: okay, maybe there were some things the government was uncomfortable about, but this really just came down to a culture clash. Wasn't it Pete Hegseth who, when he tweeted about Anthropic, said, "We're not going to let any woke company tell us what to do"? Is it possible that this was just a culture clash, versus the bigger thing that it turned into?
>> No, because I would tell the Anthropic guys that came to me, "This is independent of politics. I just care about having the best system for our warfighters. Why would I spend three months on this if it was a culture clash?"
Andrew Ross Sorkin asked me the same thing on CNBC, whether this was about who I'm buddies with. I've never met these guys; I don't know them. But I know the culture of Silicon Valley, so, as a transplant to government, I did take a lot of time to try to explain: here's why this matters, here's some scenarios. And eventually we got to a point where it was clear they just wanted control. And you can't have control of the Department of War's actions and activities so long as they're legal and consistent with our guidelines.
>> And so, those on the outside look at this and say, "Okay, supply chain risk designation, no government agency can work with them. This is effectively the federal government attempting to destroy Anthropic because of a procurement dispute."
>> Destroy Anthropic, which has tripled in revenue in three months?
>> [laughter]
>> Or tripled in valuation?
>> They're doing okay. They're doing okay. That's silly, because the percentage of revenue that we represent for any of these AI companies is infinitesimal.
It's just that we don't want them in our supply chain. We don't want to use them. The President decided he doesn't want the government to use them. There are great alternatives, and we're going to fix past mistakes by ensuring those alternatives are available. And I have high confidence, if not more, that these other models will be the same or better over time.
>> Okay, last one on this, and thanks for answering all these. It's good to get your perspective. One of the judges in this case, because Anthropic is suing to have that designation removed, Judge Rita Lynn, said the Department of War's records show that it designated Anthropic as a supply chain risk because of its hostile manner through the press, and that punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation. Did this have anything to do with the press strategy?
>> I shouldn't comment on a legal case, but the notion that a First Amendment claim is going to hold up would be shocking, because that would mean the government has no choice to make. If a vendor, any vendor, says, "I don't agree with your terms," and we say, "Well, that's why we're not going to hire you to do whatever kind of work we do, translation work at the Department of War," and that becomes a First Amendment claim, then it would be so overreaching that it would not be workable. So I feel like that was a throwaway. But I will say that
the thing that makes the Department of War different from most other agencies, and I don't mean this to be dramatic, is that we really do have lives on the line. When people talk about government bureaucrats not caring: the people here, the career people, they care. They really care. They care about the warfighter, they care about the country. It's a really patriotic place, and it is very nonpartisan in the middle of the Pentagon. We have 3 million employees. That mission is very sensitive, so we are sensitive to the relationships we have with these companies, because there's a lot of unpredictability in our business. Something happens in Iran, and we need companies to move fast. You have to have some trust with them. You have to have some shared values. You have to understand they have economic interests, and we have to understand that our needs are going to change based on the threat environment. So that kind of thing matters. They can litigate in public all they want; that's fine. But do we have alignment for real? When we get in the room and we are facing a conflict, are we aligned? So I was pretty impervious to that stuff.
>> There's a website, genai.mil, that's available to the people in the military here, and interestingly, Google is in there. Gemini's in there. And Google went through something similar, even though it was somewhat more explosive: the employees protested, and yet here they are, working with the Pentagon again. So maybe there's forgiveness, to a degree. Could that happen with Anthropic?
>> I think so. I mean, I believe that if you fast forward from 2018, when the Google Maven thing happened, to 2026, and you talk to people from Google who were involved in that, I think they regret it.
And they regret it probably for the same reason: they didn't understand what we do here. And what we do here in this administration is going to carry forward to other administrations, because we're at a crucible moment for AI. That could be an administration of either party; for us it's nonpartisan and it's for the future. And I hope that companies that went through that moment in '18, like Google, as they get more mature and gain more of an understanding of what it means to work with the government, get to a good spot.
>> Hopefully sooner than eight years.
Yeah, that did take a while. But I will say Google's been an excellent partner leading up to this genai.mil.
Silicon Valley has shifted in a big way. The whole tech industry, when I was at Uber in 2016-17, wouldn't touch this stuff, because of the employees and, I won't call it a mob mentality, but a dynamic where employees had a lot of say over what their products and services were doing, and senior leaders and founders were very sensitive to that. I think that sensitivity has gotten a little more balanced now. If you don't want to work at Palantir, don't work at Palantir; there's a ton of other places. You don't want to work at Uber, don't work at Uber. I think the balance is in a better place, and because we're doing more outreach to Silicon Valley, with more California companies, both Southern and Northern, going to succeed here, hopefully that knowledge transfer will happen faster.
>> Let me bring up one more headline. There's a story this week that says you had some xAI stock. Do you have any SpaceX? Is that a potential conflict?
>> I sold all my SpaceX, and I recused myself. So, what happens when you take one of these jobs is you show your whole portfolio to the Office of Government Ethics, which is nonpartisan. They go through it and say, we think these things are red lines: you shouldn't have your defense company stocks. SpaceX was on that list, so you have to sell it. Then, depending on your role, there are things you have to sell that are specific to that role, and based on the kind of connection, you can recuse yourself from dealing with a company. So I recused myself from dealing with xAI until I could sell. And I was pretty active about it, because I didn't have the AI portfolio until the fall. When I got the AI portfolio, I said, "Hey, I'd like to be involved in this. I'd like to not recuse myself." They said, "Well, you have to sell." Great, give me permission. Got permission, sold, and was recused in the meantime.
>> Okay, two more things I want to speak with you about, and I'll be quick as we come to a close. First, every time we have a conversation like this, we have to talk about procurement. I know half the audience is ready to go to sleep now, but the way these services are bought is really important, because the Pentagon budget, for instance, has been, I'll say it, inflated, because some of the vendors have charged more than an arm and a leg for services. So talk a little about how you're working to reform the procurement process and why that's going to be good for people.
>> Yeah.
>> So in the '80s, during the height of the Cold War, we had about 50 defense contractors, five-zero, and they consolidated down to five. That was one dramatic reduction in the number of competitors for anything. Then we outsourced a lot of core capabilities to other countries, so the supply chains got brittle. And China didn't have a military buildup until around 2010. Put those things together and you see what was happening: there were a small number of competitors, they were taking less risk, so we were paying them for time, cost-plus. Now, it's important for me to say this every time I'm asked: some things are so speculative that no company can do them economically unless you're financing some of their R&D. There are things 10, 15, 20 years out where you have to do that. But because the nature of warfare is changing, because there's the greatest VC boom in defense tech in our country's history, and because you have founders like Palmer Luckey and all these folks who are willing to go into this business, it's made us much more able to do business deals. So for our audience who's bored with procurement: I'm talking about business deals.
>> It's important.
We can now do deals where, if you deliver a weapon and it works on time, you get paid. And if you don't, you don't.
>> [laughter]
>> Imagine that. And guess what? If you do it cheaper, so you make a little more profit, I'm okay with that. So there's a little more risk sharing there. And ultimately, especially for things that are easier and quicker to produce, where they're not taking a huge R&D risk, like inventing the next space shuttle that can land on the moon, be there for three years, and build a base, all the very speculative, hard things, I think you'll see us moving a lot more toward business-oriented contracts, which is good for them and good for us.
>> Definitely. And better for the taxpayer.
>> Yeah, most importantly. I think we pay enough taxes that we should know where it's going and hopefully that it's not wasted.
>> All right, I don't want to leave without asking you about the Pentagon pizza index.
>> [laughter]
>> Are you aware that there are people tracking how much pizza is ordered near this building, and that they've used it to predict military action?
>> [laughter]
>> So, I've seen that on X. Honestly, I would have no idea where you'd get a pizza delivered to come into the Pentagon.
>> There's a specific Papa John's they track.
>> No, I'm not doubting that, but if I went back to my office right now, how would I order a pizza from outside to be delivered in? I'd have no idea.
>> So, you're not a believer in the Pentagon pizza index.
>> I'm not a believer in the Pentagon pizza index. We shouldn't take it seriously.
>> Huh?
We shouldn't take it seriously. I'm not a believer in it because I literally don't know how you get any food delivered from the outside.
[laughter]
>> This is the Pentagon. You're telling me the Pentagon can go to war with countries thousands of miles away, but it can't get pizzas in the building?
>> [laughter]
>> I'm sure there's a way someone could walk out to the edge of the Pentagon, receive a pizza, and bring it in.
>> It has the best logistics operation in the world.
>> Look, I don't know.
>> [laughter]
>> What if someone's messing with it to mess with the prediction markets?
>> I wouldn't put it past anybody. So it's inherently an unreliable measure, in my view, because it's easy to corrupt.
>> So, the pizza around here.
>> [laughter]
>> I think there is a pizza place or two inside the building that closes at 5.
>> That's why they look at the late-night Papa John's.
>> Apparently. I'll leave it at that. Mr. Under Secretary,
>> A hell of a last question. I wouldn't have guessed that one.
>> All right. Thank you very much.
>> Thanks for coming all the way to DC. My pleasure.
>> Thanks for having us in person. And thanks, everybody, for listening and watching. You now know the secret to the Pentagon pizza index. We'll see you next time on Big Technology podcast.