Senator Mark Warner: Nobody’s Ready for What AI Could Do To Us

Channel: Alex Kantrowitz

Published at: 2026-03-26

YouTube video id: PeRuT-eqgm4

Source: https://www.youtube.com/watch?v=PeRuT-eqgm4

If AI progress is actually moving on an
exponential, are we ready? Let's talk
about it with US Senator Mark Warner
right after this.
Welcome to Big Technology podcast, a
show for cool-headed and nuanced
conversation of the tech world and
beyond. We have a great show for you
today. US Senator Mark Warner is here
with us. We're going to talk about
whether the government is ready for fast
AI progress,
what government data says about
AI-driven job loss so far, and the
latest on that small Anthropic situation
with the Pentagon. Senator Warner, great
to see you again. Welcome to the show.
>> Alex, thanks so much for having me.
>> So, it's been 4 years since we last spoke.
And
I
reached out because I had been getting
freaked out, I'll be honest. I've been
speaking with some folks in and around
these AI labs.
And there's a belief among them that AI
technology is moving on an exponential
and could have real disruptions. And I
think for me and many others who've been
watching this, that was marketing
language a couple of months ago. But now
there's at least a percentage chance
that that's real. And I'm freaked out
because I'm not sure if the government
is ready for an exponential. Silicon
Valley might do exponentials, Washington
does linear, or backwards sometimes. Um, I wanted to get your take. I know you're right into this. Everybody says, you know, go speak to Senator Warner, he's the one that knows what's going on. But I want to get your take on the general vibe in Washington today.
Do you think there's awareness among, you know, rank-and-file members of Congress, uh, in the Senate, that something might be brewing, that there will have to be drastic action to head off the negative consequences if it happens?
>> Well, Alex,
I don't think
government's ready.
I don't think society's ready.
And I know the same
you know, AI optimists who are talking
about this, I actually think they have
changed their pitch
and are now holding back
because they're freaked out about
freaking out people.
And, you know, I am still long AI in terms of value, but boy, short-term, the next 3 to 5 years, the economic disruption is going to be, I just think we are not ready at all.
We don't have good data. We don't know
what's happening.
And, you know, an example I like to give is: just look at Anthropic's Claude products this year, how Claude has already kind of disrupted the whole software business. Now, the
market recovered a little bit, but then
it hit the same thing on the HR
business.
The markets don't respond that way that
quickly if people aren't saying there's
going to be fundamentally dramatic
change
in kind of industry fundamentals. And
that's just two areas.
And I think there is much, much more to
come.
So, I've heard you say this a couple
times that these CEOs may be downplaying
the impact. Uh I know they speak with
you privately. Are they telling you
things like saying, "Hey, Senator
Warner, don't say this to other people,
but here's what we think" or what what
brings you to that assessment?
>> What brings me to that assessment is, you know, the CEOs who are saying this in the AI space.
And what I'm hearing privately from big brand-name firms who are saying they're cutting off, or cutting in half, the number of interns or first-year hires. I even heard from a
nationally known law firm that has
decided to hire no first-year
associates. They're going to take a
pause
and see how this works out before they
even hire. All these kids, after they've done everything to get through law school, and they thought they had a job offer with a big brand firm, and then it's just going away. Nothing they did. And because of AI.
>> Yeah, because of AI. And I hear, like,
so many companies that are mid-size who
say, "You know,
I had one guy the other day saying, you
know, I had 23 people do this back
office function. Now I got three. Isn't
that amazing?"
And the thing is we are not even
collecting data on this yet. That's why I've got a bill with Josh Hawley, very bipartisan, that says to the BLS, the Bureau of Labor Statistics: we need to start measuring this.
Um
and not just in terms of firms, like a Jack Dorsey saying he's cutting 40% of his staff because of AI, and whether that's true or not we won't know for sure, but also try to measure
you know, jobs that would traditionally
have been created because my view is
that this is going to particularly hit
kids coming out of college, coming out
of graduate school. We're at about 9%
recent college graduate unemployment. I
think that number will actually go to
30%. And think of the economic disruption that will have, not only on those young people that don't get jobs, but their parents who helped finance their college education, and the level of fear that is amongst everybody I know that's in college at this point.
I don't think people are factoring that
in. And and to say government's not
ready would be an understatement.
Right. And we're going to talk about some of the legislation that you have brewing. But, you know, it takes more than one or two senators here. And you've already passed the Claude test, Senator Warner, which is: you're a senator that knows what Claude is. Of 100 US senators, how many of them do you think know what Claude is?
>> Well, I hope more than you and I think. But okay,
>> You know, and again, I don't know if you want to go now into the whole, you know, Claude's-part-of-Anthropic thing, whether we want to go down that path now. But I would argue that, pick your company, Anthropic, OpenAI, obviously Google is doing well, we've got a half dozen LLMs that are making major advances.
But, you know, what's happening with Anthropic at this point: they were doing business with the Defense Department and being very well used. And, you know, the Anthropic leadership got crosswise with Hegseth's DOD. And
obviously, any company if they're going
to do business with with DOD has to make
some accommodation. But the idea that
we're going to turn over to Pete Hegseth
the ability to completely decide that
these AI tools can be used totally for
surveillance without any guardrails or
even potentially worse, creating AI
weapons without a human in the loop.
That's a big freaking deal. And if we were not in this war with Iran at this point, I think, you know, that would have been a major focus: what is even happening with this? So I'm trying to rally the tech
community to say, regardless of what you
feel about Trump and Hegseth,
you know, look at these decisions: Hegseth is trying to declare Anthropic a supply chain risk.
That would mean that not only could Anthropic not do business with DOD, but any company, and virtually every major company in America does some level of business with DOD, couldn't do business with Anthropic as well. This would give a single individual the ability to write a death sentence for major American tech companies. And
people need to realize this stuff is
happening real time.
Okay. So, I guess, the reason why I'm asking about awareness, and I take your point, we're going to talk more about this in the second half, about the Anthropic dispute with the Department of Defense, Department of War, whatever you want to call it. The reason why I asked is because, and maybe this dispute is giving more awareness to Anthropic,
I just wanted to see if you could reassure me, or maybe you're saying there is little reassurance, that when it comes to the list of priorities your colleagues have, this at least ranks.
Right, because I remember reporting on the social media stuff, uh, 5, 10 years ago. It was clear that there was no awareness, and you know what? I guess we sort of came out of it okay. But this is my worry: there's the same lack of awareness in the government for something that could happen fast.
>> Well, amen. Amen. Like
social media
was a challenge. And you know, I had
bipartisan bills on data portability,
interoperability, delegability, which is
now basically called agentic AI. We had
things about dark patterns. There was
lots of bipartisan action. And all the
social media companies,
you know, they all said, "Yeah, we want
some meaningful regulation" until you
put words on the page.
And we batted zero. We still haven't
even done the freaking kids online
safety bills.
So, social media was a challenge. It has, I think, psychological effects on young people. But it is tiny compared to
AI. When we think about you know, the
stories already we're seeing about AI,
you know, leading kids potentially to
suicide. And we're seeing what was kind of a one-off story just 6 months ago, people becoming romantically involved with AI agents; now this is actually a statistical thing you can look at. Um
and
that's just on the kind of
psychological societal effects. But on
the job effects, we just don't have good data. We have people, I think, shifting blame. You know, I saw Amazon announced 11,000 job losses. They say none of that's due to AI. But I've got to tell you, you wouldn't have literally not billions but trillions of dollars of private capital coming in if investors didn't think these enterprises were going to return. Now, some of this may be because
we're going to have great healthcare
breakthroughs or have AI-created jobs,
which I believe we will.
But in the short-term, the amount of AI
job dislocation is going to be
jaw-dropping, and I don't think the majority of senators understand. And I think they can be convinced.
Um, and I just fear that what we've got now is, you know, the overriding agenda coming out of the Trump administration: they are kind of AI accelerationists.
Yeah, pedal to the metal cuz we got to
beat China. And we do have to beat
China. But the idea that we are not
going to think about any guardrails or
about the short-term economic
consequences,
I think is really frightening. And as somebody who still believes in the power of AI, and by the way, there's no way we're putting the genie back in the bottle anyway, I think it could have positive effects. But we could actually have populism on the left and the right coming together to try to, you know, snuff out the innovation, and do it ham-handedly.
So, boy, this is, you know, as I'm trying to get hired one last time in this job, probably the major reason is: can I help navigate, um, you know, some of these AI solutions. And I don't pretend to have by any means all the answers. Matter of fact, Alex, I think we may have talked about this at one point. We go way back in time, like 3 years ago, the, at least at that point, well-thought-through guarantee of a job was: let's at least make sure everybody has basic coding skills. That was well-intentioned. But it was obviously not the right answer, since those are the first jobs being eliminated.
And so when you think about the way that
your colleagues view this,
uh
is it high priority, medium priority, or
low priority for them?
Listen, this stuff is hard. I don't, you know,
I joke, but it's kind of true. There is
no real linear relationship between me
spending more time on AI
>> and actually thinking I have a better
understanding. It is evolving so quickly. I think most
members, you know, and this is a human
reaction. If you don't get it
and it seems too complicated, you want
to try to punt on that. And that that
allows for, you know, simple-minded solutions like
let's just shut it down or let's just,
you know, have a moratorium on all data
centers for a year. That's not going to
answer the question.
Uh, so we do have to navigate it. And you know,
what small value I hope I can add is not
turning this into a partisan issue and
trying to find folks on both sides of
the aisle that say, "Hey, we got to grapple with this. China and the rest of the world are moving ahead. There is no way we can reverse
this. But we are not powerless both to
put guardrails in effect and also say in
terms of the economic dislocation,
you know, and my challenge to the AI
community is you guys are right. If
government defines this all, we'll
probably screw it up. So, you guys
help us define what this transition
looks like, whether it's the training or
reskilling, whatever tool we want to
call it. But you also got to help pay
for it. Because the costs of this are
going to be amazing.
Yeah, and I'll explain a little bit about my line of questioning here. I just wanted to see if you think the government would be able to move fast if we end up seeing this exponential. I'll even take your words. You said recently in a great
YouTube video about the AI challenge: this is as dramatic a change as anything I've seen in my lifetime. You said, think about the transformation brought by the internet; this AI transformation, at the rate we're seeing it, could be over in the next 2 to 3 years. And, you know, I
again, I know you have legislation, at least three bills in action right now on AI: gathering data, trying to understand the implications here, trying to head off the issue. And it's different; you can attack it in a way that's not like "stop it," right? It's like, maybe help people who are at risk of job dislocation. But I'm not very reassured, hearing the way you describe how this issue is being handled in the Senate, that that speed is going to be met.
>> Well, I'm not sure I can point to policymakers anywhere in the world who have figured this out. I mean, I've got good bipartisan legislation: let's put a commission together, similar to the Cyber Solarium that actually put some points on the board, a commission on the economy of the future. We've got, you know, bills to get BLS to start reporting on AI job disruption. I've got a bipartisan bill about how AI is going to affect the financial markets and how we ought to think through this.
They are, I think, thoughtful, but they are, and I am self-acknowledging here, small incremental steps when it very much could be the holy moment.
And
can we think big?
And you know, if Donald Trump the disruptor had an ounce of either empathy or collaborative spirit, somebody that is a disruptor could actually help us through this.
But I want to be more optimistic. Um, but I am terrified. I mean, I had somebody come in the other day with what I thought was a very interesting point: you get three couples of parents together who are talking about their kids.
You know, 10 years ago it would have
been, you know, this globalization. I
don't know if my kid's going to get a
job. 5 years ago it would be, "Oh my
gosh, I'm really concerned about whether
my kid is getting addicted to social
media." Now the conversation, and this
is happening at such a level that our
policy makers get it, they are terrified
that their kids have done everything
right. They're going through college and
there may not be a job there. Right. And
and can I just say, so I brought up
social media as an example of
um our US legislative body's ability to
deal with technology effectively. But
it's different than than social media. I
think we both agree here that with
social media, the big disagreement was
are you going to tell Facebook like how
to handle its newsfeed, what to do?
This isn't necessarily uh legislation
that needs to be or policy that needs to
be can we tell the AI companies to stop
making their models? To be able to
handle the negative effects here, it's
more like how do you stimulate job
growth, retraining? And even that's
probably not proven. But um that's what
gives me hope is that there's a chance
that that that can be and the fact that
you have these bipartisan bills
>> Well, let's hold for a second on the solution.
You know, I've talked to some of my friends in the industry who say, like,
"Let's at least deal with things like
non-consensual nudes."
Do you want your young daughter or son
to be portrayed with a deep fake out
there?
And everybody says yes, but then you
get, you know,
Elon at Grok saying, "No, we you know,
we're going to be an outlier." And you
know, we default to the lowest common
denominator on some of this. You know,
the idea of these horrific stories of of
people being guided to suicide. You
know, we can say, "Well, we're
you know, we're going to try to correct
the model a little bit, but we're always
lagging." I mean, I do think I'm I'm
kind of freaked out about this.
Um
you know, the idea of who you turn your romantic interest to. I think we all remember that movie a few years back, I think it was called Her, where
the the main character fell in love with
a a kind of a a chatbot. That stuff is
happening now not in tiny numbers, but
it's actually starting to appear
statistically. And then we come to the, um, you know, the job dislocation. In most kind of mid-tier public universities, the number one major for most young people is business or business administration.
Those are the jobs that you come out and
you go work for a firm for a couple of
years as a young analyst or whatever.
Those jobs are gone. I mean, somebody
suggested the other day, and I'm not
sure this is right,
that maybe, you know, some of these companies ought to pay an incentive to get more people into nursing as opposed to business administration. We
ought to at least disclose to people
that, you know, the job prospects in
some of these fields are going to
dramatically change.
And I'm just not sure
um I'm just not sure
whether we're ready. And one of the
scary things that I've found, and
I want to be more optimistic, is like
you talk to the leading
AI companies, the leading AI thinkers,
and they'll give you a partial answer.
Well, gosh, we're going to build a lot
of data centers so that the traditional
trades will have an increase. And that
will be a short-term increase, you know,
in terms of um
you know, building those facilities and
there's going to be obviously huge needs
for more electrons. So, I'm a big advocate that we'll never be able to power this without small modular nuclear and other kinds of decentralized power generation. But that's still going to be
a relatively small number. And then you
say, "Well,
how do we make sure that whatever you're
going to do, you can use AI
to become better skilled at it?" And
everybody's kind of got soft terms, because they're making it up right now. But gosh, we got to have that stuff
ready yesterday. And it'll be very interesting to see, you know, even this hiring cycle, as we get close to graduation in May in colleges, whether new grads get jobs.
>> Telling.
>> Yeah, it's going to be very telling. We're going to learn very quickly.
>> So, well, first I'll say a couple things. First
all, as someone who's married to a
nurse, I agree with you. It's a good
career path and I always tell her at
least one of us will be employed in the
long term.
Um, on AI romantic relationships, I mean, you can't possibly believe that adults should not be able to enter into these relationships with AI chatbots. Is that more of a minors thing that you would
>> No, not in terms of legislation, right? I don't know. I mean, obviously on minors, you know. And trust me, I'm not, uh, you know, Big Brother here saying we can prohibit the behavior of adults.
But, you know, at some point, as a functioning society that needs to procreate, that needs to have human relations, I just think we ought to have at least a discussion about this. And I just think that my friends in the community need to not blow it off and say, oh, there's nothing we can do, or, you know, at least put up a bigger warning sign. You know, adults are going to do what they're going to do, but they should be fully informed before going down some of these rabbit holes.
Um, you know, and the ability to have any kind of shared common truth, as we think about how AI could affect, you know, political debate. I am terrified right now of, um, you know, disruptions in our '26 election from foreign sources, or frankly even, you know, the president's willingness to try to say he wants to have the feds take over our elections.
And we have not seen deep fakes used in
a massive way so far.
But as we know, that technology is
evolving on a monthly basis.
Uh, and it only takes one major screw-up in an election cycle, for example, for people who already are losing faith to lose faith in our basic democratic processes.
>> Right. So, you're running for a fourth term, a three-term senator at
this point and one of the things that I
love when I speak with politicians is we
can talk about polling and no one reads
polls better than uh people like
yourself. So, I want to read to you a
couple polls about AI's popularity or
lack thereof and sort of get your read
on what it can mean politically.
Uh, this is from an NBC News poll, you might have seen it. A majority of registered voters, 57%, said they believe that the risks of AI outweigh the
benefits and a plurality of voters view
AI negatively and don't believe either
Democrats or Republicans are doing a
good job handling policy related to the
rapidly advancing technology.
I guess let's leave the reaction to Democrats and Republicans aside for a moment.
What are the consequences? And we've
tried to figure this out on the show, but no one better to speak about it than you. What are the consequences for this AI industry if it continues to poll so low? Are they opening themselves up to political
>> Well, they're opening themselves up. I
think the first line will be
the
war against data centers. You know, and
they are big, they use a lot of power.
You know, and that becomes almost a
proxy for the overall concerns about AI
writ large. And you know, they're going to have to go ahead and make sure that people's electric bills don't go up, that the water supplies don't go down, that they are better screened. You know, I got a
county in Virginia that took their AI
revenues and put it all into affordable
housing. So, people see a tangible
benefit. And in Virginia, we're on the front line. We're data center heaven, in terms of
>> The biggest data center state in the US.
>> We're having a major debate right now at
the state level about trying to extract
you know, somewhere between 500 million
and a billion dollars a year from the
industry. I would hope the industry
would lean into some of these things and say, yes, we will voluntarily help, and we will dedicate that to this economic transition.
You know, it's happening so quickly. I'm
not sure we're going to get that
together, and I'm no longer the governor or the state official. But the tech industry writ large has basically said, and I say this as a pro-tech guy, my business background was tech, I'm a big believer, the tech industry so far has generally said: you know, policymakers don't get us. We can blow them off.
You know, that was clearly the success
of the social media platforms to never
have any regulatory basis at all.
And then when you do have overregulation, say from the EU, they'll point to the EU and say, listen, we don't want to be like the Europeans. They have no innovation at all. So, getting it right is hard, but
on this one, if they kind of ignore it and say, we can blow off any regulatory framework, or we have no obligation, I think it could bite them. Now, it's not going to make AI disappear.
These models are out there, and, you know, China clearly is investing at an amazing rate; even if America closed down, the models can transfer to another entity that has the compute power. So, this is not going away. And, you know, in a certain way,
uh not to sound you know, old school
wistful, but if there was ever a time
where the world as a whole ought to be
thinking through this rather than
nation-state competition
it is on this issue and um
I absolutely do believe that we are now, whether it's, you know, full AGI or not, getting close to where there's magic happening inside these models. At least I've heard from many of them, and they're mostly guys, who have said, like, we don't really understand all that's happening. This is way beyond just predicting the next word, which was kind of the, you know, AI 101 model that people got educated on a long, long time ago, like 2 years ago.
>> Okay, let me run this by you
before we end this segment because you
mentioned the data centers. I was
stunned. So, there was a series of
negative polls about AI that came out
recently and I was stunned at the way
people feel about data centers. So, more
this is from Pew. Sure you've seen this
poll. Far more Americans say data centers are mostly bad than good for the environment, 39% to 4%; for home energy costs, 38% to 6%; and for the quality of life of those who live nearby, 30% to 6%.
I mean, goodness. You know, those are terrible polling numbers for these data centers. Does that mean there are just going to be places where they're not going to get built, because the opposition is so high? There was an Axios report that said something like half of the data centers that are expected to be built this year are delayed. Now, some of that is parts shortages, but I think community opposition is going to be a big part of it.
>> Yeah. Well, and you know, the
interesting thing too, and, you know, rightly, some of the tech companies have said, well, actually look at the electric rates for states that have done a lot of this; they've not seen a dramatic rise. But I think they have to do more than just say, hey, we're going to guarantee no increase in your utility and electric rates. I think they got to put that in statute. I think they got to, you know, move more towards self-generation that is adjacent to the AI facility, so it doesn't go into the full grid. And I think we have to document that. I think they have to do more on the water usage. I think they need to do a much better job on just visual screening. These are big ugly buildings.
>> Yes, they are.
>> The thing is, they are making a little bit of progress.
But to go into
a community
and sell that when the only image you
have
is of say, you know,
data centers in northern Virginia that
are still old school last generation,
that's a hard sell. Now, there always
will be a jurisdiction
that needs that additional revenue to get by, and they do generate revenue, and they don't bring a lot of kids because they don't have a lot of jobs adjacent.
But there needs to be a rethinking on
this.
And I do think, you know, about the state battle that's going on in Virginia right now, I've said to the industry, you guys got to watch this because
to watch this because
we are the mother lode of data centers
and if there is some adjustment of the
kind of economic deal that's going to
happen in Virginia, that is going to be
copied by every other state around the
country. And my pitch to the AI industry is, you know, don't just fight it like mad; be proactive and say, yes, we're going to chip in more, not only to make sure your electric rates don't go up and they are properly shielded, but we're going to actually put money on the table to help through this economic transition.
And, you know, I get a lot of head nods, but there's a lack of specific policy ideas. Alex, I've talked to everybody I can. And most, you know, policy experts and others are observing the problem, or want to do things like I'm trying to do, collect data, but what the actual reskilling and retraining program looks like, you know, we don't have a lot of good examples so far.
>> Yeah, I'm sensing some
frustration with tech companies.
>> Yeah. I mean, I kind of get it. You know, if you think about the big guys, um, they've been pounded on for years. Most of the big guys, the hyperscalers, most of them actually started, you know, as either social media, or you got Amazon and Microsoft and so forth. But they've kind of gotten through with kind of good lip service, but no rules or regulations in place. And, you know, it's sort of like, this time, I think the seriousness, and back to your numbers on data centers, the fear is real and palpable. And
you know, I don't want this innovation to stop, but I do think, you know, sitting down and figuring this out in a more forward-leaning way is really essential. And that's what I'm desperately trying to do here in this job, at least: not allow this to be kind of D's versus R's.
>> All right, let's take a
quick break and then come back and talk
a little bit more about Anthropic and
the Pentagon and the state of AI in
warfare. Back right after this. And
we're back here on Big Technology
Podcast with Senator Mark Warner.
Senator, it's always great to speak with
you. I I was looking at the date of our
last conversation. I can't believe it's
been 4 years.
>> Alex, that's mind-blowing. Yeah. We got to make this more frequent.
>> Yeah.
>> [laughter]
>> Um, so let's pick up on the Anthropic thing. You've definitely stated, in the first half, your opposition to them being labeled as a supply chain risk. The US government right now is in the middle of removing Anthropic from federal agencies; there's actually a 6-month phaseout that the president has ordered. So can you talk about, because you know government agencies very well: is this something where Anthropic is already being removed and you can't really see them being put back? Or is this 6-month deadline something like we've seen in the past with TikTok, a deadline that, because we know they need Anthropic, just gets pushed back again and again? Which is it?
>> Uh, Alex, great question. Yeah, I go back to, like, you know, the TikTok issue. President Trump in his first term, and his Treasury Secretary Steve Mnuchin, who I was good friends with, literally convinced me, you know, about the national security risk around TikTok, particularly the ability to alter the message, more the propaganda than the data collection. And then obviously President Trump completely flipped on that issue, and TikTok's here to stay, and I'd still like more of the details on the controls the new American owners have. So I don't know the answer to that, whether this is talk or they're actually being disconnected.
And, you know, to take out what is, at least at this moment in time, probably the market leader, when there are actually benefits happening from the usage, and, you know, I got no particular beef for Anthropic, I'm not carrying their water here, but I am saying: when you can get thrown out, what happens to Anthropic could happen to OpenAI, it could happen to Amazon, it could happen to Google, you name the entity, and you're going to have to go through a political litmus test. Now, I think
Anthropic probably screwed up their
negotiations with Department of Defense
but to put up, you know, the supply
chain designation which I don't believe
has ever been designated against an
American company
this is a death warrant.
And I don't think any company
technology-driven or not wants to have a
single individual. This is not even the
president, this is Secretary Heckstall
making that determination without some
due process.
This is a big freaking deal, and I think the jury's out on this. I've been trying to talk to all of the other tech companies to say: even if you are Anthropic's biggest competitor, you don't want this precedent set. Particularly because, at least with this administration, as we've seen time and again, they may love you today, but that doesn't mean they're going to love you tomorrow. Think of Marjorie Taylor Greene as the political figure here: if that kind of up-and-down approach is applied to all of our leading tech companies, we're going to lose where we've always had advantages in terms of international take-up. People are going to say, heck, maybe it's better to go with the Chinese model.
>> Okay. So you're the vice chair of the Senate Intelligence Committee. Was, or is, the Pentagon making an AI-based surveillance program of Americans? That was one of the central contentions.
>> I do not know the answer to that, and I should. This administration has not been forthcoming, and unless we have bipartisan oversight, we're not going to get those answers. There have been concerns raised, and this is not just around the Intelligence Committee; it ought to also be the Armed Services Committee and others. I've had conversations with a lot of my Republican friends, making the case that this is a big deal, that we've got to know some of this. We might decide that that is the right choice. We may even decide, although I can't imagine this to be the case, that we're ready to move to AI weapons without a human in the loop. It's easier to make that decision on the defensive side: a missile system that would fire on an incoming adversary to protect an aircraft carrier, there's an argument there without a human in the loop.
On the offensive side, it's a much more challenging argument, but we ought to have those arguments rather than a single person, in this case Pete Hegseth, making that determination.
>> Palantir recently demoed its Maven Smart System at a conference and showed how it selected targets. It seems like Palantir is actually far more consequential in war fighting than Claude, although maybe there have been updates where Claude was embedded that we don't know about. I'm curious, from your position, because you know this better than almost anyone: how important is Palantir there? And when you think about the war with Iran right now, is it Palantir selecting the targets?
>> Listen, I think Palantir has been a very successful company. I think Andrew has been a very successful guy. I think the idea that these new entrants are shaking up the primes in many ways makes sense. I also actually think that Alex Karp is thoughtful on a number of these issues. I raised real concerns about Palantir and the six other technology companies that have taken contracts with the Department of Homeland Security.
And I've been extraordinarily concerned about DHS or ICE, as we saw with people targeted in Minnesota. I mean, literally a lady who was up for the Global Entry pass got denied because they had evidence that she'd shown up at a protest. Do we really want DHS or ICE making those determinations? Palantir and some of the companies are saying they are not doing that, but how do we independently validate it? This is where we're entering a realm where, at some point, you still need objective third parties, whether they be academic or other experts, helping keep both sides honest, both sides being the government and the tech companies. And I've found with some of these companies a willingness to participate, or at least they've told me they're willing to participate, in that kind of review and oversight. But it really is going to take both political parties in DC to realize this is not a Democrat-Republican issue. We're setting the ground rules for stuff that, if we don't put ground rules in place, could lead to a pretty spooky place. There's a reason the overwhelming majority of science fiction movies about the future have this kind of dystopian future: that default is actually easier than thinking this through in a rational way.
>> On the Palantir side of the Iran war: obviously, it seems like the United States did target and hit that girls' school in Iran, and it was presumably bad targeting. Again, as vice chair of the Senate Intelligence Committee, do you have any idea whether a US technology layer like Palantir was involved there?
>> You know, I think we need a full investigation, and I'm a little old school in that I think we ought to refrain from making a conclusion before you've got all the facts. This girls' school was literally right adjacent to an Iranian military base. Was this DIA? Was it CENTCOM? I think we need to get the facts out on this.
But we all know technology makes mistakes, and that's where the rub comes with this kind of horrific event. Let's get the facts before we draw conclusions. But what is problematic is when the president of the United States, whose initial reaction, and I can't believe it came from the intelligence community when he was briefed, was, "Oh, this was the Iranians bombing their own school," and then, when they showed him the material that it was an American missile, said, "Well, maybe they got them." When that kind of absurdist response comes from the commander-in-chief, it undermines, I think, not only the confidence of the American people that we're going to get the truth, it also doesn't help us in terms of how the world views us.
For all our flaws, we have been generally viewed as the good guys. And when we lose that designation, that doesn't make America safer. I'll just leave it at that at this point.
>> Okay, that's a very telling answer. That's very interesting. All right, I have a couple more for you before we leave. First of all, on the AI job disruption question, you've mentioned bipartisanship a number of times.
I want to put this to you: I'm going to be in DC a couple of weeks from now, and I'd love to interview one of your Republican colleagues.
>> Yeah, I'd love to get you somebody. Mike Rounds is very thoughtful on this stuff. I've got a lot of Republican friends that I think would love to sit down with you. And especially on some of the weapons issues, I think Mike Rounds is, frankly, ahead of me on thinking through some of this stuff.
>> Okay, so maybe I can get in touch with your staffers after this, and we can find a way to connect with him. That'd be great. Okay. Also, 4 years ago, we talked about an issue that's been, I think, really important to me and really important to many Americans, which is that we see, whether it meets the legal definition or not, insider trading within Congress. And you were great in your statement saying we shouldn't see this anymore. But here we are, 4 years later. This is just one example that came through my timeline this week: it looks like Josh Gottheimer, who's on the House Intelligence Committee, bought Exxon twice in early February. Now, who knows if that's necessarily connected to the fact that the Iran war was brewing, but it doesn't look great. Why do you think it's been so difficult for Congress to pass legislation around this?
>> I can't answer that. I mean, I don't know. It seems like it should be a no-brainer. And I'm lucky enough that I was able to put all of my stuff in an independent blind trust; I don't know anything that I own. I think we've completely gotten out of all trading, and I've moved from mostly stocks to, I think, mutual funds. But there are issues I've seen. I was a venture capitalist for many years before I got into this. I've invested in companies that took 10 to 15 years to go from startup to a public company, and I have a policy that if something becomes public, we try to sell it.
But that still shows up as, "Why is Warner selling this stock right now?" Well, I don't want to own the stock at this point. You have to disgorge, even in a company that you had long before you were in public service. There is some complexity to this stuff. And again, I've been very, very lucky; I've got the freedom because I was able to do very well in technology, and I'm going to be fine regardless. I don't want to chase people off from even going into public service, because if they're somewhere along in their career and they were a founder of a single company, what do they do? I don't know the full answer. But all of those are nits, actually, compared to this: we ought to have a rule that members of Congress shouldn't trade stocks. But here's the part, Alex, that makes people even more cynical. I am right now in the middle of the final negotiations on trying to put in place certain rules around crypto.
Crypto is here to stay, and there are some real beneficial aspects of crypto. But if we're going to have a market structure bill, and we've already passed a stablecoin bill, one of the things that makes it difficult to get it finished is when the President of the United States so grossly, totally enriches himself through this industry, and wants ethics rules to apply to Congress and members of the cabinet, but not to the first family. We ought to be passing these ethics restrictions, but, boy oh boy, there ought not to be a carve-out for anybody whose name rhymes with Trump.
>> Okay. Well, I'm with you on that. Look, Senator Warner, I can't say I'm more reassured that Congress has it under control on the AI front, but I am really thankful that you're out there stirring it up, working across the aisle, and trying to make some progress. I'm sure it's not easy, and I appreciate you doing it. I appreciate you spending the time here again.
>> No, Alex, we should do this on more than a quadrennial basis, because these issues are coming, as you know and I know. And this is one of the things I would appeal to you on: you've got a very sophisticated audience. If part of your audience has ideas or suggestions, please, I'm wide open for business on what these policy notions ought to be. You can get to me easily online. But it's going to take all of us in this, because getting it wrong, boy oh boy, getting it wrong could be a major disaster. But thank you for having me on, Alex.
>> Definitely. Yeah, it was great having you. And I'll tell you, on social media, that beat took me about 10 years before I ended up in DC covering hearings. The speed at which I had to call and say we've got to talk about AI was much faster. So thank you again, and I'm sure the audience won't be shy about writing you.
>> Let me know. Thank you, Alex. Be well.
>> Thank you. All right, everybody, thanks for watching, and we'll see you next time on Big Technology Podcast.