Dario’s Choice and Anthropic’s Future, Apple’s AI Devices, Netflix Loses WBD

Channel: Alex Kantrowitz

Published at: 2026-03-04

YouTube video id: Pzhr1_MwmBk

Source: https://www.youtube.com/watch?v=Pzhr1_MwmBk

Anthropic's war with the Pentagon hits another level, Apple's preparing three AI devices (but the iPhone might be the killer feature), and Netflix will have to go forward without Warner Bros. Discovery. We'll dig into what it all means with Spyglass's MG Siegler, right after this.

Welcome to Big Technology Podcast. It's the first Monday of the month, which means MG Siegler from Spyglass is here to break down the month's news with us. And boy, am I glad we have an episode for you today, because it feels like a year's worth of news has happened over the weekend since we last left you on Friday. The Pentagon declared Anthropic a supply chain risk, making it clear that Anthropic cannot work with the government, or with its contractors on government work, which is going to be a major hit to the business if it holds up. We also have OpenAI coming in and signing a deal very similar to the one Anthropic was just about to sign with the Pentagon. So we're going to dig into the latest in that story and what the implications might be for Anthropic and the rest of the AI industry. We'll also talk about Apple's forthcoming AI devices (it's a set of them) and, of course, Netflix losing the Warner Bros. Discovery deal; Paramount swoops in and pays a lot of money for that shrinking property. All right, MG, great to see you. Thank you for being here.
>> Great to see you, Alex. As you know, I'm happy to be here. A week ago, I was in Dubai, so my family was lucky in the timing of getting out of there. But obviously, our thoughts are with all the people over there; it's a terrible situation, of course.
>> Definitely. No, I'm very glad that you and your family made it out. And yeah, it seems like it's not just military infrastructure but civilian hotels, airports, even a data center that was hit; an Amazon data center, actually, that may or may not have been serving Anthropic. Let's pick up on this story of Anthropic and the Pentagon, because we now have some more news about what exactly led to the dispute and what the fallout might be. We have movement, right? Anthropic's lost the deal. Not only that, they can't work with the government anymore, maybe. And OpenAI has now picked up that deal. So let me take you through what exactly happened, because there's this Atlantic story, "Inside Anthropic's Killer Robot Dispute With the Pentagon." They say that on Friday morning, Anthropic received word that Secretary of War Pete Hegseth's team was going to make a major concession: pledge not to use Anthropic's AI for mass domestic surveillance or fully autonomous killing machines, but then qualify those pledges with loophole-laden phrases like "as appropriate," suggesting that the terms would be subject to change based on the administration's interpretation of the given situation. And here's where it goes off the rails: "But on Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. This could include information such as questions you ask your favorite chatbot, your Google search history, your GPS-tracked movement, and your credit card transactions, all of which would be cross-referenced with other details about your life." Anthropic's leadership told Hegseth's team that was a bridge too far, and the deal fell apart.
Just to pick up my perspective from Friday, where I said maybe there's not really a there there and this is likely positioning and marketing: I now think there is more of a there there. It does seem like this is a good line for Anthropic to draw. However, as I kept reading more about this, it seemed to me like this is a deal that did not need to fall apart, that there were ways to word it so you could include the carveouts everybody had agreed to, and it would have been fine. But the Pentagon set this deadline for Friday at 5:00 p.m. and stuck with it. Basically, Dario didn't return their calls in the way that they wanted, and then they went nuclear, substituted them out with OpenAI, and declared them a supply chain risk. That's sort of my perspective on where we stand today. MG, do these details change the way you see the story, and what's your general read on where it is and where it's going?
>> So, I haven't actually written about this, in part because we're all obviously digesting it a bit in real time, and it's a delicate situation given what we just talked about with the situation in the Middle East. The most wild thing to me about all of this, and you note the Friday deadline, is that Secretary Hegseth was going through these negotiations in the middle of major preparations for war. We didn't necessarily know that at the time, though clearly the buildup was happening, and in the middle of getting ready for these strikes, they are going back and forth with an AI technology provider to try to get them to agree to terms. And so a cynical part of me wonders whether they knew this was coming (not that they would disclose anything like that to Anthropic, necessarily) and thought: we need to get something done now, because we're probably going to be using some of this technology in the forthcoming war preparations and the execution of the war strategy. And is this the best position from which to lay down the terms we want, where maybe Anthropic will have to yield a bit more easily? But maybe he hadn't done enough research on Dario, listened to your interviews and many other interviews, to know what his response was likely to be to these types of ultimatums. So yeah, it does feel like they probably could have hashed this out, but I do wonder, again, if the timing of the macro stuff, of the actual war and attack situation, just added time pressure to some of this.
>> Well, this is interesting, because they actually did end up using Anthropic in these strikes,
>> right?
>> And last week on Friday, I said Anthropic's use was limited, because I was reading the reports saying Palantir has Anthropic involved, and that was what started this entire discussion: Palantir systems were used in the capture of Maduro, and Anthropic had some questions about how they were used. I won't say I got that wrong, but I had an incomplete picture of how deeply integrated Anthropic already is in the US government, and this stunned me. This is from the Wall Street Journal: "US Strikes in the Middle East Use Anthropic Hours After Trump Ban." By the way, the ban, and we'll talk about this, takes effect six months from now, that they can't use it. But already, with the Iran strikes, they are using Anthropic. Here's the Wall Street Journal story: "Within hours of declaring that the federal government will end use of artificial intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including US Central Command in the Middle East, use Anthropic's Claude AI tool. The command uses the tool for intelligence assessments, target identification, and simulating battle scenarios, even as tension between the company and the Pentagon ratcheted up, highlighting how embedded the AI tools are in military operations." This isn't just military analysts asking Claude questions; it seems like you have war games going on with Claude, which is much more than I expected. I'd love to get your reaction, (a) now that we're learning how deeply it is integrated, and (b) why would the military risk having to substitute it out over language they could have agreed on with Anthropic, and just didn't?
>> Again, I sort of come back to the notion: was this just the worst possible timing, in some ways, for both sides? If it were a more stable situation, maybe the two sides could have sat down and hashed things out a little bit more, but given the buildup to this, it seemed like the administration was very fast to get exasperated with Anthropic. And now, again, you might see why. It's like: look, we don't have time for this, guys. We are preparing for some military action right now. If you're not on board, unfortunately, we already have the systems in place; we're using those right now. We'd love for you to be on board, but if you're not, that's something we can discuss, I guess, down the road, to your point, like six months later. And also to your point, it's not just that they're using Claude chatbot stuff. This is directly related, it seems, to their contracts with Palantir, and also Amazon, which has its own government cloud offerings that allow these things to operate behind their own firewalls and in secure centers and whatnot. So again, this is not something they could swap out overnight. It's not something that, even if they give clearance to OpenAI or anyone else, they can just drop in, because these things have to be tested. If you're all of a sudden swapping out your main model and you're running literal war games on it, how do you know what to trust and what not to? So again, it just feels like the timing of it. Maybe it's: guys, we need to make sure all of our i's are dotted and t's are crossed before we go ahead with this operation; as you know, we're going to be using some new technology this go-around, so has anyone talked to Anthropic about the latest with what they're thinking? Then, as you noted, there was the Maduro situation, and Palantir, it seems, was involved in that as well, so that came to the forefront there. This is all sorts of snarled, a weird entanglement going on right now. I feel like, after all the talk we've been doing about circular deals and all that, we're now at new stakes in terms of where this is all getting integrated within these systems.
>> Right. All those science fiction papers about AI potentially being used in the military somewhere down the line, in future years: oh, wait a second, it's already being used. But it is interesting, because they do have this six-month deadline to disentangle themselves from the federal government, or really, the federal government has the six-month deadline to disentangle itself from Anthropic. Are you suggesting, because of the timing, that basically it was: it's not that we need you to meet this Friday deadline because we're going to swap in another model; it's, we're going to give you this Friday deadline because we've got some other stuff we need to handle? And
>> I mean, doesn't it feel like that? Obviously, I have no idea; I have no inside sources at the Pentagon to know whether they were giving this ultimatum because of their timeline for war preparations. But it does feel like they must have been, at some level, thinking: we don't have time for this right now. If you want to hash something out, great, here's X, Y, and Z partner on the team who can talk you through it. But if not, sorry, we'll revisit this at some other point.
>> But that's the thing I don't understand, because you're about to fight a major operation, right? This is a war, and
>> you're not going to be able to swap out Anthropic. So if Anthropic causes problems right before the war starts, you're not going to switch to DeepSeek or OpenAI on Friday afternoon.
>> DeepSeek. Can you imagine that? That would go over really well. I mean, at this point anything's possible, right? I wouldn't do it, but I wouldn't be stunned. Look what they just did to an American company. But I guess it's interesting, because if you fight that war and you say, all right, Anthropic gave us problems during the war, that's maybe when you start the process of thinking about it; you're going to find out one way or the other. And by the way, the attack was quite successful in the early going. I'm not sure this is all Anthropic's doing; that would be a bridge too far, I think, to put it all on Anthropic. But if that's the AI tool you're using, and you're having a pretty successful campaign early on, militarily at least,
>> I don't know, is that when you want to start subbing them out?
>> So, two things on that. One, again, I do think the Maduro situation obviously played a role in this. It weirdly hinted at what was to come, right? Because all of a sudden we learned that maybe it was being used. There are conflicting reports, but some of them had the notion that Anthropic learned how Palantir was potentially using it for that raid, didn't like that too much, and raised it up the chain a little bit, and maybe the administration and Palantir didn't like the fact that they were doing that. So fast forward to now: the government obviously knows it's heading into this new situation. Either they wanted to try to get it squared away before they did that, or, to my first point, it's possible they used it as a point of leverage over Anthropic, right? To say: look, we understand there's been this back and forth about how we're potentially using the technology here, but we're going to keep using these things, your models, going forward, and we'd love to keep doing that, and, dot dot dot, by the way (again, they wouldn't tip their hand), let's just check in a week from now and see what you think, how this is playing out, and whether you really want to be on the wrong side of this, from their vantage point.
>> Right. And look, the more I think about this, the more it seems to me, like I shared on Friday, that this is sort of an ultimate culture clash. We'll get into the OpenAI deal in a moment, but you look at Emil Michael, the Under Secretary of War who's been working on this: he clearly doesn't like Dario, clearly doesn't like the Anthropic team, and knowing about him and knowing about them, I wouldn't be surprised there would be a culture clash there. And in fact, I read at the beginning that Anthropic stood up against what they thought was going to be domestic surveillance, and the two sides had seemingly agreed on the autonomous warfare part. Here's what Emil Michael said. It's one of those things where I'd report on details like this for years, and he just tweeted it out, sort of what happened behind closed doors. He says they "wanted language that would prevent all Department of War employees from doing a LinkedIn search. They wanted to stop the Department of War from using any public database that would enable us to, for example, recruit military service members, hire new employees. When I called to discuss cutting off the Department of War from using publicly available information that would hurt our military readiness, Dario didn't have the courage to answer." Right? This is the now sort of infamous call: Emil called him before the deadline, Dario was in a meeting, and by the time he got out of the meeting, this whole thing had blown up. Now, this is really where it gets wild. He says: "We agreed in writing to act according to the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978 and all other applicable laws. They wanted the word 'pursuant' versus 'consistent with' and wanted to delete 'all applicable laws,' which was less protective of Americans. Can't make this up. We also agreed to human oversight of all weapon systems by saying the Department of War will use the AI systems for all lawful use cases in accordance with all applicable US laws and Department of War directives. And we wanted to retain the ability to override or disable the AI system as appropriate." Then, talking about Dario: "He didn't like the words 'as appropriate.' Would he prefer 'inappropriate'? I agreed. I even agreed to take that out. He knows it. His investors, customers, and employees should know about his lies. Risking the safety and security of our country and our troops as a marketing vehicle for him." I mean, I'm just going to say it: if you have two adults in the room, you should be able to work out this language. The other explanation is that the Department of Defense really did want to be able to override these systems, really did want to be able to conduct domestic surveillance. But again, we're talking about a tool so important to the military today, used in the cases we described, that blowing the deal up over these terms seems to me like a ridiculous thing.
>> I think there are a few things going on here. So, first and foremost, hearing you talk through those exact quotes: I'm sure you've been involved with deals, and I've been involved on the deal side a number of times, and when lawyers get involved, they want to use very explicit language to make sure everything is drilled down and there's no wiggle room. Lawyers themselves, for lack of a better phrase, go to war over these little terms, right? It's like, no, we can't say it this way, we have to say it exactly this way, and the other side's lawyers will say, no, we can't let them say it this way. So there's definitely some level of that. I know they're talking about this at the Emil-and-Dario level, certainly, but the legalese stuff just seems like lawyers going back and forth on both sides to try to cover their own asses, and the companies' asses, in the downside scenario. That said, I think you hit on it earlier: obviously these two sides just don't like each other on a philosophical level, right? There has long been the charge against Anthropic from the Trump administration that maybe Anthropic is the more quote-unquote woke AI company, that they have all this effective altruism stuff going on that the administration doesn't like; David Sacks has come out strongly on these issues, and they just feel the company is misaligned philosophically. And I do think it's an awkward situation, because (I don't know this for sure, but I wouldn't be shocked) they didn't necessarily know just how vital Anthropic was to some of the systems they're using, again with regard to Palantir. Obviously they use Palantir for a lot of different things, and the government famously has for a while, for different services. Everyone, I think, across the board loves Anthropic's models for different reasons, regardless of your philosophical bent about the team that's building them; they have great technology. And so, with Palantir and then Amazon and a bunch of others having used Anthropic's services, maybe the government just wasn't savvy enough to know how integrated Anthropic itself was, and that they can't, as we were talking about, swap it out overnight. "Everyone makes frontier models; we can use OpenAI, we can use Google, we can use anyone, let's just get someone else in there." It's not going to be that simple. So I think all of these things came to a head leading up to the situation we're talking about with the attacks last weekend. It feels like there was a boiling point, and maybe some points of leverage that Emil and some others thought about. And obviously we didn't even talk about the Trump tweet. He tweeted, basically trying to end Anthropic as we know it, saying: we're done dealing with them, best of luck with whatever you do, we're not working with you anymore, and none of our partners are working with you. And the government has partnerships with Google, with Amazon, with everyone else. It felt like a potentially existential threat to Anthropic itself. So there are many layers to this, and the reporting every single day comes up with more layers to unravel. And it's just weird to think that all of this is unraveling while there are actual attacks going on. Crazy.
>> Yeah, insane.
>> Here's General Jack Shanahan, who's no friend to the sort of woke wing of the tech industry. He's the general behind the Maven program, the partnership between Google and the Department of Defense that Google employees rebelled against. You might expect him to be sympathetic to the Department of War's position. He's not. He says: "I'm sympathetic to Anthropic's position. No LLM anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. Despite the hype, frontier models are not ready for prime time in national security settings. Overreliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks. Seems like a reasonable second red line. That's it. Those are the two showstoppers. Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end. This should never have been such a public spat. It should have been handled quietly behind the scenes. Scratching my head over why there was such a misunderstanding on both sides about terms and conditions of use. Something went very wrong during the rush to roll out the models. Let reason and sanity prevail." I mean, that seems like a pretty reasonable take.
>> It does. But again, I think maybe it was a trickle-down effect of the Maduro situation coming into this: the government, knowing it was going into this situation, not wanting this to come up. Say these attacks started, and Anthropic got wind that their models were being used via Palantir or whatnot, and they started to raise this giant PR campaign against the government for doing that. Now, you might say that would backfire against Anthropic, and it could have, but who knows exactly how it would have played out. I'm just trying to game through what the government was thinking here, in terms of why engage with this ahead of time. Either they viewed it as a point of leverage over Anthropic leading up to this, they knew, or thought, they could get more of what they wanted out of Dario, or they wanted to cover themselves for if and when they went forward with this and used these models. But again, you point to the other stuff: there are multiple layers here. It's not just war game scenarios and things like that. It is the mass surveillance stuff, which obviously Anthropic cares about, and you would be hard-pressed to find people who would be on the other side of that, right? To your point on the general's comments, not everyone, of course, but a lot of people, I think, would be on that side. But the government's pushback against that, at least to date, has been: we just don't want Anthropic to have de facto say over anything. It's not that they're saying, we want to mass-surveil the American populace; they would say the laws are already in place against that. Obviously there are gray areas with all of this stuff, but their stance is: we do not think a company should have de facto say over what we would do in these situations, and again, the plan is not to mass-surveil the US. But these are slippery slopes, which is what Anthropic would argue, I would assume, and so the two sides could, and continually will, go back and forth over those issues.
>> Right. And I would still hold that they should have come to a deal, but they didn't. And so now the question is what happens next. As I mentioned earlier, the Pentagon has labeled Anthropic a supply chain risk, which, as I understand it, means no federal government agency can work with Anthropic after this six-month deadline. Not only that: private companies working with the government on certain contracting work cannot use Anthropic for that work. So, by the way, let's say you're a Boeing. You may not want one model that your engineers use for government work and a different model that they use for commercial work; you want standardization. So this is a potentially very big hit for Anthropic. Not just the $200 million contract it had with the Pentagon; this is a potential billion-dollar, multi-billion-dollar hit if the Pentagon does go through with this designation. Would you agree?
>> Oh, I totally agree. It's not just, as you said, the contract itself. It's the trickle-down effects and the broader ramifications if they lose that distinction. And it puts a chilling effect on new contracts, right? Because some other company might be thinking, oh, we might do a government contract one day, and, to your point, would we rather just use one model for all of our work, or would we really want to have to swap out Anthropic for OpenAI if we do go forward with that government contract? And to that point, I do think, and there's been subsequent reporting on this, there's still some talk that the two sides want to figure out how to make this work, if for nothing else than this: if there's still a six-month window where the government will be using the Anthropic models, those six months are probably going to be pretty intense in terms of what's going down from the war perspective. Yeah, a lot of tokens being used. So they probably do want to find a way to hash things out. The hope, obviously, is that cooler heads prevail, maybe once this initial wave of attacks is behind us, hopefully, and that the two sides can sit down again and hash out the legalese, as we were talking about, and the exact wording of how to go forward. Because yes, it's bad for Anthropic if they get ripped out of the US government, as they're talking about.
>> Right. And we should say that this supply chain risk designation is not something that's typically used for domestic companies, right? It's typically
>> Right, it's Chinese companies. It's like all the threats that were used against Huawei and the other Chinese companies, and it's wild that this is the backdrop behind all of this right now. I noted this earlier, and a number of people have seen it: Claude is now the number one app in the App Store, which is wild, and it's very clearly related to some of this. It's not just that; obviously Anthropic's been doing well with the new Opus models that have rolled out, and with Cowork and Claude Code and whatnot. But some of this is certainly virtue signaling, if nothing else, right? People are saying, "Oh yeah, we want to be on the side of the AI company that is pushing back against the government that's trying to mass-surveil," or that's at least how it's being portrayed in the headlines. And so
>> And who does that remind you of?
>> Tim Cook, ten years ago. Ten years and one month ago was the time he sent out that memo about standing up to the FBI, and Apple has basically capitalized on that for the last decade.
>> Yeah. And so is Anthropic running a similar playbook? Maybe not explicitly, at least right now; they're not doing PR campaigns, and it would not be in great taste to do that at the moment. But still, it doesn't seem completely unrelated that Claude is shooting up the charts, with people thinking Anthropic is positioned as the AI company that's going to be the quote-unquote moral one. And that's obviously a whole hornet's nest of a topic in itself.
>> That's right. I will give my hot take here, which is that this supply chain risk threat never manifests, never takes effect. Again, it's a six-month deadline, and we've seen six-month deadlines a lot from the US government, often around TikTok: oh, we'll ban TikTok in six months; yeah, we'll extend it for another six months. If Anthropic is this pivotal for the government, then they will just continue to use it and extend this, or rescind it, or it won't hold up in court. So that's hot take one. The other side of it, though, is that I've already heard rumblings from big companies that are government contractors that they will preemptively take Anthropic out of their workflows, or at least are highly considering it, because they don't want to bank on this getting extended. So even if it doesn't go through completely, I do anticipate it will hurt Anthropic when it comes to these private companies.
>> I agree with you, but I would also just say I wouldn't fully discount the notion, which we talked about already, that these two sides just don't like each other at the level of the personnel involved, right? Every indication seems that way. So are they going to be able to get past the grudge between the Trump administration and Dario, basically? Or does some intermediary have to come in to assuage that in some way? Because, yeah, there's the TikTok thing and everything else, the sort of TACO stuff, right? Trump always chickens out of the things he threatens and goes back on them. Is this another one of those? And again, it feels like yes, it probably will be, except if they view it as a chance to make some more philosophical, high-level points about quote-unquote woke companies, or companies they view as misaligned with the American public and the electorate and all that. They might dig in their heels a little bit more because of that.
>> Yeah, I do think TACO really did apply with the tariffs, but maybe after this Iran thing, it's going to be tougher for that label to stick. Maybe the intermediary that comes in is Sam Altman, or maybe not. I mean,
>> He swooped right in. This is from the Times: as these discussions were breaking down with Anthropic, Emil Michael had an ace up his sleeve. On the side, he had been hammering out an alternative to Anthropic with its rival OpenAI. A framework between the Pentagon and OpenAI had already been reached. Mr. Altman of OpenAI got on a call with Mr. Michael to discuss a deal for his company. Within a day, they had drafted the framework. OpenAI agreed to the Pentagon's requirement that its AI could be used for all lawful purposes, but it also negotiated the right to put technical guardrails on its systems to adhere to its safety principles. At 10 p.m. on Friday, as Anthropic's lawyers began working on a lawsuit against the Pentagon, Mr. Altman was on the phone with Mr. Michael, finalizing the details of OpenAI's deal with the Department of Defense. Mr. Altman then posted the news of the agreement on social media on Saturday. Altman invited people to ask him questions on X about the deal as OpenAI faced a backlash for swooping in. He goes, "We don't want the ability to opine on a specific legal military action, but we do really want the ability to use our expertise to design a safe system." Basically a very similar deal to the one that Anthropic could not agree on with the Pentagon. Your thoughts on OpenAI's role in this whole situation? Classic, I guess.
>> I mean, this could 100% have been predicted, right? You see the opening, Sam Altman sees the opening, Sam Altman's going to take that opening. He is going to immediately ring up Emil Michael, get him on the line, and figure out a way to swoop in there and not only potentially take over all these contracts, but also, obviously, position this as though they are the peace broker here, right? Like they are the ones who are going to iron out these differences between Anthropic and the US government by cutting their own deal that paves the path to a new deal going forward. But I think they wouldn't mind if, say, they got all those contracts going forward instead of Anthropic. And so that part was maybe left out, but they're the peace broker, and they're going to come in here and make everything fine. I mean, again, this was so predictable, and the backlash to it was also predictable, right? Because no one believes that, of two blood rivals who won't hold hands on stage at an event, one is going to help out the other in a major way. Now, to be fair to Sam Altman, he might think at a high level, yeah, I think we should probably take a stand on this that's more in line with what Anthropic is trying to project, at least at the highest level. But still, we're going to do that in a way that's good for the business at the end of the day. And so, you know, both things can probably be true. But again, the optics around this are just not great. And, you know, again, to be expected there.
>> I got a text as this was all unfolding, when Sam had said something like, "We don't want Anthropic to, you know, not be able to work with the government." And someone sent me this text like, "Oh, well, looks like OpenAI is really changing their tune on Anthropic." And I was like, "I don't think so. Wait and see."
>> And there they were. So,
>> yeah.
>> Could be a potentially very lucrative deal for OpenAI, especially if this thing goes through. And OpenAI, by the way, is in the middle of a year where they're really emphasizing enterprise, so they could potentially swoop in and get much more than just that one contract. One last thing I would add to this, because I was going over it today, trying to look through some of the numbers for ownership stakes in these AI companies, as I like to do as a hobby. Given the ownership stakes in Anthropic that we know about, obviously from Google and Amazon, but now Microsoft bought in, right, famously, and Nvidia too, don't necessarily underplay those elements to this as well. Especially someone like Amazon, right, who has lots of government contracts, and Google too. If they can step in and be a bit of an intermediary here and say, look, cool, we've got to pause on this, we can all work together, we can all get along, and you can figure out how to use these models in ways that both sides agree on. Because, again, it does ding their businesses too, those big players, if all of this gets ripped out.
>> Yeah. By the way, I mean, Amazon just did this $50 billion funding deal with OpenAI. So
>> I don't think that was related.
>> $35 billion next. So maybe they might just say, "All right,"
>> They're always hedging. So they're fine either way, I guess. But yeah, wild.
>> All right. So, could Amazon and OpenAI work on a potential device together to go against the Apple and Google alliance? And where is Apple's AI device bet going? That's where we will pick up when we come back right after this. And we're back here on Big Technology Podcast with MG Siegler of Spyglass. You can find it at spyglass.org. I highly recommend signing up for it and getting the newsletter. One of my favorite tech reads. All right, MG, let's switch gears from this big
>> blow up. Nice. Let's talk about Siri.
>> Or more, let's talk about the devices that Apple might be developing that will have Siri, or Gemini, or a Gemini-powered Siri baked in. So,
recently we've gotten news that Apple is going to release maybe three devices all at once: smart glasses, a pendant, and AirPods with expanded AI capabilities. I think we've both discussed that this is going to be a pretty good year for Apple. And when this news hit, I was like, I've got to go to Spyglass to get MG's perspective. And you started with a very surprising line: that maybe we're seeing the beginning of Apple, if not pulling ahead of the AI race, really starting to assert itself and make a strong play here. Let's talk a little bit about what you're seeing.
>> Yeah, so there are a few things that I think feed into that idea. And this dates back to when Apple, at WWDC two years ago now, was gearing up to talk about AI in a real way for the first time. They ended up doing that and falling flat on their face, because they couldn't execute on it. But now, in a way, it's almost like: are they going to run basically the same game plan, but now that they have the Google partnership for Gemini building these models, they can actually execute on it in the right way? I wouldn't put it past them to basically do everything that they promised, and then, to your point on these devices, extend it a bit to the world that we're entering now. I do think they are potentially in a good position. We've talked about it before: if we believe that models are getting commoditized, and if there are going to be diminishing returns on spending hundreds of billions of dollars on training these large language models, then what's the next step after that? And if you're Apple and you believe that is the case, that you don't need to train your own massive frontier models and can instead partner, as they're doing with Google, then the value, from their point of view, might come from the way that they implement those models. And obviously a lot of their value has always been derived from selling devices, the best devices, many would say, to the public. And so they can create these devices that leverage that. And by the way, I do think the one key device to all of these things remains the iPhone. What you're seeing with these three devices being talked about, AirPods and glasses and a pendant, is that all of them, per Mark Gurman's reporting, would likely be reliant to some degree upon the iPhone. And that's where Apple has this unique advantage. Maybe you could say that Google and Samsung have similar capabilities because of their smartphones, but Apple has a very unique advantage, certainly ahead of the Metas of the world and others that are trying to create these types of newfangled devices, let alone any startup that's trying to do so, and OpenAI is in that bucket. Apple has this unique position where they have the iPhone in billions of pockets, and now they're going to have these devices that rely upon it, at least for the foreseeable future, as basically the central processing unit of those devices. And so you can close your eyes, and it's not too hard to imagine a world in which Apple is the device leader again in this new AI world. And if they're the device leader, who's to say they're not the overall leader, if they're the way that everyone's interfacing with AI, at least?
>> Yeah. So, I want to talk through this, because I've recently gotten the first wearable that I actually use frequently, which is this Garmin watch, which is not an Apple product but actually works quite well with the iPhone. There's this Garmin app. It mostly connects; I've only had one situation where I had to reset the whole thing because the Bluetooth connection was off. And this is basically how these AI devices will work: they probably wouldn't exist in their own ecosystem. For instance, when you want to set up the Meta glasses, you set them up with the smartphone. But still, it syncs pretty well, and there are technologies that have come out that let you sync data through Wi-Fi and have made it much more seamless. So, if the iPhone is going to give an advantage to Apple's AI devices, how does its interoperability, which has always been Apple's calling card, help in a way that would be that much better than the ways that these current wearables are connected?
>> So, it's a good question. It's hard to know for sure without seeing what Apple's going to release. But I would just point to comments made by no less than Mark Zuckerberg, over and over again, complaining nonstop about how they don't get the full level of interoperability that they would like with Apple's products, right? And some of that is obviously just a little bit of posturing, because those two sides don't like one another, and Meta famously doesn't have a smartphone play. So they're telling regulators, look, you need to make sure that the iPhone is as open as can be to third-party products, like perhaps the ones we're making and others are making. And obviously Europe is very open to that notion. They've basically installed laws in various places to make it so that Apple has to be more interoperable and allow low-level system integrations that it may not want to. But to your question, what's going to be all that different? At the day-to-day level, it might not be all that different, but I do think there's lots of under-the-hood stuff. Potentially things as boring as slightly longer battery life, because Apple is able to more tightly tune the way that the connection is made between their device and the iPhone. And there are all different sorts of things, background syncing, contact syncing, all this type of stuff, that can come into play that you might not think, on a day-to-day level as you're using it, is that big of a deal. But there are advantages that Apple has. And the question will become, certainly in Europe, but I think it will ultimately become true also in the US: how much of that is too much of a competitive advantage, such that they're hurting competition as a result? And we're going to hear a lot from Mark Zuckerberg, and probably some others, maybe Sam Altman as well, about that going forward.
>> So it seems like these are all coming at the same time: smart glasses, a pendant, and these enhanced AirPods. What chance do you give each of being the most successful of those three?
>> I do think they'll all be for slightly different purposes. I would imagine price will be a key factor in that, as it always is. But if I had to guess, I would think that the AirPods would probably be the most successful, just because you and I are wearing them right now. Everyone's wearing them out and about. They're a known thing. As long as they don't look entirely ridiculous and different with some sort of camera sensor on them, I think they will continue to be a very popular product. It's a matter of, again, how much do they cost if they add a camera sensor? Is it a $500 product all of a sudden? Can they keep it at $300 or something around there? I think that will matter a lot. Glasses, obviously, Meta has already somewhat proven a market, but relative to Apple's other products, it's a drop in the bucket. It's not very big. The Meta Ray-Ban products are not huge compared to, say, AirPods or Apple Watch or anything else. So can Apple take that to another level? I think they'll have success with it, but we're now seeing there's already starting to be backlash, preemptively, against Meta, because they're talking about using facial recognition within the glasses, right? Adding that after the fact. And so all of a sudden we're thrown right back into the Glasshole situation from Google Glass years ago, which Meta has, to their credit, avoided to date. And how does Apple deal with something like that if Meta is, for lack of a better word, poisoning the well, or the market, by making people think, I don't want any glasses with any sort of camera on your face? And obviously Apple's product will have that to some degree. And then the pendant itself: obviously you think of Humane, and the ex-Apple engineers and designers who were working on that, which didn't end up being successful, of course, and sold to HP in a fire sale, it seems. But Apple has that unique advantage of having the iPhone itself. And it sounds like this would maybe be more of a, I think Gurman even said the internal phrasing of it was the eyes and ears of the iPhone going forward. And so you wear it around and it's constantly just looking at things. Again, there's a privacy issue there, but Apple, as we're talking about, is in the unique position to be more trusted than probably any other tech company, certainly from a privacy angle. And so, yeah, there are all those elements to it,
>> Right? Yeah, I think the AirPods, that's my bet. I think we're going to see a battle of these AI devices in the earbuds space. But it does seem, you're right, like we're sort of doomed to just be videotaped by everybody. Although we kind of already are. So
>> Still, looking at us right now wearing these AirPods, I've always been curious how they're actually going to do that from a pure product perspective. So, I have a beard. If the stems, you know, feature the camera, does it just record my beard looking forward? Or do they have to stick out more as a result, and will that look ridiculous? Everyone joked when AirPods first came out about how ridiculous they thought they looked, because they're sticking out of your ears. But ultimately they're pretty streamlined, and you can't really tell all that often when you're looking at people, and we got used to it very quickly. But what if you've got cameras sticking out of them? And then there was talk that it wasn't necessarily regular cameras but more IR cameras, used to potentially capture motion and things like that, to help with gesture control of different devices, and that made a little bit more sense to me. But I am very curious how they end up doing it. There was also talk that they were going to put a camera in the watch, and that you would have almost like a Dick Tracy style camera that you would shoot people with while looking at your wrist. And so all these things are going to create situations where you just need new cultural norms to come in. And again, Apple has done much better than any other company, but to Meta's credit, they have done well with the Ray-Ban so far.
>> That's right. And I think the battle will definitely come down to whose assistant is better, and Siri has to get better. I mean, it feels like beating a dead horse at this point, but we didn't even talk about it because it's so regular that Siri got postponed again, or features within Siri got postponed again. You had a really funny piece about that. You said, it's almost like Apple's having some major issues with their AI implementation and strategy; they should probably look into that. But it just keeps happening, right? This keeps getting delayed, and you start to lose faith over time, even with the Google partnership, that they're going to be able to figure this out.
>> Yeah. I was always a little bit skeptical. I mean, I've obviously been super skeptical of Siri, having used it over the past 15 years, but when they announced the Google partnership, I was always a little bit skeptical of the initial rollout, because it's sort of what we're talking about with the government, right? You can't just swap these things in. It may seem like it's that simple, but there are a lot of underlying things that need to be connected. Look at Amazon for an example of that, right? Look how long it took them to rework Alexa to be able to work with things like Anthropic's models, and all the models they're using behind the scenes to upgrade Alexa. It took over a year, and they promised something and couldn't deliver on the timing of it. And now we've seen the same thing play out with Apple. It just takes a long time to get all of the little pieces in place, because the last thing Apple can afford to do right now is put something out there, even in beta, even in any forward-facing, user-facing service, and just have it flop again. That would be a death knell, I think. They would have to change the Siri name at that point. We might have the Microsoft-style funeral, where they would be walking down Cupertino with a coffin with Siri in it, because they would need new branding if they fail one more time with this.
>> Yeah, I think it's long past time to do that. Could Amazon and OpenAI be the competition here? I mean, we talked about it before the break, but Amazon's going to invest $50 billion in OpenAI. Now, of course, OpenAI has a device program underway, and Amazon has the Echo. I think Alexa Plus is actually already pretty good. Could you see, as part of that deal, because OpenAI will be helping Amazon develop some specialized AI technology, that this could also be a counter? Like a team battle: OpenAI and Amazon against Google and Apple?
>> Yeah, that's sort of where my mind went when I was reading this reporting. And again, $50 billion. Yes, it sounds like it's over two tranches, you know, 15 and then 35, but still, that's $50 billion that Amazon is investing at a time when they're making cuts. They're famously doing layoffs, right? And they're getting dinged left and right for their capex spend. $50 billion is no joke, and they're spending it for a reason, obviously, with OpenAI. And so my mind went to wondering: is this some sort of massive play to get all of the models in-house? There's a lot of talk right now about orchestration, and the idea that Perplexity and others are now trying to move their businesses into being these layers on top of the LLMs. You as a user shouldn't have to worry about a model picker and things like that; you should just say what you want and let a service pick the best one for you. And obviously that's harder within Amazon, because as you noted, they make their own product in Alexa. But given that they have the Anthropic partnership, and now the OpenAI partnership, is there a world in which they're using all those models behind the scenes, and they can use that to counter both Apple and Google? Where they say, look, if you're using those products, you're only going to get Google in both cases, because they're both using Gemini, plus their in-house models. Whereas if you use Amazon, if you use Alexa going forward, you will have the power of Claude, you will have the power of ChatGPT, and you'll have the power of Alexa, all three, on top of maybe some others that they add in there as well. And it's sort of a playbook that they've run with the cloud, in a way, too, right? They view it as: you can pick which one you want to use, or let us pick which one we think you should use, from a product perspective. And so, no indications that that is necessarily what's going to happen, but I wouldn't be shocked about it.
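The orchestration idea MG describes, a layer on top of multiple LLMs that picks the right model so the user never faces a model picker, can be sketched in a few lines. This is a minimal hypothetical illustration: the model names, task categories, and keyword rules below are invented for the example, not anything Amazon or Perplexity has announced.

```python
# Hypothetical sketch of an LLM "orchestration" layer: classify the
# request, then route it to whichever backend model covers that task.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    strengths: set[str]  # task categories this model handles well


# Invented catalog mixing an in-house assistant model with partner models.
CATALOG = [
    Model("alexa-native", {"smart_home", "shopping"}),
    Model("claude", {"coding", "long_context"}),
    Model("chatgpt", {"general", "creative"}),
]


def classify(prompt: str) -> str:
    """Crude keyword classifier; a real router would use a model for this."""
    p = prompt.lower()
    if "lights" in p or "thermostat" in p:
        return "smart_home"
    if "code" in p or "function" in p:
        return "coding"
    return "general"


def route(prompt: str) -> str:
    """Return the name of the first model whose strengths cover the task."""
    task = classify(prompt)
    for model in CATALOG:
        if task in model.strengths:
            return model.name
    return CATALOG[-1].name  # fall back to the general-purpose model


print(route("turn off the lights"))    # alexa-native
print(route("write a sort function"))  # claude
```

The point of the design is exactly what MG describes: the user states intent once, and the service, not the user, decides whether Claude, ChatGPT, or the in-house model answers.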
>> Okay, before we leave, I definitely want to talk briefly about this Netflix, Warner Brothers, Paramount deal. You've written about it; we haven't really talked about it on the show in depth yet. The CliffsNotes here: Netflix had agreed to buy Warner Brothers Discovery, which has CNN and HBO, and was going to build this sort of powerhouse streaming company, maybe the streaming company of the future, by adding these old-school assets. Netflix is obviously in the lead; no one really comes close to it in streaming, so this might just have solidified it as the dominant service. It reaches a deal with Warner Brothers Discovery. Paramount comes in and says, nope, we want to make the deal instead, we weren't given a fair chance to bid, and just keeps throwing out bids until both companies decide that Paramount will be the buyer and not Netflix. Warner Brothers Discovery is going to have to pay Netflix about a $3 billion breakup fee, and the final deal is going to be $110 billion or so that Paramount will pay for Warner Brothers Discovery, whose market cap, as you note in Spyglass, was $20 billion a year ago. So just give us your perspective on what happened here and what the implications are.
>> Yeah. So it does seem like,
on one level, at the highest level, this is just a masterful job by David Zaslav, the CEO of Warner Brothers Discovery, because he was able to take a company that, as you noted, a year ago was a fraction of what this offer is in terms of market cap, and still is right now, and turn it into this offer. And they basically did that by, well, at first it was Paramount that came out with an offer, a much lower offer than what these current offers are, I believe $19 a share, and we're up to $31 a share now with the newest one. And I think the wild card there was Netflix coming in, because Netflix was viewed as obviously a big player, the biggest media company, if you want to call it that; its market cap is roughly double that of Disney. So they obviously have the capital to do whatever they want in a deal like this, but they had not historically done anything like this. And so I think Paramount basically felt like Netflix came in and stole this from under their nose. And there was a question of, was this a master stroke by David Zaslav, orchestrating this whole thing knowing perhaps that Paramount basically needed this more than Netflix did? And so they were going to drive up the price to make it so that Netflix would walk away with their $3 billion all-cash consolation prize. Which is a pretty nice, you know, prize.
>> Yeah. It's about what they make in profit in a quarter, and they just got that in one fell swoop. But still, this deal has gone back and forth and back and forth. And now, given that Netflix walked away relatively quickly once the Paramount offer came in, kudos to Netflix; it seems like they had good discipline. They weren't going to get into some sort of bidding war and go outside of their bounds. But also, I'm just overall sort of sad for Hollywood, because I do feel like they didn't like either of these deals, but I think they're going to be in for a bigger world of pain with the Paramount deal than they would have been with Netflix. And you could talk about the streaming dominance of Netflix and whatnot, but the reality, in my view at least, is that this is much more about the future going forward, and the future is going to be Netflix versus YouTube and a few other key players. I think Prime Video will be in there, Disney Plus obviously, and TikTok, which has interesting new ownership given this Paramount structure as well. All of those players, that's really the battle going forward. And here we're talking about this decaying sort of industry in moviegoing, which is an industry I love, but is not a giant growth industry, and these players battling over its assets. And it feels like Netflix would have been a good safe haven for a studio that's been owned by conglomerates for a hundred years. This isn't a new thing. Everyone's all afraid because we're in the world of tech now, and AI is coming, and all of this, but Netflix would have been a pretty good safe haven, I feel, for this. And instead, we're just going to get a straight-down-the-middle combination of two studios, and that's just going to mean a lot of layoffs, and it's going to be this brutal sort of decline over a longer period of time,
>> Right? All right. And I'll note that Netflix is up 26% in the past five days. So clearly the market has digested this and said, yeah, probably better that you didn't do the deal. I thought maybe it would be good, like maybe it would be nice to roll all this content up. Obviously, as a consumer, you're not happy about that, because you have fewer choices. But from a business perspective, I understand why Netflix was interested. But obviously it went a different way, the market likes it, and everyone will just move forward.
>> Yeah,
we'll see. There's going to be a lot of fallout from this, and I think it's going to come from the antitrust perspective, because of the relationship between Trump and the Ellisons, and there's going to be a lot of different hearings on this type of stuff. And I think it'll play out over years and years, because they'll look back on it after it's approved and say, was it approved for less-than-above-board reasons? So I think we're going to just hear about this for years and years and years. And the reality is, it is a bit sad. Obviously, Paramount's play is going to be to try to bulk up to compete with the Disney Pluses and the Netflixes of the world. But are they realistically going to be able to do that? Maybe if they can leverage TikTok in some way, you know, now owned in no small part by Oracle. Maybe. But it feels more like this is still a slow-decay story, and they'll just sell their product, the content itself, ultimately, to Netflix, just as they've been doing.
>> All right, folks, the website is spyglass.org. MG, always great to speak with you. I'm so glad we got a chance to speak today, especially after, I mean, an incredible weekend of news that I think we're all still trying to wrap our heads around, and I'm so glad we got a chance to digest it here together.
>> Indeed. Good as always, Alex.
>> All right. Thank you so much. Thanks, everybody, for listening and watching. If you haven't, if you could rate us five stars on Spotify or Apple Podcasts, it will go a long way toward helping the podcast reach new audiences, which would help us, you know, recruit guests, and that would always be great. So, hope you do that. Hope you have a great Monday and the rest of your week. And we'll be back here on Wednesday with another interview. I'm not quite sure who it will be, but we'll hopefully touch more on the Anthropic-Pentagon saga. So, thank you again for being here, and we'll see you next time on Big Technology Podcast.