Anthropic vs. The Pentagon, Bloodbath at Block, The Citrini Selloff

Channel: Alex Kantrowitz

Published at: 2026-03-02

YouTube video id: fCrbbU74SNc

Source: https://www.youtube.com/watch?v=fCrbbU74SNc

Anthropic's showdown with the Pentagon reaches an end point. We dig into what it means. Block is laying off half the company as Jack Dorsey tells everyone AI might be coming for their jobs, too. OpenAI finally raises its $110 billion fundraising round. And we have yet another AI science-fiction sell-off. That's coming up on a Big Technology Podcast Friday edition, right after this.

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to break down everything that's happening between Anthropic and the Pentagon and discuss what it means for the company, and maybe the future of war and defense. We'll also talk about the big layoffs at Block; half the company seems like it's on the way out the door. We'll talk about OpenAI finally raising the $110 billion round, which might grow even larger, and of course the Citrini selloff. We're joined as always by Ranjan Roy of Margins, who's back from Europe and ready to podcast. Let's podcast, Ranjan.
>> Let's podcast. An AI science-fiction-driven selloff is catnip for me. I had to come back for that.
>> We love it. And it's been such a big week of AI news that that's only the fourth most important story.
>> Wait, when did the Citrini piece get published again? It was like...
>> That was this week.
>> My god. My god.
Every week feels like a month. All right, let's get into the big story. This is one I've been really looking forward to speaking with you about. We haven't talked about it on the show yet, but today is Friday, and that means it is the deadline between the Pentagon and Anthropic: the deadline for Anthropic to accede to the Pentagon's requests that Anthropic both give it the option to use its technology for autonomous weapons and allow it to conduct domestic surveillance. The deadline is Friday, but Anthropic already said no on Thursday, and we're going to get into what the repercussions are. But I think it might be helpful to actually talk through what's happening between Anthropic and the Pentagon and give some context here.
>> Walk me through it. Walk us all through it.
>> So, you may recall that the United States captured the leader of Venezuela, Nicolás Maduro, in a daring raid quite some time ago, a raid in which the United States didn't lose any military servicemen and which it actually seemed to pull off in a remarkable way. Now, it turns out that Anthropic's technology might have been tangentially involved there. This is from the Wall Street Journal a little while ago: Anthropic's artificial intelligence tool Claude was used in the US military operation to capture former Venezuelan President Nicolás Maduro. The deployment of Claude occurred through Anthropic's partnership with data company Palantir, whose tools are commonly used by the Defense Department and federal law enforcement. Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude was used in the operation, according to people familiar with the matter. So it did seem like this was just Anthropic kind of leaking the news that it was working with Palantir to help capture Maduro. It's great marketing if you want to show the capabilities of your tool. But in fact, Anthropic really didn't have much idea of what was going on within Palantir as far as its technology being used for the raid, and it even had to ask a Palantir employee about it. And that's where these conversations began: the tech company going to the Defense Department, or the Department of War now, and asking, how's my technology being used? This is how it all began.
>> Yeah. I think, especially in terms of how it was being used: again, the employee said it was Palantir layered on top of Claude, and that basically Claude has been helpful for synthesizing satellite imagery and different aspects of the intel picture. The thing that jumps out at me here is, what kind of responsibility should Anthropic have here? And this might surprise you a bit, but I'm not going to say I'm, like, full Department of War on this one. But the capabilities are embedded in Anthropic's model, and what kind of control do they actually have over how it's getting used? It's computer vision, in the end, in this case, doing the analysis and insights on top of it. So I don't know. I've been having a tough time trying to figure out where I land on this. Where are you landing on it?
>> Well, first of all, I'm telling this story to set up the idea that I'm not really sure there's a there there between Anthropic and the Pentagon.
>> That's what... Okay, okay. I think there might be a lot of posturing and positioning. Now, there might be an argument that these hypotheticals matter, and we'll get into that. But this initially started on something so minor. And by the way, this is from Dave Lawler, an Axios editor, who responded to my tweet asking what Anthropic did with Venezuela. We don't know, and they didn't know either. What he said is: yeah, it might have; in the past, Claude has been helpful for synthesizing satellite imagery and different aspects of the intel picture. We don't know that the technology was being used for mass domestic surveillance or autonomous weapons in Venezuela. This whole thing began simply with Anthropic inquiring how its technology was being used by the Pentagon. That's when Dario Amodei, the CEO of Anthropic, makes his way down to DC. And again, I don't think this was a disagreement that happened based off of real-world use, because the Washington Post and 74 have both reported on what happened with the discussion next. So this is from the Washington Post: a defense official said the Pentagon's technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month. If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic's Claude AI system to help shoot it down? It's the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief Dario Amodei's answer rankled the Pentagon, according to the official, who characterized the CEO's reply as: you could call us and we'd work it out. So basically, the Pentagon's version of events is: maybe these conversations began around this Palantir thing. They start having these conversations together about how Anthropic's technology can be used, and somebody from the Pentagon presents this nuclear scenario to Dario, basically saying, we might need your technology to be used quickly. And Dario gives the most Dario answer ever: yeah, call us and we'll let you know. Right? You could totally see him saying this. Now, this is from the Post: an Anthropic spokesperson denied Amodei gave that response, calling the account patently false and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude's use in the capture of Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. So here's my read. I don't think the Pentagon went to Anthropic and said, "We need your technology for autonomous weapon use and mass surveillance of America." I simply think the disagreements about hypotheticals became so out of control that there was a culture clash. I mean, think about the culture clash here. It's Dario Amodei, CEO of Anthropic, and we know how he acts, and Emil Michael on the other end. By the way, both of them have been on the show; I've enjoyed speaking with both of these people. And Emil, who is the Under Secretary of War for Research and Engineering at the Department of War, has basically said that Dario "wants nothing more than to try to personally control the US military and is okay putting our nation's safety at risk." Now, I'm just going to turn it to you. This is how I look at it: it's simply conflicting cultures. It's not a specific disagreement over technology that will be used in the moment.
>> Okay, sorry. I didn't even realize this is Emil Michael of Uber fame, right?
>> Correct.
>> Like mid-2010s, very aggressive, brash personality, the face of tech spreading at all costs. Screw the taxi unions, all that. Okay. Okay. See, yeah. So I...
>> He happens to be a very interesting guy. I've enjoyed speaking with him. But sorry, go ahead.
>> No, no, no, that's what... So I do agree. And it's rare that, when the topic is autonomous weapons killing civilians, I would say there is not a there there and it's just, as you said, a culture clash, but I agree, that's really what it feels like. The other thing I keep thinking about as I'm reading through all the stories coming out on this one: in the past, when you would hear about some kind of nebulous Defense Department technology, war games, whatever else, as an individual you would have no real concept of what that might look like. To me, one of the most fascinating parts is that we all use Claude. We all understand how AI works. So actually thinking it through, I kept wondering: what's the query? What's the analysis? Is it like, "Claude, how do I capture Maduro? Here are 10 documents. Give me a strategy"? I keep trying to think through what it actually looks like. I don't know. Yeah.
>> I mean, my belief here, and we don't know exactly, is that Palantir did the heavy lifting on Maduro, and then maybe someone was using natural language to synthesize some information there. The jokes have been amazing on X, right? It's like you tell Claude Code, "capture Maduro, make no mistakes," and it just goes out and does it. We're not at that point yet. And we even joked last week with Aaron Levie that the fact that people were like, "yeah, Claude was being used for warfare and was responsible for the capture of Maduro," and everyone just said, "yeah, of course," without asking any questions, has kind of been a testament to the company's capabilities. But I think its involvement in this specific operation has been blown completely out of proportion; that's just speculation, reading between the lines here. But the other side of the argument is that these hypotheticals do matter, and you want to have a defense contractor (because Anthropic, working with the Department of War, is a defense contractor) that will basically be ready to do what you need them to do when you need them to do it. And this is from Sean Parnell, the Pentagon's chief spokesperson. He said the department had no interest in conducting mass domestic surveillance or deploying autonomous weapons, but wanted to use AI for all lawful purposes. This is a simple, common-sense request, he says, that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. Again, this just goes back to: this is obviously not anything in theater right now. However, the question stands: do you want to even make the Pentagon think that you might say, in a moment of war, "we're not ready to go that far"? I don't know.
>> I'm going to present a hypothetical to you here. Alex, if you are the CEO of a massive AI research lab that has some powerful foundation models, do you allow your technology to be used for autonomous warfare? Do you?
>> I don't think so. I don't think so. But I'll tell you what I will do. And I'm going to preface this by saying I think Dario has real values and Anthropic has real values, and they've mostly stuck with them, and I give them credit for that. However, let's say I have this moment where the Pentagon is saying, "We want to use your technology for all lawful purposes." And I say, "All right, just don't use it for autonomous warfare or mass surveillance." And they're like, "Just sign the all-lawful decree here. We don't want any caveats. It'll just make it easier for us." I might be tempted to blow that out of proportion. I might be tempted to...
>> From a marketing standpoint, perhaps.
>> ...release a blog post and say, no freaking way, I'll never work with the Pentagon on these things. Lo and behold, Thursday night: a statement from Dario Amodei on our discussions with the Department of War. And I love the way Dario writes; I think he's a great communicator. He says: I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and intelligence community, where we were the first frontier AI company to deploy our models in the US government's classified networks, the first to deploy them at the national laboratories, and the first to provide custom models for national security customers. He says: in a narrow set of use cases, we believe AI can undermine rather than defend democratic values. One is mass domestic surveillance; the other is fully autonomous weapons. To our knowledge, Dario writes, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. Regardless, he says, we are not going to change our position. We cannot in good conscience accede to the Pentagon's requests. I don't want to reduce this to public positioning, but I'm going to, just for the sake of argument. It's almost as if, even if the Pentagon's request was reasonable, they didn't necessarily need Anthropic to agree to these demands. And Anthropic just ran with it. And now they're going to position themselves, once again, to hammer home that branding: the ethical company, the company that works for you, the company that has values, the company that is not growth at all costs. And who knows? Because there are some consequences that could happen, and we'll talk about them, but I don't think there could have been a better situation for Anthropic than the one they were just handed.
>> I think we've been hanging out too much, because my affliction of looking at everything through a marketing and communications lens seems to be rubbing off. Because I'll admit, as this was all happening, that was the first thought going through my head, and I was like, oh my god, this is gold from Anthropic's standpoint: we're the good guys. Do you not support mass surveillance? Do you not support fully autonomous weapons potentially killing civilians? But okay, let's separate those two out. Mass surveillance: bad. Fully autonomous weapons: I don't know; if that's the direction warfare is going, I feel like that's just going to be part of whatever China or other countries are developing anyway. So, as awful as it may sound, it's going to become standardized unless there's some kind of global agreement to actually ban autonomous weapons. But again, as someone who works in agentic AI, the more fascinating part of this to me is that with autonomous agents in anything, the assumption is that they can be controlled. And this is where I think it's actually kind of weird for Anthropic to be pushing this hard, because they are at least hinting or implying that there is a world where the agents are not being controlled. And then, even in my day-to-day enterprise AI workflows with autonomous agents, can I actually rely on them? This whole promise of autonomous work being done, of agents running around doing all different types of work, does tie to the autonomous weapons thing. There has to be at least this idea being pushed that they can be controlled. And it's weird to me that Dario is kind of saying they actually can't.
>> I'm sorry, but is there a difference between Claude Code running a command and bugging out on your website, where you have to go fix it and you're like, this isn't working, and potentially conducting a military operation where people are going to get killed?
>> But it's the same underlying kind of process. It's the same underlying technology. That's what I mean: yes, the scale and the gravity of it all is kind of terrifying, but it's the same as with anything autonomous, like self-driving cars. At a certain point, do we all accept that autonomy is good and predictable and will work, or do we say there is this level of uncertainty around it, that it can go haywire and kill the wrong people? Or, I guess, is the argument actually not that it will go haywire and kill a bunch of random people, but that the Department of War (it's still weird for me to say that) will actually use it for nefarious purposes, and that's the risk? What do you think is implied there? Again, my perspective here: I understand why Anthropic would not want to sign this away to the Pentagon, to give it full use and whatever, because if a company comes in with values, it has its values. But I don't know if there's a concrete worry here. That's what I'm trying to say. I think it's mostly just, like...
>> You know, a blanket no, this is against our values, and let's go. And even Emil Michael was on, I think, Fox Business or Fox News, saying, we're in the middle of this discussion and this blog post comes out. So again, I don't want to feel too cynical about this, but I do think this is sort of a PR opportunity. But back to your point: I would trust a Waymo. I would get in a Waymo; I would trust it to drive me. I know it's probably good on 99.7% of rides or whatever, much better than humans. But I don't want Waymo to be the police. I'm not giving Waymo a gun and saying, if you see a crime, go arrest somebody. I don't trust it to that extent. That's the difference I'm trying to draw here.
>> Okay. No, no. Okay, I'll give you that: Waymo as basically RoboCop coming to life. You're dead in the car. [laughter]
>> An anti-RoboCop here. Yeah. Okay, I can see that. But going back to the PR standpoint, I was trying to be level-headed and take this genuinely seriously, but this is just... And I have to imagine we'll get into OpenAI. Sam Altman even came out and said that the company would potentially be working with the Pentagon. I think...
>> Same restrictions, by the way, with OpenAI. And Sam is like, well, we'd like to try to help de-escalate things.
>> Yeah.
>> That's just OpenAI agreeing with my statement that this is not a real disagreement yet.
>> Where's Sundar going to fall on this? That's what I want to know.
>> He's going to sit back and be like, we're printing. We don't have to be involved in this. This is why you've got to have a monopolistic ad business: just printing cash, and it always...
>> Making Gemini better. Don't have to worry about autonomous weapons.
>> I mean, you can't pay for marketing like this. And I'm sorry if this comes off too cynical, but this is in Axios, from a defense official: "The only reason we're still talking to these people is we need them, and we need them now. The problem for these guys is they are that good." You can't pay for that type of...
>> That's why... It is interesting, though. Okay, again, not trying to be too cynical here, but "Claude catches Maduro" is a great headline, great marketing, kind of just fun and exciting. That's the narrative and the meme. But then Dario realizes this is a great opportunity to be the ethical AI company. I mean, it's a pretty good positioning nowadays, if you're setting yourself up for that conflict, to have the Hegseths of the world coming at you on Twitter; that can actually help your case. So yeah, I do think... I never would have thought that, on the subject of autonomous warfare, I would say it's a meh story, but I think I agree on this one. And they just raised their giant round. They don't need this marketing. They already have enough, but...
>> Good for you. If you think they don't need the marketing, I think you're underestimating the level of competition right now. Every bit of marketing helps.
>> Yeah. I mean, think about it. It almost follows the same line as the Super Bowl ad, right? Like, Claude won't ever do ads. Claude won't, you know, kill you in your sleep.
>> That should have been the Super Bowl ad. [laughter]
>> You can just run the fan fiction of this episode. But I think that ultimately, I don't want to say it's entirely cynical marketing. I think part of Dario really does believe that these are not the uses Claude should be put to. And on the Pentagon side, you can totally see their view as well: we don't want to be in a mission-critical moment and have Dario say, actually, you're not ready to do this.
>> So where do you fall on model companies regulating, I guess, I don't know if that's the correct word, use cases? Again: companionship, AI erotica, which we've debated in the past, where OpenAI has one view of it versus others. As this evolves, and it almost comes back down to the great content-moderation debates of Facebook and others, do they have the responsibility to do that moderation? Because I do kind of think they do, but this is just going to get messier and messier and more and more complex.
>> Yeah, I think they do. I mean, if you're a private company, you have at least the right, if not the responsibility, to try to make sure your product is used in ways that you think are beneficial to society.
>> So idealistic and optimistic. [laughter] Well, allow me to take this moment to maybe not be as cynical as I've been in our first few minutes of this show and say, yeah, I think that is important.
>> Tech companies have some responsibility to society at large. That's the...
>> I know, it's controversial to say.
>> That's a hot take, but... [laughter]
>> I can see people just hitting play on something else right now, but that's where I'm going to stand. I'll die on that hill. But this is not without potential consequences for Anthropic. Let's talk about it. The Pentagon might now label Anthropic a supply-chain risk, and Pentagon officials, this is from the Journal, have reached out to defense contractors including Lockheed Martin and Boeing in recent days to gauge how much they use Claude. I love that scene, by the way. Can you imagine the Pentagon on the line with Boeing, asking, how much Claude do you use? Because we might ban it for this hypothetical reason. Are we not going to be able to make planes anymore? It's just crazy that ChatGPT came out three years ago and we're already at this stage.
>> Critical infrastructure right now. Yeah.
>> Yeah. I mean, that's the politics element. That part is almost more terrifying to me in the actual near term: if that level of tit-for-tat Twitter fighting can actually lead to some kind of supply-chain-risk designation derailing a private business, that I don't like. They might also invoke the Defense Production Act, which would require Anthropic to supply its technology to the Pentagon the way the Pentagon wants, which would be unprecedented. Again, the tweets coming through the timeline this week, and I know Twitter's not real life, but a lot of people in the AI world, a lot of buyers, are paying attention to this. Here's one from another Twitter user: the best proof Anthropic has that it has the best internal models is that the Pentagon would rather invoke the Defense Production Act than use someone else's AI.
>> Do you think they got rate-limited?
>> Maybe that's what actually started it. [laughter]
>> That's actually... They were about to capture... Actually, this might be too dark, but I was going to say that's why we haven't invaded Iran yet: they're getting rate-limited on Claude. [laughter]
>> I don't know, Ranjan.
>> I'm going to take that one back. I'm going to take that one back.
>> [laughter] I don't know. Maybe by the time we publish this podcast... Anyways, we'll let that go.
>> Let's move on to lighter news, like funding rounds.
>> All right. OpenAI announces a $110 billion funding round with backing from Amazon, Nvidia, and SoftBank. So it's finally here, the round we've been talking about. Man, that was quite a transition: the round we've been talking about has arrived. It is bigger than expected. Remember, it started out at $50 billion from Amazon, and then it went to $100 billion. Now it's $110 billion, and this is from CNBC: other investors are expected to join as the round progresses. So it's not even over. We just have these big commitments from these three big companies. $50 billion from Amazon. I mean, that's wild. Speaking of Dario, I wonder how he's feeling now that one of his biggest partners in Amazon is making a deal like this with OpenAI. We won't spend too much time on it, but Ranjan, your takeaway on the size of the round, who's in, and anything else? I love how we're going to glaze over the biggest story.
>> Only in February 2026 could a $110 billion round in a private market actually get a "let's not spend too much time on it." But I want to call out, and I'm very glad, that OpenAI is making this funding round the most OpenAI-ish thing possible, because $110 billion is the headline, but it is impossible to tell what the actual round is. From The Information: Amazon's decision to invest up to $50 billion in OpenAI could hang on whether OpenAI goes public or reaches a loosely defined milestone known as artificial general intelligence. That was my favorite part, because we lost that benchmark in the Microsoft and OpenAI deal, but now it's back: this idea of declaring AGI potentially unlocking tens of billions of dollars. And again, we have our benchmark here on the podcast: Waymo operating in New York City officially will mean AGI is here. But I don't know, did you like the complexity of the funding? Do you genuinely call this a $110 billion round? There are so many stipulations here.
>> No, it's certainly not that. I think you're calling out a very important point. Remember when OpenAI and Nvidia said they were going to do a hundred billion together, and it was another one of these: well, it's $10 billion now, and in time it turns out that hundred billion was actually $30 billion, which is what Nvidia will be investing, although Jensen Huang said, "we hope they invite us to come back." Certainly an intent to invest 100 is very different from an actual action to invest 30. To me, the interesting thing here is what happens as a result of these major deals. This is from CNBC: OpenAI said it's expanding its existing $38 billion agreement with Amazon Web Services by $100 billion over the next eight years. So it's going to get 50 from Amazon, but it's going to put back either...
>> 100 or 138. That's why I'm so happy about this announcement. It's got everything. It's got nebulous benchmarks and tranches. It's got circular funding and financing. It's a classic Sam funding round. Yeah, as you said: potentially putting in 50, potentially, over eight years, taking that $38 billion commitment up past a hundred billion. It's perfect OpenAI funding.
>> That's right. Yeah. Sam was on CNBC earlier today and basically said, look, this is only going to work if the revenue goes up, answering the circular-funding question. And I think...
>> He's right. It's only going to work if the revenue goes up.
>> That's good business right there.
>> Good business. But also, of course, if the exponential continues, then he'll continue to get the money and it all makes sense, and if it doesn't, he won't get the money. Yeah, I definitely want to get into where you see OpenAI's business at this exact moment, but one thing that was also interesting to me: there wasn't a lot of talk around where this money gets invested. In the old days of a year or two ago, I feel like any of these big funding rounds would really center on getting to the next-generation model or building data centers. Did you see anything like that? There wasn't a big flagship push around what this money is actually going to mean to both OpenAI and the ecosystem at large.
>> Yeah, it has to be infrastructure, right? And just the support for inference, especially when you're working with partners like Amazon and Nvidia. I think that's not an accident. And OpenAI basically told us what the game is, right? They're like, if we are able to build more infrastructure and serve more demand, we're going to make more money, and we'll keep building until that proves to be untrue. So to me, this is just one step along the way on that front.
>> Well, but where do you view OpenAI competitively right now? I'm curious. I'm just going to say, it is crazy to me: my ChatGPT usage has declined dramatically. For day-to-day, basic stuff, I'm using Gemini a lot more. And we've talked about switching costs and moats, and I know the idea of memory, which everyone has been talking about for a long time, is supposed to start building that moat, but it's still such a reminder to me of how brittle a lot of these foundations are, even at, what is it, 900 million users now?
>> Yeah, they just said today: 900 million weekly. 900 million, so definitely on track for a billion by mid-to-late March. And again, ChatGPT is like the Google of AI for the average person, the trademark brand name, whatever. It's like a verb. But still, I was actually looking this up the other day because I was curious. Back on this show a year ago, there were headlines that Anthropic was screwed. Usage was going down on the consumer side. And massive credit to them: they had such a clear bet, and we outlined this very early, that they were going all in on the coding API. They were basically giving up the consumer product, and it worked brilliantly for them. But again, 12 or 14 months ago, the narrative was very strongly that Anthropic was in a bad position, kind of where Perplexity is now. OpenAI was just dominating, Gemini was on the rise. Two years ago, Gemini and Google were dead. It just keeps reminding me how quickly things can shift in this market right now.
>> Yeah, it changes fast. I mean, obviously, when people think about generative AI, they think about ChatGPT. That's what you hang your hat on right now if you're OpenAI. Some of the other bets, uh, Sora, you know, haven't worked exactly according to plan. We still don't have the device. But I think you basically have this two-pronged strategy: you grow ChatGPT from the ground up, it's the leading consumer product, and you use that leverage to move into enterprise. And yeah, they're making their move into coding, and it's very interesting what's happening in the coding market now, wouldn't you say? Because you have Claude Code, which, you know, I've been using like crazy, I'm hitting my limits every couple of hours just in Claude Code, it's amazing. And there are some other players, like Cursor, that are, you know, starting to go up and down as the two big boys get involved. So what's happening with Cursor, Ranjan?
>> Okay, so I wanted to highlight this story. There was one tweet from Kyle Russell around how the company was removing 90 Cursor seats, and basically over Slack people were like, "Hey, can you unsub me from Cursor? Yeah, I'm not using it anymore either." And I think, again, this momentum, the speed and inflection with which people can shift, or this idea of a moat, it's just so fascinating to me. Because a year ago Cursor was synonymous with autonomous coding, codegen, any kind of AI-driven coding, and look how quickly people can switch, how the moat was never really there. So I think it raises two questions. One: does everything just condense to the foundation model labs, and is my argument that it's the product, not the model, completely wrong? If that happens, I will say so. But that was one side. The other side of the Cursor story, and we don't know definitively where things are internally for them from a revenue perspective, but I hope this starts to raise scrutiny of every claim around ARR, or annualized recurring revenue. That needs to go away, because anyone who knows, knows it's taking one month of data, when you have a good month, and extrapolating it times 12. I've seen people joking, but maybe it's actually the case, that you take one good week or one good day and multiply it by 365 and call it ARR. I think actually trying to understand stickiness over time is going to become a much more valued thing in the market right now.
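The ARR math being mocked here is just naive extrapolation. A minimal sketch of it, where the functions and dollar figures are my own hypothetical illustration, not numbers from the episode:

```python
# Hypothetical illustration of how headline "ARR" figures get inflated:
# take one good period of revenue and annualize it.

def arr_from_month(month_revenue: float) -> float:
    """Take one month of revenue and extrapolate it times 12."""
    return month_revenue * 12

def arr_from_day(day_revenue: float) -> float:
    """The joke version: one good day times 365."""
    return day_revenue * 365

best_month = 1_000_000               # a startup's best-ever month
print(arr_from_month(best_month))    # 12,000,000 in claimed "ARR"

one_viral_day = 50_000               # a single spike day
print(arr_from_day(one_viral_day))   # 18,250,000 "ARR" off one day
```

Both numbers are "annualized" off a single good period, which is why retention measured over time says more than a headline ARR figure.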
>> Yeah, I think that's great. I mean, there's definitely been some inflation when it comes to ARR, right? Certainly companies are putting releases out there about ARR, and you have to, you know, shake your head a little bit: is that really the number or not? But going back to your previous point, I think that's the most important one. When it comes to AI applications, because of generative AI's general-purpose nature, you always have to worry about one of the big companies gobbling up what you're doing. And certainly that's what's happened with Cursor. Claude Code, which was initially seen as a frenemy, or maybe just a different version of Cursor for different use cases, not being an IDE, is now fully competitive, and people are just working within Claude Code and within Codex. So that's what's happening: the big models are just gobbling up the smaller competition.
>> So, a question: 12 months from now, is Anthropic still the king of the hill, or have things shifted again dramatically? Because there are some directionally similar things here. I mean, OpenAI is growing and raising more money, but who owns the narrative, the conversation, the next wave of innovation? Do you think it's still Anthropic six months from now, 12 months from now?
>> On coding? Yes.
>> No, no, just overall. Yeah.
>> Well, I would argue that they're not. I mean, clearly they're ascendant right now, but I would still argue that OpenAI is the leader. But I think coding is going to be very interesting, because that is the use case right now that's clearly economically valuable and exploding. In fact, I think some of the numbers on Anthropic paid subscribers are quite impressive. Actually, I have them in my inbox. You want me to read them?
>> Read them to me.
>> All right, let's see. Free users on Claude are up more than 60% since January, the fastest growth in Claude's history. Daily signups have tripled since November. Every single day this week has consecutively broken the record for Claude's largest-ever day of signups. And paid subscribers have more than doubled since October. People are staying and upgrading because they value Claude's most advanced capabilities and consistently say it sharpens their own thinking. So it's more than a Super Bowl bump; they're saying it's adoption from months before the ad campaign. So they're doing really well. I think the coding fight is going to narrow, but a year from now I think they're still going to be in the lead, and I still think that will be the biggest use case for these models as the vibe coding stuff continues. What do you think?
>> Okay, well, I'll differ again, as the company I work for, Writer, plays in autonomous knowledge work. Claude Cowork is in there; Manus is kind of one of the only other real competitors. I'm still standing by my prediction that that's going to be the big trend of the year. It's self-interested, I'm talking my book, but really. And I've got to say, it's been interesting to me how Claude Code has been the entry point for most people, because our company only works with enterprises, so it's not a consumer product, so not as many people are feeling it. But I think you get it now, right? Claude Code gives you that feeling of what I've been trying to describe since October: actual autonomous agentic work, agents out there doing stuff for you that actually works across many steps. And you feel it now, right? I heard you talking about
>> I feel it, yes. But I also think there's a long way to go, although it's done a great job building some internal tools for me, I have to say. So,
>> Okay, you're pro-agentic now. You're coming around.
>> I'm feeling the agentic. I'm feeling it, but I haven't fully drunk the Kool-Aid like you have. Okay, we have to go ahead.
>> I was going to say, what I've actually come away with is that the words "agent" and "agentic" were so beaten down and mischaracterized for all of 2025 that everyone now has a hard time saying "agentic." Whereas in reality, what's happening now actually is agentic; this is what we were promised. But we heard it for so long while it wasn't working, or none of it made sense, and that's why people are uncomfortable saying it.
>> No, here's, I think, a good distinction of where we sit: you believe that this agentic stuff will eventually be good enough to take the shots, and I'm like, do not take the shots. If we're going to have to shoot, let a human do it, okay? [laughter] That's
>> maybe that is the breakdown.
>> Okay, I think you're far too trusting of it, but anyway, we'll go down that rabbit hole another day.
>> All right.
>> Or you can respond if you want.
>> I think that's a reasonable characterization. I like our regular standing debate around "is it the product or the model"; it's probably a little more tasteful than "should AI take the shot or should humans take the shot." [laughter] But
>> these are both real questions.
>> These are both real questions. Yeah. Yeah.
>> All right, I'm going to take a break. We're going to come back after this and talk about Jack Dorsey laying off 4,000 at Block, and then, in the time we have left, we'll talk about the Citrini research paper that caused this selloff in the market. We'll be back right after this.
And we're back here on Big Technology Podcast Friday edition. All right, so the news, from SFGate, is that Jack Dorsey is laying off 4,000 at Block and saying others will do the same within the next year. So it's not just that the size of the layoff is massive, which it is; the real headline here is that Jack Dorsey has said AI has helped us become so efficient that we're able to lay off half the company and be as productive, and, by the way, this is coming for others as well. What do you think about that, Ranjan?
>> This one killed me, I've got to say. I'm of two minds here. One is, and again, as we've been discussing today, everything is comms and marketing in my mind, and it really feels like, again, Block's revenue growth has been slowing. Profitability-wise, 2025 wasn't a bad year, but they were certainly a company that saw incredible growth during COVID, and it's been slowing since. The stock's down 75%; its overall business is not great. So to say it's AI kind of bothers me. It feels like a cop-out versus, "Listen, like a lot of big tech, we were a little bloated. We overhired. We're just trying to rightsize the business a little bit." To me, that's what this is. And again, I'm saying that as someone who genuinely believes workforces are going to get transformed and there's going to be some real dislocation in the industry. I felt this one was just kind of Jack yelling "AI" when he has to lay off a bunch of people,
>> Right. And I think we should give the context here: Block is profitable.
>> It's a profitable company making these moves. And still, it's up 14% today after the news.
So here's what I'll say about it. This is not the first time Block has done layoffs this year; Block did layoffs earlier, in February. This is from Wired: after hundreds of workers were laid off in early February from Jack Dorsey's Block, some of the people remaining at the company say the internal culture has devolved to a point where performance anxiety is running rampant, using generative AI is required, and overall morale is rapidly deteriorating. Listen to this: Block employees are currently expected to send an update email to Dorsey every week, who then uses generative AI to summarize the thousands of messages. I don't know if this is the most effective use of the technology, and I kind of hate to make the argument here, but is Jack on to something, and are we going to see more of this? Because think about the idea that a CEO could get a weekly email from all their employees, thousands of employees, throw them into a generative AI engine, and get a feel for what's going on in the company, and that his reports could do the same with their, you know, legions of employees and become more effective through that. Is that kind of where this technology is heading?
>> I mean, it was interesting when I read that. On one hand, to me, that would actually be the wrong process, because if you're asking everyone to essentially sell themselves and the work they're doing on a weekly basis, and using that as your foundation for understanding the state of your company, it's going to be biased positive. That's not actually good, because everyone's going to say, "Oh, I did amazing things, everything is great," and then you summarize it, and Jack's just sitting there thinking everything's great. But it's funny to me that you took that as a negative. Don't you think it's cool at all, the idea that you can now manage at a scale you never would have thought possible before, really getting a view that's at least semi-true? Versus having a bunch of people spend a month doing a report for an all-hands meeting or a board meeting where they update you, you can actually have more real-time feedback like that. You don't like
>> I don't think I'm criticizing that. Let me tell you this. Instinctively, you know, I'm on the side of the workers here. I think it's kind of gross that a CEO, instead of talking with them, is having AI summarize their notes, and making them write those notes. I mean, just think about how much you have to write that then gets fed in. Although you're probably writing with AI,
>> You're almost certainly using it. Your agent is talking to Jack's agent is what's happening here.
>> But ultimately, I think I have to get past that. I actually think that if I were running a company of that size, I would do this. I really do think it's a great way to stay on top of a company. Now, I don't think it makes the company 50% more efficient, and it's natural to get these quotes, like the ones I read, from employees who just don't like the mandating of AI and also aren't happy that half the company is leaving. But maybe the truth lies somewhere in the middle. And I think we should really focus on the warning, so to speak, that Jack gave to everybody else, saying, you know, I just think we're early, I'm going to be honest about it, and I expect many others to do the same thing. Because I got this note from somebody who's worked with Jack in the past: "This Block news is going to cascade hard. Jack just put the question to every CEO in tech, and maybe beyond, of whether they are carrying dead weight that could be shed. If a few more tech companies pull moves of this magnitude, and we know they will, then the odds of it crossing over increase tremendously." I would say that sounds right, and probably we're going to have many tech CEOs, and by the way, they know Jack has run a bloated company, I mean, look at what happened with Twitter, saying, you know, maybe we don't have to do 50%, but can we do 20%? It's a little bit scary.
>> I think it's scary, but when hiring at these companies was up 100% or 200% in a condensed amount of time, based off of extrapolated revenue and growth numbers from COVID, we weren't all complaining. "Cynical" isn't quite the right word, but I don't know. I know a lot of people at a lot of tech companies who get a lot of money for not a lot of work. That's increasingly become the case over the last five to seven years, the Googles and the Facebooks for a while, I would say. There is one sector of the economy that became the most valuable sector of the economy for a 10- or 15-year period, and it became bloated. To me, what AI is doing is showing that the value of work in software and technology that we had assigned over the last decade is not the same anymore, and that's happened to many, many other industries over time. It causes disruption, but to me that's almost a natural business cycle. I'm not too doomer about it, and maybe that's shortsighted, but it's not that different from other shifts that have happened over time.
>> Yeah. I guess you could condense two, you know, 75%-effort email jobs into one email job if you have generative AI. But as we have this conversation, I don't think either of us is going to discount the fact that there are real people in these jobs and this really sucks. And, you know, especially now, we're in a no-hire, no-fire time period, so for every person who gets laid off at a place like Block, it's a disaster in each one of those cases. I don't want to, you know, leave that out.
>> No, no, that's the problem with all this. I don't want to shortchange it. Getting laid off sucks, and it's just sad. But it's also like, do Metallica and Benson Boone need to play Dreamforce, and is that the sign of a healthy industry, or an industry that might be getting a bit soft?
>> I ask you,
>> Benson Boone. Or just one of them, Metallica or Benson Boone. But as long as it's Metallica, right? We've got to keep Metallica. I don't know. It always kind of saddens me that they're playing Dreamforce. Benson Boone, put him up there, not Metallica.
>> This is going to be embarrassing. I don't even know who Benson Boone is. So,
>> He's the guy who did the flip at the Grammys.
>> No idea who that is.
>> All right.
>> So maybe you cut him. Maybe you keep him and you [clears throat] keep your employees.
>> That would be my preference. Maybe.
>> I think I'm going to create an agent for you to keep up better with pop culture, Alex.
>> Well, I would like that. Although I'd have to filter that agent's emails, it would just be too much. But, by the way, here's where this becomes a real problem.
If every company... I don't know. I actually don't think Jack is right; we're going to talk about that in a moment. Even with these AI tools, software engineering employment numbers are going up fairly quickly, which is fascinating. But if Jack is right, in the case that he is, that could be rough. I mean, we've seen Amazon do these big layoffs, right? Imagine every company coming out and doing a 20% layoff. That's tough. That's tough if you're a tech worker.
>> But the amount of money everyone in tech has made relative to every other industry over the last 10 to 15 years, I think that's going to be one of the more interesting things politically, how this all plays out. This is causing a disruption in a sector that got a lot bigger but is still a small percentage of overall employment in the economy. So do you think people will be strongly reacting or up in arms about this? And again, I'm saying this as someone who works in tech: it's just harder for me to be that saddened by it, given how these companies have been able to operate for a long time.
>> I mean, if you're asking me whether there's going to be an outpouring of national sympathy for tech workers, I don't believe so. Remember, this is a country where many people celebrated the Palisades fire when they saw that people in a different socioeconomic status than them had lost their houses. So I don't really feel like we're a nation of empathy right now, at least. We should be, but we're not. All right, speaking of cascading crises, let's end with this. I'm sure you saw this Citrini letter about the 2028 global intelligence crisis. I'll try to summarize it as best I can. Basically, this research firm, which may or may not have shorts, I don't know for sure, but it's been speculated they do, in some of the companies that have tanked because of this letter, looked at what happens if generative AI works. They write: "It should have been clear all along that a single GPU in North Dakota generating the output previously attributed to 10,000 white-collar workers in Midtown Manhattan is more economic pandemic than economic panacea."
So basically, they say, look what's going to happen: there's going to be a human intelligence displacement spiral where people automate jobs away. Maybe this is where that Jack memo can actually end up in the bad scenario, right? Because then you have people with mortgages they can't pay. Stocks go down. And then, you know, so many large parts of our economy are based on wanting to avoid annoyance, not wanting to cancel certain things, not disputing certain fees. The agent goes out and, you know, cancels those things and takes down those fees. And then, all of a sudden, consumer spending is down, growth is down, and the private equity that depends on all this starts to go up in [clears throat] flames. And you can build your own delivery apps, for instance, so those businesses go away, and all these displaced white-collar workers end up taking blue-collar jobs, and there are just no jobs left in the economy. I think I boiled it down; that's kind of the argument. And I think you can tell by the tone of my voice that I'm not convinced they're right here. What was your reaction?
>> So, my reaction to the actual content of the piece and my reaction to it actually causing a stock market selloff are two different things. I don't know. It was a good, interesting piece of writing, and it raises these questions I thought were worthwhile, trying to assign value to the idea that we're locked into subscriptions we forget about. Imagine you have an agent that is actually able to track Netflix, Disney Plus, Hulu, all of your utilization of those services, and then the ones you're not using, it goes and cancels for you. That sounds pretty good, right?
>> Oh, I would love that. Yeah, exactly. The argument is that that will cause cascading economic problems.
>> I know. [laughter] But this is where, if that is the foundation of the US economy, that's the more terrifying part to me, more than the actual unwinding of it. So that part of it, I don't know, it was interesting, just the kinds of questions it raised. I saw arguments over its use of DoorDash as an example, and I actually agree: as not the biggest fan of DoorDash, as readers of Margins will know, I do think they'd be the hardest to displace of any of these, given it's a marketplace and there are physical labor elements to it. So there are definitely weaknesses in the piece. But the idea of some kind of cascading downward spiral, I don't know, it presented a pretty interesting, consistent narrative that actually told a good story, so I see why it had the impact it did. But my hot take on this one is that it actually raised a more interesting issue: the current state of the stock market, the valuations of a lot of companies, have gone in the same direction for a very long time, and I think this is more unmasking general unease and worries about valuations than "AI is going to destroy society." It's an excuse to knee-jerk sell things that on paper have been marked up insanely over the last number of years but that you don't feel are actually that valuable. That's kind of how I'm reading it.
>> I like that take. I like that take a lot. I think we're agreeing with each other a little too much. [laughter]
>> Yeah, that feels spot on to me. Um, and you know, I was asked about a version of this on CNBC this week, and I had to cite the "something big is happening" paper. I think this is kind of what it is: there's a belief that something big is happening, and it is, in a way. The question is the magnitude, and there's this instinctive race to go and say it's going to take our jobs and destroy our economy. The one issue that I had with the paper, and this is sort of my core issue, is that it just wasn't imaginative at all. It didn't think that somebody who's displaced has any dreams of their own that they might go build now that these tools exist, right? It sort of treated the economy as stagnant. I really believe that if these tools work the way they think they're going to work, then they're just not going to cause vast economic displacement. Here's the line that I really hated: "In every way, AI was exceeding expectations, and the market was AI. The only problem was the economy was not." I just think that if AI exceeds all expectations, then the economy is going to become AI and enable people to grow much more than they have previously. The economy will be AI, and the economy will grow. That's just my perspective.
>> Yeah. Do you know, I saw this stat around the number of professional photographers. There was worry that, first when digital cameras and then phone cameras came out, it would destroy the entire industry. I guess this is the Jevons paradox in action: the increase in access to taking photos actually created massive new demand in industries around photography. It was actually this nice little encapsulation of what can happen. Suddenly everyone needs professional-quality photos, where in the past you wouldn't have cared as much, and because you could take photos on your phone, it created social media, and, whatever that means for society, that's another question. But it was a really nice, simple picture, from the last 20 years of our own lifetimes, of something that could have been pure doom actually turning into something positive in an unexpected way.
>> That's it. I mean, if you think that everything is static, that people don't want to do new things or grow, that they're satisfied with whatever they're doing, then you believe the Citrini paper. If you believe that there's growth, that the economy changes, and that people find new things to do, then you don't believe it. And in fact, this is from Citadel, which wrote a rebuttal: the number of software developer jobs being posted on Indeed is far outpacing the total number of job postings in terms of percentage growth. That says everything you need to know. If AI is able to code much better than so many people now, why are software engineering jobs, you know, outpacing the rest? It's just that when you have these tools and you're able to be more productive, you're able to do the things you couldn't do previously. And so you want to do them; you don't shrink into a corn cob and say, "I'm done because of these things." And that's why papers like this Citrini paper annoy me: they just don't have any imaginative thought, they're not realistic about the way the world works, and they, you know, scare people, and the fear translates into clicks. God, I guess I'm being very harsh on them right now, but I'll keep with it. It just seems to me to be the worst way to do things. And I think the other worst way to do things is the idea that the stock market is so brittle right now that an AI sci-fi imaginative paper can cause a selloff. Get a handle on it, everybody. Come on. Come on, stock market. It's okay. Just
>> Yeah, take a breather here.
>> I mean, if anyone shouldn't be freaking out, it's the stock market. We know that the market reacts so coolly to any...
>> ...to anything. So relax, goddamn it. [laughter]
>> Just not a Substack. Not a Substack.
>> Not a freaking Substack. Obviously Substack goes out and is like, "We move the market." It's like, "Is this the way you want to move the market?"
>> Yeah. Yeah.
>> I don't think so. All right, let's pack it up and go home. We'll cool off and come back next week, and hopefully the world will still be standing. Does that sound like a good plan?
>> I think so. I hope so. I'll see you next week.
>> All right, see you next week. Thank you, everyone, for listening, and we'll see you next time on Big Technology Podcast.