Dwarkesh Patel's AI Lab Review: OpenAI, Anthropic, Grok, Meta, NVIDIA, Safe Superintelligence

Channel: Alex Kantrowitz

Published at: 2025-06-20

YouTube video id: zIEQdAnOfwg

Source: https://www.youtube.com/watch?v=zIEQdAnOfwg

Let's talk a little bit about the competitive side of things and do a lightning round through the labs. People said there had been such a talent drain out of OpenAI that they would no longer be able to innovate. I think ChatGPT is still the best product out there, and using o3 is, as we've both talked about, pretty remarkable, watching it work through different problems. How have they been able to keep it up?

I do think o3 is the smartest model on the market right now.

I agree.
And even if it's not number one on the leaderboard (last time we talked about whether you measure by the leaderboard or by the vibes), on vibes it kills everything else.

That's right. And the time it spends thinking on a problem really shows, especially for things that are much more synthesis-based. Honestly, I don't know the internals of these companies; I just think you can't count any of them out. I've heard similar stories about OpenAI in terms of talent and so forth, but they've still got amazing researchers there, a ton of compute, and a ton of great people. So I really don't have an opinion on whether they're going to collapse tomorrow.

Yeah, clearly they're not on the way to collapse, right? You've interviewed Ilya Sutskever.
He's building a new company, Safe Superintelligence. Any thoughts about what that might be?

I've heard the rumors everybody else has, which is that they're trying something around test-time training, which I guess would be continual learning.

What would that be? Explain that.

Who knows, but the words literally just mean that while it's thinking, or while it's doing a task, it's training. Whether that looks like the online learning, the on-the-job training we've been talking about, I have zero idea what he's working on. I wonder if even the investors know what he's working on. But I think he raised at a $40 billion valuation or something like that, right?

He's got a very nice valuation for not having a product out on the market.

Yeah. So who knows what he's working on, honestly.
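Nobody outside the company knows what test-time training would actually look like there, but the phrase itself is easy to illustrate. Here is a toy sketch (every name and number below is made up for illustration, not anything Safe Superintelligence has described): a model that takes a few gradient steps on the task's own examples before answering, i.e., it trains while it's doing the task.

```python
def answer_with_test_time_training(w, context, query_x, steps=30, lr=0.05):
    """Toy test-time training: before answering the query, run a few
    gradient steps (squared-error loss) on the task's own examples."""
    for x, y in context * steps:        # revisit the context several times
        w -= lr * 2 * x * (w * x - y)   # the model trains *while* working
    return w * query_x                  # answer with the adapted weight

# "Pretrained" weight fits y = 1*x, but the task at hand is y = 3*x.
context = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]   # examples seen on the job
answer = answer_with_test_time_training(1.0, context, query_x=4.0)
print(round(answer, 1))   # 12.0 -- adapted to the new task at answer time
```

A frontier system would be updating billions of neural-net weights on real tasks rather than one coefficient on a toy regression, but the loop structure, adapt first and then answer, is the idea the term describes.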
Anthropic is an interesting company. They made a great bot, Claude, and they're very thoughtful about the way they build that personality. For a long time it was the favorite bot among people working in AI, among coders. It's definitely been a top place to go. But it seems like they're making a strategic decision to go after the coding market. Maybe they're ceding the game when it comes to consumer, and they're all about helping people code and then getting Claude, via the API, into companies' workflows.

What do you think about that decision?

I think it makes
sense. Enterprises have money; consumers don't, especially going forward. Running these models is going to be really expensive: they're big, they think a lot, and so on. So these companies are coming out with $200-a-month plans rather than $20-a-month plans. That might not make sense for a consumer, but it's an easy buy for a company. Am I going to expense $200 a month so this thing can help do my taxes and do real work? Of course. So I think that idea makes sense, and then the question will be whether they can have a differentially better product. And again, who knows? I really don't know how the competition will shake out between all
of them.

It does seem like they're also making a big bet on coding, not just enterprise but coding in particular, because it's the thing we know how to make the models better at, and we know the coding market is worth trillions of dollars. And maybe the same things we learn here about how to make models agentic, as you were saying, so it can go for seven hours, break a problem down, build a plan, and so on, might generalize to other domains as well. So I think that's their plan, and we'll see what happens.

I mean, all these companies are effectively trying to build the most powerful AI they can. And yes, Anthropic is trying to sell to the enterprise, but I also kind of think their bet is that you're going to get self-improving AI if you teach these things to code really well.

That's right. And that might be their path.

Yeah, I think they believe that.
Fortune 500 companies, which you talked about at the very beginning of this conversation, struggle to implement this technology. With that in mind, what's the deal with the bet on helping them build the technology into their workflows? Because if you're building an API business, you have some belief that these companies can build very useful applications with the technology today.
Yeah, I think that's correct. But also keep in mind: what is Anthropic's revenue run rate? It's a couple billion or something. I think it increased from one to two to three billion in run rate over about three months, which, compared to that, OpenAI loses that over a weekend.

Sam Bankman-Fried wouldn't even notice losing that; it's so little money.

Turned out he was a great investor, just a little crooked on the way up.

That's right. He went into the wrong business. He should have been a VC rather than a guy in crypto. I mean, the bets he made: Cursor very early, Anthropic, Bitcoin.

Yeah. Honestly, some fund should hire him out of prison, like, "we've got a new pitch." What do you think?

I mean, the way we're seeing things go these days, he's probably getting pardoned.

Right, right.
Anyway, what was the question? Oh yeah, what are enterprises going to do. So if the revenue run rate is three billion right now, there's so much room to grow. If you do solve continual learning, I think you could get rid of a lot of white-collar jobs at that point, and what is that worth? At least tens of trillions of dollars, given the wages that are paid for white-collar work. So
I think sometimes people confuse my skepticism about AGI being around the corner with the idea that these companies aren't valuable. Even if it's not AGI, it can still be extremely valuable; it can be worth hundreds of billions of dollars. I just think you're not going to get to trillions of dollars of value generated without breaking through these bottlenecks. But yeah, at three billion there's plenty of room to grow on that, right?

And even so, today's models are valuable to some extent, is what you're saying. You can have them summarize things within software, make some connections, make better automations, and that works well.

Yeah. I mean, you
got to remember that big tech companies have run rates of something like $250 billion. Compared to that, Google is not AGI, Apple is not AGI, and they can still generate $250 billion a year. So yeah, you can make valuable technology that's worth a lot without it being AGI.

What do you think about Grok?

Which one, the xAI one or the inference company?

The xAI one.

But yeah,
I think they're a serious competitor. I just don't know much about what they're going to do next. I think they're slightly behind the other labs, but they've got a lot of compute per employee.

And a real-time data feed with X. Is that valuable?

I don't know how valuable that is. It might be; I have no idea. Based on the tweets I see, at least, I don't know if the median IQ of the tokens is that high.

Yes, it's not exactly a corpus of the best knowledge you can find if you're scraping it. We're not exactly looking at the textbooks here.

Exactly.
Why do you think Meta has struggled with Llama, with growing Llama? Llama 4 doesn't seem like it's living up to expectations, and we haven't seen the killer app for them. There's a voice mode, I think, within Messenger, but that's not really taking off. What's going on there?

I think they're treating it as a sort of toy within the Meta universe, and I don't think that's the correct way to think about AGI. And that might be it, but again, I think you could have made a model that cost the same amount to train and it could still have been better, so I don't think that explains everything. It might be a question like, why is any one company, outside of AI, why are HP's monitors better than some other company's monitors? Who knows. HP makes good monitors, I guess.

There's always supply chain.

You think so?

I think so, yeah, on electronics.

Really? Okay.

Supply chain, because if you get the supply chain down, you have the right parts before everybody else. That's kind of how Apple built some of its dominance. There are great stories about Tim Cook, right, just locking down all the important parts. By the way, forgive me if this is somewhat factually wrong, but I think it's directionally accurate: he locked down parts, and Apple just had this lead on technologies that others couldn't come up with, because they had mastered the supply chain.

Huh, I had no idea. But yeah, I think there are potentially a thousand different reasons one company can have worse models than another, so it's hard to know which one applies here.

Okay. And it sounds like
Nvidia, you think they're going to be fine, even with all the labs making their own ASICs?

So, Nvidia's profit margins are like 70%.

Not bad.

Not bad, that's right, for, I mean, they would get mad at me, I think, for calling them a hardware company.

Yeah, a hardware company. That's right.

And so that just sets up a huge incentive for all these hyperscalers to build their own ASICs, their own accelerators that replace the Nvidia ones, which I think will come online over the next few years from all of them. And I still think Nvidia will be, I mean, they do make great hardware, so I think they'll still be valuable. I just don't think they will be producing all of these chips.

Okay. What do you think?

I think you're right. I mean, didn't Google train the latest editions of Gemini on tensor processing units?

They've always been training on those. But I mean, I think they still buy from Nvidia.
All the tech giants seem like they are. Let me just use Amazon as an example, because I know this for sure: Amazon says they'll buy basically as many GPUs as they can get from Nvidia, but they also talk about their Trainium chips, and, you know, it's a balance.

Yeah, which I think Anthropic uses almost exclusively for their training at this point. But it is interesting, because the GPU is the perfect chip for AI in some ways, yet it wasn't designed for that. So can you purpose-build a chip that's actually there for AI and just use that?

You're right, there's real incentive to get that right. That's
right. And then there's the other question, around inference versus training. Some chips are especially good, given the trade-offs they make between memory and compute, at low latency, which you really care about for serving models. But for training you care a lot about throughput, just making sure most of the chip is being utilized all the time. So even between training and inference you might want different kinds of chips. And RL no longer just uses the same algorithms as pre-training, so who knows how that changes hardware. Yeah, I've got to get a hardware expert on to talk about that.

Are you a Jevons paradox believer?

No.

Okay, say more.

So the idea behind that is that as the models get cheaper, the overall money spent on the models would increase, because you need to get them to a cheap enough point that it's worth it to use them for different applications. It comes from a similar observation by the economist William Stanley Jevons during the industrial revolution in Britain.
The reason I don't buy that is that I think the models are already really cheap, like a couple of cents for a million tokens. Is it a couple of cents or a couple of dollars? I don't know; it depends on which model you're looking at, obviously, but it's super cheap regardless. The reason they're not being more widely used is not that people cannot afford a couple of bucks for a million tokens; it's that they fundamentally lack some capability. So I disagree with this focus on the cost of these models. They're so cheap right now that the more relevant vector for their wider use, for increasing the pie, is just making them smarter.

How useful they are.

Yeah, exactly.

Yeah, I think that's smart.
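To make the "already really cheap" point concrete, here's a quick back-of-the-envelope calculation. The price and usage figures are illustrative assumptions, not any provider's actual rates:

```python
# Illustrative numbers only: assume "a couple bucks" per million tokens
# and a very heavy individual user burning 200k tokens a day.
price_per_million_tokens = 2.00      # dollars (assumed)
tokens_per_day = 200_000             # assumed heavy usage

daily_cost = price_per_million_tokens * tokens_per_day / 1_000_000
print(f"${daily_cost:.2f}/day, ${daily_cost * 30:.2f}/month")
# $0.40/day, $12.00/month
```

At prices anywhere near that range, even heavy use costs less than a streaming subscription, which is the argument being made: capability, not cost, is what limits wider adoption.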