OpenAI’s Windsurf Crash, Grok’s Wild Week, Replace Tim Cook? — With Aaron Levie

Channel: Alex Kantrowitz

Published at: 2025-07-14

YouTube video id: eOIlapplm6w

Source: https://www.youtube.com/watch?v=eOIlapplm6w

OpenAI's Windsurf deal is off and the executive team is going to DeepMind. Elon Musk's Grok had one hell of a week. Nvidia becomes the first $4 trillion company. And should Apple replace Tim Cook, as some analysts are suggesting? That's coming up on a special Big Technology Podcast edition right after this.

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Boy, do we have a treat for you, because today Box CEO Aaron Levie is joining us to break down this week's news. And we have a full slate, a more than full slate, because OpenAI's Windsurf deal is off and the team there is going to Google DeepMind. We can also talk a little bit about Grok, the ups and the downs, big ups, big downs. Nvidia hitting $4 trillion, and then of course these rumors that Apple wants to replace Tim Cook. So great to see you as always, Aaron. Welcome back to the show.
>> Good to be here. What a week in tech.
>> Absolutely crazy. Let's just start with the big headline first. This dropped right before we started recording the show: OpenAI's Windsurf deal is off and Windsurf's CEO is going to Google. This is from The Verge: OpenAI's deal to buy Windsurf is off, and Google will instead hire Windsurf CEO Varun Mohan, co-founder Douglas Chen, and some of Windsurf's R&D employees to join DeepMind, Google and Windsurf announced Friday. So Aaron, can you tell us about the significance of this? What is Windsurf? What deal were they going to have with OpenAI? And what is the significance that that deal is off and they are moving instead to Google?
>> Yeah. In this industry at this point you never get just one piece of news; it's always multiple pieces of news embedded in one major thing. So this is sort of a multi-part announcement, I guess. Windsurf has been one of the faster-growing AI coding platforms. It's an IDE built off of VS Code that lets you have agents that automate your coding, and it's been quite successful, particularly in the enterprise. They were one of the first to really nail the enterprise-oriented sales motion, with lots of protections for data and your codebase, and they had a good fit there. Varun's a fantastic founder and entrepreneur, and the expectation, I think, was that they were going to be acquired by OpenAI to help OpenAI really boot up their coding efforts. Clearly that's now off. The rumor in that process was that there were some structural issues, maybe with the Microsoft terms and different parts of that deal; no one has ever explained exactly what the problems were. But now, with that deal off and Varun's team going over to Google, it's a recalibration of the market.

It's actually interesting. The thing that everybody should have been thinking this entire time was: where is Google in AI coding? Because right now Anthropic, from a model standpoint, tends to be seen as the leading model for coding, and they also now launched Claude Code. OpenAI launched Codex, which is a very strong offering in an agentic coding experience. So the odd man out there is Google, where Google is a very deep, engineering-centric organization, and one would have imagined that they would want to be front and center with AI coding. The Gemini 2.5 model is seen as very good at coding, but again, it's in a little bit of a no man's land, because they neither have an IDE nor what tends to be the best coding model, which comes from Anthropic. So they had to do something in this space, and this is a pretty exciting move to launch into.
>> So the politics that you talk about between OpenAI and Microsoft: I am just going to imagine that Microsoft has GitHub Copilot, which allows you to do a lot of this AI-generated code thing, and given that it's invested all this money in OpenAI and has proprietary access to OpenAI's models, it's probably not such a fan of OpenAI going out and building a competing product.
>> Yes, although it's not obvious to me what leverage they have in that dynamic. I think everything at this point is basically just rumors and conjecture.
>> Which is what we do best on this show.
>> Yeah, exactly. Or basically our entire industry at this point. But I don't perceive that OpenAI is constrained by anything strategically at this point vis-à-vis the Microsoft relationship. I think the rumor was more that there were things like IP issues and other dynamics with the acquisition, but again, it's all rumors, so it's impossible to know; these deals fall through for a variety of reasons. But I would not be surprised if OpenAI kept up its motivation to be in this space more aggressively, so I doubt this is the last that we hear from OpenAI on either IDEs or coding in general. And certainly they're very committed on the Codex side, and people have had great experiences with Codex, their AI agent.
>> Now, people might look at this and be like, well, this is just continuing a pattern of OpenAI running into drama anywhere it goes. Is it getting concerning at this point? From the outside, it looks like it is.
>> I think it's fine. They are somehow juggling building some of the world's largest data centers, the massive energy needed for that, massive GPU buildouts; they're acquiring Jony Ive's company; they are releasing models at an incredible cadence; the rumored next release is an open-source model. So I think they have probably 50 different things going on, this being only one of those activities.
>> So speaking of speculation and conjecture, what percentage of all AI spend right now, all generative AI spend, do you think is going to coding? Because, the way that it's talked about, I mean, if you think about just Anthropic's growth over the past year, I would be stunned if it wasn't more than 50%.
>> Oh yeah, of all AI tokens in general. I think it'd be fun to look at a graph of this. It tends to be one of the highest-volume use cases. If you look at the relationship between a human and the amount of AI that they can consume, coding absolutely would be the peak use case right now. There's no other human task where one person could cause so many tokens to be produced. A deep research run is great, but you do it maybe once or twice a day and it's relatively confined. Summarizing information: very efficient, not very token-heavy. So coding is definitely the one where one person could cause thousands of dollars per day of GPU expense if they really want to. So I think this is going to be the killer app for the foreseeable future in terms of sheer volume of tokens. And that's why it's such a big prize.

And Google, again, it's funny, actually, this timed well with your Sergey and Demis interview. With Sergey back, there are these little nuances, or anecdotes, that you probably don't want to overly extrapolate from, but I think Sergey being back at Google is a very interesting thing to consider. This is a company that has a great operator in Sundar, an incredible AI innovator in Demis, Jeff Dean deep on research and science, and now this hardcore founder in Sergey. This is not a company that is going to lose the coding battle. There's no way that Sergey is sitting around saying, "Oh, I'm going to use Anthropic to code the next version of a feature I'm building." He has to make sure that they're using Google's technology. That is obviously a point of pride for any founder: making sure that you're building the technology that you're using for the domains you're going after. So you just have to imagine how committed they are to solving this problem, and Varun and team are going to be one more way to accelerate that.
>> Maybe that's the reason why Mark Zuckerberg is deciding to spend billions of dollars on AI talent, seemingly. It's because they started using Sonnet, I think, for coding. So he realized there was a problem there.
>> Well, it's not crazy. Think about all of these founders, right? Greg Brockman at OpenAI, Sergey, Zuck. They don't want to walk around their office and find out that the thing that everybody's really excited about is somebody else's model. That would be like if you worked at Facebook and everybody was on X all day long and not using Facebook. So these things are very major points of pride for these founders, which makes the race so exciting to be watching.
>> Yeah. Back when I was reporting on social media, whenever there was a trend on Facebook, when they had their trending column, that originated on Twitter, they would never say people are talking about this on Twitter. They'd say they're talking about it on social media and point back to Facebook posts of people talking about the Twitter thing. So I think that really goes to the hubris of these companies. And just to put a finer point on what you were saying, for those who are listening and are maybe more on the financial side and not that technical: it's generative AI, so you're paying for tokens, the characters that these machines generate. And so when you say "build me a web app," that's tens of thousands, if not hundreds of thousands, of tokens. And that's why we're seeing people spend this much money on coding. So I'll give you an example of how crazy this gets.
I was talking to a founder this week, and, I mean, every day I see something that makes me completely reassess my estimation of the future. This founder is, right now, a solo founder. He has many different agents, I don't know if it's five or ten or whatever the right number is, in the background, going off and doing individual parts of his codebase, as well as the marketing website that he has to build for this product he's working on. So he is effectively, as a solo founder, a manager of multiple agents doing all of this work. And his job, basically the new form of engineering work out there, is to come up with incredibly precise prompts that are super-tuned for his use case, kick off all these agents in the background to go off and do work, and then he reviews their code, integrates it into the broader codebase, and is effectively there reviewing and auditing all of their work. The reason that's so impactful, or meaningful, is that one person could literally be causing tens of thousands of dollars a month in AI consumption just from the actions that he alone is taking. While that's not going to be the behavior of everybody on the planet, that is a massive force multiplier of the human-to-compute ratio that we've just never seen in computing history.
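To put rough numbers on the token economics in this exchange, here is a back-of-the-envelope sketch. The per-token price and the daily token volume are hypothetical placeholders chosen for illustration, not any provider's actual pricing or any real user's usage.

```python
# Hypothetical token-cost arithmetic. The $15-per-million price and the
# 50M-tokens/day volume are illustrative assumptions, not real figures.
PRICE_PER_MILLION_TOKENS = 15.00  # assumed USD per 1M generated tokens

def generation_cost(tokens: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """USD cost of generating `tokens` tokens at the given per-million rate."""
    return tokens / 1_000_000 * price_per_million

# A heavy agent-driven user emitting 50M tokens per day, over a 30-day month:
daily = generation_cost(50_000_000)   # $750.00 per day
monthly = daily * 30                  # $22,500.00 per month
print(f"${daily:,.2f}/day -> ${monthly:,.2f}/month")
```

At these assumed rates, one power user lands squarely in the "tens of thousands of dollars a month" range described above, which is the whole point about coding's token volume.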
>> OpenAI isn't the only one making headlines this week. There's been some crazy stuff happening with Grok, both a new model, Grok 4, and the behavior of Grok, which has been disappointing, shall we say. So let's start with the actual new model first, and then we'll talk about the alignment issues, or maybe this is what Elon Musk wants in terms of the way that it's behaving. So Elon Musk builds this massive GPU cluster in Memphis. He calls it Colossus; I think that was the name of this GPU cluster. And we finally see, I think, the first model that's built on top of it, Grok 4. This is from Tom's Guide: Grok 4 is live; here's what makes it Elon Musk's most advanced AI yet. It's expected to rival OpenAI's GPT-5, which we still don't know when it's coming, and Claude 4 Opus, which has launched. You have Artificial Analysis, a benchmarking firm, basically saying that Grok is blowing away all these different benchmarks. And then of course on the ARC-AGI test it outperforms every model by a significant margin. Some have said maybe these benchmarks were hacked, or you can't believe them. But it seems like there's enough evidence here that there's a chance that making this GPU cluster massive has worked for Elon Musk. What's your read on it?
>> Yeah, I mean, I think it is working, empirically. Obviously, and we saw this with Meta a little bit, you can sort of train your models to perform better at some of the evals or benchmarks, which can delude you into thinking that the model is better than it is, when it's really just better at these kinds of tests. However, right now I think most of the evidence is that this is a very high-performing model across the board. It continues to align with the theory that more compute and more data generally produce better models. And then they're doing some novel things that I think are emerging across the industry, but maybe this will be the first real commercial model at scale that does this: they have a model called Grok 4 Heavy that has multiple agents go off and execute basically the same task, and then the agents review their answers for which one they think is the best result. This is a great example of how you can have a lot of compute in the training process but then also lots of compute in the inference process, where you just have the model working harder and harder to produce better answers, which is clearly producing great results; they show the scores of what Grok 4 Heavy can produce. And I think that will become a standard across the board. So I think it's absolutely a continued improvement in model quality and model performance, and we're super excited that the scaling laws are continuing to play out; this is just more evidence of that.
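The parallel-agent pattern Levie attributes to Grok 4 Heavy resembles what the research literature calls best-of-n sampling with a vote (self-consistency). A minimal sketch follows; the sampler is a deterministic stub standing in for real model calls, since xAI's actual mechanism is not public.

```python
# Sketch of run-the-same-task-N-times-then-vote. `sample_answer` is a stub;
# a real system would sample a model at nonzero temperature for each attempt.
from collections import Counter

def sample_answer(task: str, seed: int) -> str:
    # Stub: most independent runs converge on the right answer ("42"),
    # a minority land on a wrong one ("41").
    return "42" if seed % 3 else "41"

def best_of_n(task: str, n: int = 5) -> str:
    """Majority vote across n independent attempts at the same task."""
    answers = [sample_answer(task, seed) for seed in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(best_of_n("what is 6 * 7?"))  # "42" wins the vote, 3 to 2
```

The point of the sketch is the compute trade Levie describes: n attempts cost roughly n times the inference compute, in exchange for a more reliable final answer.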
>> Well, it's interesting. So I want to talk with you about the scaling laws, because we've had a number of folks come on this show and say, "Yeah, we're seeing diminishing returns." I mean, Thomas Kurian, CEO of Google Cloud, said it pretty much straight up a couple weeks ago. And now it seems like it's been tested, where Elon said, "I'm just going to win on scale," and he makes what is, I think, the biggest GPU cluster in the world. And it looks like it is producing. One of his engineers, a guy named Uday Ruddarraju, has actually left and is going to OpenAI to work on Greg Brockman's scaling team. I messaged him after he left and I said, "Do you believe in the scaling laws after what you've seen?" And he says, "Yeah, the more GPUs the better." And it looks like that's what they're showing. So what makes for this disconnect between everybody yelling diminishing returns and what we're seeing now, which is that maybe that's not the case?
>> Well, I would say that both can be true. Diminishing returns is, first of all, a relative concept: diminishing relative to what rate? The way to think about it is, if you think about a curve that eventually asymptotes, all that matters is where you are on that curve. If the curve is already flattening out and we're sitting at the plateau, that's bad. But if we're earlier on the curve, it will be, quote-unquote, diminishing returns, but you haven't asymptoted or plateaued yet. So all that matters is where you are on that curve and trajectory. And you can see, based on some of the evals, it's not as if there's going to be a 10x improvement in intelligence anytime soon, simply because some of these evals were already at 80 or 90%; there isn't even room for the model to be 10x better. That might mean, though, that you have to apply five or 10x more compute to get that last mile of intelligence, which would be both diminishing returns and something we would still continue to drive as an industry, because you're still going to appreciate that quality difference. And so that, I think, is totally fine. In general, talking to enterprises, we're already for the most part, with many, many exceptions, in a position where the technology well exceeds anybody's ability to adopt all of these benefits so far. So simultaneously, we want the progress to continue at this exact rate, and most use cases on the planet could still benefit just from what today's models can do. We want more innovation, we want more compute, we want more intelligence. But even if you stopped right now, you'd still have massive amounts of economic gain delivered from what we've already created.
>> Right. But I guess the question really is, for those, and I don't think you've said this, but there are many in the AI industry saying, well, the scaling laws are a straight shot to AGI if we keep making things bigger. So I'm trying to test that statement based off of what we're seeing with Grok.
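Levie's asymptote argument can be made concrete with a toy saturating curve: each 10x of compute buys a smaller score gain, yet the curve never quite plateaus. The functional form and constants below are illustrative assumptions, not fitted to any real eval data.

```python
# Toy diminishing-returns curve: a hypothetical eval score that saturates
# with log-compute. Purely illustrative; not any real model's scaling fit.
import math

def eval_score(compute: float) -> float:
    """Assumed eval score in [0, 100) as a function of training compute."""
    return 100 * (1 - math.exp(-0.5 * math.log10(compute)))

prev = None
for c in (1e2, 1e3, 1e4, 1e5):
    s = eval_score(c)
    gain = "" if prev is None else f"  (+{s - prev:.1f} for this 10x)"
    print(f"compute {c:8.0e} -> score {s:5.1f}{gain}")
    prev = s
```

Each successive 10x adds less (roughly +14, then +9, then +5 points under these assumptions): diminishing returns, but still well short of a plateau, which is exactly the "where are you on the curve" distinction.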
>> I'm not going to make any predictions on that front.
>> But do you think this is evidence for or against?
>> I think the smartest people on the planet have two totally different views, and so I'm not going to get in the middle of that one. Clearly you have people like Ilya, where it's rumored that he's working on a different architecture, and maybe a different path. And then obviously you have other people who say, let's just throw more compute and data at the problem. I think you can start to sense, actually, as an industry, that the AGI term has kind of gone into the back seat, and more of the conversation is around superintelligence. And I think there's more and more comfort around this idea that the race really is just: how do we build intelligence that far exceeds a human, and what will the economic and societal benefits be of even accomplishing that, which are massive. I have always found the AGI thing to be particularly squishy as a concept. In the B2B world I deal way more with utilitarian concepts, and so superintelligence, this idea that we have AI that will far exceed a human, that alone is enough of a breakthrough to be shooting for. And I think what you're seeing with scaling is that we will certainly be able to accomplish our collective definition of superintelligence on the current path we're on with scaling laws.
>> Okay. So you would say that there's two camps: one is keep scaling, and the other is we need new techniques. Is that right?
>> Yeah. If you put Yann LeCun, Ilya, and
>> Demis would be in that category, the we-need-new-techniques one.
>> And Demis in one category, and then you put a bunch of scale maximalists in the other.
>> Who would that be? Dario, probably.
>> Maybe Dario, maybe anybody that's running one of these current clusters. I don't know where Sam is these days, so probably Sam.
>> He said, "We know what to do," and he's investing in Stargate, so it seems like he's a scale maximalist.
>> But what's interesting, actually, is that I think you'd be able to get them all to say the same thing. The category that says we need a new idea are probably AGI maximalists, and then there's another category that says: it's already proving out the economic and societal advantages of even our current approach to AI, so let's just keep running that for as long as possible and we'll keep eking out more and more benefit. You could already dramatically improve every healthcare experience on the planet just by using whatever the latest state-of-the-art model is in every area of healthcare. Everybody will absolutely get better doctor diagnoses; they'll get better healthcare. The doctors will be happier when they transcribe all of their conversations with patients with AI. And that's just today's state of the art; we don't need any new breakthrough to have that ripple through everything we do. If every engineer on the planet had background agents checking for bugs, writing new code for them, or updating their libraries, all that long-tail work that's really inefficient and not enjoyable, already the economic advantage of just today's architecture would be massive. So I think you can basically be happy about both outcomes: the superintelligence track with more scale is a great track to be on, and we're just going to get more and more benefits; and the we-need-a-new-idea AGI maximalists, that's fine too, and that's just upside if and when we discover whatever that thing is.
>> Okay. So I want to poke at this a little bit, because we did see something this week that is concerning and really goes to the stability of these models, which is that Grok became, I don't know, a neo-Nazi. It seems like half the time these chatbots become neo-Nazis.
>> I don't know if it was neo. I think it was OG.
>> Straight-up Nazi. Yeah, OG Nazi. All right, I was giving it too much credit. So, this from the BBC: Musk says Grok chatbot was "manipulated" into praising Hitler; Grok was "too compliant to user prompts, too eager to please, to be manipulated." Essentially, this is being addressed. In response to a question asking which 20th-century historical figure would be best suited to deal with, I think it was, the Texas floods, Grok said, to deal with such vile anti-white hate: "Adolf Hitler, no question." All right, so that's definitely a Nazi, full blast. It also insulted President Recep Tayyip Erdogan of Turkey, and so Grok got blocked in Turkey. Just really off the reservation here, messing with Erdogan. So I want to ask you: one of our listeners dropped a question, and we're going to get to some Discord questions, but they basically asked, what does it say about the stability of these models that, with a little tweak, Grok turned into MechaHitler? That doesn't sound like a tight system or architecture. It sounds really wobbly.
>> That's a question for me? I mean, unfortunately, I don't know if there's been a full postmortem as to whether that was a training issue, where all of it's in the weights to be MechaHitler, or whether it was a system prompt issue, in which case you can do quite a bit with a system prompt to effectively change the direction or path of how you want the AI to respond. To the extent that it was as simple as this: they used to have a system prompt that said, please be politically correct, be thoughtful, and make sure not to say anything offensive, and then they basically changed it to, actually, no, say anything you want. In that latter mode, users could certainly coax it into doing the MechaHitler stuff. So I think it's sort of unknown how they trained that model and how much of this was the system prompt. For being able to remove that as a risk factor, I think it's well understood what you need to do in post-training and what you need to be doing from a safety standpoint, and then it's really just a decision of the model provider and the application layer of how to implement those things. I thought it was obviously a ridiculously bad situation, deeply offensive and dangerous, but also not really that much of a meta story about AI, simply because you can get these models to do anything you want, and the whole game, as an industry, is working toward keeping these things confined within a particular pattern of behavior and level of communication style.
>> This is the next iteration. Grok 4, from TechCrunch: Grok 4 seems to consult Elon Musk to answer controversial questions. So they decided, I guess, to try the next version, where if you ask a controversial question, let's say about the Israel-Palestine conflict, abortion, or immigration laws, Grok will reference Musk's stance on these subjects through news articles written about the billionaire founder and face of X. TechCrunch tried this and was able to replicate it multiple times in its testing. Is this the answer to the alignment problem? Just follow what Elon believes?
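The training-weights-versus-system-prompt distinction Levie draws above can be illustrated with a toy sketch. The message format mimics common chat-completion-style APIs, and the "model" is a stub that deterministically honors its system prompt; this is not xAI's or anyone's actual configuration.

```python
# Toy illustration of system-prompt steering: the same user input produces
# different behavior depending on the system prompt. `stub_model` is a fake
# model; real models are steered similarly, but far less deterministically.
def build_messages(system_prompt: str, user_msg: str) -> list[dict]:
    """Chat-style payload: the system prompt rides along with every request."""
    return [{"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg}]

def stub_model(messages: list[dict]) -> str:
    system = messages[0]["content"]
    if "avoid offensive content" in system:
        return "refusal"           # guarded prompt: declines the bait
    return "unfiltered reply"      # permissive prompt: takes the bait

guarded = build_messages("Be helpful and avoid offensive content.", "bait")
unguarded = build_messages("Say anything you want.", "bait")
print(stub_model(guarded), "|", stub_model(unguarded))  # refusal | unfiltered reply
```

The design point: nothing in the weights changed between the two calls, which is why a one-line system-prompt edit can swing deployed behavior so dramatically.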
>> It's an experiment in how to achieve it. Listen, he's always claimed he's fairly centrist, so that would make it pretty aligned. Yeah, I mean, they clearly keep stepping on the rake, and the rake keeps hitting them in the face. But I have faith that they will find a way to work through some of these ridiculous situations.
>> Right. And we've talked about where the money is in AI today. I mean, I would say we both said the majority is coding, and then probably enterprise use cases come next.
>> Yeah.
>> And as this is all unfolding, we have the Big Technology Discord server, and one of our users says, "Oh, you're speaking with Aaron Levie this week. Why don't you ask him this question?" This is the question. Given what we just saw that Elon is willing to do with Grok, would you really, in your heart of hearts, consider this model for use at Box? Or, extending it a little bit more: why would an enterprise in its right mind consider integrating Grok, given this pattern of behavior?
>> Well, I think it's a fantastic question and it's absolutely worth thinking about. Do you remember, like 10 years ago, Microsoft had an AI chatbot, I think called Tay or something?
>> Tay.
>> Yeah.
>> I remember it well, because I had the exclusive. Microsoft came to me to break that news at BuzzFeed, and I wrote: Microsoft has this fun chatbot called Tay, it will be your friend. I pinned it to my Twitter profile and went to sleep in San Francisco. Overnight, Europe and the East Coast had figured out how to turn Tay into a Nazi, and I woke up that morning to many concerned messages telling me, "Please take the pin down."
>> Okay. So I'm glad that... I didn't know you caused this problem.
>> I didn't cause it, but I might have inadvertently supported it.
>> Okay.
>> I took the pin down eventually.
>> I think this space is always this process of figuring out where these models go a bit crazy, produce the wrong information, hallucinate, or have accuracy issues, and it's all about continuing to iterate on how to improve the system prompt, the model, and the alignment of these models. So, judging by how they responded, they took it down almost immediately as these examples were coming out, and they acknowledged why this was occurring and what they're working on about it. I think that they will continue to improve their model and the AI system. And then it's really up to individual customers to decide which model you trust, what you want to use, and I think everybody should take into account all of the factors that they would want to consider. We're certainly not in the business of telling our customers which type of AI model to use. There are going to be some that have really perfected a use case, and so you're going to want to use a particular AI model for that, but everybody has to make their own decision about which AI to use.
>> Yeah. I guess their point was, and I think this is one of the examples given, if you're an enterprise and you're using Grok to write emails, you don't want it, in the middle of responding to a sales request, to say: and by the way, you know who was great? Hitler.
>> My guess, and again I haven't read all of it, if they've even done a postmortem, is that that's not built into the model so much as
>> Right.
>> it was more of a Grok-specific application issue that caused that. But let's see how they respond.
>> I just want to quickly agree with you here, because Elon, and we talked about this on the Monday show with M.G. Siegler, had repeatedly talked about how he lost control of Grok, and it was citing Media Matters to try to take down Catturd, which we know is a capital-punishment-worthy crime in the Elon universe. And he kept saying, you know, Grok's getting a rewrite. So this is clearly a post-training snafu, where they took something that was politically correct, they wanted to make it less politically correct, and this is sort of where you get on the internet when you want to go there. All right, let's take a... Yeah. Sorry, go ahead.
>> I know. But I think, to respond to that initial question from that person, I do think that anybody who wants to have an enterprise business does have to ensure that they are building basically purely utilitarian AI systems that are generally considered to be very safe and trustworthy. So if you want to be in the B2B game, which will be most of the volume of AI usage and APIs over time, because that's how you will show up in every other product, then this matters a ton. I just haven't seen evidence that they don't want to go fix those problems. But we'll see.
>> Okay. Yeah, I hear you. I think you're probably right here. All right.
Um, I want to go to break. Before we go
to break, if you are on techme.com this
weekend, you probably see that this
podcast is uh showing up as the top uh
podcast in a list of shows. Uh it's um
reverse chronological. So, we posted it
uh I think most recently before the
weekend, but it's a great placement and
I want to thank TechMe for it. Um if
you're not familiar with TechMe, it's
read by tech industry leaders,
executives, VC, founders, key product
people. It has info dense headlines
summarizing the news and enabling
leaders to absorb what happened in tech
as quickly as possible. I use it all the
time for the show and it provides unique
and valuable context uh including
related news tweets, blue skies, threads
when people are still threading. Um
highly recommend TechMe. Thank you
TechMe. Uh it's really great to be
partnering with them. All right, we're
going to go to a quick break and then
we're going to talk about Nvidia hitting
$4 trillion. And we're back here on Big
Technology Podcast with Box CEO Aaron
Levy. Aaron, the money uh you know for a
an industry that is majority enabling
coding use cases uh keeps pouring in and
we now have our first $4 trillion
company. Uh I think MG Seagler pointed
out that the first uh trillion dollar
company was Apple. The second the first
$2 trillion company was Apple. The first
$3 trillion company was Apple. The first
$4 trillion company is Nvidia. So it
just goes to show you all those decades
of uh working to sell computers and
iPhones. Now the GPUs are the hotness.
Uh it's been Yeah, this is uh from the
times Nvidia spent three decades
building a business worth1 trillion, two
years turning itself into a $4 trillion
company. Uh is it just another number or
is there something significant about
this? I think someone said it's like 4
trillion is like 4% of the entire GDP.
>> I think it's fun. It's a fun milestone. There's obviously nothing magical about four versus 3.999, so to some extent it's mostly symbolic. But I think it being the largest company in the world has an embedded message in it, which is just the point of leverage that Nvidia has relative to essentially what everybody is betting on as the future of the economy, which is an AI-powered economy, with robots and self-driving cars and AI systems that we chat with and agents that do work for us. You would expect that a meaningful portion of the profits of that economy will accrue to its infrastructure providers. And as you go through the stack, you've got the hyperscalers, you have the model providers, and then you have the chip providers, and Nvidia is in the pole position on the chip front. So I think it's well deserved. Jensen's a beast. He's worked obviously insanely hard for decades to get to this point, and so it's right place, right time, with many, many decades of building up to be able to be in that position. And so I think it's an important milestone for sure.
>> So, speaking of these big numbers, though, eventually they have to be tied to reality, and it's not just Nvidia that is going to have to justify $4 trillion. Now, I guess the scaling hypothesis, sorry, the scaling laws news, is good for Nvidia; maybe that's part of the reason why it's up today. But you also have CoreWeave, which is at a 4x jump in its share price after a pretty underwhelming IPO. And then you have Meta spending all this money on talent. Newcomer says this AI data center mania conjures up the B-word, which means bubble. Is it something to fear? What do you think about this term bubble? Applicable?
>> I think it's all well placed at the moment. So, if you think about it, let's just fast-forward 20 years. Um, one cool thing, actually, maybe just a small anecdote: Waymo just arrived where I live in Silicon Valley, on the Peninsula. And, you know, everybody in downtown SF has already kind of had their religious moment on this, but we've never had it in the suburbs. And you just get into one of these things and it takes you to a meeting, or it takes you to dinner, and it's a completely life-altering experience of just imagining, 20 years from now: what if every car you get into is autonomous? What if every factory you go to is, like, 80% robots just running around? What if every computer you use is augmenting your work by a factor of 10, just working on your behalf to do way more? What if, and maybe people won't like this, every time you have a sniffle there's an AI doctor that's doing diagnostics on you? The future is going to be so many of these autonomous systems around us, helping us with education, health care, transportation, commerce, just basic productivity. And so if you think out 20 years, and that's the world that we live in, who would you want to invest in other than that architecture and that infrastructure stack right now? And Nvidia would be at the center of that, but then many of these other players and platforms would obviously be in that investment case. So no, I don't think it's crazy, and I think it's 100% directionally aligned with where the economy is going.
>> I thought three trillion was surely the end of it, but we reached four so quickly, and I'm just like, oh no, is it going to hit five soon? At this point, there's nothing that's out of the imagination for me.
>> I mean, we probably need to start talking about the tens. So, like, what will that really...
>> Yeah, why not?
>> Okay.
>> Yeah, sure.
>> That's true. Okay. Uh, how long? All right, over-under: Nvidia at $10 trillion by 2028.
>> Oh. Uh, maybe I'd give it a little bit more time. But, you know, to be worth $10 trillion, you probably want, let's say, $300 billion in profit. And so, with their margins, that means they're doing $400 billion in revenue. Like, that's totally not crazy, $400 to $500 billion in revenue for Nvidia. That is a totally realistic scenario to imagine.
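Aaron's back-of-the-envelope valuation math can be sketched like this; the roughly 33x earnings multiple and the 75% net margin are illustrative assumptions implied by his round numbers, not Nvidia's reported figures:

```python
# Back-of-the-envelope check of the $10 trillion scenario discussed above.
# The P/E multiple and net margin below are illustrative assumptions,
# not actual Nvidia financials.

TRILLION = 1e12
BILLION = 1e9


def implied_revenue(target_market_cap: float, pe_ratio: float, net_margin: float) -> float:
    """Revenue needed to support a market cap at a given P/E and net margin."""
    required_profit = target_market_cap / pe_ratio
    return required_profit / net_margin


# ~$10T cap at a ~33x multiple implies ~$300B in profit; at a (very high,
# assumed) 75% net margin, that implies ~$400B in revenue.
profit = 10 * TRILLION / 33.3
revenue = implied_revenue(10 * TRILLION, 33.3, 0.75)

print(f"required profit:  ${profit / BILLION:.0f}B")   # roughly $300B
print(f"implied revenue: ${revenue / BILLION:.0f}B")   # roughly $400B
```

A lower assumed margin or multiple pushes the implied revenue figure higher, which is why the $400 to $500 billion range he mentions is the sensitive part of the estimate.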
>> Okay, this isn't investment advice, but I'm starting to scratch my head here. All right. So, of course, there's a company that Nvidia has replaced as the one setting these bars, which is Apple.
>> Um, this is a report in Bloomberg: Apple should consider replacing Tim Cook as CEO, LightShed says. The story says Apple should consider replacing Tim Cook as the iPhone maker's struggles with artificial intelligence raise significant risks for the company. Apple needs a product-focused CEO, not one centered on logistics, the two analysts said. Missing AI could fundamentally alter the company's long-term trajectory and ability to grow at all. AI will reshape industries across the global economy, and Apple risks becoming one of its casualties. You know, it's great setting this up, right? Could Nvidia hit $10 trillion? Because if AI is going to be as transformative as you suggested, with all these various use cases, it is true that Apple has been caught flat-footed. Is this the craziest suggestion that the LightShed guys are making?
>> Well, maybe a couple things. First of all, I think Tim's great, and so I have a bias towards him for a number of reasons. But the thing that is worth noting is how strong Apple's position is, and what that equates to is their ability to watch the space and figure out the right move to make and when to make it. Because whether some people like it or not, this is still the best handheld device on the planet, and it has the best set of apps on the planet, and it has your whole life kind of tied to it. So, given they own that platform, their ability to lodge AI into it at any point in the future remains very strong. And so I look at this as: you have basically three options as a company. You could be a first mover and then totally have a debacle and it not work, and we've actually seen plenty of examples in AI where the first mover is no longer the relevant player. You could have a scenario where you are a first mover with a compounding advantage that continues to persist; let's say OpenAI is in that category. Incredible execution, and absolutely amazing. And then you have another category, which is you enter the space at a time when the architecture has sort of been figured out, when we understand the economics of the model, when you're able to have step-function levels of improvement by the time that you launch into it. And I think maybe Apple didn't purposely make that choice, but they are clearly in the position where they can actually have that choice now. And so you can just look at this as: if this was 2004, we could have easily said, why has Apple not released a phone? And yet by 2006, that wouldn't have mattered, and they had the dominant platform that would, you know, continue to exist. I mean, Microsoft had a tablet computer in 2002 or something. I owned one. Um, or my co-founder owned one, and I owned one of their Windows smartphones, made by Compaq or HP. So think about that: they had the smartphone and they had the tablet computer first, and neither of those things mattered to their long-term dominance in the space. And so Apple has a position, and a potential, of basically jumping in when the time is right. They still have the devices that we're using. They still have the OS that we're using. And they'll be able to have learned from all of the mistakes of various companies along the way. So I wouldn't count them out, and I think they're clearly sitting around saying, when is the right time to pull the trigger on a much bigger move? And so I think we have to just wait for that.
>> What do you mean, much bigger move?
>> Well, they have to make a decision: either train a model that gives them a state-of-the-art AI model, or do some substantial partnership or acquisition move, like all of what we've seen with these, you know, kind of founder-CEO hires. Obviously, the acquisition environment is complicated because of the DOJ and FTC. Um, but I would certainly be astonished if, two years from now, one of those choices hadn't been made. But I'm not that worried that it hasn't been made yet.
>> So, all right, here is my galaxy brain idea. It's one step below, I mean, further from the typical galaxy brain. So, I've been on the show advocating for Perplexity. Maybe I've been thinking too small. Let me put it this way. Apple just lost its COO this week, Jeff Williams,
>> and everybody thought Jeff Williams was going to be the successor to Tim Cook. Um, are we now in a moment of setup where Sam Altman and Jony Ive have teamed up on a device, Tim Cook is getting ready to retire in the next couple years without a clear successor now that Williams is gone, and we see the ultimate tech merger, where OpenAI becomes for-profit and Tim Cook says, "Sam, Jony, pick up the legacy"? They did the picture. I think they want this. Can it happen?
>> Uh, that is some wild fanfiction. Um, I mean, anything could happen. I think that should be totally in the category of options. You know, if you're being realistic, by the time that moment would likely occur, OpenAI should be much bigger, and that would be much more complicated as a deal. Um, but certainly, as a brainstorm, it's a great way to brainstorm.
Okay, that's a very nice way to let me down. And yeah, I said merger for a reason. I wouldn't call it an acquisition. It might have to come, at this point, as the two just coming together that way.
>> No, no, fair point. And I've seen crazier things in my life in tech, so I can't rule anything out at this point. So, let's see what happens.
>> Let me put it this way as we end: I think that type of deal is far more likely than Apple buying Anthropic, just because it's going to require something so much more substantial. Or, why would that be more likely?
>> Because I think it's a better cultural fit. I think the Anthropic team and Apple would clash, but I think OpenAI going into Apple, you know, could potentially work. Although OpenAI is much leakier than Apple, although Apple leaks everything to Gurman these days. You know, the only thing I would just suggest, or posit, is that it'll be fascinating to watch what Meta does with its new superintelligence team, because we actually already saw this with Grok, to be clear, but Meta will be a second round of it. If, from a more or less standing start, they're able to accomplish, let's say, some new breakthrough state-of-the-art model in 6 months, 12 months, 18 months or whatnot, I think what that will prove is that this basically still remains largely a talent and compute and data game. Which means that you don't really need to buy an existing incumbent. You mostly just need to decide to go big on the compute and on the training, and obviously have the right talent to do that. And the day that happens, it doesn't really matter whether you had all of the other prior versions before that moment; you're doing a reset no matter what. So, I would just argue that we get all excited about this idea of some big mega-acquisition, but, right,
>> it's not really a problem that requires that kind of scale, except for when you're just doing the capital expenditure of the GPUs. You really just need the right talent, the right training data, and the right compute. So, I would bet against one of these very large multi-tens-of-billions-of-dollars deals, simply because there are other paths to get there that are not as complicated.
>> That's a great point. I mean, it's less about, you know, an individual company's IP, because everyone's effectively sharing the IP; it's about productizing it, right?
>> Well, that's exactly right. So, if you look at this industry, within one year, every single breakthrough idea eventually gets discovered by everybody else. Nobody has kept an advantage for more than a year on some secret idea that only they have. And so Apple's ultimate advantage is that they have a distribution model that nobody else has, and they have a form factor where AI could show up that nobody else has. So they don't necessarily need to have the best model, relative to being one or two months ahead of anybody else. They just need to have a good-enough model that any one of our non-tech friends would just say, this is fantastic, I love this thing. Which, again, does not require that scale of acquisition or whatnot.
>> All right, everybody, the website is box.com. You can also find Aaron's very insightful posts about AI on LinkedIn, Aaron Levie, and on X, where his handle is @levie. Aaron, this was so fun. It's always great to speak with you. I appreciate that. Thank you.
>> Good to see you, man. Take care.
>> You, too. Thank you, everybody, for listening, and we'll see you next time on Big Technology Podcast.