OpenAI Closes in on $100 Billion, OpenClaw Acquired, AI’s Productivity Question — With Aaron Levie

Channel: Alex Kantrowitz

Published at: 2026-02-23

YouTube video id: XuYYsTE8v4k

Source: https://www.youtube.com/watch?v=XuYYsTE8v4k

OpenAI is closing in on a massive hundred-billion-dollar fundraise. OpenClaw is acquired as agent hype goes into overdrive. And is AI actually making us more productive? That's coming up on the Big Technology Podcast Friday edition with Box CEO Aaron Levie, right after this.
Welcome to the Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to talk about the forthcoming hundred-billion-dollar (or thereabouts) funding raise for OpenAI, where SoftBank, Amazon, Nvidia, and maybe Microsoft are expected to participate. We're also going to talk about the acquisition of OpenClaw, also by OpenAI, and some new studies about whether AI is actually helping us be more productive. Ranjan Roy is out today, and we are joined by the perfect guest: returning champion Aaron Levie is here with us. Aaron, Box CEO, welcome back to the show.
>> Thank you. Good to be here. Never a dull moment in AI land.
>> Seriously. So, this week we have model releases, we have potential funding announcements. It's hard to figure out where to start, but let's just go with the big story. A couple weeks ago, we foreshadowed this idea that OpenAI might be on the way to a fifty-billion-dollar fundraise. Guess what? It's doubled now. It looks like it might be a hundred billion dollars, SoftBank with thirty billion of that. Amazon might end up investing as much as fifty billion, which is wild given their connections to Anthropic. And then, I don't know, the numbers are even making it look like at least $110 billion to me, because Nvidia might end up...
>> I remember these kinds of numbers from our Series A and B days.
>> So, this is all par for the course, right? Context here is that Nvidia could put up thirty billion dollars. All of these numbers would basically be larger than the entire amount raised by the biggest IPO in history. So, let me just ask you this. The narrative around OpenAI has been: code red, losing to Google, commoditized, getting its ass kicked by Anthropic. Now, money is just numbers, it's just money, but does this size of a fundraise rebut some of that? And why do you think these companies would be making such a big bet on OpenAI if some of those criticisms might be true?
>> Well, I just take a pretty pragmatic view on this, which is: probably at every fundraise after the $1 billion valuation for OpenAI, the same set of questions would have been asked. I'm sure when they were at 10 billion and 50 billion and 100 billion and a couple hundred billion, the question was always: how big could this market possibly be? It's going to be hyper-competitive. Google's going to wake up someday. There's other competition. Aren't these models going to get commoditized? So you have to almost imagine that's always going to be the state of the conversation. That will happen at every juncture, as we saw in the past and as I think we will see going forward. And yet, at the same time, by almost every metric, the usage of at least OpenAI's products keeps growing, and certainly Anthropic's and Gemini's and other players' in the space. The capability level of these models is only increasing, so these models are doing more work. We are still only in the earliest innings of the actual ripple of intelligence across organizations and across the enterprise. So I think all of the metrics you just cited are relevant, but they're kind of the metrics you would look at in the early days of cloud computing. It's like you're in 2010 or '11 or '12, and you're saying, wow, Google just now got into the game, and Azure is building up market share, and you're looking at Amazon and asking, well, how big could this possibly get, given how much competition there is?
And I think in AI we're experiencing the same thing, which is: if you actually zoom out and look at maybe the 10-year view of this market, we are looking at a really, really small percentage of the total change that is going to happen as a result of this. We're in the earliest innings. It's crazy to say that when you're talking about a hundred-billion-dollar raise; I'm aware of the cognitive dissonance that might exist there. But when you're talking about intelligence, one of the most fundamental core fabrics of the economy in the next century, it's entirely reasonable that you would both see that level of competition and have companies that are now approaching a trillion dollars in this category.
>> Okay, but here's what the pushback would be. It would be that in the past, these questions were ifs: what is Anthropic going to do? Is Google going to get it together? Now Google has gotten it together. We have a new model, Gemini 3.1, that came out this week that is half the price of the other leading models and has about the same performance. This is a competition that has tightened in a real way.
>> Yep.
And Anthropic isn't just a figment of the imagination anymore. It is dominating in enterprise. Claude Code is crazy.
>> But you just have to do a slightly different math on this.
Everything you just said is true and doesn't yet impact the valuation or funding question. We're talking about a category where the market caps generated by AI will be measured in the tens of trillions of dollars. Some of that will go to the chip providers, some to the supply chain of the chip providers, some to the AI model providers, and some to the application and deployment layer. So if you're talking about a category that will be worth tens of trillions of dollars, we're talking about little skirmishes on the path to who's going to be a $5 trillion company in this space, or a $2 trillion company, or a $500 billion company, or a $100 billion company. I just look at the total size of the market and how that pie will likely be divided. You can still have Google become two times bigger than they are today and hold 50% of the market share from consumer traffic, and that would still support very large numbers from OpenAI or Anthropic or one or two other players in the space, just because of the sheer size and scale of the market we're talking about.
>> Now, I'm looking at the size of these numbers, and one of the questions that's come up for me is, do the people...
>> Here's one, just for fun. If you want me to put you on the spot: what do you think the market cap of JP Morgan is?
>> Uh, let's say 100 billion. 200 billion.
>> $840 billion.
>> Oh man, I'm embarrassed. Way off. Okay.
>> So, the market cap of JP Morgan is $840 billion. And I'm not saying that's a fair market cap or not a fair market cap; no opinion on that. But you and I could list 15 competitors to JP Morgan. And I don't even know if I use JP Morgan for anything; I think I have maybe a car loan or something that's through JP Morgan, but I don't use JP Morgan in my daily life. And they're worth $840 billion. And if you take all of the other banks, you're in the trillions of dollars very, very quickly, across just one little category. So if you're talking about intelligence across the entire economy, you can get to pretty large numbers in a pretty reasonable way.
>> Okay, you're setting up the question I was about to ask perfectly.
>> I didn't want to.
>> No, [laughter] it's a great setup. You've just illustrated what I'm going to ask about the size of these numbers. So, the numbers are big, and the question I have is: are the investors thinking this is all going to be additive? Or maybe what happens is that OpenAI is getting this big because it's able to take a little bit of that market cap from a JP Morgan. A big part of JP Morgan's business is advising clients on making investment decisions. If I have a ChatGPT investment instance, is some of that market cap all of a sudden going into the OpenAI market cap? Closer to home, we're in the middle of the SaaS apocalypse, right? Where there's this belief that AI is going to just ingest lots of what software companies are doing right now, and the market has really been unkind to software companies since the start of this year.
>> Very unkind. Very unfair. I feel like Trump: "they were very unkind to software companies."
>> [laughter]
>> But on that note, can you describe what happens if this is additive, versus what happens if this is actually a technology that will just gobble up big swaths of the economy?
>> Well, I kind of think about it as a multiplier on the economy. You could either think about it as a force multiplier that takes a tax on that, or as something that takes a percentage of the economy through some sort of labor-arbitrage-type pricing. To me, tens of trillions of dollars are spent on knowledge workers across the economy. And if you could add a 30 or 50% increase in productivity across all of knowledge work, could the major labs and the applications around that take a 5% or 10% fee on that? That's how you get to the math where revenue can reach the hundreds of billions or low trillions. And it's not entirely unreasonable, just mathematically. You're basically saying: okay, OpenAI will take part of that, Anthropic takes part of that, Google takes part of that, some of the application layer takes part of that. But there are a lot of ways you can get there, including, actually, advertising alone could probably get you there. There's just no reason that your AI service is not generating 50 to 100 billion dollars just due to better-performing, hyper-targeted advertising as another business model. So I think OpenAI has these multiple business models stacked up that all will create more and more opportunity over time. And at the same time, five years from now, they will be 100 times bigger in inference, Anthropic will be 100 times bigger in inference, Gemini will be 100 times bigger in inference, and so on. And that inference eventually gets more profitable, which starts to answer some of these questions. I think you're in a mode right now, and I know it'll sound kind of crazy and bubbly, and there's some percentage chance that I'm totally just drinking the Kool-Aid, but I think you're in a period where you're just in the infrastructure build-out, teach-the-world-about-AI phase. It's worth subsidizing a lot of these use cases because it's the fastest path to figuring out where the actual value is going to be. And so while there are some scenarios where you have a startup or a lab subsidizing tokens for coding or whatnot, it is competitively a good move for gaining market share, getting data, building a flywheel, creating a moat. Those are all strategic things to do at this stage. It's similar to how Uber had to buy their way into many markets unprofitably on a regional basis, and over time it's now a wildly profitable business, because they have a very strong network effect and they're kind of locked into these markets. I think some of these very CapEx- or cash-heavy businesses up front sometimes just fundamentally require that.
>> Right. One note on the ads before we move on. You're talking about how ads could be a hundred-plus-billion-dollar annual business.
>> And I would just put a giant asterisk that I've not studied that once. I'm just going off of the size of Facebook's and Google's businesses and saying there's just no reason that consumer-grade intelligence that's answering any question for you wouldn't also deliver that type of business model as well.
>> Yeah. So Facebook, by the way, did 60 billion dollars in the last quarter, so the numbers you're looking at are basically half of that. And one interesting thing: I was speaking with an ad executive this week, and one of the interesting things about OpenAI's advertising, and they've taken a lot of flak for it, maybe with good reason, is that it's so high-touch. That's why they're charging like a $60 CPM, which is insane. It's so high-touch, it really guides you through a process. It seems like it feels good to go through; it's helpful if you're thinking about, say, staying somewhere. The difficult thing with advertising over time is that something that custom and that high-touch has been really difficult to scale. But with AI, the opportunity to scale it presents itself, and then all of a sudden these numbers you're talking about aren't crazy.
>> Yeah, well, I'm in the other camp versus a lot of people on this. I think ads can be incredibly powerful in AI products.
You sort of have to eventually decide as a user: do you want to see products that are SEO-hacked, or do you want to see products that are, kind of, marketplace-economically hacked? And there are many reasons why the products that can best advertise to you might be the better products, because they have a very clear financial incentive to get you to their site only if it's a good product and it works well, or else you're just going to bail. Versus SEO, where you can just load a bunch of keywords across a whole bunch of sites and create lots of Reddit posts. That's what you're seeing right now: when you ask for something, you're seeing some form of a company doing whatever it can to ensure it's showing up inside that algorithm. So it's not obvious to me that the marketplace model is going to get you worse results. And I don't think any lab would ever change the answer it's giving based on advertising. I think it's going to give you the answer, and then it's going to give you related and recommended things from the bidding system. And to me that makes total sense. That's just how the internet has worked for 25 years. It's funded incredible consumer surplus of products on the internet. It's why we have free search and free email and free maps, and there's just no reason that would not apply to a consumer-grade intelligence product as well.
>> Definitely. It's a very interesting way of thinking about it, and you're right: you're going to get recommended products anyway in these things. So maybe that's a good signal.
>> People want to believe that there's some kind of amazing truth arbiter in these systems, and there just isn't. They are at the exact same mercy a prior search algorithm would have been: taking signal from a variety of sources, doing its best to figure out what the real answer is. And if you also have a marketplace layered on top of that, I just don't think it's the end of the world. I think you'll actually get a lot of good recommendations along the way, and people will then pay to not see the ads, and that'll be even more revenue. So it's just a very good way to make money if you're an AI company at that scale. I only think it's relevant for two or three companies, but OpenAI is one of those.
>> Yeah, they'll have a billion users, or they might already have a billion now. Okay. So, before we move on from the fundraising thing, there's one thing that has puzzled me throughout, and I need to ask you what your thoughts are here.
OpenAI and Nvidia announced this hundred-billion-dollar funding that was going to come in from Nvidia to OpenAI, ten billion dollars at a time. And then it seemed like Jensen was backing away from that. There was a Wall Street Journal article saying that the deal was on ice, and we found out this week from reporting from the Financial Times that Nvidia is going to invest in OpenAI, but it's going to be thirty billion dollars and not a hundred billion dollars. Now, there were these reports that Jensen was not happy with OpenAI, and all of that. And when he was talking about it, he seemed very different from the original press releases, saying "we hope they'll invite us to invest" as opposed to "we intend to invest," two very different ways of talking about it. So I'm trying to figure out, Aaron, how do I think about this? Because on one hand, if this deal replaces the original, that's $70 billion less, and if you get $70 billion less than you anticipated, that's bad. However, they're still putting in $30 billion, reportedly. That's a lot of money. Where do you think the relationship stands, and how should we read the number and the replacement of the initial 100?
>> Oh, I mean, this is full astrology.
>> Yes, this is astrology. Yes.
>> [laughter]
>> We're doing palm reading for the AI industry. First of all, did they say that they intended to invest in the very next round, or that they intend to invest 100 billion at some arbitrary point over time?
>> It was over time. It was never one round.
>> So, I don't know. I'm taking in all the facts the same way everybody else is, but I just don't have the impulse for the drama side of this. Nvidia obviously wants a very strong corporate relationship with OpenAI. OpenAI obviously wants to be able to be first in line for chips. They have a lot of incentive to make each other very successful. It's a boon for both of them if the whole space keeps growing. And at the same time, there are probably a lot of configuration dynamics that both Nvidia has to consider, on how much to invest, and that OpenAI has to consider when they think about their total cap table and what companies own what percentage of them. So, it's a very boring answer, only because it's fun to watch the viral video of Jensen in the street interview, but I kind of don't worry about it too much. I just think this space is changing so quickly that I can imagine many different reasons why some configuration might end up different from where its intent was six months ago, or from where the lawyers decided to put certain terms in the press release.
>> Yeah, my hot take here: I think Jensen does want OpenAI to succeed. Obviously, it's them versus Google. I think this whole thing was basically a signal from him to them: you'd better perform, no more code reds, just stay ahead.
>> The only counter-take I have to that is that I just don't think OpenAI has a challenge raising money. So I don't know that there's some kind of pressure that can be exerted on them from the cap table side. I think it's a bit more of a fluid market, and it's just people looking at their capital allocation decisions, looking at valuations, looking at whether you have other sources of capital, et cetera. If you think about it from Nvidia's standpoint for one second: they don't need to own a percentage of OpenAI. They need to sell chips to OpenAI. So really, they just need to ensure that they've got a very strong relationship that is sturdy and supports the broad tailwinds of AI. And if it turns out SoftBank wants to take more of the allocation (I'm making all of this up, but say it does), I don't know that Nvidia is strategically impacted by that in a meaningful way. Because even if they owned more of OpenAI, I don't think that position in the cap table would overly sway OpenAI's infrastructure decisions. OpenAI will have to make their infrastructure decisions based on the supply side of chips, the cost side, where they have data center capacity. Those things are going to matter more than who owns a certain percentage of their corporate structure.
>> My counter to that would be: with numbers this big, there's only a certain amount of money left for them to raise. And Nvidia, at $4 trillion, with sizable revenues, is one of those potential sources. I don't know, there are countries with lots of money. So...
>> Yes, yes. We're about to see them get involved.
>> [laughter]
>> And those places want to deploy money in future economic activities.
>> So, yes, we'll have this round, which is going to be the tech-giants round, then we'll have Gulf state round number one, Gulf state round number two, and then the IPO is probably the way it will play out.
>> From your lips to God's ears.
>> [laughter]
>> So, speaking of other countries, the entire AI industry made their way to India this week for the India AI Summit, and some really bold statements came out of there. So, let's play a game that we play on this show every now and again, called "hype or true." Are these statements hype, or are these statements true? We got one from Sam Altman: "On our current trajectory, we believe we may only be a couple of years away from early versions of true superintelligence. If we're right, by the end of 2028, most of the world's intellectual capacity could reside inside of data centers rather than outside of them." What do you think?
>> Well, probably every one of the things you're about to read is going to be conditioned on the definition of the thing being talked about. But I think that seems totally reasonable based on the trajectory we're on. And I would bet that Sam has an even far higher bar for his definition of intellectual capacity, or whatever the term was, than even I would. I think that already, with things like the latest round of models, with the right kind of AI harness, we could squeeze a significant portion of valuable work out of these systems, with the right scaffolding and the right kind of people involved. So I think that is a very reasonable statement based on what he's saying. That might be different from what, say, Yann LeCun would say is the definition of intelligence, where he would probably define it as: can the thing drive a car with only 10 minutes of training? I just don't have that same, kind of, biological definition of intelligence. So that's why I think Sam's statement is very reasonable.
>> Here's Dario: "AI has been exponential for the last 10 years. There are only a small number of years left before AI models surpass the cognitive capabilities of most humans for most things." I guess that's a similar statement.
>> Yeah, so: true. Same answer.
>> Yeah. Yeah.
An interesting moment happened at this India summit. I'm sure you've seen it.
>> [laughter]
They have all the CEOs up on stage, and they're all, I guess, instructed for a photo to lock hands and raise their arms. And Sam and Dario, who don't seem to like each other very much, instead...
>> Although, I watched the video a couple of times. Didn't it feel like maybe it was a little impromptu? Or do you think it was instructed? Is it reported that it was instructed?
>> I don't know. I was making an assumption about the coordination of it. Maybe it was impromptu. Maybe Modi, in the middle, just started and then everybody followed.
>> There are some videos where it kind of felt like nobody really knew what to do.
>> That's true. Yeah.
>> And they were all just figuring it out, because you have this moment where Alex had to grab Omar's hand, and it seemed like not everybody quite knew how to coordinate this. So maybe they just malfunctioned for a minute, and by the time it was too late, it was just, "we can't hold each other's hands." Who knows? We could maybe, in a future episode, play the video back and do the play-by-play.
>> But the point is, everybody seemed to figure it out except for Sam [laughter] and Dario.
All right. They had their hands in the air, clenched in fists, one next to the other.
>> Right. Yeah, and they did Photoshop the claw hands onto him.
>> Question for you about this. Okay, I respect their differences, but if these two guys can't figure out a way to hold hands for a picture, should we trust them to handle AI alignment?
>> [laughter]
>> That's a very great meta question. Has anybody written that piece yet?
>> No. I mean, that really should have been the big technology story this week.
>> Write that piece. I think it's a great conundrum that we face, this great little microcosm of a broader issue. But yeah, I'd pay a lot of money to get both of their takes on the hand thing.
>> [laughter]
>> You know, sometimes you get into these heated battles with a rival where people are just saying too many things in public, and you get to this point where the relationship is too dramatic and there needs to be some kind of neutral ground that brings everything back together. One would have thought India would have done that. But I have full faith that we will get through the hand issues, and that they can repair the relationship somehow.
>> Yeah, I hope so. I mean, I think if you asked either of them right now, they would say, "I should have just held the hand," and avoided the whole thing.
>> Yeah, totally. That became the meme out of the whole thing.
>> I don't think they meant for that to be the takeaway from the summit. [laughter] They had, like, 20-minute speeches, and I don't think the hand was meant to be the takeaway. It is funny how you get all these AI leaders together and sometimes there's just one great meme. There's that; there's Dario and Demis on the small couch, which is one of my favorites.
>> [laughter]
>> So, a very interesting development on the model front, actually. We hinted at it before: Anthropic has a new big model, Sonnet 4.6, and you've said that it is a major upgrade over the most recent model, 4.5. You usually expect these point releases to be incremental updates, but the stats you shared from your evaluation for complex work are pretty significant: there's been a 15-percentage-point jump in performance and accuracy between 4.5 and 4.6. This is from you on Twitter, or X, shall we say. In the public sector, you saw a jump from 77% accuracy for complex tasks; healthcare saw a jump from 60 to 78%; and legal saw a jump from 57% accuracy on complex tasks. That's pretty big. It seems like this model has almost been underhyped. Can you talk a little bit about these jumps and what the significance is?
>> I think probably the main takeaway should be the progress of these meaningful jumps we've been seeing in AI coding over the past couple of years.
>> Mhm.
Two and a half years ago, the model at best could do a couple of lines of code in a kind of type-ahead format, and now obviously people are giving the model a task like "write me tens of thousands of lines of code for a full project." We've just seen this incredible rate of progress, this march up toward more and more capability over time in coding. I think that same trend is going to come to other fields of knowledge work. So this jump in the Sonnet model from 4.5 to 4.6, I think, represents an example of what happens when these models get trained across more areas of knowledge work, when they get better and better at reasoning capabilities that go beyond coding, and when they get better at using tools and deciding when to use tools. That's what our complex work eval is meant to represent: how does it think through a problem? How does it decide it's got the right answer? How does it check its work? These models are getting much better at being able to deliver on that. So I think that'll be the trend for the next couple of years. And even for our own eval, I think we're looking at one of the earliest phases of a knowledge-worker-type eval. I think we're going to make it harder and harder to better represent the capabilities of these models soon. But yeah, these jumps are obviously very, very significant.
You know, we're going to get a little bit into how AI will do work in the second half when we come back, but one of the interesting things that's been happening around Claude is there's been this drama between Anthropic and the Pentagon about the Pentagon's use of Claude. And there was this story that came out that apparently the Pentagon used Claude to coordinate its attack on Venezuela. This is from X user Tony Shavlin: "Such a compliment to Claude that amid rumors it was used in a helicopter extraction of the Venezuelan president, nobody's even asking, 'Wait, how can Claude help with that?'"
>> [laughter]
>> People are like, "Of course it was useful. How would you not have used Claude?" It is actually very funny. Two years ago that sentence would have been like, "Excuse me? What do you mean? How would this have worked? What would the thing even have been?" And now it's just like, "Yeah, I'm sure they used some kind of intelligence to plan something or figure something out or correlate data." And that's just sort of priced in, I think, for more and more complex work and software.
Wow. Okay, so we still have so much to talk about. We have OpenClaw, we have these new studies on AI productivity. Let's do that when we come back right after this.
And we're back here on Big Technology Podcast with Box CEO Aaron Levie. Aaron, it's always great to have you here, and I think you're really going to enjoy this next segment, because this is something you've been following very closely, and it's going to be great to get your perspective on it. When OpenClaw sold to OpenAI, I said, "We've got to get Aaron on the show for his perspective on this." So this is from CNBC: OpenClaw creator Peter Steinberger joins OpenAI. The creator of the viral AI agent OpenClaw is joining OpenAI, and the service will live in a foundation as an open-source project that OpenAI will continue to support, Sam Altman said. He said that Steinberger is going to join OpenAI to drive the next generation of personal agents. So,
we'd love to get your perspective here, just very briefly, on what OpenClaw is, because it's always good to refresh there. And then, why is it significant that OpenAI either acquired it or brought Steinberger aboard?
Yeah, so I think the innovation that Steinberger created with OpenClaw, and there have been various attempts at this over the past couple of years, but I think it was only really possible with the last couple of months of model capability. The big jump is, you know, we have these agents that effectively act on behalf of us, and we're controlling and steering them to go do tasks for us. So, Claude Code: you type into your terminal utility, "generate some code," and it goes off, does work, comes back, and waits for the next task you give it. Or Codex: you're in a UI telling it to go and generate some code for you. Devin, Factory, all these kinds of agents. And that's basically been the state of the art of agents for the past year or so, plus or minus. And OpenClaw took many of the same principles but said, "Well, what if that agent is sort of running on its own?"
It had access to your computer and your browser and all the services that you use, and it's just literally running on an ongoing basis. You chat with it and you can ask it to do things, but it can also ping you when something is relevant. That was a very new way to think about agents, one that, again, we've seen examples of, but nothing has taken off at the level that OpenClaw did. And it gives you a little bit of a peek into what the future could be, where you don't have agents that you only spin up and spin down as you need them to do work for you, but an agent that's always on, working for you and executing tasks for you. That's why people are setting up their own separate computers for these agents that can just keep running off in their own environment. It's hard to know exactly how you would fully package that up and how it could manifest in a way that would be really simple for people to use, and fully safe and secure for people who don't know their way around all these systems. Lots to figure out there, but when I think about it, it's a paradigm update.
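The loop Levie is describing, an agent that persists instead of being spun up per task, can be sketched in a few lines. This is a toy model under stated assumptions: `llm_step`, `inbox`, and the ping behavior are hypothetical stand-ins, not OpenClaw's actual design.

```python
# Toy version of an "always-on" agent loop: instead of spinning up per task,
# the agent runs continuously, serving user requests when they arrive and
# acting on its own otherwise. llm_step and the ping text are hypothetical.
import queue

inbox: "queue.Queue[str]" = queue.Queue()  # messages from the user
outbox: list = []                          # results and pings back to the user

def llm_step(message: str) -> str:
    # Stand-in for a model call that could also drive a browser or tools.
    return f"done: {message}"

def agent_loop(ticks: int = 3) -> None:
    for _ in range(ticks):                 # a real agent would loop forever
        try:
            task = inbox.get_nowait()      # the user asked for something
            outbox.append(llm_step(task))
        except queue.Empty:
            # Nothing pending: the agent can still act proactively,
            # e.g. scan feeds or calendars and ping when relevant.
            outbox.append("ping: nothing urgent")

inbox.put("rank my thumbnails by click-through rate")
agent_loop()
print(outbox[0])  # → done: rank my thumbnails by click-through rate
```

The `except queue.Empty` branch is the part that distinguishes this from a spin-up-per-task agent: the loop keeps running and can initiate contact even when no request is pending.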
You know, I remember the viral video of Devin, must be 2 years ago now, and I don't remember all the details, whether it was a Slack message or they were in the UI, but you told Devin to go off and do work, and you could just see it producing its code. It had another environment where you could see what it was building. And there were a lot of people who were like, oh, this will never work. How could this possibly work? It's not actually doing that.
There were these viral takedowns from non-believers. But for some people who were deep in the AI space, we were like, oh, shoot. That is a very different way to think about working with an agent. You're not in an IDE, you're not coding alongside it. You're just setting off a task, and it's going to go and do a bunch of work for you. And now it's very clear that that's the dominant paradigm we're going to be in. Codex has proven it. Claude Code has proven it. Devin and Factory have proven it. I assume Cursor is betting even more on agents; you can see them pushing more on the agent side of the user experience as opposed to the IDE side. So that was an update we got a couple of years ago. And I think we're going to see the same thing now in other areas of knowledge work, and OpenClaw introduces an interesting paradigm that could persist across more and more areas of work.
Right. And now, as a software CEO, I'd really love to hear your perspective on what this means for software. I'll just give some context here. I've spent the past, I guess, week and a half now with my nose in Claude Code. I've just been going crazy with it. Initially it was, "Can you build me basically a software version of a spreadsheet that sends an email when I complete a field?" But then it was, "Well, why don't you plug that into YouTube's API?" And, right, I'm looking for an apartment: "Can you plug into StreetEasy and Zillow?" All of a sudden it goes from me going to the internet to the AI sorting through the internet for me. And you actually tweeted about this with the OpenClaw situation. You said, "In a world of OpenClaw, Codex, Claude Code, Cowork, Manus (which Meta acquired), and other agentic systems, it's becoming clear that the future of software has to be API first, but also enable human interaction for verification, collaboration with agents and people, and working on the output." So, what does it mean for the software industry if it becomes API first? Because on one hand, you're enabling your customers to get a massive amount of utility if they're interacting with you this way. On the other hand, Zillow probably got some value in me going there. YouTube probably wants me on YouTube. Now it's all happening in the dashboards I've built with Claude Code.
Yeah, so maybe we'll separate the markets a little bit, because you threw in a lot of consumer products at the end of that. It's hard to say how much of the consumer internet gets collapsed into API calls, versus the average consumer still just wants to go to YouTube and see the feed, and they're not going to do that.
>> For me, YouTube is strictly on the back end. That's the creator side of YouTube. Yeah, exactly. I've used it to sort thumbnails and rank them by click-through rate, and also tell us how long people are staying on the videos. But point taken on the consumer side. You're not going to want to go to your Claude bot to watch YouTube, probably.
>> Yeah, and so that's why I separated it a little bit. But you have to be a little bit sympathetic, or at least think it through, because, again, major consumer properties absolutely are going to see a reduction in traffic when the answer just comes up in ChatGPT, or when some kind of automated system is just delivering the answer. So I think that's a whole category people have to think through.
On the enterprise software side, that's obviously where we spend our time. I'll speak for Box for a second and then maybe broaden out to software. At Box, we're 100% excited about this, because one of the things that agents are both really good at, but also need for their workflows, is your files. They need to be able to access information to work with, to answer questions for you, to produce new information, to store off memories and working sessions that you can go and interact with. They need to be able to read specifications and documentation. All of that ends up being files.
So, what we are building is a platform layer where, whether you're a person interacting with your data, an application that needs to access data, or an agent that needs a file system to interact with, we want to be the platform layer that connects all of that. And the key reason we at Box think we're in a unique position is that we don't think it's enough for the agent just to have its own sandbox environment of a file system. Nor is it going to work for people to just have a separate environment. You're going to need something that actually connects those two worlds together. People are going to need some form of end-user interface, even if that's an end-user interface in a chatbot. They're still going to need to interact with their data through something visual, and they'll likely eventually want to log into something, see all their content, and be able to manage their sharing permissions and who they're working with. But agents just need a set of APIs. Agents need to be able to work with those APIs and facilitate all of the work that they're doing. So, what we're investing in is making sure we've got the most powerful capabilities for agents to work in, and to work with all of this content that you want to give them. Now,
there are all these new implications, like: how do you give an agent a separate space to work in, where you're collaborating with that agent but its blast radius is somewhat contained, so it doesn't delete all of your data and all of a sudden you have a crisis on your hands because your OpenClaw agent went and mucked with everything?
>> That just happened to Amazon, by the way. Not to interrupt you, but there was just a story in the FT that Amazon had outages because the agent was like, "You know what I'm going to do to fix this problem? Just erase everything."
>> [laughter]
>> I can make the problem go away. No more code. Delete. Yeah, there you go. You didn't like your
>> solution? You didn't like your folder structure? Great, now there is none.
>> [laughter]
>> Um, so you do have to be thoughtful about how you create the right lines of demarcation between these systems. But for us, if you imagine that there are 5 or 10 or 100 times more agents in the future than people, which I think is a relatively safe assumption given the productivity increase they're going to enable, all of those agents are going to work with enterprise information. They're going to need a secure space to work with that information. They're going to need to store that data, to operate off of it, to answer questions for end users, and to store their own data. So, that's what we're building, and we have to make that as easy as possible for agents to go and utilize. I think there's a meaningful amount of software that already exists that will have to do the same thing: make their software ready for agents. I think there will be some forms of software that get compressed, where agents don't really need to use their tools in the same way that people did, and that's obviously where you're going to see some pressure in parts of the software market. And then there are going to be all-new platforms that have to exist, because we didn't anticipate the kinds of new problems agents are going to run into. That's where you'll have, again, API-first companies launched from the start thinking only in terms of platforms. And I think this is just going to be a tremendous amount of growth for anyone who at least has a play in that architecture.
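One concrete way to picture the "contained blast radius" Levie mentions is a scoped agent workspace: every path the agent touches is resolved against a sandbox root, so even a runaway "erase everything" stays inside the sandbox. This is a minimal sketch of the general idea, not Box's implementation; the class and method names are invented.

```python
# Minimal sketch of a "contained blast radius" workspace: every path the
# agent touches is resolved against a sandbox root, so even a runaway
# delete stays inside the sandbox. Class and method names are invented;
# a real system would add permissions, audit logs, and soft deletes.
from pathlib import Path
import shutil
import tempfile

class AgentWorkspace:
    def __init__(self, root: Path):
        self.root = root.resolve()

    def _resolve(self, path: str) -> Path:
        target = (self.root / path).resolve()
        if target != self.root and self.root not in target.parents:
            raise PermissionError(f"{path!r} escapes the agent workspace")
        return target

    def write(self, path: str, text: str) -> None:
        target = self._resolve(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text)

    def delete_all(self) -> None:
        # Even "erase everything" only clears the sandbox, not your data.
        shutil.rmtree(self.root)
        self.root.mkdir()

ws = AgentWorkspace(Path(tempfile.mkdtemp()))
ws.write("notes/draft.txt", "v1")
try:
    ws.write("../../outside.txt", "oops")  # blocked before touching disk
except PermissionError:
    print("escape blocked")
```

The design choice is that containment lives in the path resolution layer, not in the agent's own judgment: the FT-style "just erase everything" failure then costs you a sandbox, not your production data.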
Okay, so you mentioned productivity, and I think this is worth examining as we end the show, because there is this sort of discussion around AI where productivity increases are often just accepted, like they're already here or they will be. But the data is a little bit mixed, and I want to run it by you and get your perspective on what the data is saying. So, this is from Fortune: "Thousands of CEOs just admitted AI had no impact on employment or productivity. And it has economists resurrecting a paradox from 40 years ago." It talks a little bit about how in the 1960s we had transistors, microprocessors, integrated circuits, and productivity growth actually ended up slowing from 2.9% to 1.1% in 1973. And now you have all these CEOs that have been polled, 6,000 CEOs. Two-thirds of the executives reported using AI, but only 1.5 hours a week. Twenty-five percent of the respondents reported not using it in the workplace at all. Nearly 90% of the firms said AI had no impact on employment or productivity over the last 3 years. Maybe this is research done last year, but even still.
>> When was that actually published? I'm curious. When was the research taken?
I'm just going to take a look. I don't have the exact date. Hold on. It published February 2026. I don't know exactly when the research was conducted.
>> Yeah. But with that number of respondents, it would obviously have been probably sometime last year. But sorry, keep going.
>> No, go ahead.
>> Like, I can just defend [laughter] AI, or...
>> I was just going to ask what your perspective is here, because it does seem like, in some ways, and this is where you want to pressure test some of these assumptions, that we're going to have more AI agents than we'll have workers, and that it will lead to this increase in productivity. Whereas we're still seeing data where that is, at best, up in the air.
>> Yeah, I can understand the dissonance that might be out there between the tech-enabled economy and the rest of the economy, because what's happening is, in tech, these agents are so effective at coding, and developers have far fewer barriers to adopting agents for coding than the rest of the knowledge worker economy has for the same kinds of productivity-gain use cases. So, in coding, you've got these incredible properties: the models are hyper-trained on code, and coding itself is a text-only medium.
You know, Dario and Dwarkesh on their latest podcast hinted at an interesting point, which is that your code base contains most of the context you end up working with. It's got your documentation. It's got all of the existing work you've done. And developers are obviously more technical, generally more tapped into the internet and the latest trends; they pull down the latest new products and try them out. Now, compare that to the rest of knowledge work: the marketer at a CPG company, the lawyer at a mid-size law firm. I'm making up caricatures of various job functions, but basically, they're going about their day, and they're not thinking about how to construct their workflow to fully take advantage of agents and automate everything they're doing. That's just probably not top of mind for most knowledge workers.
They're going to go to ChatGPT. They're going to ask it questions. They're going to get an email written for them. They're going to summarize a document. They're going to build a new strategy plan. And then the company will do incrementally a little bit more as a result, and maybe their strategy changes a little bit, or the financial analyst comes up with some new insights. That, I think, has probably been the state of AI for the past couple of years, at least whenever a survey like this would have tried to analyze it.
Compare that to engineering, where we have products that we build five times faster because of AI coding; those are the estimates from the actual engineers. And as a result, we will be able to ship significantly more capabilities to our customers. We will be able to solve significantly more problems for our customers. In many cases, we might not even charge more for that functionality; we are going to pack it into their existing licenses, because we now can. So, to some extent, what would you measure in our productivity? This is now just a priced-in thing that we do, because we have to deliver more and more value. Tech is hyper-competitive, and we want to add more capability for our customers. I think that has not yet rippled through the rest of knowledge work, and I think it just will. It will have to, because the tools will get better and better, and you'll have one competitor in a market that is able to use AI to either lower their costs, lower their fees to the customer, or deliver a substantially better product to the customer. And as you see more and more examples of that, it will just start to transform these market dynamics.
You know, equally, I like to operate off of a line I think Bezos had: when the anecdotes and the data disagree, you have to look at the anecdotes. And so look at the equally telling headline from two weeks ago of KPMG asking their auditor to lower their fees because of AI.
>> That, I think, is your initial signal of what's actually going to happen, which is a company is going to say: the kind of work that we now know we can bring automation to, we should be spending less on, and then use those dollars to do something else in the company that is higher productivity, or that makes us more effective or more competitive. And once you do that dozens or hundreds or thousands or tens of thousands of times in an ecosystem, that's where you'll start to see this reshaping of how these markets play out. It's happening in tech unquestionably. Now the only question is the road map to that happening across the rest of the economy. That's going to take time. People have to change their workflows. People don't have data set up in a way that is prepared for agents. The agents themselves don't always have the right interfaces or tooling to support knowledge work. So I'm actually extremely pragmatic about this: I could agree with the survey you just read and equally be completely unfazed, and more than anything just say people should probably be prepared, because this will come for more areas of knowledge work. I'm the biggest optimist on the jobs impact of that, so I don't see it as a scary thing. I think it's just going to mean companies will have to sign up to do way more for their customers. I think that's where it shows up: there will be a surplus on the consumer side, where all of the vendors we work with will just have to deliver better and better services for us. Or, if you're a B2B company, all of your vendors will have to deliver greater services. And we will wake up in 5 or 10 years and it will actually feel relatively normal. It's not going to be some kind of crazy sci-fi movie. It's going to be that we just have incrementally better consumer experiences and better services. Just as if you went back 40 years and tried to imagine the life of a lawyer or a healthcare professional, you'd be like, "Wow, how did you do your job without a computer? How did you understand the legal case precedents without an internet search you could go do?" That's what work 5 years from now is going to be like: you'll ask, "How did you do that without an agent that drafted your entire contract for you instantly, so you could respond to the client on the phone?" We will have that same set of questions and be confused about how we even work the way we do today. But it won't be some kind of complete transformation. We'll still have people, they'll be working together, they'll deploy tasks to agents, those agents will go off and do the work, and then people will bring it back to the task at hand to move whatever their work or project is forward.
That's right. Yeah, when I'm watching Claude Code go, I look at it and I say, "Wait, people did this before? That seems like a lot of time spent on things that are automatable."
>> No, but literally, you used to have to spend like 2 weeks on, say, a library change you wanted to make in your code base. And that's now a 10-minute activity. But are we spending any less time building software? No, because we're now doing the things we didn't get to while we were spending the 2 weeks doing the library update.
Right. Okay, Aaron, we have to get you out of here, because you have to go to your next meeting, I think. But I just want to say thank you again. Always great having you on the show. Next Wednesday, we're going to have Michael Pollan on. He is the author of a new book about consciousness, so we'll talk about AI consciousness. All right, everybody, stay tuned for that, and we'll see you next time on Big Technology Podcast.