Box CEO Aaron Levie — What Cheaper, Faster, and Smarter AI Gets Us

Channel: Alex Kantrowitz

Published at: 2024-05-22

YouTube video id: C-oSxeZgaJw

Source: https://www.youtube.com/watch?v=C-oSxeZgaJw

Well, Aaron, welcome back to the show.

Thank you. So do we just pretend that we're podcasting, or how does this work?

Well, you've already given away the secret, which is that we're back together in a moment of crazy AI news.

Yes, but this time we're doing it in front of a live audience, at the first-ever public event for Big Technology, and we're fully sold out with 130 people here with us in Manhattan. This is going to be the first of many. And listen, the audience isn't going to believe me, so I would love it if you could make some noise and let everybody who's listening to the recording know that you're here. Let's hear you.
There's simply no way we could have faked that, so that had to be authentic and real. It brings us to the topic of our discussion, which is the latest in AI. Of course, there's everything from synthetic images, synthetic text, synthetic voice, and synthetic video, which we heard a little bit about from Google recently. But let me give you my big picture. Now that the dust has settled a little bit from Google I/O, my perspective is that we haven't exactly seen the groundbreaking stuff that's been promised. We were looking for GPT-5 and AI sentience, but we haven't gotten that yet. So am I right in thinking that, yes, we've had some...

You're very impatient, if that's your issue.

Well, I'm setting you up to own me at the beginning of this discussion. But I do want to know: is there something we should be thinking about in terms of what's going on right now, beyond what we just saw over the past few weeks? Yes, those announcements were impressive, but they're not the sort of promised godlike AI that everyone keeps talking about and waiting for, this GPT-5 moment.

So you were unimpressed by GPT-4o?

I wouldn't say I was unimpressed. But given that you've seen that video where the guy was doing a job interview, and the Her-like assistant was explaining how he should look different, I mean, this is psychotic technology. This is incredible. I don't deny it. But you have to understand where I'm coming from, which is that a lot of people, when they saw this stuff, were, reasonably or not, and I'm just trying to channel the audience here...

Which audience? The audience on X, which we know is the most reliable?

The X people are crazy, yeah.

Well, we're going to read some of your posts, so let's see who's crazy at the end.

I think I know how crazy they are.
That is a good point. Okay, but where would you say we are right now, and why haven't we had that sort of step change? Because this is the thing: we talk about the GPTs and it very quickly becomes old. GPT-4 is pretty impressive, but it already feels like old news. People want something new; they want to feel that aha moment that ChatGPT brought, and we haven't had that yet. Now, they're impressive, I just...

I think these people sound like heroin addicts: "I just need another breakthrough."

Well, they do call tech people "users," so it sort of fits.

Fair. It comes with the territory. So, I basically don't agree with the premise. I think this is the craziest technology ever, and it's entirely reasonable that we will see it arrive in step-change fashion. Think about it: we're about 18 months into the initial ChatGPT moment, and we have seen breakthrough after breakthrough in those 18 months in AI model performance, in AI model intelligence, when you look at the evals these models are put up against. One interesting metric is the context window, which is the amount of data I can put into the AI model or get back from it. At the start of ChatGPT, in, let's call it, November of 2022, the context window was somewhere on the order of 4,000 tokens. Just yesterday, Sundar announced 2 million tokens on the latest Gemini model. When you think about it, there are not many technologies in the world that see improvement at a rate of 500x in 18 months, and that's basically what we're seeing in AI. That's obviously one metric of performance improvement, but look across the board, whether it's the cost of tokens dropping or the improvement rate we're seeing: all of this is building the foundation for an incredible amount of downstream innovation.
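The 500x figure is just the ratio of the two context-window sizes mentioned. As a quick sanity check on the arithmetic (the token counts are the round numbers cited in the conversation, not exact product specs):

```python
# Round figures cited in the conversation, not exact product specs.
chatgpt_launch_window = 4_000      # tokens, ChatGPT, November 2022
gemini_window = 2_000_000          # tokens, Gemini model announced at Google I/O 2024

growth = gemini_window / chatgpt_launch_window
print(f"{growth:.0f}x context-window growth in ~18 months")  # 500x
```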
Then, just on Monday, the GPT-4o "omni" model was another breakthrough. Obviously, you always have to look at these demos and ask how much was really just the perfect demo, the cases they knew would work well in that situation. I think we can chalk up some of the use cases to that, but I'd say ChatGPT and OpenAI generally have been incredibly intellectually honest stewards in this space, so when you see a demo from them, it tends to actually work like that in practice. That ability to have multimodal experiences, video in and audio out, or video in and text out, in basically real time because it's all in the same model, is going to produce some pretty incredible personal experiences, and that's not even touching on what's possible in the enterprise.

First of all, I think we should probably be glad that we don't have this sort of breakthrough AGI experience yet, because we actually need some time to pace ourselves, frankly, in the deployment of this. I sent around a video of the interview example, and if you haven't seen it, please watch it. This person takes a video of themselves in real time, talking to the GPT-4o model, saying, "Do I look presentable for this interview?", and it's giving them feedback. I sent that to my parents, and they basically said, "Well, we basically don't need humans anymore." To somebody like that, this is AGI. We could stop right here, probably be done with AI for a decade, and you've already solved hundreds of these breakthrough use cases. They have another video of somebody who is blind and uses AI to see the world; they're now able to communicate with AI and have so much more capability than they would have had before. These are just incredible technologies, even in their current form, let alone when we actually get access to GPT-5 and so on.

The place I thought you were going to go first was really the cost, because that's the thing you've really harped on over the past few days. I was trying to anticipate your answer: "He's definitely bringing up the cost thing." And I think it's worth spending a minute on it. OpenAI has made the cost of GPT-4o 50% of what it was previously, and you're working with a lot of AI. Talk a little bit about what that does for the industry, for anyone who's building on top of this stuff, in terms of its ability to be a profitable investment and to be something that is more ubiquitous than it is now.
Yeah. So there are basically three dimensions that AI is going to need to improve on before you even worry about agentic experiences, so bookmark those for a moment.

First, you have, more or less, the quality of the models: how do these models perform on evals? The evals basically throw a bunch of problems at AI models, and you get a shared benchmark across how different models perform, so Llama 3 versus GPT-4 versus Gemini, and you see how each does against the LSAT or MBA courses and so on. We've already seen that the latest GPT-4-class models in many respects perform better than humans at a large number of tasks, but in some areas they're still deficient and not yet at human-level performance. So model quality is one vector where we need to continue to get improvement. You can extrapolate: imagine GPT-6 is at, let's say, 99% of human level, then GPT-7 is at 99.9, and GPT-8 is at 99.999. We'll see some kind of asymptoting, but you're going to continue to get better and better model quality. That's vector one.

Vector two is how much data I can put in the model, which has previously been very limited. Frankly, a large reason so many of us hadn't yet figured out how big a breakthrough this was, back in the GPT-2 days, is that all you could give it was a couple hundred characters, or tokens, and that was all you were working with. It's very hard to imagine what next-token prediction can do when you can only give it a limited amount of data and context. Now we have breakthroughs like 2-million-token context windows, which is massive.

So number one is quality, number two is how much data I can put in the model, and number three is cost, and with it the performance of the model. Actually, I should probably add one more, which is speed, but cost and speed come in a little bit of the same dimension. So when you saw GPT-4o drop by 50% on Monday, you can think about it as: I took intellectual capacity that one day cost X, and now it costs 0.5X, overnight.
That's pretty crazy when you think about it. And OpenAI has basically done that, I don't know, four or five times in the past 18 months, so we're already at maybe a tenth of the total cost of a fairly high-quality token, just in the past year and a half since the first version of ChatGPT. This is a breakthrough, because if you had a use case for AI a year and a half ago, it may have been slow, you may have had to hack around its limits, and it may have been relatively expensive. A year and a half later, it's much cheaper, you don't have to hack around it because you can give the model a lot of data, and it's much more intelligent. So it doesn't take much imagination to say: well, 18 months from now, what am I going to be able to create and build?
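As a hedged back-of-the-envelope on that compounding: the individual price cuts haven't all been exactly 50%, so treating each one as a uniform halving is only a sketch, but it shows how a handful of cuts quickly lands in the "maybe a tenth" range:

```python
# Sketch only: assume each of n price cuts halves the per-token price.
# The real cuts varied in size, so this is illustrative, not a measured figure.
for cuts in (1, 2, 3, 4, 5):
    remaining = 0.5 ** cuts
    print(f"after {cuts} halving(s): {remaining:.1%} of the original price")
```

Three halvings already puts the price at 12.5% of the original; four or five put it well below a tenth.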
You just watch these curves, and the implications are going to be massive for startups, mostly positive, with a couple of question marks. If you're watching this curve, you should probably be building software for what is going to exist three years from now, as opposed to today, because you don't want to be building software that assumes tokens are expensive, not that high quality, and kind of slow. We spend a lot of time thinking about this: are we designing a system that's just covering up some of the shortcomings of AI right now, or should we design a system that will work really well in a year and a half as we get more of these improvements? That's the ongoing battle for anybody building AI startups: do you build for what you have today or for what might exist in the future? How do you avoid getting disrupted by the model itself basically building in your value proposition directly? So many questions, but ultimately I think this is one of the most exciting times, if not the most exciting time, in history to be building software.
I definitely want to get back to this idea of what software companies and startups should be building. But as we talk about the cost of intelligence coming down, it does make me wonder: is there a viable business model for all these companies that are spending tens or hundreds of billions of dollars training models and then selling that intelligence at lower and lower rates? One of the data points I think about here: in the middle of the whole Sam Altman saga, when he was fired and then brought back in, OpenAI was reportedly in the middle of a fundraise that was going to value them at, what, a hundred billion dollars? And we haven't heard anything about that since. Maybe that's partly because they wanted to make sure the board was settled. But what are the economics for these companies, and is it really sustainable for them to keep providing this at less and less cost? I mean, a 50% cost cut is a big deal.
Well, what ultimately matters is what their cost is. One would theorize that they have come up with algorithmic and model improvements such that their underlying cost of running tokens through the system has dropped by, let's say, 50%, or whatever the number is. Obviously it's hard to pin down, because there's no public information on their gross margins, but in general I'm guessing they've done something that has driven efficiency and made their cost structure lower, and they're passing that cost-structure improvement on to us as customers. Then their theory is: if we drop the price by 50%, do we get more than a 2x increase in usage and volume? And I would argue that at basically every point of AI performance improvement, that trade has come true. If you could wave a magic wand and say we have GPT-5 or GPT-6 and it costs a tenth of what GPT-4 costs today, I would argue you'll probably get 100x more usage of AI, not just 10x. At some point maybe that plateaus, but we are nowhere near the point where lowering the cost doesn't disproportionately expand what you can build, which then drives more volume.
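That "tenth of the cost, 100x the usage" claim is a statement about demand elasticity. As a toy illustration (the elasticity value of 2 is chosen purely so the numbers match the quoted figures, not a measured number), a constant-elasticity demand curve captures the shape of the argument:

```python
# Toy constant-elasticity demand model. elasticity=2 is an assumption picked
# to reproduce the "tenth of the cost -> 100x usage" figure in the text.
def usage_multiplier(price_ratio: float, elasticity: float = 2.0) -> float:
    """Usage multiplier when price falls to `price_ratio` of its old level."""
    return (1.0 / price_ratio) ** elasticity

print(usage_multiplier(0.5))  # halve the price  -> 4x usage
print(usage_multiplier(0.1))  # tenth the price -> 100x usage
```

Any elasticity above 1 makes a price cut revenue-expanding, which is the core of the argument that providers can keep cutting prices.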
This is actually an interesting parallel. I have no good anecdotes on hand, you should probably do the research, but in the very early days of cloud computing, everybody looked at the size of the server market and basically said: well, if these servers move to the cloud, we should take the size of the server market and maybe shave off some of that spend, because as it goes to the cloud it gets more efficient, and because capacity is shared you get less underutilized capacity. A lot of the total-addressable-market analysis of cloud computing was based on the historical usage of on-prem data centers, and that's totally fair, because that's all you could really do in that analysis. But what it missed was that once you created on-demand computing resources, literally every developer on the planet could have access to servers. You could start a company tomorrow and spin up computing capacity, which you couldn't do as a startup 25 years ago, because you couldn't just put servers in a data center, so you often didn't start the company in the first place. All of a sudden, cloud computing's scale was ten times larger than what the data-center market had been. Similarly, as costs drop, whether because the GPUs themselves get cheaper or model efficiency gets better, you'll see a massive increase in utilization.

So I think the business model is still very good for the top, let's say, five or so model providers. Where you will run into a question is: could you be the fifteenth LLM training company?
That seems tough, especially if your job is to do horizontal LLMs; I think that's less likely to work. You also have battles like Mistral's, which for a period, and maybe still ongoing, was a massive breakthrough in open-source AI, and then Meta one day decides to just exceed all the benchmarks with Llama 3. So there will be parts of the market with a lot of competition, where it will be hard to figure out what the business model looks like. But in general, I think the business model of providing high-quality, cheap, highly scalable tokens, if you're anywhere in the top three, is going to be fine for quite some time. And ultimately, if you're a hyperscaler in the cloud, what you really want is all of our workloads; you want us to build our full application on your tech stack. So you're not really trying to make that much margin on the AI itself; you want the data, you want the compute, you want the storage. I think the business models will all continue to be fine while this technology is given away at a lower and lower price.
Okay. I started our discussion talking about how disappointing this release was; let's actually talk about the impressive stuff. One of the things I saw at these events from OpenAI and Google was that these models have an ability to reason. They seem to be able to take problems, break them down into their component parts, and then go through them step by step. It's not the traditional "ask a question to an LLM and it gives you an answer"; it looks like something smarter. Did you pick up on those reasoning capabilities? Because we had this whole moment during the Sam Altman saga where people talked about this Q* model that could reason and do math, and I watched some of these demos and thought, is that it?

Yeah, so I think things like reasoning are still very early.
Anybody deep in the AI space now, you're going to hear us all talk about this idea of agents, this agent-like or "agentic" behavior you can have in AI models. It really moves from going to an AI model, asking a question, and just getting the text or audio output the model produces, to actually giving it a problem that is often multi-step in nature, maybe interacting with other systems, i.e., other tools. How do you put that all together so that a single AI model, connected to these tools, can effectively produce an agent that can execute full tasks and processes? We saw maybe slight examples of that both on Monday in the OpenAI announcement and yesterday in the Google announcements, I'd say both at a very high level, just because so much of this space is still at a pretty high level. If you go ask ten AI startups doing anything with agents, you'll probably get more or less ten different architectures for how the agent actually functions. But what is at least common to all of them is that the LLM is acting as the reasoning engine, basically the brain, for coordinating and executing tasks across other systems and software. That's a very exciting concept, because a year and a half ago, what we thought internally, and what we saw from startups, was that this was a chatbot wave. The chatbot was really just the best way to manifest AI, to get people to see the power of it. But the chatbot is just one of a thousand modalities we might have with AI. When you start to think of AI not as something you just chat back and forth with but as a reasoning engine for anything you want software to do, it opens up a very different world of possibilities.
And what about the personalities of these bots we're going to see? After OpenAI did its release event, Sam Altman tweeted out "her"; the bot was extremely flirty, and then it didn't work the next day. I looked at it and thought: if they were trying to build an AI girlfriend, they nailed it. Super flirty in the first interaction, doesn't answer your texts the next day. Aaron, what do you think about...

Do I have to answer that question?

Yes, you do. What do you think about their attempt to build Her?

I do not know the internal workings of how the voice got tuned to be the most interactive and engaging voice...

That's a fun way to describe flirting, but yes, "engaging" as euphemism.

I thought about that for about 3.2 seconds when I saw it, and obviously it'll be a big controversy online. But the market will effectively decide what voice we want from these things, and I expect OpenAI, Google, et cetera to land on the right equilibrium between a little bit too creepy and way too robotic and utilitarian. Somewhere in there is probably the sweet spot, and I think we'll do a little bit of pendulum swinging until we find it.
You spoke about this in the beginning, and I don't want to gloss over it: this capability for the AI to be a tutor. I want you to unpack how important this is, because it can really change the equation for parents. Everyone has this idea of setting the kid up with a laptop, but what if you can set the kid up with an actual tutor that will work with them, personalized, through their notes? And not only that, Google had this example where it can listen in on a PTA meeting for you, take the notes, and tell you what happened. It's almost like taking AI and putting parenting on autopilot, and everyone's going to say that's weird and creepy, but in the best cases this technology gives us more time to do the human stuff. If you have more time to actually be a parent to your kid, to be caring and present with them, as opposed to having to grind through this work with them, that could be a pretty special thing.

Yeah. I don't know the global statistics on parenting and tutoring, or how many parents are good tutors, but let's just say: not many. So let's assume a significant portion of kids do not grow up with access to the highest-quality tutoring. I got lucky, because my dad was into that, and my mom as well, but let's just say that's not the case everywhere. So obviously, if you could make AI freely available globally that was as smart as a human, and the interaction paradigm worked so that I could just interact with it to learn, that is only a good thing for humanity. It would be literally impossible to say "I want to shut that down" or "I don't want that to exist." We can talk about all the implications, how you make sure it's as available as possible, bias, all these other things, but the idea that we could basically democratize access to knowledge, tutoring, help, and education for everybody on the planet is basically a good thing. And that's just one of many examples of the power of AI, especially on the consumer side. Take that for healthcare; take that for basically any subject I want to be educated on; take that for just learning how to code. The easing of the on-ramp for what we as people spend a lot of time learning lets us explore more spaces and figure out which areas and domains we want to go really deep in. That is just a very good thing for the world.
podcast to do a segment where we read
Aaron his tweets and make him explain
them so let's do that now we can also
stop that today um it doesn't that's not
something that has to continue so you
know let's continue it
um okay okay so uh let's see there like
the problem with reading tweets is like
it never sound it's not it never sounds
like when I wrote it so it's just like
it's a totally different time in the day
it's like a different voice um uh but go
for it I just don't know I mean you're
going to read something everyone's going
to be like that's not that insightful
and then I'm going to be embarrassed and
then um and then I'll have to explain
myself but go for it no no I I
handpicked them because I think there's
going to they're going to start good
discussions and if this sucks I'll
retire this seg I good okay okay okay
okay uh so this kind of goes to the
agent thing a large portion of business
problems are constra strained by how
much time any given problem takes to
solve and the number of people you have
to solve it AI flips this by creating a
world where we can solve problems by
essentially throwing more compute at
them
yeah do we just yeah you Riff on that oh
Riff on that just Riff on my tweet yeah
okay it's in the Tweet um uh so um I
I mean, the only riff I can add is that the only reason I wrote that was that, classically, in business you have this phrase, "let's throw more bodies at the problem." That just means headcount: we'll throw more bodies at this engineering problem, this sales problem, whatever the thing is. And it's kind of crazy to think about a world where you would instead just say, "let's throw more compute at the problem." The equation goes from "okay, I've got to call the HR team, make sure we have budget, go hire a lot of people" to "well, do you want a hundred leads, or a thousand leads, or ten thousand leads?" That's not going to be driven by how many people I hire; it's going to be driven by how much compute I have. Do I want to test 90% of my software for bugs, or 95%, or 100%? Again, it's not how many test cases I write or how many quality engineers I hire; it's how much compute I throw at it. So you can work through how much of business can now become a problem where we throw compute at it to basically solve it. It's just a different way to think about organizing your company, how you scale your company, and ultimately the role of intellectual labor inside a business.
Okay, yeah, this is a good segment.

Okay, got it. You're not biased in any way.

Perfect. I did come up with the segment. Here's another one: "The cool thing about AI is that Zuck is unleashed. There's a brand new platform opportunity, the market is early and up for grabs, and there will be multiple winners, and it uniquely leverages the strengths of Meta. The breakthroughs in this space will continue to be wild." Great tweet.

Yeah, just riff again?

I'm telling you. Yeah, go for it.
So I'm just supposed to describe why Meta in particular is positioned for this, and what "Zuck unleashed" means? Zuck unleashed... I mean, did you see his birthday photos? The guy's unleashed.

So, I don't know if anybody here was around in the mid-to-late 2000s doing web stuff, but there was this conference called F8, and Facebook was at the center of basically all consumer web software. They created the social graph, you built your application using their APIs, and they were really driving the web forward. Unfortunately, I think mobile slowed that down a bit, because the conversation flipped to iOS and mobile platforms, and so you had all of this energy and technical talent at Meta that was underutilized in a mobile world; they didn't have any platform of their own. That is sort of where the metaverse and Oculus came from: Zuck's entrepreneurial spirit saying, let's build a platform that we own and that everybody can build on. And I think the reality is that, at the scale they probably would have wanted, we're not all in the metaverse yet.

So you have this incredible entrepreneur with insane resources, both in engineering and in capex, who has been a little bit held back because he hasn't had a platform to unleash into the world. And now there's this open spot, which is that open-source AI is not owned by anybody. He's got all the right resources for it, and he's got basically the entire industry rooting for it to work, because we all benefit: the cheaper and better he can make AI models, the more we all win. Either that means OpenAI and Google will want to work even harder and lower their costs, or we literally get an open-source AI model that we don't pay any fees for, which is incredible, other than the cost of the GPUs. So he's got all this pent-up energy. This is just me imagining how he's thinking about it, but you watch his videos and you can see he knows he's onto something: he can win in this open-source AI world, which is going to be a very, very big space to be a part of.

And then commercially, it's always good if your direct competitors do not win a large portion of what the zeitgeist is focused on, so it's good if he has some way to defend against how big Google or OpenAI get in this world. And on the offense side, he can probably just make more money if people spend more time on his platforms, asking questions and getting recommendations for things to buy, all of which will be powered by AI in the future. So I think it's going to be both a commercial success and, structurally and strategically, something that looks like a very good decision in the long run.
go to the other uh member of the cage
match um Elon okay uh you said finally
got around to trying the latest Tesla
full self driving last night can confirm
it's wild is it actually does it feel
like real autonomous driving or were you
still is there still fear for people's
life when they're in there um well those
could be the same thing. It might be that with real autonomous driving you still fear for everybody's life, because you're just like, I do not know how this works, this is kind of alchemy, this is crazy. But it was definitely very wild. It was a relatively boring suburban trip, but there was just zero need to ever interject. I mean, you have to show that you're still paying attention. But it shows again that over the past year, and certainly for the next couple of years, we're going to see hundreds of these early previews of the future, which is just pretty exciting. I've never seen a period where, in any given week, you could see two or three things that are just obviously going to be the future. Maybe it doesn't work perfectly right now, but there's nothing stopping it from working perfectly in a world of more compute and more breakthroughs on the models themselves, and that is kind of where we're at right now. It's pretty cool.

It's really cool. It would be nice if that happens, because obviously we have way too many traffic deaths. All
right, Cutting Room Floor. I won't ask you to react to these, just for time, but you have 'VCs when .ai is in the name' over a bunch of people doing parkour off of buildings.

So you're going to verbally explain visual tweets? This is good.

That is aggressive. That is aggressive, oh my God. And then there's, I'm looking at screenshots of videos right now, so, oh, okay, this one is actually a screenshot: 'everybody wants small government until there's something they want to ban,' with Ron DeSantis standing in front of a table of lab-grown meat.

Yes. Do I need a riff?

Why did you do that one? What was that? You can give it 60 seconds.

Okay, no, no.
I mean, I think that one was also pretty straightforward. Without conveying my political views, I just found it ironic that the party of 'we want the smallest government' and more libertarian-oriented values is not going to let scientific breakthroughs happen in their state. So it's like, okay, maybe you only want small government when it's convenient, as opposed to this being a very principled decision. That was literally all it was referencing.

Yeah, no, I enjoyed that one a lot. Okay, we're here with Aaron Levie, the CEO of Box. We're recording live in front of an audience in New York City, our first public event ever. We're going to take a quick break and come back with audience questions, so we'll be back right after this ad. And now we have a legitimate ad.

Is it going to play?
No, I'm going to read it.

Oh, you're going to read the ad? Okay, good. Can you describe the video in the ad?

No, it's not a commercial. Actually, it's even better than a video: we have the people in the ad here in the audience.

Wow, this is wild. Okay, we're really innovating here, that's what I have to say. This is crazy. So are they going to come up or something?

No, they're just going to wave.

Oh, okay. They're not going to, they should have to read their own ad.

They should, but we're trying to get you on your flight.

Okay, got it. Okay, shout out
to EZ Newswire, a platform that is upending the traditional newswire industry, charging brands less to get their news out via top-tier publications like Reuters, Hearst, and, yes, Big Technology, and helping publishers make money and learn about the community they serve. When companies announce news, it is their most important moment, and the traditional press release wire services that exist are often a major letdown. EZ Newswire solves this problem. The platform isn't limited to text; it's thinking innovatively about ways to collaborate with publishers to get the word out, and I'm a proud partner. Caitlyn and Neil, the founders of EZ Newswire, are here in the room.

Okay, you guys should have had to read that. That would have been way more fun.

They're disrupting what has been a shakedown industry and are building something that will reshape how news is released and shared in the future. Caitlyn and Neil, say hi. You have to say hi again. Hello. And they'll be sticking around, so folks, go say hi afterward to learn
more.

Shakedown? I mean, actually, the newswire industry, well, obviously I'm a partner.

Those were your words.

Oh, those are your words. You can ask your PR people how much it costs to put a press release out on the newswire. It's crazy.

Wow. We should have Ron DeSantis look into this.

It's a huge problem. We've got to ban the newswires.

That's right. This is what good government does. And we're back here on Big Technology Podcast with Aaron Levie, the CEO of Box. We're in front of a live audience here in New York. Okay, he remains here, he hasn't left, despite the reading of the tweets, the conversation about lab-grown meat, and the live ad. So let's see if we can keep going. What we're going to do now is take some questions from the audience, and hopefully we'll be able to record them and get them on the podcast. So let's
do that. If you have a question, raise your hand, I'll come over to you, and state your name and where you're from. And I have a plant, a planned question to start, with Ranjan Roy. He's one of our favorites on Big Technology Podcast. Everybody here listening to the show, you guys know Ranjan, let's give it up for Ranjan. This guy is amazing. He's on with us every Friday, and I think he has some questions. Cool, so