OpenAI President Greg Brockman: AI Self-Improvement, The Superapp Bet, Path To AGI, Scaling Compute

Channel: Alex Kantrowitz

Published at: 2026-04-01

YouTube video id: J6vYvk7R190

Source: https://www.youtube.com/watch?v=J6vYvk7R190

I think it's extremely clear that we are going to have AGI within the next couple years, in a way that is still going to be jagged, but the floor will be that for almost any intellectual task, anything you do on your computer, the AI will be able to do it. The scariest moment at OpenAI was actually after we launched ChatGPT. I remember being at the holiday party and just feeling this vibe of "we won," and I had never felt that. I was like, "No, we are the underdog, and we always have been." From the moment we launched ChatGPT, I remember talking with my team, having this exact conversation where they said, "How much compute should we buy?" I said, "All of it." They said, "No, no, no. Really, how much compute should we buy?" I said, "No matter how much we try to build, I know we're not going to be able to keep up with the demand." OpenAI co-founder and president Greg Brockman joins us to talk about AI's most promising opportunities, how OpenAI plans to capitalize on them, and what the super app is all about. And Greg is with us here in studio today. Greg, great to see you.
>> Thank you for having me.
>> Well, we're speaking at a time when OpenAI is shutting down video generation and focusing its energies on a super app which is going to combine business and coding use cases. And I think, from the outside, those of us watching this, including myself, are thinking: OpenAI is winning in consumer, and now it's shifting its resources. What is happening?
>> Well, the way I would think about this is that we have been in a world where we're developing this technology, deep learning, to really see: can it have the positive impact that we have always pictured? Can it be used to build applications that help people in their lives?
And we've separately had an arm that's saying, let's actually try to deploy this technology, whether that's to help sustain the business or to start getting some practice with real-world impact, for the time when this technology actually comes to fruition, when it actually becomes everything that we imagined, that we started this company to try to have. And I think we're at a moment now where we've really seen this technology is going to work, and we're moving out of testing on benchmarks and these almost cerebral demonstrations of capability, to it actually being the case that for us to develop it further, we need to see it in the real world and get feedback from how people are using it in knowledge work, in various applications. So the way I'd think about it is that this is a bigger strategic shift because of the phase of the technology. It's not so much that we're saying we're moving from consumer to B2B. What we're really saying is: what are the most important applications that we can focus on? Because we can't focus on everything, right? What are the things that we can bring to life that will actually synergize together as we build them, and that will deliver meaningful impact and help elevate everyone? And when we look at the list: there's consumer, which you can think of as many things, but there's a personal assistant, right? Something that knows you, that's aligned with your goals, that's going to help you achieve whatever it is that you want in your life. There's also creative expression and entertainment and many other applications.
On the business side, if you zoom out, it looks more like one thing: you have a hard task, can AI go do it? Does it have all the context to do all these things? And for us it's very clear that the stack rank includes two things at the top. One is the personal assistant; the other is the AI that can go and solve hard problems for you. And when we look at the compute we have, we are not even going to have enough compute to fund those two things, let alone once we start adding in many other applications, many other things that AI is going to be very useful for and is going to help people with. We just can't possibly get to all of them. And so I think this is a recognition of the maturation of the technology, the incredible impact it's going to have very quickly, and our need to prioritize, to actually pick the set of applications that we want to shine and to really bring to the world.
>> When I've heard you talk
about OpenAI's various bets, one of the ways you've described it is that OpenAI can be a version of Disney, where you have this core compelling advantage at the center and then you farm it out in different ways. So Disney has Mickey Mouse, and then it can do the movies and the theme park and Disney Plus. And for OpenAI it's the model, and you can do video generation and be this assistant and then help with enterprise and work. So is it no longer possible to have that sort of central advantage and be able to farm it out in all sorts of ways? Have you come to the realization that it's basically time to pick and choose?
>> Well, I actually think that in some ways that story is even more true than it's been. But the thing that's important to realize, technologically, is that the Sora models, which are incredible models by the way, are a different branch of the tech tree than the core reasoning GPT series.
They're just built in a very different way, and to some extent we're really saying that pursuing both branches is very hard for us to do for these applications. Now, we are actually continuing the Sora research program in the context of robotics, which I think is very clearly going to be a transformative application, but which is still a little bit in the research phase. Robotics is not really yet mature and deployed in the way that we're going to see this real takeoff of this technology in knowledge work over the next year. So it's a recognition that, for this moment, we really need to put the primary focus on developing the GPT series. And that doesn't just mean text. It doesn't just mean cerebral things. For example, bidirectional communication, having a great speech-to-speech interface: that is something that also is going to make this technology very usable and very useful, but it's not a different branch of the tech tree. It's all kind of one model, and we just tweak it in slightly different ways, kind of like you describe. And so I think there's something about, if you branch too far and you have two different artifacts, that is very hard to sustain in a world where there is limited compute. And the reason there's limited compute is because there's so much demand. There's so much people want to do with every single model that we create.
>> Okay. So talk a little bit then about why your bet is not on this world model version, where the video understands where things go. It's obviously useful for robotics. Why is your bet on the GPT reasoning model tree, as opposed to this area where you had been seeing real progress with Sora? I mean, the progress of video generation from generation 1 to 2 to 3 was enormous. So why is your bet where it is?
>> So the problem in this field is too much opportunity, right? The thing that we observed very early on at OpenAI is that everything we could imagine works. Now, there are different levels of friction associated with it, different amounts of engineering effort, different compute requirements, all those things. But with every single different idea, as long as it's kind of mathematically sound, you actually can start getting some pretty good results.
And I think that shows you the power of the underlying technology of deep learning: the ability to really take any sort of problem and get to the meat of it, to have an AI that really understands the underlying rules that generated the data. So it's not about the data itself. It's about understanding the underlying process and being able to apply it to new contexts. So you can do that in world models, you can do that in scientific discovery, you can do that in coding. And as we think about the rollout of this technology, you know, there's been this debate of how far will the text models go? How far can text intelligence go? Can you have a real conception of how the world operates? And I think that we have definitively answered that question: it is going to go to AGI. At this point we have line of sight to these much better models that are coming this year, and the amount of pain within OpenAI in deciding how to allocate compute goes up, not down, over time. And so I think that maybe the core of it is that it's about sequencing and timing, and that in this moment the kinds of applications that we've always dreamed of are starting to come into reach. Like, for example, solving unsolved physics problems, right? We had this result recently where a physicist had been working on a problem for some time. He gave it to our model; 12 hours later, we had a solution. And he said this was the first time he'd seen a model where he felt like it was thinking, that it felt like this was a problem that maybe humanity would never solve, and our AI solved it. When you see something like that, you have to double down. You have to triple down, because we can really unlock all of this potential for humanity. And so I think for me it's not about the relative importance of these things. It's more about OpenAI's mission of delivering AGI to the world, our vision of how it can benefit everyone, and the fact that we have a tech tree where we see how to just push it, how to do the engineering, do the further science and research, to then have that come to fruition.
>> Okay, so I do want to come back to the next line of models that you're anticipating, but I want to press you on this for a moment. I was speaking with Demis Hassabis from Google DeepMind earlier this year. And interestingly, he said that the thing that feels closest to AGI for him was Nano Banana, the image generator that they have. And the reason is that for an image generator or a video generator to create the images and the videos that it makes, it does have to understand the interaction between objects and have at least some conception of how the world works. So, I mean, it's a big bet, but does OpenAI potentially miss something by doubling down on the other tree, if that's the case?
>> So two answers. One is: absolutely. Yeah. In this field, you do have to make choices, right? You have to make a bet, and that's actually where OpenAI started. We really said, what is the path to AGI that we believe in, and really focused hard on that. Right? The sum of random vectors is zero, but if you align your vectors, then you can go in a direction. But the second point is that image gen is actually something that has been very, very popular within ChatGPT, and that's something we're continuing to invest in, continuing to prioritize. And the reason we're able to do that is because it's not actually on the world-model, diffusion-model branch of the tech tree. It's actually based on the GPT architecture. And so there, even though it's a different data distribution, the actual core technology, the core stack, it's all one thing. And that is actually the pretty wild thing about what AGI is: sometimes these very different-looking applications, between speech-to-speech, image generation, and text (and text is, by the way, itself many-faceted: science and coding and personal wellness information, those kinds of things), all of that you can do in one technological envelope. And so a lot of what I'm looking at, and what we as a company are looking at from a technological perspective, is how to have as much unification of our efforts as possible, because we really see this technology as something that's going to uplift and power the whole economy. The whole economy is a massive thing, and so we can't possibly do all of it, but we can do our part. That's the general part in artificial general intelligence.
>> That's the G.
>> It really is.
>> Speaking of unifying things, what is this super app going to be? The way I think about the super app is that it's going to bring together coding, browser, and ChatGPT.
>> That's right. So what we want is to build an endpoint application for you that really lets you experience the power of AGI, the generality. So if you think about what ChatGPT is today, I think chat is really going to become your personal assistant, your personal AGI, right? An AI that's looking out for you, that knows a lot about you, that's aligned with your goals, that's trustworthy, that kind of represents you in this digital world. And Codex, you can think of it as, right now it's been a tool that we built for software engineers, but it's becoming Codex for everyone: anyone who wants to can use Codex to get the computer to go do the thing that they want. And it's not just about the actual software anymore. It's really about almost the use of the computer. Like, I use it to set settings on my laptop: I forget how to set up the hot corners, I ask Codex to do it, and it just does it, right? That's what computers were always supposed to be: contorting to the human rather than me contorting to them. So imagine one application where anything you want your computer to do, you can ask it. There's computer use, browsing built in, for an AI to be able to actually use a web browser and for you to be able to oversee what the AI is doing. All of your conversations, regardless of application, whether it's for chat or for code or for general knowledge work, are unified in one place, and the AI has memory and knows about you. So that is what we are building, but it's really an iceberg, because that's the tip. What to me is actually much more important is the technological unification. We talked about it a little bit in the case of the underlying models. But the thing that's really changed over the past couple years has been that it's no longer just about the model. It's about the harness. It's about how does the model get context? How is it connected to the world? What actions can it take? As you get new context, how does the loop of interacting with the model work? All of that was something we had multiple, slightly different implementations of, and we're converging them. We're going to have one version of that, and we almost end up with this AI layer that can be pointed at specific applications in a very thin way. So you can build a little plugin, a little skin, a little UI if you really want something that's great for finance or something that's great for legal, but you generally won't have to, because this one super app will be very broad.
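The "harness" loop Brockman describes, where the model gets context, takes an action, and the observation feeds back in, can be sketched roughly like this. All names, the message format, and the toy model are illustrative assumptions, not OpenAI's actual implementation:

```python
# Minimal sketch of an agent "harness" loop: the model is given context,
# decides on an action (a tool call or a final answer), the harness executes
# the tool, and the observation is appended back into the context.

def run_harness(model, tools, task, max_steps=10):
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(context)  # model decides: call a tool, or finish
        if action["type"] == "final":
            return action["content"]
        # Execute the requested tool (browser, spreadsheet, shell, ...)
        observation = tools[action["tool"]](action["args"])
        context.append({"role": "tool", "content": observation})
    return None  # step budget exhausted without a final answer

# A toy "model" that calls one tool, then returns what it observed:
def toy_model(context):
    tool_msgs = [m for m in context if m["role"] == "tool"]
    if tool_msgs:
        return {"type": "final", "content": tool_msgs[-1]["content"]}
    return {"type": "tool", "tool": "echo", "args": "hello"}

print(run_harness(toy_model, {"echo": str.upper}, "say hello"))  # prints HELLO
```

The point of converging on one harness is that only `tools` (and any thin UI) changes per application; the loop itself stays the same.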
>> This app is for business use cases, personal use cases?
>> Both. And that is really the core. Just like a computer: your laptop, is it for personal use or for business? Right?
>> Both.
>> Both. And it's for you. It's your personal machine that gives you an interface to this digital world, and that's what we want to build.
>> So talk a little bit about it from a non-business standpoint. I'm using the super app in my personal life. What am I using it for? How does my life change?
>> So for personal life, think of the way that you use ChatGPT right now. People use it for such a diversity of really amazing applications. Sometimes that's just asking: I'm going to give a speech at a wedding, can you help me with drafting it? Can you give me some feedback on this idea that I have? I'm working on a small business, can you give me some ideas there? Which maybe starts the bridge between personal and work. Any of those questions should be things that you can go to the super app for, and it answers. But if you think about what ChatGPT has been, it's already been evolving. It used to not have any memory, right? It was just the same AI for everyone, starting from scratch. It's almost like talking to a stranger. It's way more powerful if it remembers, if it remembers the interactions you've had. It's way more powerful if it has access to context, right? If it's hooked up to your email and to your calendar, and really knows your preferences, and has this almost deeper set of past experiences with you that it's able to leverage to achieve your goals. And you look at things like Pulse, a feature in ChatGPT right now where every day it surfaces things you might be interested in based on what ChatGPT knows about you. So I'd say that in the personal capacity, the super app will be doing all of that, and will be doing it in a much deeper and richer way.
>> When do you plan to ship it?
>> So the way to think about it is, we're taking incremental steps to get there. Over the next couple months, we should have shipped the complete vision of what we're talking about here. But it's going to come in pieces. And the place that we're starting is, for example, the Codex app today, which is really two things in one. It's a general agent harness that can use tools, and it's also an agent that knows how to write software. That general agent harness can be used for so many different things. You hook it up to spreadsheets, you hook it up to word documents; it's able to help you with knowledge work. And so we're going to make the Codex app just so much more usable for general knowledge work, because what we've already seen within OpenAI is all this organic adoption of people using it for that. So that'll be the first step, and there are many to come.
>> I was speaking with one of your colleagues yesterday, taking a look at Codex, and he mentioned that someone using Codex had instructed it to help him with video editing. It built a plugin for Adobe Premiere, started separating the video into chapters, and started the edit. That's what we're looking at?
>> I love hearing that. That's exactly the kind of thing we want this system to be useful for. And it's been really interesting seeing, like, the Codex app itself was originally built for software engineers, and the current usability of it for non-software engineers is actually quite low, because there are a bunch of little things where, when you set things up, you run into some error that a developer knows what it means and knows how to fix. It's just kind of what we're used to. But if you're not a developer, you're like, what is this? This is not something I've encountered before. And despite that, we are seeing people who have never programmed before start to use this: to build websites, to do exactly the kinds of things you said, to automate their interactions with different pieces of software, to get lots of leverage. Like, someone on our communications team uses it, hooked up to Slack and to their email; they're able to go through a bunch of feedback and synthesize it very well. So for these kinds of tasks, people who are very motivated can jump through the hoops and then get great return from it. And so to some extent we did the super hard part: an AI that is really smart, capable, can actually accomplish your task. Now we have to do the much easier part, in some sense, of making it broadly useful and removing these barriers to entry.
>> Just looking at the competitive landscape: Anthropic has the Claude app. You can use Claude the chatbot, Claude Cowork, Claude Code. So they have a version of a super app of their own. I'm curious what you think Anthropic saw that got them to this position earlier, and what do you think your chances are of catching up there?
>> Well, I think that if you rewind 12, 18 months, we have always been focused on coding as a domain. We always had the best numbers on different programming competitions, these very cerebral things. But the thing that we didn't invest in as much was that last mile of usability, of really trying to think about: okay, this AI is so smart, it can solve all these great programming competitions, but it's never seen someone's real-world codebase, which is messy and not quite as pristine as the world it has experienced. And I think that is something we were behind on. But maybe mid last year is when we got very serious about that, and we had a team very focused on: what are all the gaps, what is all the messiness of the real world we haven't encountered, how do we actually get training data, build training environments that let the AI experience what it's like to actually do software engineering, be interrupted in weird ways, all those things. And I'd say at this point we are caught up. When people go head-to-head with us versus competitors, people tend to prefer us. You know, we're behind on front end; we're going to fix that. But this is the general motion that we've been taking: usability means thinking about the product end to end, not building a model and then building a separate thing. Really think of it as one product. When we're doing the research, we're thinking about how it will be used. That has been a motion that we've been changing within OpenAI. And so the way I would look at it is that we have incredible step-up models coming, like, this whole year. I look at the road map; it's truly inspiring what will be possible. And then we've been really focusing now on also getting that last mile of usability.
>> Since 2022, OpenAI has been the undisputed leader, and obviously now the competition is intense. You just used the phrase "we're caught up." Is there a different vibe within the company, where instead of being the one that's far ahead on something like ChatGPT, you're in a real fight? I mean, you're seeing it come out in some of the reporting on what's happening within the company: the fact that there have been meetings, that there are no more side quests at OpenAI, that it's all focus on this. How has the environment or the vibe changed?
>> Well, I would say that for me personally, yeah, the scariest moment at OpenAI was actually after we launched ChatGPT. I remember being at the holiday party and just feeling this vibe of "we won," and I had never felt that. I was like, no, we are the underdog and we always have been, right? The competitors in this space are established companies that have just sort of much more capital, much more, you know, human resources, data, the whole thing. Why is OpenAI able to compete at all? And to some extent the answer is: only because we never feel complacent, right? We always feel like we are the challenger, and for me it actually has been a very healthy thing to see that start to play out in the marketplace, to see other competitors emerge and do a good job. And, you know, in my mind you can never fixate on your competitors. If you focus on where they are, then you'll be where they are, and they'll already have moved. I think that's what's been happening in the other direction, right? A lot of people have been focused on exactly where we are, and we get to move. And I think it almost gives us this alignment, this unification of the company. I kind of described how we almost thought of research and deployment as separate things, and now we really want to integrate them. That to me is such a wonderful thing. And so I'd say that the world we're in is one where, you know, you're never as good as they say you are; you're never as bad as they say you are. I think it's just been very steady. And the core of the model production is something where I actually feel extremely, extremely confident in our roadmap, the research investments we've been making. And I think on the product side, we have such great energy that's all coming together to deliver this to the world.
>> You foreshadowed a couple times already that you have some good models on the way. What is Spud? The Information said that you finished pre-training Spud, and Sam Altman, the CEO of OpenAI, told the staff that they should expect a very strong model in a few weeks (this was a few weeks ago), that the team believes it can really accelerate the economy, and that things are moving faster than many of us expected. So what's Spud?
>> A good model. But I think it's really not about any one model, right? The way that our development process works is: you have pre-training, where you produce a new base model that then is the foundation we build further improvements on top of. And that is always a huge effort across many people in the company. That's actually where I've been spending most of my effort over the past 18 months: really focused on our GPU infrastructure, on supporting the teams that do all of the training frameworks, to scale up these big runs. But then there's a reinforcement learning process, where you take this AI that has learned lots of things about the world and it applies that knowledge. And then we do a post-training process where you really say: okay, now you know how to solve problems; you practice it in all these different contexts, and then here's kind of the last mile of behavior and usability. So I think of Spud as a new base, a new pre-train, and we have maybe two years' worth of research that is coming to fruition in this model. It's going to be very exciting, and I think the way the world will experience it is just improved capabilities. For me it's never about any one release, because as soon as we have this one release, it'll be an early version of what we have coming; we'll do much more of each of these steps of the improvement process. And so I think that where we're going is almost just: we have this engine of progress that moves faster and faster, and Spud is just one step along the way.
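The three stages described above, pre-training a base model, reinforcement learning so it applies its knowledge, then post-training for behavior and usability, can be sketched as a simple staged pipeline. The function bodies below are purely illustrative placeholders, not real training code:

```python
# Illustrative sketch of the staged development process Brockman outlines.
# Each stage takes the previous stage's artifact and adds something to it.

def pretrain(corpus):
    """Produce a new base model: absorb broad knowledge from data."""
    return {"stage": "base", "knowledge": sorted(set(corpus))}

def reinforcement_learning(model, tasks):
    """Teach the base model to apply its knowledge to solve problems."""
    model["skills"] = [t for t in tasks if t in model["knowledge"]]
    model["stage"] = "reasoner"
    return model

def post_train(model):
    """Last mile: shape behavior and usability on top of raw capability."""
    model["stage"] = "product"
    return model

model = post_train(
    reinforcement_learning(pretrain(["math", "code", "text"]), ["code", "math"])
)
print(model["stage"])  # prints "product"
```

The ordering matters: a new pre-train (like Spud) resets the foundation, and the later stages are then re-run on top of it, which is why one base model can seed a whole series of releases.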
>> So what do you think it'll be able to do that today's models can't?
>> I think it's going to be able to solve much harder problems. I think it will be much more nuanced. It'll understand instructions better; it'll understand the context much better. There's this thing called "big model smell" that people talk about, where when these models are just actually much smarter, much more capable, they bend to you much more, and you feel it. When you ask a question and the AI doesn't quite get it, it's always so disappointing, right? You have to explain, and you're just like, you really should be able to figure this out. So quantitatively there will be lots of shifts, and qualitatively there will just be new things: where before you would be frustrated and never use an AI for something, now you just use it without thinking very much. I think that's what we're going to see across the board. I'm super excited to see how it raises the ceiling, right? We've already seen these physics applications, things like that, and I think we will be able to solve way more open-ended problems over way longer time horizons. And I'm also very excited to see how it raises the floor, where for anything you want to do, it's just so much more useful for you.
>> It can be kind of tough for everyday users to really feel the change. Like, there was a lot of buildup before GPT-5 came out, and then it came out, and the initial reaction was actually somewhat disappointed among the public, but then I think people realized that for certain tasks it was really good. With this next series of models, do you expect that it'll really be felt in the trenches, in certain occupations, or do you think it will be a broadly tangible improvement for everyone?
>> I
think it will be a similar story, where when you release it, there will be people who will try it and be like, this is night-and-day different from anything I've seen. And then there will be some applications where we weren't necessarily intelligence-bottlenecked, and so if you have a model that's more intelligent, maybe you won't feel it right there. But I think over time you will feel it, because the fundamental thing that shifts is how much you rely on the system. If you think about the way we all interact with AI, we have some mental model for what we think it can do, and that mental model actually shifts fairly slowly, right? As you get more experience, it does something magical for you. You're like, "Oh, wow. It can do that. I never imagined that." And we see this, for example, in applications like access to health information. I have a friend who used ChatGPT to understand different treatments for his cancer. He was told by doctors that he was terminal, that there was nothing they could do for him. He used ChatGPT to actually research a bunch of different ideas, and he was able to get treatment that way. And that's something where you need to have some level of belief that the AI is going to be helpful in that application for you to really put in the effort to get something out of the machine. And I think what we're going to see is that for any application like that, it's going to become so much more evident to everyone that the AI can help you. So I think it's a little bit of the technology getting better, but it's also our understanding of the technology shifting and catching up to that.
>> And you'll be relying on it more inside OpenAI. You have an automated AI researcher in the works; it's supposed to come out this fall. What is that?
>> So the direction of travel right now: we are in this early phase of takeoff of this technology. What does takeoff mean? Takeoff is, as the AI gets better and better on this exponential, in part because we can use the AI to make the AI better, our development process speeds up. But when I think of takeoff, it's also about real-world impact. You know, every technology is an S-curve, or if you zoom out, a sum of S-curves that ends up being an exponential, and I think that's what we're encountering right now. So the technology development is moving with increasing speed; it's this engine that's picking up momentum. But out in the world there are also all of these tailwinds, because there are chip developers putting more resourcing into their programs, and there's this economy of people building on top of it, trying to figure out how it fits into every different application. And all of that energy is accumulating more and more into this takeoff phase, of AI going from being a kind of sideshow to being the main driver of economic growth. And I think that is something that's not just about what we're doing within these walls. It's about how the whole world, the whole economy, comes together to push this technology and its usefulness forward.
>> And the researcher will then do what,
exactly? Well, the researcher will be a
moment where the AI, which we're
building right now, is doing a large
enough percentage of tasks that we
should be able to let it run
autonomously. There's a lot of thought
that goes into what that means, and it
doesn't necessarily mean that we just
let it off on its own and then come back
later and see if it did something good.
I think that we are going to be very
involved in managing it. Right? Just
like right now, if you have a junior
researcher and you leave them on their
own too long, they're probably going to
go down a path that's not very useful.
But a senior researcher, someone who has
a vision, doesn't even necessarily need
to know the mechanical skills. They will
be able to provide feedback, review the
plots that the intern is producing, and
provide direction in terms of the vision
of what it is they want accomplished.
And so I think of this as a
system that we're going to build that
will massively accelerate our ability to
produce models, to make new research
breakthroughs happen, and to make these
models more useful and usable in the
real world, and to do that at increasing
speed.
>> So sorry, what's it going to do? Are
you going to say, go find AGI, and it
will just try to
>> I think the way I think of it is
something like that, to first order.
Okay. And at a practical level, I would
view it as taking the full end-to-end of
what one of our research scientists does
and being able to do that in silicon.
>> Another way to think about takeoff is
that progress in AI goes from
incremental to gathering momentum and
then to this sort of unstoppable march
toward an intelligence that's smarter
than humans. Do you worry that, just as
there are possibilities for things to go
right on that front, there are also
possibilities for that process to go
wrong?
>> I mean, absolutely yes. I think that
the way to get the benefits of this
technology is also to really think about
the risks. And if you look at how we've
approached technology development from a
technical perspective, we invest a lot
in safety and security.
A good example of this is prompt
injections, right? If you're going to
have an AI that is very smart, very
capable, hooked up to lots of tools, you
want to make sure that it can't be
subverted by someone giving it a weird
instruction. And that's something that
we've invested in quite a lot, and I
think we have really incredible results
and an incredible team working on it. And
it's interesting to think about some of
these problems where you can make
analogies to humans. Like humans are
also susceptible to phishing attacks, to
being deceived in different ways, to not
really understanding the full context of
what they're working on. And we bring
those analogies into our development
process and think about this whenever we
release a model, develop a model, how do
we ensure that it's going to be aligned
with people and be able to actually be
helpful. And that is something that we
care quite a lot about. I think that
there are bigger questions about the
world, the economy, how everything
changes, how everyone benefits from this
technology, that are not purely
technical, not purely something that
OpenAI on our own will be able to solve.
But yes, I think quite a lot about not
just pushing forward the technology, but
also about how we ensure that it has the
positive impact that is its potential.
>> The worry
though is that this is a race, and
what's being done within these walls at
OpenAI headquarters is also being copied
by many of the open source players,
which have much fewer barriers and
protections on the safety side of
things. And I think you said this once,
that it takes people getting a lot of
things right to be creative, and sort of
one person with bad intent to be
destructive. And that's where the
concern lies, for me at least: this is
clearly a race. It's going fast. Many of
your counterparts have said, if
everybody agrees to stop it, we'll stop
it. And it doesn't seem like it's going
to slow at all.
>> So is the reward worth the risk?
Basically,
>> I think the reward is worth the risk,
but that is too coarse-grained of an
answer in some sense. Okay. The way that
I think about it is that we've asked
from the beginning of OpenAI: what does
a great future look like?
How can this technology really be
something that uplifts everyone? And you
can think of there almost being two
different angles. One is the
centralization view of saying that well
the way to make this technology safe is
that you have only one actor building
it. And so then you don't have any
pressures, right? You can really think
about getting it right, and then figure
out how to roll it out to everyone when
it's ready, those kinds of things.
That's a pretty tough pill in some ways,
and I think there are a lot of
properties you can instead think about
approaching differently, which we refer
to as resilience: to think of it as this
open system where there are lots of
players developing the technology. But
it's not just about the technology; it's
about building societal infrastructure
that helps this technology really go
well. If you think about how electricity
developed, that's something where lots
of people produce it, and it actually
has dangers and risks, but we also built
our safety infrastructure in a diversity
of different ways: safety standards for
electricity, different ways of
harnessing it, how you scale it,
regulations when you're at massive
scales, lots of people able to use it in
a democratized fashion. There are
inspectors; there's a whole system
that's been built around the needs and
proclivities of that specific
technology. And I think that one thing
that we have really seen with AI is that
it is something where we need this broad
conversation. We need lots of people to
be aware: if the technology is going to
come and change everything for everyone,
people need to participate in that. It
can't be something that's done off in
secret by just one, you know, sort of
centralized group. And so this has been,
to me, a very core question of how this
technology should play out, and
something we really believe in is this
resilience ecosystem that should emerge
around the development of this
technology.
>> So you said we're in the middle of a
takeoff process, and we, I guess all of
humanity, are experiencing this.
Nvidia's CEO Jensen Huang said recently
that he believes AGI has been achieved.
Do you agree?
I think that AGI has a different
definition for many people, and I think
that there are many people who would say
that what we have right now is AGI. I
think you can debate it. But I think
that maybe the thing that's interesting
is that AGI like the technology we have
right now is very jagged. Like it is
absolutely superhuman at many tasks.
When it comes to writing code, those
kinds of things, the AI can just do it
right. And it really removes a lot of
the friction to creating things. But
there's some very basic tasks that a
human can do that our AIs still struggle
with. And so it's almost, where do you
draw the cut line? It's a little bit
more of a vibe and a feeling than it is
science at the moment. And so I think,
for myself, we're definitely going
through that moment. And
>> if you were to show me 5 years ago the
systems we have today, I'd go, "Oh yeah,
that's what we're talking about." But
it's just different. It's so different
from anything we ever pictured. And so I
think we need to adjust our mental
models appropriately.
>> So you're not there yet. I'd say I'm
basically 70 to 80% there. So I think
we're quite close. I think it's
extremely clear that we are going to
have AGI within the next couple years,
in a way that is still going to be
jagged, but the floor will just be that
for almost any intellectual task
involving how you use your computer, the
AI will be able to do it. Right now I
have to give a little bit of an
uncertain answer, because it's almost
like an uncertainty principle kind of
thing: you can debate it. For my own
personal definition, I think we're
almost there, and with maybe a little
bit more, we will absolutely be.
>> Okay. Well, we've got to go to a
break, but as long as we're on the way
to the break, I want to let folks
watching at home know that you and I are
going to be talking again June 18th here
in San Francisco at SFJazz. So, I will
put some information if you want to come
join that conversation in the show notes
and I do hope you sign up. All right,
we'll be back right after this. And
we're back here on Big Technology
Podcast with OpenAI co-founder and
president Greg Brockman. Greg, let me
just ask you what happened in December
2025, because it seems like it was an
inflection point where this whole idea
of letting the machine code for hours
uninterrupted went from theory to a
moment where everyone said, I think I
can trust this to keep going for a
while. So what exactly happened? So, new
model releases really went from the AI
being able to do like 20% of your tasks
to like 80%.
And that was this massive shift, because
it went from being, yeah, it's a nice
thing to do, to you absolutely need to
retool your workflow around
these AIs. And for myself, I've very
much had this moment. I have a test
prompt that I've been using for years:
build a website for me. I'd actually
built this website back when I was
learning to code; it took me months.
Over the course of 2025, it used to take
like four hours and a bunch of different
prompts to get it right. In December,
one shot, just ask the AI one time, and
it produced it and did a great job.
>> So how did those models make the leap?
Well, a lot of it is about the better
base models.
One thing about OpenAI is that we've
been working on improving our
pre-training technology for quite some
time. And in that moment, we got to see
a little taste of what is going to be
coming for the rest of this year.
But it's also really about not any one
thing. It's about we're constantly
pushing on every single axis of
innovation. And the thing that's very
interesting about these models is in
some ways you get these leaps. In some
ways it's all continuous, right? It
didn't go from 0% to 80%; it went from
20% to 80%. And so in some ways it just
got better. And I think we've actually
seen this improvement continue with
every single point release that we've
had, like between 5.2 and 5.3. One of my
engineers I work with very closely went
from not being able to get it to do the
low-level, hardcore systems engineering
he does to it absolutely doing it. He
gives it a design doc. It actually
implements it, adds metrics,
observability, runs the profiler,
improves it to the point that it's the
exact thing that he was hoping to
produce. And so I think the way to think
about it is it's almost a sort of
slowly, slowly, slowly, then all at
once.
But it is all indicated: what's kind of
working right now, certainly within a
year, sometimes much sooner, is going to
be incredibly reliable.
>> And it surprised you, because I heard
you talking in an interview not long ago
about how Codex, right, this autonomous
coder, was just for software developers.
And earlier in this conversation you
said actually everyone can use this
stuff. Yes.
>> What led you to change your
perspective?
>> Well, I think I'd been focusing on
Codex, and it's got the code in it,
right?
thinking about people within OpenAI
because many of us are software
engineers building for ourselves. It's
very natural to think that way. But as
this technology has been progressing,
we've started to realize that the
underlying technology we produced is
mostly not about code at all. It's
mostly about solving problems. It's
mostly about being able to manage
context and harnesses and think about
how an AI should integrate and do work.
And that means, even for code, suddenly
anyone can have access, because you can
manage something that's going to go do
work, right? If you have a vision, you
have something you want to accomplish,
you can describe your intent, and the AI
can execute, can get that done. But then
it also starts to be, why am I just
focused on coding? There's so much very
mechanical skill associated with Excel
spreadsheets, with presentations. And
the AI has the context, it has the raw
intelligence now, to be able to do these
things at a great level. So if we can
just make it more accessible, it
suddenly goes from Codex is for coders
to Codex is for everyone.
>> And soon
after this moment where we saw all this
improvement, there was another
phenomenon in Silicon Valley, which was
OpenClaw, right? And maybe it's the
broader tech community, where people
started to trust it in ways that you
suggested: giving an AI bot access to
their desktop, or getting a Mac Mini and
giving it access to their mail and
calendar and their files, and then just
kind of letting it go run their life.
And then OpenAI brought the founder of
OpenClaw in house. So you've talked a
little bit about the AI as something
that will help run your life for you, in
a way. Is that the vision behind
bringing the OpenClaw team in house?
Well, I'd say
the core thing about this technology is
that figuring out how it's useful, how
people want to use it, what is the
vision for agents, how is it going to
slot in people's lives? That is a hard
problem. And one thing I've seen across
many generations of this technology is
that the people who really lean in, who
have a lot of curiosity, who have a lot
of vision, that's a real skill, and a
very valuable one in this new economy
that is emerging. And Peter, who is the
OpenClaw founder, is I think someone
who's got incredible vision, incredible
creativity. And so to some extent it's
about the specific technology, but to
some extent it's not at all. It's really
about how we take these capabilities and
figure out how they slot into people's
lives. And so as a technologist it's
very exciting, but as someone who is
focused on bringing utility to people,
that's something that we are doubling
down on and investing in quite a lot.
>> You had a pretty interesting quote
about this recently, talking about
getting these autonomous AI agents to
work on your behalf. You said, when you
do it, you become this CEO of a fleet of
hundreds of thousands of agents that are
completing your objectives, your goals,
your vision, and you're not in the weeds
on exactly how different things are
solved. And in some ways, this new way
of work can make you feel like you're
losing your pulse on the problem. Is
that good?
>> I think
that there's a mixed bag. And so I think
that what we need to do is acknowledge
the strengths of what these tools can
deliver and mitigate the weaknesses. And
so giving people leverage, agency,
making it so that if you have a vision,
something you want to accomplish, that
you can have a fleet of agents that will
go do it for you. But if you think about
how the world works, at the end of the
day, there's an accountable party,
right? If you're trying to build a
website and your agent messes it up and
your user is affected, it's not really
the agent's fault, it's your fault. And
so, you need to care.
And I think that for people to use these
tools, right, you need to realize that
human agency, human accountability,
that's a core part of the system, how
the human uses the AI, that's something
that is deeply fundamental. And so I
think the important thing is that as a
user of these agents and we do this
within OpenAI, you cannot abdicate
responsibility. You cannot just say, ah,
the AI is just going to do stuff.
>> Of course, but you said you feel
you're losing your pulse on the problem
itself. That's different than the
accountability layer.
>> Well, to me they actually are linked
together, because the point is that if
you're a CEO and you're too far from the
details, right, if you're running this
company, running this team, and you've
lost your finger on the pulse, that's
not going to lead to great results. And
so the point I was trying to make there
is not that it's a desirable thing for
humans to not know what's going on.
There are some details you can trust
will be handled: if you're working with
a general contractor to build a house,
there's a bunch of details you probably
don't need to worry about, because you
can trust they'll be taken care of. But
at the end of the day, if there are
details that are wrong, you should care.
You should be aware. And so this is, I
think, an important nuance: you cannot
just blindly say, I'm okay with losing
my finger on the pulse. We need to lean
in and say, I need to keep it there to
really understand the strengths and
weaknesses. And as you disengage from
some of these details, these lower-level
mechanical things, you should do it
because you have built trust that the
system will do a good job.
>> One
last question about the models. You've
talked a little bit about the evolutions
the models have gone through:
pre-training and fine-tuning,
reinforcement learning that gets them
more equipped to solve problems step by
step and go out on the internet and do
things. And now we're in this moment
where the models have learned, through
that process, to use tools. And correct
me if I'm wrong on this one. What is
next in that progression? Well, I think
that the
world that we're in is one of this
increasing capability and depth of what
the machine can do. Some of this is,
we've got this tool use, but now we also
need to actually build really great
tools. You think about something like
computer use: an AI that can actually
use a desktop is really able to do
anything that you can do. But we also
have to build a little for the machine,
to think about how enterprise
credentialing works, how audit trails
and observability work. So there's a lot
of technology to build to catch up with
what the core model capability is. And I
think the overall direction of travel
includes things like a really great
speech interface, so you can just talk
to your computer naturally, just as
natural as this conversation,
and it understands you. It does what you
need. It has good advice. It's able to
surface: I've been working on this
thing, I have a problem. You wake up in
the morning, and it says, here's your
daily report of how much progress your
agents made overnight. Maybe it's
running a business for you, which I
think is going to be a huge application
of this technology; the democratization
of entrepreneurship is absolutely
coming. And it'll say, here are these
problems. There's this customer that's
upset; they want to talk to a real
human; you should go talk to them. All
of that's going to
happen. And then I think the raising of
the ceiling of ambition, of the
challenges humanity can solve, is also a
next step for this technology, and we're
seeing the leading edges of it.
The thing that I am just very excited to
see: if you remember AlphaGo move 37,
right, this move that no human ever
would have come up with, it was
creative, and it changed humanity's
understanding of the game. That is going
to happen in every single domain. It
will happen in science, in math, in
physics, in chemistry. It's going to
happen in materials science. It's going
to happen in biology. It's going to
happen in healthcare, drug discovery.
But it may also even happen in
literature, in poetry, in a bunch of
other fields. It's going to unlock human
creative understanding and ideation in
ways we can't imagine right now.
>> Why do you think that hasn't happened
yet, given how strong you say the models
are? Well, I think that there is an
overhang between what the models are
capable of and how people are using
them. It's almost our understanding of
what is in these models, and that's
something that I think is still
emerging. So I think that even with no
further progress, there's still a
massive shift that will happen. The
economy being powered by compute and AI
is still going to happen. But there's
also something where what we've gotten
very good at is training models on tasks
that can be measured. What we started
with was math problems, programming
problems, where you have a perfect
verifier. And a lot of the progress in
bringing us to more open-ended problems
has been expanding the space of what can
be graded. And the AI itself can really
help with that: if the AI is smart and
understands things, you can give it a
rubric for how well a task went.
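The verifier-versus-rubric distinction he's drawing can be sketched roughly like this (a toy illustration; the function names, weights, and criteria are invented for the example, not anything from OpenAI's actual training stack):

```python
# A perfect verifier: a math task with a single checkable answer.
def verify_math(answer: int) -> float:
    # Reward is exact: 1.0 if the answer is right, 0.0 otherwise.
    return 1.0 if answer == 42 else 0.0

# A rubric: open-ended work scored along named criteria. In practice a
# capable model would assign the per-criterion scores; here they are
# hypothetical inputs in [0, 1].
def rubric_score(scores: dict) -> float:
    weights = {"imagery": 0.4, "coherence": 0.3, "originality": 0.3}
    # Weighted average of per-criterion scores.
    return sum(weights[c] * scores.get(c, 0.0) for c in weights)

print(verify_math(42))  # 1.0
print(round(rubric_score({"imagery": 0.9,
                          "coherence": 0.8,
                          "originality": 0.5}), 2))  # 0.75
```

The verifier gives a crisp training signal; a rubric extends that signal, imperfectly, to tasks with no single right answer.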
And of course, for things like creative
writing, like is this a good poem?
That's a much harder thing to grade. And
so we've had less ability to teach the
AI and for it to experience and try
things out. But all of that is changing
and something that we we have line of
sight for. You know, it's interesting
reflecting on that Peter Teal has
mentioned, pretty sure that's what he
said, that if you're a math person,
you're probably in deeper trouble in
terms of these models coming from what
you do than if you're awards person. And
you were a member of math club back in
the day. Uh are you not concerned about
that? Well, I think that it's much
easier to see what we lose than what we
gain, right? Because we have a deep
understanding of, I used to do things
this way. I used to do this math
competition. Now the AI can do the math
competition. But it was never really
about the math competition. Right.
>> Right. That's not really the thing
that drives humanity. And if you think
about the way we do work right now,
there's a box, and you type behind a
box. We weren't doing that 100 years
ago. That's not natural. It's this
digital world that we all got kind of
sucked into. That's not really what
being human is about. Being a human is
about being here, being present,
connecting with other humans. And I
think what we're going to see is that AI
is going to free up so much time to
increase human connection, to build more
bonds across people.
>> And that's something I'm extremely
excited about.
>> Okay. And then as you shift to these
more agentic use cases, there's been
discussion about whether the bigger
training runs really need to happen,
especially if you get the model good
enough; then you could sort of let it go
out in the world and effectively get
much of the uplift in areas that aren't
the pre-training, which is what these
big data centers are needed for. You
work on scaling here, you lead that
process. What do you think about that
argument?
>> Well, I think it misses something
very important about how the technology
development goes, because it is
absolutely the case that every single
step of the model production pipeline
multiplies, and so you want to improve
all of them. And the thing that we see
is, when we improve the pre-training, it
makes all the other steps much easier.
And it makes sense, because the model is
able to learn faster. Because the model
is already more capable to start, when
it's trying out different ideas and
learning from its own mistakes, that
process is just faster. It needs to make
fewer mistakes. And so I think that the
big shift has been from thinking of it
as just training this cerebral system on
its own and making it bigger and bigger,
to it's also about trying things out,
understanding how people are using it in
the real world, and connecting that back
into your training. But it doesn't
remove the value and the importance of
continuing that research. And
the thing that I think has also shifted
is we used to really just focus on the
raw pre-training capability but not
think as much about the inference
ability. And that's been a big change
over the past 24 months: to realize that
it's a balance. You can have this model
that has all those great properties in
the base, but then you really need it to
be inferencible, because you need to do
reinforcement learning on it, and you
need to serve it to the world. And that
means that you don't necessarily go as
big as you possibly could, because you
also really think about all this
downstream use, and you want the thing
that has the best intelligence relative
to cost, and to optimize those two
things together.
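That intelligence-versus-cost balance can be sketched as a toy selection problem (the model names, capability scores, and costs below are invented for illustration):

```python
# Toy model selection: balance capability against serving cost instead
# of simply taking the biggest model. (score, cost) pairs are invented.
models = {
    "small":  (0.70, 1.0),
    "medium": (0.85, 2.0),
    "large":  (0.92, 6.0),
}

# Best raw intelligence per unit of serving cost.
best = max(models, key=lambda n: models[n][0] / models[n][1])
print(best)  # small

# With a minimum-capability floor, the tradeoff shifts toward a model
# that is both smart enough and still cheap to serve.
eligible = {n: v for n, v in models.items() if v[0] >= 0.80}
best_floored = max(eligible, key=lambda n: eligible[n][0] / eligible[n][1])
print(best_floored)  # medium
```

The design point is the one he describes: you rarely want the absolute biggest model, because downstream reinforcement learning and serving both pay its inference cost.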
>> Do you still need the Nvidia GPU if
things move mostly to inference?
>> We absolutely do. Yes.
>> Why? Well, there's multiple reasons.
But one is that even as the balance of
inference versus training changes, you
cannot get massive-scale training any
other way besides this concentration of
compute on one problem. And so the thing
that I think will happen is the
deployment footprint goes up quite a
lot, but sometimes you'll have a
particular massive pre-training run and
you really want to concentrate a bunch
of compute there. I also think that the
NVIDIA team is just incredible and does
really amazing work, and so we partner
very closely with them.
>> Isn't there going to be a time where
people just say, we've pre-trained
enough, the models are smart enough? I
think that's a little bit like saying
once humanity has solved all the
problems in front of us, then maybe we
can say that, right? But the ceiling of
what we want to accomplish, there's just
so much ambition that maybe over the
past 50 years or so we've just sort of
backed off from it. You think about even
problems that seem very clear, like can
we have health care for everyone that's
actually preventative, not just
targeting when people have a problem,
but really thinking about lifestyle, how
to help people early, and detecting
potential diseases before they happen.
That's a problem that I think we can
actually achieve through more
intelligent models. And there's probably
some level where you can totally solve
that problem and then you say, well, do
I need a model that's two times smarter?
But there's other problems that are
going to demand that.
>> Let's talk about
the math of building these data centers.
You raised $110 billion earlier this
year. What's the math behind that? Does
that money go right into data centers?
How do you think about how you're going
to return that money to investors? Talk
about those calculations. Yeah. So, I
think it's as simple as the massive
expense we see in front of us is
compute. But you can think of compute
not as a cost center, but as a revenue
center. Think of it a little bit like
hiring salespeople, right? How many
salespeople do you want to hire? As
long as you can sell your product, as
long as you have a scalable way to sell
that product, then the more salespeople
you have, the more revenue you will
make. And I think the world that we're
in is we have continually found we
cannot build compute fast enough to keep
up with demand. And I see this very
concretely, right? Right now, we have to
make very painful decisions about what
we're launching, about where the compute
goes, and that I think we're going to
experience this more broadly within the
economy as we shift to this AI powered
economy. question will be what problems
are going to get that massive compute?
How do you scale so everyone can have a
personal agent running for them? How can
everyone be using systems like codecs?
Like there just isn't enough compute in
the world to be able to do that.
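The salespeople analogy reduces to a simple marginal calculation, sketched here with invented numbers (the costs and revenue figures are hypothetical, purely to show the shape of the argument):

```python
# Toy version of "compute as a revenue center": if each GPU you deploy
# serves paying demand, adding compute adds profit as long as marginal
# revenue per GPU exceeds its marginal cost. All numbers are invented.
GPU_COST_PER_YEAR = 30_000.0         # hypothetical all-in annual cost
REVENUE_PER_GPU_PER_YEAR = 45_000.0  # hypothetical demand it can serve

def annual_profit(gpus: int) -> float:
    # Profit scales linearly with deployed GPUs while demand holds up.
    return gpus * (REVENUE_PER_GPU_PER_YEAR - GPU_COST_PER_YEAR)

print(annual_profit(10_000))  # 150000000.0
```

Under these assumptions, buying more compute is always better, which is the "buy all of it" logic; the bet only breaks if demand stops outrunning supply.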
And so we're trying to get ahead of that
problem, but it is a new category,
right?
>> So you're doing it with real
confidence. I mean, sums of money the
world has never seen put towards a
project like this. When you're building
a new category, how do you do it with
certainty that it's going to work out?
>> Well, I think there's several components
that go into it. So, the first is there
is historical precedent at this point.
From the moment we launched ChatGPT, I
remember talking with my team having
this exact conversation where they said,
"How much compute should we buy?" I
said, "All of it." They said, "No, no,
no. Really, how much compute should we
buy?" I said, "No matter how much we try
to build, I know we're not going to be
able to keep up with the demand." And
that has been true. And that has been
true every year since then. And the
challenge is that these compute
purchases, you have to lock them in 18
months, sometimes 24 months, sometimes
longer
in advance of them actually being
delivered, which means you really need
to project forward. And I think that the
world that we're moving towards is one
where to date most of our revenue has
come from consumer subscriptions and
that will always be very important.
There's other revenue streams we have
emerging as well. But the the
opportunity that clearly is emerging now
is knowledge work. And we're seeing this
very concretely across every single
enterprise realizing this technology, it
actually really works and to be
competitive, they need to adopt it. And
you can see this organic energy of all
these software engineers using it. And
then we're starting to see the
percolation of people using it for
various knowledge work inside of the
enterprises. And the willingness to pay
and the revenue growth that you're
seeing in this industry is very clear,
right? It's very clearly happening right
now. And you just project that forward.
And one thing we get to see that maybe
the world doesn't is the line of sight
to how these models will improve. And
all of this together says
that the economy, which is a massive
thing, right? The economy is just so
large it's almost incomprehensible. The
highest-order bit on how this economy
grows from here will be about AI: how
well you can leverage AI, and the
computational power you have available
to power it.
>> You said consumer subscriptions are
your biggest source of revenue right
now. Is the projection that that will
flip, and that business will be the
biggest source?
>> I think it is very clear how quickly
this is growing, and it's not just
enterprise, because I think enterprise
is also changing what it means: really,
it's people using it for productive
knowledge work, those kinds of things.
And as we think about pricing, one
thing, if you look at how Codex works
right now, is that if you have a ChatGPT
consumer subscription, you can use
Codex. And so I think it's not going to
be as well defined as there's this
category, that category. I think it will
really be about you as a user having,
again just like your laptop, this portal
to the digital world, and that is what
the revenue will fundamentally come
from.
>> Dario said, and I think it was about
you: there are some players who are
YOLOing, who pull the risk dial too far,
and I'm very concerned. I think he's
referencing your infrastructure bets
there. What do you think about that? No,
I just disagree. I think we've been very
thoughtful and very much seeing what is
coming, and I think that we will see
even this year how everyone who is
participating is going to be
compute-strapped. I think we have been
the most forward in realizing that this
is coming, and in building in
anticipation of how this technology is
playing out. And what we have seen is
that other players kind of realized that
probably late last year, and started
scrambling to see what compute is
available, and there really wasn't any.
And so it's very easy to make statements
like that, but I think that everyone has
kind of realized that this technology,
it's working, it's here, it's real,
right? Software engineering is just the
first example of it, and we are
fundamentally limited by the
computational power available.
>> And he also said that if he's off with
his prediction by a little bit, then his
company could potentially go bankrupt.
Is that the same case for you?
>> I think that, look, there are actually
more degrees of off-ramp here. Okay, if
you start to worry about the downside
case, which I think is a very reasonable
question, right? To some extent, what I
think the bet is on isn't about any one
company. It's really about the sector.
It's really about: do you believe this
technology can be produced and can
deliver this massive amount of value
that we see coming? And again, I'll
point to proof points, right? Software
engineering: the degree to which it's
different, if you're not a software
engineer and you haven't tried Codex, is
just hard to describe. And I think that
people will experience it very quickly.
Six months ago, I think that for us, we
saw this internally, but there were
fewer proof points out there. Now there
are proof points out there. Six months
from now, I think that everyone will
feel it, and I think that we will all
feel the pain of: there's an awesome
model, and there's just no availability,
because there's not enough compute.
>> Yeah. But as we were looking at our
predictions for 2026 on this show, we
had a conversation towards the end of
last year where Ranjan Roy, who was on
with us, said 2026 is going to be the
year where everybody uses agents, and I
said, "Yeah, well, I'll believe that
when I see it." And I'm using the
agents. So, here we are.
>> Here we go. Um, what do you use it for?
>> I use it to build tools internally for
the people I work with, to get on the
same page about when videos are coming
and what the thumbnails need to look
like. And I'm also integrating things
from YouTube, so we can basically rank
how the videos are doing based off of
thumbnail, in a custom-built piece of
software that I never would have paid
for. And that's one of the things that I
think is interesting about this moment:
software scales, it's used by the
masses, but therefore there are going to
be so many things that are not made for
you. And maybe what this does is allow
us to interact with software in a way
that's much more natural.
>> I think that is the key. And again, I
just think a lot about the fact that the
way we've built computers has really
pulled us into this digital world. You
think about how much time you just spend
scrolling through your phone.
>> Yep.
>> Right. The amount of time that you spend
clicking different buttons and trying to
connect this thing to that thing. Why do
you have to do that? Instead, the AI is
about bringing the machine closer to
you, personalizing to you, understanding
what you're trying to accomplish. We
have all this pop culture of computers
you can talk to that go and do stuff for
you, and it's starting to become real.
It's starting to become the thing that
you can actually do. And I think the
amazingness of that is something where
you just have to try it to really
understand. So I definitely think it's a
very special moment we're in.
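[Editor's note: as a rough illustration of the kind of throwaway, agent-built tool Alex describes above, here is a minimal sketch that ranks videos by a thumbnail-performance proxy. The record shape loosely mirrors what the YouTube Data API v3 `videos.list` endpoint returns for `part="statistics,snippet"`; the `days_since_publish` field and the scoring logic are assumptions for illustration, not the actual tool.]

```python
# Illustrative sketch (not the actual tool): rank videos by how well their
# thumbnails seem to perform, using public stats in the shape returned by
# the YouTube Data API v3 videos.list endpoint.

def views_per_day(video: dict) -> float:
    """Public stats don't expose impressions or click-through rate, so use
    views per day live as a rough proxy for thumbnail performance."""
    views = int(video["statistics"].get("viewCount", 0))
    days_live = max(video["days_since_publish"], 1)  # avoid division by zero
    return views / days_live

def rank_by_thumbnail(videos: list[dict]) -> list[dict]:
    """Sort videos best-performing first."""
    return sorted(videos, key=views_per_day, reverse=True)

videos = [
    {"snippet": {"title": "Interview A"}, "statistics": {"viewCount": "9000"},
     "days_since_publish": 30},   # 300 views/day
    {"snippet": {"title": "Interview B"}, "statistics": {"viewCount": "4000"},
     "days_since_publish": 5},    # 800 views/day
]
ranked = rank_by_thumbnail(videos)
print([v["snippet"]["title"] for v in ranked])  # Interview B ranks first
```

Normalizing by days live keeps older videos with large accumulated view counts from dominating newer uploads; a real version would pull the stats live via the API rather than hard-coding them.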
>> Yeah. Then I want to know: why is AI so
unpopular with the public? YouGov, for
instance, says three times as many
Americans expect the effects of AI on
society to be negative as expect it to
be positive. What do you think the
reasoning is behind that? And are you
concerned about AI's brand?
>> Well, I think that there is something
that we need to show the country of why
AI is good for them, not just for the
broad economy, for growing the GDP and
things like that, but how does it help
them in their lives? And I think there
are actually many very concrete stories
that I hear every day. For example,
there's a family whose child was having
some headaches, some medical issues, and
was denied an MRI. They researched the
symptoms with ChatGPT and realized that
they could make an argument to insurance
to actually get the MRI. They did that.
Turns out he had a brain tumor. They
were able to save his life because they
used ChatGPT to get access to the right
information. And that's just one story.
There are so many more just like that,
of people whose lives have been deeply,
profoundly improved or saved through
their use of this technology and through
partnering with the technology in a real
way. And so that is a story I don't
think gets out there. I think this is
happening in so many people's lives, but
somehow the story is not yet told. And
one thing I noticed is that there's
certainly a lot of pop culture from, you
know, the '90s, from the historical
context that we have, that's very
negative on AI, that worries about what
could go wrong. But when people actually
use AI, they find utility in it, they
find value in it.
And so I think that I am definitely very
concerned about us not having
successfully helped people understand
why this technology wave is something
that will improve their lives, that will
help improve human connection. And that
is something that's a big focus in my
mind. And if you think about the
opportunity here and why AI is so
important, I think this will be the
source of economic and national security
going forward. I think it's going to be
about national competitiveness and that
there are other countries like China
where AI pulls in the exact opposite
direction. And so, yes, I think it's
very, very important that we acknowledge
that and really understand how to get
the benefits for everyone.
>> But we also are in a time that's
politically unstable. There are concerns
about work. Every time I speak with
someone about AI, they ask, "How long do
I have left to work in my job?" And when
I think about the data centers, the
polling is even worse than on AI in
general. This is from Pew: far more
people say data centers are mostly bad
than good for the environment, home
energy costs, and the quality of life of
those nearby. So we are at this moment
where good jobs are tough to come by,
and people see these data centers come
into their communities and they say: not
good for the environment, home energy,
and quality of life. I mean, are they
wrong?
>> Well, I think there's definitely a lot
of misinformation about data centers. A
good example is water usage. If you
actually look at our Abilene facility,
which is one of, if not the, biggest
supercomputers in the world, the amount
of water it uses is about the same as a
household's over the course of a year,
right? So it's really negligible water
use. And yet there's a lot of
misinformation that these data centers
consume a lot. Similarly, on power, we
have a commitment that we are going to
pay our own way, to not drive up energy
prices for people. That's something
where, as an industry, people are now
making these commitments, because it is
very important that we improve local
communities. And when we build data
centers, we really try to go into those
local communities, understand what's
happening on the ground, and see how we
can help. There's tax revenue associated
with these data centers, and I think
there are jobs that they create. There's
a lot of benefits that come from them.
And so that's one thing where it is
about how we show up, and that's a
responsibility that we take very
seriously.
>> Okay. But also, if their power costs
are not going to go up, you have to
bring in power, which means potentially
more pollution. Is that not a concern?
>> Well, I think there's much more nuance
in terms of not driving up energy costs.
If you look at how the grid works today,
there's actually a lot of just stranded
power, power that is there that is not
being utilized, and you need to upgrade
the transmission systems. And again,
that's something where putting that on
us rather than putting it on the
ratepayers is very important, right?
There are lots of places that have clean
power that is actually being
underutilized and just being kind of
thrown away. And so there's a lot of
benefit that comes from having real
reasons for the grid, which is aging and
obsolete in many places, to upgrade. And
that's something that actually has real
benefits to the community. We've seen,
for example, in North Dakota, that
people's rates have gone down because a
data center has shown up and has helped
with improving the utilities for
everyone.
>> All right, one last question on the
politics. You gave $25 million to MAGA
Inc., which is a pro-Trump PAC. You
spoke with Wired about it, and you said,
"Anything I can do to support this
technology benefiting everyone is a
thing I will do," even if that makes you
a one-issue voter or one-issue political
supporter. I'll share the one thing I
always wonder when it comes to the
one-issue camp: ultimately, doesn't a
stronger country make your goals much
more feasible, even if a candidate isn't
fully in support of what you're doing?
Shouldn't a stronger country, no matter
what, be the north star of any political
activity? And if that's the case, is
that part of the donation?
>> So, the way I look at this: my wife
and I made that donation, and we've
donated to bipartisan super PACs as
well. I think this technology is one
where it's coming quickly, within the
next couple years. It's really going to
transform everything, going to be the
underpinning of the economy, and it's
not popular. We really want to support
politicians who really lean into this
technology, who really engage with it.
So I think that, certainly, this
technology is about uplifting us as a
country. And, you know, I am a one-issue
donor. This is something where I feel
like I have a unique contribution to
make, but it's really about expressing
support for this technology as something
that we should be leaning into as a
country.
>> What would you tell someone who's
scared of AI? If you have a moment here
where you can speak directly to them:
they might think it's going to take my
job, it's going to pollute my community,
it will change the world too fast.
What's your message to them?
>> The number one thing is: try the tools,
because only by experiencing the AI that
exists now will what it can do for you
really hit home. And we see so much
opportunity and potential and
empowerment coming from this technology
today. You talked a little bit about
what you can build now,
>> right?
>> People who have never built a website
before can build a website. If you want
to start a small business and you're
thinking about all the backend
processing and how to actually manage
it, all those things, the AI can help
you with that right now. And so in your
life, thinking about how it can help you
with your health, how it can help your
loved ones, how it can help you make
money, how it can help you save money,
these are all going to be on the table.
And I think it is much easier to see
what's going to change than it is to see
what you're going to gain. But I think
it's worth giving it a fair shot and
really trying to understand both sides
of the equation.
>> That's the one thing that doesn't get
talked about in the polling data, by the
way. The people that have seen it used
but haven't tried it themselves, or the
people that have never tried AI, are
much more negative. Then you get to the
power users, or even people that use it
casually, and they're generally pretty
positive about the technology.
>> Yeah. For myself, we've been thinking
about this technology for a long time.
What I see playing out in front of us is
more amazing, more beneficial, and going
to have a much more positive impact than
we ever imagined.
>> So, last one for you. How would you
advise someone to prepare themselves for
the future? And it has to be more than
just getting the tools. I mean, I have
friends who come to me and say, I don't
know what's going to happen with my job
or the world, and I just need to know
what to do with this.
>> I do think that the number one thing
is understanding the technology. One
thing we've seen is that the people who
get the most out of the technology
approach it with curiosity: really try
it in your workflows, really get over
that initial hump of, you have a blank
box, what do I do with a blank box?
Right? Really develop this sense of
agency, this sense of: I can be the
manager, I can set the direction, I can
delegate, I can provide oversight.
Really develop that skill, because that
is going to be fundamental. We're
building this technology for humans: to
help humans foster more human
connection, and for humans to be able to
spend more time doing what they want.
And so the question is, well, what do
you want? Really trying to crystallize
that, and trying to realize it with the
help of this technology, is going to be
the most important thing.
>> Greg, thanks so much for coming on the
show.
>> Thank you for having me.
>> All right, thank you everybody for
listening and watching, and we'll see
you next time on Big Technology Podcast.