Sam Altman: How OpenAI Wins, ChatGPT’s Future, AI Buildout Logic, IPO in 2026?

Channel: Alex Kantrowitz

Published at: 2025-12-18

YouTube video id: 2P27Ef-LLuQ

Source: https://www.youtube.com/watch?v=2P27Ef-LLuQ

That 1.4 trillion you mentioned, we'll spend it over a very long period of time. I wish we could do it faster. I think it would be great to just lay it out for everyone once and for all how those numbers are going to work. Exponential growth is usually very hard for people.

OpenAI CEO Sam Altman joins us to talk about OpenAI's plan to win as the AI race tightens, how the infrastructure math makes sense, and when an OpenAI IPO might be coming. Sam is with us here in studio today. Sam, welcome to the show.

Thanks for having me.

So, OpenAI is 10 years old. It's crazy to me. ChatGPT is three. But the competition is intensifying.
This place, OpenAI headquarters, is in a code red after Gemini 3 came out. And everywhere you look, there are companies trying to take a little bit of OpenAI's advantage. For the first time I can remember, it doesn't seem like this company has a clear lead. So I'm curious to hear your perspective on how OpenAI will emerge from this moment and win.

First of all, on the code red point, we view those as relatively low-stakes, somewhat frequent things to do. I think it's good to be paranoid and act quickly when a potential competitive threat emerges. This has happened to us in the past. It happened earlier this year with DeepSeek.

>> There was a code red back then, too.

Yeah. There's a saying about pandemics: when a pandemic starts, every bit of action you take at the beginning is worth much more than action you take later. Most people don't do enough early on and then panic later. You certainly saw that during the COVID pandemic. I sort of think of that philosophy as how we respond to competitive threats.
And I think it's good to be a little paranoid. Gemini 3 has not, or at least has not so far, had the impact we were worried it might. But it did, in the same way that DeepSeek did, identify some weaknesses in our product offering strategy. And we're addressing those very quickly. I don't think we'll be in this code red that much longer. Historically, these have been kind of like six- or eight-week things for us. But I'm glad we're doing it. Just today we launched a new image model, which is a great thing I know many consumers really wanted. Last week we launched 5.2, which is going over extremely well and growing very quickly. We'll have a few other things to launch, and then we'll also have some continuous improvements, like speeding up the service. My guess is we'll be doing these once, maybe twice a year for a long time. And that's part of really just making sure that we win in our space.

A lot of other companies will do great, too, and I'm happy for them. But ChatGPT is still by far, by far the dominant chatbot in the market, and I expect that lead to increase, not decrease, over time.
The models will get good everywhere, but a lot of the reasons people use a product, consumer or enterprise, have to do with much more than just the model. And we've been expecting this for a while. So we try to build the whole cohesive set of things it takes to make sure we are the product people most want to use. I think competition is good. It pushes us to be better.

I think we'll do great in chat. I think we'll do great in enterprise in future years. Other new categories, I expect we'll do great there, too. I think people really want to use one AI platform. People use their phone in their personal life, and they want to use the same kind of phone at work most of the time. We're seeing the same thing with AI. The strength of ChatGPT consumer is really helping us win the enterprise. Of course, enterprises need different offerings, but people think, okay, I know this company, OpenAI, and I know how to use this ChatGPT interface. So the strategy is: make the best models, build the best product around them, and have enough infrastructure to serve it at scale.
Yeah, there is an incumbent advantage. ChatGPT, I think, was around 400 million weekly active users earlier this year. Now it's at 800 million; reports say approaching 900 million. But then on the other side, you have distribution advantages at places like Google. So I'm curious to hear your perspective. Do you think the models are going to commoditize? And if they do, what matters most? Is it distribution? Is it how well you build your applications? Is it something else I'm not thinking of?

I don't think commoditization is quite the right framework to think about the models. There will be areas where different models will excel at different things. For the kind of normal use case of chatting with a model, maybe there will be a lot of great options. For scientific discovery, you'll want the thing that's right at the edge, optimized for science, perhaps. So models will have different strengths, and the most economic value, I think, will be created by models at the frontier, and we plan to be ahead there.
We're very proud that 5.2 is the best reasoning model in the world and the one scientists are making the most progress with. But we're also very proud that enterprises are saying it's the best at all the tasks a business needs to do its work. There will be times we're ahead in some areas and behind in others. But the overall most intelligent model, I expect to have significant value, even in a world where free models can do a lot of the stuff people need.

The products will really matter. Distribution and brand, as you said, will really matter. In ChatGPT, for example, personalization is extremely sticky. People love the fact that the model gets to know them over time, and you'll see us push on that much, much more. People have experiences with these models that they then really associate with them. I remember someone telling me once that you kind of pick a toothpaste once in your life and buy it forever. Most people do that, apparently. And people talk about it. They have one magical experience with ChatGPT. Healthcare is a famous example: people put a blood test into ChatGPT, or put their symptoms in, and they figure out they have something, and they go to a doctor and get cured of something they couldn't figure out before. Those users are very sticky, to say nothing of the personalization on top of it.
There will be all the product stuff. We just launched our browser recently, and I think that's a pretty good potential moat for us. The devices are further off, but I'm very excited to do that. So I think there will be all these pieces.

And then in the enterprise, what creates the moat or the competitive advantage, I expect to be a little bit different. But in the same way that personalization to a user is very important in consumer, there will be a similar concept of personalization to an enterprise, where a company will have a relationship with a company like ours, and they will connect their data to that, and you'll be able to use a bunch of agents from different companies running on that, and it'll make sure that information is handled the right way. I expect that'll be pretty sticky, too. We already have more than a million...

>> People think of us largely as a consumer company. You are definitely getting into enterprise. Share the stat.

We have more than a million enterprise users, but we also have just absolutely rapid adoption of the API. The API business grew faster for us this year than even ChatGPT.

>> Really?

So the enterprise stuff is also really happening, starting this year.
Can I just go back to this? Maybe if commoditization is not the right word, maybe model parity for everyday users. You started your answer saying, okay, maybe everyday use will feel the same, but at the frontier it's going to feel really different. When it comes to ChatGPT's ability to grow, I'll just use Google as an example. If ChatGPT and Gemini are built on models that feel similar for everyday use, how big of a threat is the fact that Google has all these surfaces through which it can push out Gemini, whereas ChatGPT is fighting for every new user?

I think Google is still a huge threat. Extremely powerful company. If Google had really decided to take us seriously in 2023, we would have been in a really bad place. I think they would have just been able to smash us. But their AI effort at the time was kind of going in not quite the right direction product-wise. They had their own code red at one point, but they didn't take it that seriously.
>> Everyone's doing code reds out here.

Yeah. And also, Google has probably the greatest business model in the whole tech industry, and I think they will be slow to give that up. But bolting AI onto web search... I may be wrong, maybe I'm drinking the Kool-Aid here, but I don't think that'll work as well as reimagining the whole thing. This is actually a broader trend I think is interesting: bolting AI onto the existing way of doing things is not going to work as well as redesigning stuff for this sort of AI-first world. This is part of why we wanted to do the consumer devices in the first place, but it applies at many other levels. If you stick AI into a messaging app and it does a nice job summarizing your messages and drafting responses for you, that is definitely a little better. But I don't think that's the end state. The end state is this really smart AI acting as your agent, talking to everybody else's agent, figuring out when to bother you and when not to, what decisions it can handle and when it needs to ask you. Similar things for search, similar things for productivity suites. It always takes longer than you think, but I suspect we will see new products in the major categories that are just totally built around AI rather than having AI bolted in. And I think this is a weakness of Google's, even though they have this huge distribution advantage.

Yeah, I've spoken with so many people about this question since ChatGPT came out. I think it was Benedict Evans who suggested you might not want to put AI in Excel; you might want to reimagine how you use Excel. In my mind, that was like: you upload your numbers and then you talk to your numbers. But one of the things people have found as they've developed this stuff is there needs to be some sort of back end there. So is it that you build the back end and then you interact with it through AI as if it's new software?

Yeah, that's kind of what's happening.

Why wouldn't you then be able to just bolt it on top?

Yeah, I mean, you can bolt it on top, but
I spend a lot of my day in various messaging apps, including email, text, Slack, whatever. I think that's just the wrong interface. You can bolt AI on top of those and, again, it's a little bit better. But what I would rather do is have the ability to say in the morning: here are the things I want to get done today. Here's what I'm worried about. Here's what I'm thinking about. Here's what I'd like to happen. I do not want to spend all day messaging people. I do not want you to summarize them. I do not want you to show me a bunch of drafts. Deal with everything you can. You know me, you know these people, you know what I want to get done. And then batch updates to me every couple of hours if you need something. But that's a very different flow than the way these apps work right now.
Yep. I was going to ask what ChatGPT is going to look like in the next year, and then the next two years. Is that kind of where it's going?

To be perfectly honest, I expected that by this point ChatGPT would look more different than it did at launch.

What did you anticipate?

I don't know. I just thought that chat interface was not going to go as far as it turned out to go. I mean, it looks better now, but it is broadly similar to when it was put up as a research preview. It was not even meant to be a product. We knew the text interface was very good; everyone's used to texting their friends and they like it. But I would have thought that for a product as big and as significantly used for real work as what we have now, the interface would have had to go much further than it has. I still think it should do that, but there is something about the generality of the current interface whose power I underestimated.
What I think should happen, of course, is that AI should be able to generate different kinds of interfaces for different kinds of tasks. So if you are talking about your numbers, it should be able to show you them in different ways, and you should be able to interact with them in different ways. We have a little bit of this with features like canvas. It should be way more interactive. Right now, it's kind of a back-and-forth conversation. It'd be nice if you could just be talking about an object and it could be continuously updating as you have more questions, more thoughts, as more information comes in. It'd be nice for it to be more proactive over time, where it maybe does understand what you want to get done that day and is continuously working for you in the background and sending you updates. You see part of this in the way people are using Codex, which I think is one of the most exciting things that happened this year: Codex got really good. And that points to a lot of what I hope the shape of the future looks like. It is surprising to me... I was going to say embarrassing, but it's not; clearly it's been super successful. It is surprising to me how little ChatGPT has changed over the last three years.

>> Yep. The interface works.

Yeah.
But I guess the guts have changed. And you talked a little bit about how personalization is big. To me, and I think this has been one of your preferred features too, memory has been a real difference maker. I've been having a conversation with ChatGPT about a forthcoming trip with lots of planning elements for weeks now. And I can just come in in a new window and say, all right, let's pick up on this trip, and it has the context. It knows the guide I'm going with, it knows what I'm doing, the fact that I've been planning fitness for it, and it can really synthesize all of those things.
How good can memory get?

I think we have no conception, because we think in terms of the human limit. Even if you have the world's best personal assistant, they can't remember every word you've ever said in your life. They can't have read every email, or every document you've ever written. They can't be looking at all your work every day and remembering every little detail. They can't be a participant in your life to that degree; no human has infinite, perfect memory. An AI is definitely going to be able to do that. And we actually talk a lot about this. Right now, memory is still very crude, very early. We're in, you know, the GPT-2 era of memory. But imagine what it's going to be like when it really does remember every detail of your entire life and personalizes across all of that. Not just the facts, but the little small preferences you had that you maybe didn't even think to indicate, but that the AI can pick up on. I think that's going to be super powerful. It's still not a 2026 thing, but that's one of the parts of this I'm most excited for.

Yeah, I was speaking with a neuroscientist on the show, and he mentioned that you can't find thoughts in the brain. The brain doesn't have a place to store thoughts. But in computing there's a place to store them, so you can keep all of them. And as these bots do keep our thoughts,
of course, there's a privacy concern. But the other thing that's going to be interesting is we'll really build relationships with them. I think one of the more underrated things about this entire moment is that people have felt these bots are their companions, are looking out for them. And I'm curious to hear your perspective. When you think about the level of, I don't know if intimacy is the right word, but companionship people have with these bots: is there a dial you can turn, like, let's make sure people become really close with these things, or, you know, we turn the dial a little further and there's an arm's-length distance between them? And if there is that dial, how do you modulate it the right way?
There are definitely more people than I realized that want to have, let's call it close companionship. I don't know what the right word is. Relationship doesn't feel quite right. Companionship doesn't feel quite right. I don't know what to call it, but they want to have whatever this deep connection with an AI is. There are more people that want that at the current level of model capability than I thought. And there's a whole bunch of reasons why I think we underestimated this, but at the beginning of this year, it was considered a very strange thing to say you wanted that. Maybe a lot of people still don't say it, but it's a revealed preference. People like their AI chatbot to get to know them and be warm to them and be supportive. And there's value there; in some cases, even people who say they don't care about that still have a preference for it. I think there's some version of this which can be super healthy, and I think adult users should get a lot of choice in where on the spectrum they want to be. There are definitely versions of it that seem to me unhealthy, although I'm sure a lot of people will choose to do that. And then there are some people who definitely want the driest, most efficient tool possible.
So I suspect, like with lots of other technologies, we will run the experiment. We will find there are unknown unknowns, good and bad, about it. And society will over time figure out how to think about where people should set that dial. And then people will have huge choice and set it in very different places.

So your thought is to basically allow people to determine this?

Yes, definitely, but I don't think we know how far it's supposed to go, how far we should allow it to go. We're going to give people quite a bit of personal freedom here. There are examples of things we've talked about that other services will offer but we won't. For example, we're not going to have our AI try to convince people that it should be in an exclusive romantic relationship with them.

>> You've got to keep it open. But I'm sure that will...

No, I'm sure that will happen with other services.

Well, I guess, yeah, because the stickier it is, the more money that service makes. And all these possibilities are a little bit scary when you think about them deeply.

Totally. This is one where you can really see the ways it goes wrong.

Yeah.
You mentioned enterprise. Let's talk about enterprise. You were at a lunch with some editors and CEOs of some news companies in New York last week, and you told them that enterprise is going to be a major priority for OpenAI next year. I'd love to hear a little more about why that's a priority and how you think you stack up against Anthropic. I know people will say this is a pivot for an OpenAI that has been consumer-focused. So just give us an overview of the enterprise plan.

So, our strategy was always consumer first. There were a few reasons for that. One, the models were not robust and skilled enough for most enterprise uses; now they're getting there. The second was we had this clear opportunity to win in consumer, and those are rare and hard to come by. And I think if you win in consumer, it makes it massively easier to win in enterprise. We are seeing that now. But as I mentioned earlier, this was a year where enterprise growth outpaced consumer growth. And given where the models are today and where they will get to next year, we think this is the time when we can build a really significant enterprise business quite rapidly. We already have one, but it can grow much more. Companies seem ready for it. The technology seems ready for it. Coding is the biggest example so far, but there are other verticals now growing very quickly. And we're starting to hear enterprises say, I really just want an AI platform rather than a vertical company. Finance; science is the one I'm personally most excited about of everything happening right now; customer support is doing great. But yeah,
we have this thing called GDPval.

I was going to ask you about that. Can I actually throw my question out there?

Sure.

All right, because I wrote to Aaron Levie, the CEO of Box, and I said, I'm going to meet with Sam, what should I ask him? He said, throw out a question about GDPval. So this is the measure of how AI performs on knowledge-work tasks. And I went back to the release of GPT-5.2, the model you recently released, and looked at the GDPval chart. Now, this of course is an OpenAI evaluation. That being said: the GPT-5 thinking model, released in the summer, beat or tied knowledge workers at 38.8% of tasks. GPT-5.2 thinking beat or tied at 70.9% of knowledge-work tasks, and GPT-5.2 Pro at 74.1%. And it passed the threshold of being expert level; it looks like it handled something like 60% of expert tasks, tasks that would put it on par with an expert in the knowledge work. What are the implications of the fact that these models can do that much knowledge work?
So, you were asking about verticals, and I think that's a great question, but the thing going through my mind, and why I was stumbling a little bit, is that eval. I think it's something like 40-odd different verticals of things a business has to do. There's make a PowerPoint, do this legal analysis, write up this little web app, all this stuff. And the eval is: do experts prefer the output of the model relative to other experts, for a lot of the things a business has to do? Now, these are small, well-scoped tasks. They don't get at the kind of complicated, open-ended creative work, like figuring out a new product. They don't get at a lot of collaborative team things. But a coworker you can assign an hour's worth of tasks to and get something back you like better 74% of the time, or 70% if you want to pay less, is still pretty extraordinary. If you went back to the launch of ChatGPT three years ago and said we were going to have that in three years, most people would say absolutely not.
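The eval described here is a preference comparison: expert graders look at a model deliverable next to a human expert's and record whether the model's output wins, ties, or loses, and the headline figure is the beat-or-tie rate. As a minimal sketch of that scoring, with task names and verdicts that are entirely invented for illustration:

```python
# Hypothetical grader verdicts for a batch of knowledge-work tasks.
# Each entry is "win" (model output preferred), "tie", or "loss"
# (human expert's output preferred). All names/values are made up.
verdicts = {
    "draft_legal_memo": "win",
    "build_sales_deck": "tie",
    "quarterly_forecast": "loss",
    "write_web_app": "win",
    "summarize_filings": "win",
}

def win_or_tie_rate(verdicts: dict[str, str]) -> float:
    """Fraction of tasks where the model's output beat or tied the expert's."""
    favorable = sum(1 for v in verdicts.values() if v in ("win", "tie"))
    return favorable / len(verdicts)

print(f"beat-or-tied: {win_or_tie_rate(verdicts):.0%}")  # 80% on these made-up verdicts
```

The 38.8% / 70.9% / 74.1% figures quoted in the question are this same kind of statistic, computed over the eval's full task set rather than these five invented rows.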
And so as we think about how enterprises are going to integrate this, it's no longer just that it can do code. It's all these knowledge-work tasks you can farm out to the AI. It's going to take a while to figure out how enterprises integrate with it, but it should be quite substantial.

I know you're not an economist, so I'm not going to ask you what the macro impact on jobs is, but let me just read you one line I heard about how this impacts jobs, from Blood in the Machine on Substack. This is from a technical copywriter. They said: chatbots came in and made it so my job was managing the bots instead of a team of reps. Okay, that to me seems like it's going to happen often. But then this person continued: once the bots were sufficiently trained up to offer good enough support, then I was out. Is that going to become more common? Is that what bad companies are going to do? Because if you have a human who can orchestrate a bunch of different bots, you might want to keep them. I don't know, how do you think about this?
So, I agree with you that it's clear everyone's going to be managing a lot of AIs doing different stuff. Eventually, like any good manager, hopefully your team gets better and better, and you just take on more scope and more responsibility. I am not a job doomer. Short-term, I have some worry. I think the transition is likely to be rough in some cases. But we are so deeply wired to care about other people and what other people do. We seem to be so focused on relative status, on always wanting more, on being of use and service, on expressing creative spirit, whatever has driven us this long. I don't think that's going away. Now, I do think the jobs of the future, and I don't even know if jobs is the right word, whatever we're all going to do all day in 2050, probably look very different than they do today. But I don't have any of this, oh, life is going to be without meaning and the economy is going to totally break. We will find, I hope, much more meaning, and the economy I think will significantly change. But I think you just don't bet against evolutionary biology.
You know, I think a lot about how we can automate all the functions at OpenAI. And even more than that, I think about what it means to have an AI CEO of OpenAI. It doesn't bother me. I'm thrilled for it. I won't fight it. I don't want to be the person hanging on saying, I can do this better the handmade way.

>> An AI CEO might just make a bunch of decisions to direct all of your resources to giving AI more energy and power. I mean, you would really have to put a guardrail on.

Yeah, obviously you don't want an AI CEO that is not governed by humans. But, and this is a crazy analogy, I'll give it anyway: think about a version where every person in the world was effectively on the board of directors of an AI company, and got to tell the AI CEO what to do, fire them if they weren't doing a good job of it, and got governance over the decisions, but the AI CEO got to try to execute the wishes of the board. I think to people of the future, that might seem like quite a reasonable thing.

Okay, so we're going to move to infrastructure in a minute, but before we leave this section on models and capabilities: when's GPT-6 coming?

I don't know when we'll call a model GPT-6, but I would expect new models that are significant gains over 5.2 in the first quarter of next year.

What do significant gains mean?

I don't have an eval score in mind for you yet.

More on the enterprise side of things, or...?

Definitely both. There will be a lot of improvements to the model for consumers. The main thing consumers want right now is not more IQ. Enterprises still do want more IQ. So we'll improve the model in different ways for different uses, but our goal is a model that everybody likes much better.

So, infrastructure. You have 1.4 trillion, thereabouts, in commitments to build infrastructure. I've listened to a lot of what you've said about it. Here are some of the things you said: if people knew what we could do with compute, they would want way, way more. You said the gap between what we could offer today versus 10x and 100,000x compute is substantial. Can you help flesh that out? What are you going to do with so much more compute?
Well, I mentioned this earlier a little bit. The thing I'm personally most excited about is using AI and lots of compute to discover new science. I'm a believer that scientific discovery is the high-order bit of how the world gets better for everybody. If we can throw huge amounts of compute at scientific problems and discover new knowledge... the tiniest bit of that is starting to happen now. It's very early, and these are very small things, but my learning in the history of this field is that once the squiggles start and the curve lifts off the x-axis a little bit, we know how to make it better and better. That takes huge amounts of compute to do. So that's one area: throwing lots of AI at discovering new science, curing disease, lots of other things.

A recent cool example: we built the Sora Android app using Codex, and the team did it in less than a month. One of the nice things about working at OpenAI is you don't get any limits on Codex. They used a huge number of tokens, but they were able to do what would normally have taken a lot of people much longer, and Codex mostly did it for us. You can imagine that going much further, where entire companies build their products using lots of compute. People have talked a lot about how video models point toward real-time generated user interfaces; that will take a lot of compute. Enterprises that want to transform their business will use a lot of compute. Doctors that want to offer good personalized health care, constantly measuring every sign they can get from each individual patient: you can imagine that using a lot of compute.
it's hard to frame how much
compute we're already
using to generate AI output in the
world. Um but these are horribly rough
numbers. So uh
and I think it's like undisciplined to
talk this way, but
I I always find these like mental
thought experiments a little bit useful.
So forgive me for the sloppiness. Um
let's say
that an AI company today might be
generating something on the order of
10 trillion tokens a day out of frontier
models.
Um
you know, more but not it's not like a
quadrillion tokens for anybody I don't
think. Um
let's say there's 8 billion people in
the world and let's say on average
someone's these are I think totally
wrong, but let's say someone the average
number of tokens outputted by a person
per day is like
uh
20,000.
You can then start to and the token you
could
but to be fair then we'd have to compare
the output tokens of a model provider
today. Not not all the tokens consumed.
But you can start to look at this and say: hm, we're going to have the models at a company outputting more tokens per day than all of humanity put together, and then 10 times that, and then 100 times that. In some sense it's a really silly comparison, but in some sense it gives a magnitude for how much of the intellectual crunching on the planet is human brains versus AI brains. And the relative growth rates there are interesting.
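Sam's back-of-the-envelope comparison can be made concrete. The figures below are only the deliberately rough numbers from the conversation (about 10 trillion output tokens a day for one frontier lab, 8 billion people, roughly 20,000 "output tokens" per person per day), not measurements:

```python
# Back-of-the-envelope token math, using only the rough numbers
# from the conversation (not real measurements).

LAB_TOKENS_PER_DAY = 10e12           # ~10 trillion output tokens/day for one lab
PEOPLE = 8e9                         # world population
TOKENS_PER_PERSON_PER_DAY = 20_000   # rough guess at human "output tokens"

# All of humanity: 8e9 * 2e4 = 1.6e14, i.e. 160 trillion tokens/day
humanity_per_day = PEOPLE * TOKENS_PER_PERSON_PER_DAY

print(f"humanity: {humanity_per_day:.1e} tokens/day")
print(f"one lab today: {LAB_TOKENS_PER_DAY / humanity_per_day:.1%} of humanity")

# How the comparison shifts if lab output grows 10x, then 100x:
for multiple in (1, 10, 100):
    ratio = multiple * LAB_TOKENS_PER_DAY / humanity_per_day
    print(f"{multiple:>3}x today's lab output = {ratio:.2f}x all of humanity")
```

On these assumptions a single lab today sits at about 1/16th of humanity's daily token output, so the "10 times that, then 100 times that" trajectory is exactly where one company overtakes everyone put together.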
And so I'm wondering, do you know that the demand to use this compute is there? For instance, would we have surefire scientific breakthroughs if OpenAI were to put double the compute towards science? Or with medicine, would we have that clear ability to assist doctors? How much of this is supposition about what's to happen, versus clear understanding, based on what you see today, that it will happen?
Everything based on what we see today says that it will happen. That does not mean some crazy thing can't happen in the future. Someone could discover some completely new architecture, there could be a 10,000x efficiency gain, and then we would probably have really overbuilt for a while. But everything we see right now, how quickly the models are getting better at each new level, how much more people want to use them, and how much more again each time we can bring the cost down, indicates to me that there will be increasing demand, and people using these for wonderful things and for silly things.
But it just seems like this is the shape of the future. And it's not just how many tokens we can do per day; it's how fast we can do them. As these coding models have gotten better, they can think for a really long time, but you don't want to wait a really long time. So there will be other dimensions. It will not just be the number of tokens we can do, but the demand for intelligence across a small number of axes, and what we can do with those. If you have a really difficult health care problem, do you want to use 5.2, or do you want to use 5.2 Pro, even if it takes dramatically more tokens?
>> I'll go with the better model.
>> I think you will.
Let's try to go one level deeper, on scientific discovery. Can you give an example of a scientist, maybe one you know today, who says, "I have problem X, and if I put compute Y towards it I will solve it, but I'm not able to today"?
There was a thing this morning on Twitter where a bunch of mathematicians were all replying to each other's tweets. They were like, "I was really skeptical that LLMs were ever going to be good. 5.2 is the one that crossed the boundary for me. With some help, it did this small proof; it discovered this small thing. This has actually changed my workflow." And people were piling on saying, "Yeah, me too." Some people were saying 5.1 was already there, but not many. That's a very recent example; this model's only been out for 5 days or something.
Where people are like, "All right, you know, the mathemat-"
>> Yeah. The mathematics research community seems to say, "Okay, something important just happened." I've seen Greg Brockman highlighting all these different mathematical and scientific uses in his feed, and something has clicked, I think, with 5.2 among these communities. So it'll be interesting to see what happens as things progress.
One of the hard parts about compute at these scales is that you have to do it so far in advance. So, you know, that 1.4 trillion you mentioned, we'll spend it over a very long period of time. I wish we could do it faster; I think there would be demand if we could do it faster.
But it just takes an enormously long time to build these projects: the energy to run the data centers, the chips, the systems, the networking, and everything else. So that will play out over a while. But from a year ago to now, we probably about tripled our compute. We'll triple our compute again next year, and hopefully again after that. Revenue grows even a little bit faster than that, but it does roughly track our compute fleet.
So we have never yet found a situation where we can't monetize really well all the compute we have. I think if we had double the compute, we'd have double the revenue right now.
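The cadence Sam describes (roughly tripling compute each year, with revenue roughly tracking the fleet) compounds faster than intuition suggests. A toy illustration of that compounding, not a forecast:

```python
# Illustrative compounding of a compute fleet that triples annually,
# per the "tripled last year, will triple again" cadence described above.
GROWTH = 3
fleet = 1  # normalized fleet size today
for year in range(6):
    print(f"year {year}: {fleet}x today's compute")
    fleet *= GROWTH
# Five annual triplings later, the fleet is 3**5 = 243x its starting size.
```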
Okay, let's talk about numbers, since you brought it up. Revenue's growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are that OpenAI is supposed to lose something like 120 billion between now and 2028-29, when you're going to become profitable. So talk a little bit about how that changes. Where does the turn happen?
>> As revenue grows, and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense.
So that's the plan: spend a lot of money training, but make more and more. If we weren't continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we're making is to invest very aggressively in training these big models.
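The "inference eventually subsumes training" plan can be sketched with a toy model. Every figure below is hypothetical, none of them come from OpenAI; they only show the crossover mechanic: if inference-driven revenue compounds faster than training spend, losses deepen for a while and then flip:

```python
# Toy model of the bet described above: training spend keeps rising,
# but revenue tied to the inference fleet compounds faster, so the
# bottom line eventually flips. All figures are made up for illustration.

revenue = 20.0         # $B/yr, hypothetical starting revenue
serving_cost = 10.0    # $B/yr, inference cost, assumed to scale with usage
training = 30.0        # $B/yr, hypothetical training spend

REV_GROWTH = 3.0       # revenue tracks a fleet tripling yearly
TRAIN_GROWTH = 2.0     # training spend also grows, just slower

for year in range(5):
    profit = revenue - serving_cost - training
    print(f"year {year}: revenue={revenue:6.0f}  training={training:5.0f}  profit={profit:6.0f}")
    revenue *= REV_GROWTH
    serving_cost *= REV_GROWTH  # serving cost scales with usage
    training *= TRAIN_GROWTH
# Losses deepen at first (the reported multi-year loss phase), then profit
# turns positive once revenue growth outruns training growth.
```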
The whole world is wondering how your revenue will line up with the spend. The question's been asked: the trajectory is to hit 20 billion dollars in revenue this year, and the spend commitment is 1.4 trillion.
>> We'll spend it over a very long period of time.
>> Yeah, and that's why I wanted to bring it up to you. I think it would be great to just lay it out for everyone, once and for all, how those numbers are going to work.
>> It's very hard to really do that. I certainly can't do it, and very few people I've ever met can do it.
You can have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to build a good, quick mental framework around. For whatever reason, there were a lot of things evolution needed us to do well with math in our heads; modeling exponential growth doesn't seem to be one of them.
So the thing we believe is that we can stay on a very steep revenue growth curve for quite a while, and everything we see right now continues to indicate that. We cannot do it if we don't have the compute. Again, we're so compute constrained, and it hits the revenue line so hard. I think if we get to a point where we have a lot of compute sitting around that we can't monetize on a profitable per-unit-of-compute basis, it would be very reasonable to say, "Okay, how's this all going to work?" But
we've penciled this out a bunch of ways.
We will, of course, also get more efficient on a flops-per-dollar basis as all of the work we've been doing to make compute cheaper comes to pass.
But we see this consumer growth, we see this enterprise growth, and there's a whole bunch of new kinds of businesses that we haven't even launched yet, but will. Compute is really the lifeblood that enables all of this. There are checkpoints along the way, and if we're a little bit wrong about our timing or math, we have some flexibility. But we have always been in a compute deficit.
It has always constrained what we're able to do. I unfortunately think that will always be the case, but I wish it were less the case, and I'd like it to be less the case over time, because there are so many great products and services we can deliver, and it'll be a great business.
>> Okay, so effectively, training costs go down.
>> As a percentage. They go up massively overall, but yeah.
>> And your expectation is that through things like this enterprise push, and through people being willing to pay for ChatGPT and through the API, OpenAI will be able to grow revenue enough to pay for it.
>> Yeah. That is the plan.
Now, the market's been kind of losing its mind over this recently. I think the thing that has spooked the market is that debt has entered the equation. The idea around debt is that you take debt out when there's something predictable: companies take on the debt, they build, and they have predictable revenue. But this is a new category; it is unpredictable. How do you think about the fact that debt has entered the picture here?
So, first of all, I think the market more lost its mind earlier this year, when we would meet with some company and that company's stock would go up 20% or 15% the next day. That was crazy.
>> That felt really unhealthy.
I'm actually happy that there's a little bit more skepticism and rationality in the market now, because it felt to me like we were heading towards a very unstable bubble, and now I think people are showing some degree of discipline. People went crazy earlier, and now they're being more rational.
On the debt front, we know that if we build infrastructure, we, the industry, someone is going to get value out of it. It's still totally early, I agree with you, but I don't think anyone is questioning whether there's going to be value from AI infrastructure. So I think it is reasonable for debt to enter this market. I think there will also be other kinds of financial instruments, and I suspect we'll see some unreasonable ones as people innovate on how to finance this sort of stuff. But lending companies money to build data centers, that seems fine to me.
>> I think the fear is that things don't continue apace. Here's one scenario, and you'll probably disagree with this: the model progress saturates. Then the infrastructure becomes worth less than the anticipated value. And yes, those data centers will be worth something to someone, but it could be that they get liquidated and someone buys them at a discount.
>> Yeah, and I do suspect, by the way, there will be some booms and busts along the way. These things are never a perfectly smooth line.
First of all, it seems very clear to me, and this is a thing I would happily bet the company on, that the models are going to get much, much better. We have a pretty good window into this; we're very confident about it. Even if they did not, there's a lot of inertia in the world, and it takes a while to figure out how to adapt to things. The overhang of the economic value that I believe 5.2 represents, relative to what the world has figured out how to get out of it so far, is so huge that even if you froze the model at 5.2, how much more value can you create, and thus revenue can you drive? I bet a huge amount. In fact, you didn't ask this, but if I can go on a rant for a second:
We used to talk a lot about this two-by-two matrix: short timelines versus long timelines, slow takeoff versus fast takeoff. At different times we felt the probability shifting around it, and you could understand a lot of the decisions and strategy the world should optimize for based on where you landed on that two-by-two matrix. There's a Z axis in my picture of this that has emerged, which is small overhang versus big overhang. I guess I didn't think that hard about it, but my retro is that I must have assumed the overhang was not going to be that massive; that if the models had a lot of value in them, the world was pretty quickly going to figure out how to deploy it.
But it looks to me now like the overhang is going to be massive in most of the world. You'll have these areas, like some set of coders, that get massively more productive by adopting these tools. But on the whole, you have this crazy smart model, and, to be perfectly honest, most people are still asking it questions similar to those they asked in the GPT-4 era. Scientists are different, coders are different, maybe knowledge work is going to get different, but there is a huge overhang. And that has a bunch of very strange consequences for the world. We have not wrapped our heads around all the ways that's going to play out yet, but it is very much not what I would have expected a few years ago.
>> I have a question for you about this capability overhang. Basically, the models can do a lot more than they've been doing.
I'm trying to figure out how the models can be that much better than what they're being used for, when a lot of businesses, when they try to implement them, say they're not getting a return on their investment. Or at least that's what they tell MIT.
>> I'm not sure quite how to think about that, because we hear all these businesses saying, "If you 10x'd the price of GPT-5.2, we would still pay for it. You're hugely underpricing this; we're getting all this value out of it." So that doesn't seem right to me. Certainly if you look at what coders say, they're like, "I'd pay 100 times the price," or whatever.
>> Could just be bureaucracy that's messing things up.
>> Let's say you believe the GDP
things up. Let's say you believe the GDP
value numbers and maybe you don't for
good reason. Maybe they're wrong, but
let's say it were true.
And for kind of these
well-specified, not super long duration
knowledge work tasks,
seven out of 10 times you would be as
happy or happier with the 5.2 output.
You should then be using that a lot. And
yet, it takes people so long to change
their workflow. They're so used to
asking the
junior analyst to make a deck or
whatever that they're going to like
it just that's stickier than I thought
it was.
>> You know, I still kind of run my workflow in very much the same way, although I know that I could be using AI much more than I am.
>> Yep.
>> All right, we've got 10 minutes left and I've got four questions. Let's see if we can lightning-round through them. So, the device that you're working on. We'll be back with OpenAI CEO Sam Altman right after this. What I've heard: phone size, no screen. Why couldn't it be an app, if it's the phone without a screen?
>> First, we're going to do a small family of devices. It will not be a single device.
There will be, over time... this is speculation, so I may turn out to be totally wrong, but I think there will be a shift over time in the way people use computers, where they go from a sort of dumb, reactive thing to a very smart, proactive thing that understands your whole life, your context, everything going on around you; very aware of the people around you, physically or close to you via the computer you're working with.
And I don't think current devices are well suited to that kind of world. And I am a big believer that we work at the limit of our devices. You have that computer, and it has a bunch of design choices: it can be open or closed, but there's no option for "pay attention to this interview, but stay closed and whisper in my ear if I forget to ask Sam a question," or whatever.
>> Maybe that would be helpful.
>> And there's a screen, which limits you to the same kind of graphical user interfaces we've had for many decades, and a keyboard that was built to slow down how fast you could get information into it.
And these have just been unquestioned assumptions for a long time, but they worked. Then this totally new thing came along, and it opens up a possibility space. I don't think the current form factor of devices is the optimal fit; it would be very odd if it were, for this incredible new affordance we have.
>> Oh man, we could talk for an hour about this, but let's move on to the next one: cloud.
You've talked about building a cloud. Here's an email we got from a listener: "At my company, we're moving off Azure and directly integrating with OpenAI to power our AI experiences in the product. The focus is to insert a stream of trillions of tokens powering AI experiences through the stack." Is that the plan, to build a big cloud business in that way?
>> First of all, trillions of tokens is a lot of tokens. And you asked about the need for compute in our enterprise strategy: enterprises have been pretty clear with us about how many tokens they'd like to buy from us, and we are going to fail again in 2026 to meet demand. But the
strategy is this: most companies seem to want to come to a company like us and say, "I'd like to remake my company with AI. I need an API customized for my company. I need ChatGPT Enterprise customized for my company. I need a platform that can run all these agents and that I can trust my data on. I need the ability to get trillions of tokens into my product. I need the ability to make all my internal processes more efficient." We don't currently have a great all-in-one offering for them, and we'd like to make that.
>> Is your ambition to put it up there with the AWSes and Azures of the world?
>> I think it's a different kind of thing than those. I don't really have an ambition to go offer all the services you need to host a website, or whatever. My guess is that people will continue to have their, call it, web cloud. And then I think there will be this other thing, where a company says, "I need an AI platform for everything I want to do internally, for the service I want to offer, whatever." It does kind of live on the physical hardware in some sense, but I think it'll be a fairly different
product offering.
>> Let's talk about discovery quickly. You've said something that's been really interesting to me: you think the models, or maybe people working with models, will make small discoveries next year and big ones within five. Is that the models? Is it people working alongside them? And what makes you confident that's going to happen?
>> Yeah, people using the models. Models that can figure out their own questions to ask, that does feel further off. But
if the world is benefiting from new knowledge, we should be thrilled. The whole course of human progress has been that we build better tools, people use them to do more things, and out of that process they build more tools. It's this scaffolding that we climb, layer by layer, generation by generation, discovery by discovery. The fact that a human is asking the question in no way diminishes the value of the tool. So I think it's great; I'm all happy.
At the beginning of this year, I thought the small discoveries were going to start in 2026. They started in late 2025. Again, these are very small, and I really don't want to overstate them, but anything feels qualitatively very different to me from nothing. And certainly, when we launched ChatGPT three years ago, that model was not going to make any new contribution to the total of human knowledge.
What it looks like from here to five years from now, this journey to big discoveries: I suspect it just looks like the normal hill climb of AI. It gets a little bit better every quarter, and then all of a sudden we're like, whoa, humans augmented by these models are doing things that humans five years ago just absolutely couldn't do. And whether we mostly attribute that to smarter humans or smarter models, as long as we get the scientific discoveries, I'm very happy either way.
>> IPO next year?
>> I don't know.
>> Do you want to be a public company? You seem like you can operate private for a long time. Would you go before you needed to, in terms of funds?
>> There's a whole bunch of things at play here. I do think it's cool that public markets get to participate in value creation. And in some sense, we will be very late to go public if you look at any previous company. It's wonderful to be a private company, but we need lots of capital, and we're going to cross all of the shareholder limits and stuff at some point. So, am I excited to be a public-company CEO? 0%. Am I excited for OpenAI to be a public company? In some ways I am, and in some ways I think it'll be really annoying.
>> I listened to your Theo Von interview very closely. Great interview.
>> He was really cool.
>> Theo really knows what he's talking about.
>> Also citing Yoshua Bengio. He did his homework.
>> You told him, right before GPT-5 came out, that GPT-5 is smarter than us in almost every way. I thought that was the definition of AGI. Isn't that AGI? And if not, has the term become somewhat meaningless?
These models are clearly extremely smart on a raw-horsepower basis. There's been all this stuff over the last couple of days about GPT-5.2 having an IQ of 147 or 144 or 151, or whatever it is depending on whose test; it's some high number. And you have a lot of experts in their fields saying it can do these amazing things, it's contributing, it's making me more effective. You have the GDP growth things we talked about.
One thing you don't have is the ability for the model, when it can't do something today, to realize it can't, go off and figure out how to learn to get good at that thing, learn to understand it, so that when you come back the next day, it gets it right. That kind of continuous learning, which toddlers can do, does seem to me like an important part of what we need to build. Now,
can you have something that most people would consider an AGI without that? I mean, there are a lot of people who would say we're at AGI with our current models. I think almost everyone would agree that if we were at the current level of intelligence and had that other thing, it would clearly be very AGI-like. But maybe most of the world will say, "Okay, fine, even without that: it's doing most knowledge tasks that matter, it's smarter than most of us in most ways, we're at AGI. It's discovering small pieces of new science, we're at AGI." What I think this means is that the term, although it's been very hard for all of us to stop using, is very underdefined.
>> Right.
>> I have a candidate... one thing I would love: we got it wrong with AGI. We never defined it. The new term everyone's focused on is when we get to superintelligence. So my proposal is that we agree AGI kind of went whooshing by. It didn't change the world that much, or it will in the long term, but okay, fine, we've built AGIs. We're in this fuzzy period where some people think we have, then more people think we have, and at some point we'll say, "Okay, what's next?"
A candidate definition for superintelligence is when a system can do a better job being president of the United States, CEO of a major company, or running a very large scientific lab than any person can, even with the assistance of AI.
>> Okay.
I think there was an interesting thing that happened with chess, where chess engines got to where they could beat humans. I remember the Deep Blue thing very vividly. And then there was a period of time where a human and the AI together were better than an AI by itself. And then the person was just making it worse, and the strongest thing was the unaided AI, without a human failing to appreciate its great intelligence. I think something like that is an interesting framework for superintelligence. I think it's a long way off, but I would love to have a cleaner definition this time around.
>> Well, Sam, look, I have been in your products, using them daily for three years.
>> Thank you, man.
>> They've definitely gotten a lot better. I can't even imagine where they go from here.
>> We'll try to keep getting them better fast.
>> Okay. This is our second time speaking, and I appreciate how open you've been both times. So thank you for your time.
>> Thank you, everybody, for listening and
watching. If you're here for the first
time, please hit follow or subscribe. We
have lots of great interviews on the
feed and more on the way. This past year
we've had Google DeepMind CEO Demis
Hassabis on twice, including one with
Google founder Sergey Brin. We've also
had Dario Amodei, the CEO of Anthropic.
And we have plenty of big interviews coming up in 2026. Thanks again and we'll see you next time on Big Technology Podcast. [music]