Could LLMs Be The Route To Superintelligence? — With Mustafa Suleyman

Channel: Alex Kantrowitz

Published at: 2025-11-12

YouTube video id: j_3MPTLhHxM

Source: https://www.youtube.com/watch?v=j_3MPTLhHxM

Microsoft's AI CEO returns to explain
why the company is now pushing for super
intelligence, what that means, and how
Microsoft is moving forward after its
latest Open AI deal. That's coming up
right after this. Welcome to Big
Technology Podcast, a show for
coolheaded and nuanced conversation
[music] of the tech world and beyond.
Today we're joined once again by Mustafa
Sullean, the CEO of Microsoft AI and
also the [music] head of the company's
new super intelligence team, who is here
to speak with us about what that means,
what super intelligence is, but [music]
more broadly what the future of this
technology is going to look like and
whether we're at the end of the curve or
the [music] beginning or somewhere in
the middle. Anyway, we'll get into it
all. Mustafa, great to see you again.
Welcome to the show.
>> Hey, Alex. Uh, great to see you again.
Thanks for having me. It's always a
pleasure. And so recently you wrote this
post about a new push towards what you
call humanist super intelligence at
Microsoft. You say uh you're working
towards it. What you call incredibly
advanced AI capabilities that always
work for in service of people and
humanity more generally. Let me ask you
a question about this.
It's so interesting to me to see so many
labs running towards what they call
super intelligence, which I guess is
sort of like a cooler version of AGI.
Um, as the research is mixed about
whether we're going to see a lot more
progress with the current paradigm, a
lot of people are talking about
diminishing marginal returns. I think
we've talked about that. There's some
questions about the viability of LLMs in
terms of pushing the state-of-the-art
and AI forward. And yet it's we're also
seeing you know this push towards super
intelligence. So just explain as we
begin sort of the discrepancy there. Why
are we hearing so much about super
intelligence where we're not even sure
if the current methods are going to get
us to the step before which is AGI?
>> Yeah. I mean super intelligence and AGI
are really goals rather than methods.
And I think that the ambition is to
create superhuman performance at most or
all human tasks. like we want to have
medical super intelligence. We want to
have um the best expertise in medical
diagnosis be cheap and abundant uh and
available to billions of people around
the world. Um we also want to have
worldclass legal advice on tap that
costs almost nothing uh few bucks a
month. Um we want to have financial
advice.
We want to have emotional support. We
want to have uh software engineers
available on tap. And I think that the
project of super intelligence is about
saying um what type of very very
powerful intelligence systems are we
actually going to build? And what I'm
trying to propose is that we subject
each of these new technologies to a very
simple test. Like does it in practice
actually improve the prospects of human
civilization? And does it always keep
humanity at the top of the food chain?
Uh it sounds like a kind of simplistic
or obvious thing to have to declare, but
the goal of science and technology,
science and technology in my opinion is
like to advance human civilization, to
keep humans in control and to create
benefits for all humans. And I think in
some of the rhetoric in the last few
years, you can feel that there's a
little bit of like um
you know a kind of creeping assumption
that it is inevitable
that these kinds of systems exceed our
control and our capability and move
beyond us as a species, as a human
species. And uh I'm pushing back on that
idea with the framing around humanist
super intelligence. So I think it's
quite different. [snorts]
>> But then is your view that super
intelligence won't be one broad
intelligence that it will be you can
maybe achieve super intelligence in one
discipline when it's smarter than let's
say the best doctors in medicine but
maybe it's just like not there in
accounting for example.
One way of thinking about it is that how
we train these models at the moment is
that we work through verticals uh and we
make sure that we have training data um
knowledge expertise reasoning traces
chains of thought that reflect the kinds
of activities that people do in each one
of these disciplines to build their
expertise overall. So we're already
training generalist models from a
verticalized position. We're starting
off by saying what specific tasks are we
trying to optimize and um you know the
project of humanist super intelligence
is first trying to say what good will
this technology do and how will it be
safe and controllable and aligned to
human interests and one of those
dimensions of safety um is
verticalization.
If a model has been designed explicitly
to achieve medical super intelligence,
then by definition it isn't going to be
the best software engineer in the world,
it isn't going to be the best
mathematician or physicist. And so
narrowing the domain, not too much, not
entirely because you can't collapse it,
but narrowing it and reducing the
generality is one of the ways that I
think um is likely to help create more
control. It's not the only solution.
There are many other aspects of like how
we achieve containment and alignment but
domain specific models are one part of
it.
>> Is it possible that something can be
super intelligent but not generally
intelligent? Like is it possible that
maybe super intelligence happens without
AGI because AGI is all about generality
and what you're talking about is not.
>> It's not possible. I don't think I think
they need to be general. Um they need to
transfer knowledge from one domain to
another. they need to um you know um
have generalist reasoning capabilities.
But when you apply it and you put it
into production and [snorts] you let it
have more autonomy to make decisions or
you let it generate uh arbitrary code to
solve a particular problem or you let it
write its own evals so that it can
modify its own code and generate new
prompts to generate new training data to
write new evals to then iterate on its
own performance. these capabilities,
autonomy, goal setting,
um writing code, uh modifying itself,
you know, if you add to that then also a
perfectly generalized model or or sort
of general purpose model, that's a very
very very powerful system which today I
don't think anybody really knows how we
would contain or align something like
that. And so it's not to say that we
should not do any one of those
dimensions. It's just to outline a road
map of capabilities which we're all
working on which add more risk
especially when they compound with one
another and you combine them all
together. And so you know my claim is
that we should just approach this with
caution remembering that we don't want
to bundle together all these
capabilities so that there's a higher
risk of a you know recursively
self-improving exponential takeoff that
then replaces our species. And I think
that that is very low probability from
what I see today. But it's one that we
have to take seriously in the next like
10 years or so.
>> Okay. I do want to get to that in a bit.
But let me tell you what I find odd
about these conversations. And I want to
go back to the first question that I
asked you which is
researchers are talking about how the
current methods are leveling off. Um
give you one example. Data is not
plentiful. synthetic data not very not
very useful yet. Um power might be
running out and you need that scale a
lot of people say in order to make these
models better or at least even to run
the basic capabilities. So given the
limitations of LLMs um are you seeing
something that we're not that will sort
of pave the way to super intelligence? I
mean how do you get from here to there?
Look, I I think we're power limited but
not fundamentally power constrained. Um,
clearly there's like huge appetite to
build bigger data centers
um and train in larger more contiguous
clust more fully connected clusters. So
clusters where all the chips are
connected to each other um but that's
not the bottleneck at the moment. That's
not holding back progress. Obviously if
we had more right now it would
definitely help but there's many many
other things in the stack that are
slowing down progress. If we are not
data constrained right now, we're
generating vast amounts of highquality
synthetic data which is proving to be
useful. Um obviously again the same is
also true like more highquality data
would be great but I I don't see an a
slowing down in progress because of
either of those two things. Um, if
anything, the rate of progress has been
insane over the last five years. And to
expect us to continue to make doublings
every three months in the size of
clusters that are trained for the
largest models, you know, given the base
that we're now starting from when
training runs are often, you know, 50
megawatt or 100 megawatt or soon 500
megawatt. You know, you can't just
double on that every six months. There's
there's like the laws of physics kind of
do create restrictions and we're talking
about tens of billions of dollars of you
know cluster. So pace might slow a
little maybe but it's also clear that
pace is still going to be unbelievably
fast like you know um sort of sort of
objectively speaking. So I I I don't see
or fear or currently feel any sense that
things are slowing down or that we're
losing momentum. This is the quite the
opposite.
>> Well, then let me ask it this way. Do
you think LLMs are the way there?
>> Um,
look, I think one thing to consider is
that um, every year for the last few
years, there's been a major new
contribution to the field, still
principally based around the transformer
architecture. Um, but we're bending the
transformer architecture into new shapes
all the time. um fine-tuning emerged
three years ago on top of our
pre-trained models to adapt them to
specific use cases. Um they're now fully
multimodal which requires further
changes and the introduction of
diffusion models. Um then we had
reasoning models in the last 12 months
which again are still funly
fundamentally based on the same core
architecture things are just rearranged
slightly differently. Um so you know
even though the scaling laws weren't
able to continue exponentially in the
way they had from such a low base um new
methods appear on top of those like
reasoning and new methods will come uh
you know even newer methods will come
soon too. So for example um I expect
that there's going to be quite a lot of
progress in recurrency soon right the
moment you know the models don't kind of
attend to their working memory very well
um you know at at the moment when
they're training right and so you know I
think people are experimenting with lots
of different types of loss function and
lots of um training objectives
um you know the other one is memory like
I think memory is getting better and
better and I think is going to totally
change what's possible And the other one
is um the the sort of length of a task
horizon that can be predicted. So at the
moment it's like a few steps but soon it
will be tens of thousands hundreds of
thousands of steps accurately and
that'll mean that a model can like use
APIs or query a human or check another
database or call on another AI. And so
that will be another like uh sort of
exponential lift when something like any
one of those three things work. You'll
get another kind of um you know rapid
acceleration in progress. So I don't
think there's anything fundamentally
wrong with the LLM architecture and I
don't think we're fundamentally compute
or data constrained. I think that there
are so many people focused on this
problem now. There are just going to be
more and more um you know breakthroughs
coming.
>> Okay, that's very interesting. So your
perspective basically is that LM are the
path.
>> Yeah.
>> That we don't need another breakthrough.
That's a different model format to get
toward super intelligence.
>> Well, I mean so far no I don't think so.
I mean so far deep learning um and the
transformer model has been the workhorse
for
um I I guess like 12 years you know um
you know since Alexet um and you know
there's been variations on a theme but
it's it's been delivering and I I don't
think it's fair to say that it's like
not delivering at the moment. I think
it's I think it's really making a lot of
progress. Yeah, it's it's definitely
delivering. And it's so funny because
whenever I'm like bringing up these
criticisms, it's like some some way I'm
saying to myself, what do you want? The
computer is talking to you. But um the
question is sort of [laughter]
>> right. I feel silly being like, well,
where's more improvement? But I think
when we hear words like super
intelligence, then we see the gap
between where we are today and where you
want to head and those questions
naturally come up. And and just to go
back to the power thing, I was sort of
struck by Satya's comments in the
podcast with Brad Gersonner where he
said he has uh GPUs or chips that aren't
plugged in yet but need need he needs
warm shelves for them. Uh so I'm curious
to hear your perspective if you if we're
not power constrained right now, how
does that square up with the you know
the inability to plug these chips in
right now? Well, I think what he was
referring to is that we have so much
inference demand that we're power
constrained on inference. Um, we're not
power constrained, at least from the
Microsoft AI perspective, on training
chips and obviously my team, you know,
is mostly focused on on training right
now. So, obviously, Copilot is inference
constrained and desperately needs more
chips to scale and so does M365 and our
other products.
One more thing I want to talk to you
about on this super intelligence push is
the world model. Uh a lot of people have
talked about how these are models are
trained on text and some video. I mean
it's actually been amazing to watch them
uh be able to create video that has some
understanding of physics and liquids and
lighting. It's not really supposed to
happen that way, but it's doing it. Uh
but the there's been questions about
whether models understand gravity and
what happens in the real world and an
LLM can't drive a car right now. So
how's it going to be super intelligent?
So I I am curious to hear your
perspective on what's needed to or
whether if it's really a priority to
figure out like the physical world and
if so how you get there.
>> Yeah, that's a good question. I mean
right now um
you know it's actually amazing as you
say that models can learn from a
compressed representation of reality and
then produce a version of reality which
looks like the thing that has been
compressed from. I mean this is like
text and the description text describes
the physical world and the properties of
the physical world. the model has never
seen that and then actually is able to
produce very compelling
stories, code, business plans, videos
and so on. Um so surprising that we've
come so far with that structure. Um I'm
kind of open-minded about like um you
know sort of robotics and streams of
input from the real world. I mean, I
think that you my instinct is that you
can't just like crudely pile this data
into existing pre-training runs because
um you know those runs have tokenized or
they they've sort of described uh text
data in um a certain way and that you
know meshing that with other like
telemetry data from a robotic arm for
example, you'd have to think about like
at what level of abstraction to do that
and obviously there's good specialist
models that um have become pretty good
at that. Um but I don't think right now
at least that like that is holding us
back. I I think in general more data is
always better but um you know I I don't
think in the next few years it's going
to be the big differentiator. I think
that more synthetic data, more human
feedback and high quality data is going
to be the differentiator.
>> Okay. So you brought up recursively
self-improving AI models and maybe that
is where this path towards super
intelligence goes. Uh open AI has said
they want to build an automated AI
researcher by 2028
and I think every lab I'm curious if
this is your interest as well is just
trying to build AI that improves itself.
Is that realistic? I think that in some
ways the RL loop is already doing that.
Um, and at the moment there are human
engineers who are in the loop who are
generating data and writing evals and
deciding what other data goes into
training runs and running ablations on
that data. Um, you can well imagine
different parts of that stack being
automated by subco components of AIS.
Like it doesn't necessarily mean that
one single system does it. Today we have
um you know RLHF the human feedback uh
grew into RLIif where we have AI judges
or AI raers to judge the quality and the
usefulness of data that was also AI
generated. Um and in many cases prompts
that are used to generate diverse
training data data were also AI
generated. So like you know today we're
at a at a point where data the core
commodity which is sort of driving the
progress of these models is you know
albeit not completely automatically in a
closed loop way at large scale you know
individual parts of that pipeline have
been you know developed by um you know
LLMs um so it doesn't seem very
far-fetched to say that in a few years
time at significant scale or that will
get closed loop and you know it'll be
interesting to see on you know what
happens and whether the quality bar can
be maintained and whether performance
does increase um I think it will um but
it's definitely something to be very
cautious about because um you know a
system like that could end up being very
very powerful.
Yeah, and I definitely want to talk to
you about the downsides of it, but we
had a debate on the show recently about
whether that is an an ambitious thing.
It even seems funny to say, but to me,
that's the ultimate ambition, right?
It's if you're able to do that, then
you're you get into a situation where,
you know, potentially you have fast
takeoff of intelligence. But I guess
it's hard to really imagine the and
maybe my imagination isn't there the
AI's finding the you know the next new
method like discovering reasoning on
their own. Um so talk about about both
of those ambition and then whether um
whether I'm just my imagination is too
small on this front. I mean I think the
self-play um work that uh we did at Deep
Mind you know back sort of six or seven
years ago now with Alpha Zero um you
know that that obviously paved the way
to the first large scale um you know
sort of self-improvement effort frankly
and I think everybody in the field is
aware that it can be done in a certain
domain where there's verifiable rewards
and where you're in a kind of closed
loop gaming type environment or
simulated environment. Um, and I think
people are thinking hard about how it
might be possible to recreate some of
the components of that um, in this
setting. Um, and you know, I I do think
that's going to drive a lot of progress
in the next few years. I think it's a
big area that everybody's focused on. um
you know because fundamentally
scale always ends up trumping um you
know uh you know anything else. And so
if if you can have models explore the
space of all possible you know sort of
combinations in a computer efficient way
then it may well discover reasoning by
itself. may discover um you know new
knowledge that we hadn't even you know
thought about ourselves or even like
found in in in in any training data to
represent that knowledge. So, but it is
highly inefficient, right? I mean,
learning from supervised examples with
SFT and stuff like that like imitation
learning is very efficient and clearly
works very well because these models
learn from from you know just as we've
talked about an incredible amount from
uh you know um from web text which is
really just a an artifact or a record of
of human interaction. So um but both are
going to be true. I think the RL
paradigm that involves more online
learning from streams of experience is
um is also like quite promising and I
think is kind of adjacent to if not
orthogonal to um imitation learning. So
we both of those experiments will like
sort of accelerate in the next few
years.
>> Now where could this go wrong? Well, I
think um being in the loop as a human
developer adds a certain amount of
friction. Um
and that oversight is quite important, I
think. Um
you know, if a system like that had an
unbounded amount of compute, it would
end up being incredibly powerful. Um,
and I think we have to sort of figure
out how to force these models to
communicate in a language that is
understandable to us humans. Um, you
know, and and that that's like a very
obvious safety thing to be able to
regulate the language that it uses so
that it, you know, we're already seeing
examples of what some people are calling
deception, but is really just like
kind of reward hacking. Hacking kind of
implies too much intentionality. So it's
just it's it's an accidental exploit is
found a path like you know to satisfying
the reward or achieving the reward um
you know in in unintended ways and so we
shouldn't anthropomorphize it. It didn't
deceive us. It didn't intentionally try
to hack us. It just found an exploit.
And that's a problem with poor
specification of the training objective
and of the reward function. And so, you
know, the way that we make that safer is
that we get sharper in our articulation
of like what is it that we're actually
trying to train for? Um what are you
know what are we trying to achieve? What
are we trying to pre prevent? Um and
then monitor
like you know monitor outputs during
training time rather than you know
reasoning traces, chains of thought and
so on rather than just at like the final
stage. So a a as we grant these models
more capacity to self-improve, we're
going to have to change the the the
framework with which that we use to kind
of provide oversight to them during
training.
>> We're here with Mustafa Sullivan. He is
the CEO of Microsoft AI. On the other
side of this break, we are going to talk
about well, it seems like there's a
little bit of a strategy shift here.
It's gone. Microsoft AI has gone from
wanting to work on the frontier of the
best models but not building them
themselves to trying to build super
intelligence. So why now? And what does
it mean now that Microsoft AI and OpenAI
have a new agreement? We will cover that
right after this. And we're back here on
Big Technology Podcast with Mustafa
Sulleman. He's the CEO of Microsoft AI.
Mustafa, is it a coincidence that
Microsoft just came to this agreement
with OpenAI that you could go ahead and
attempt to build AGI on your own that
you've now decided, hm, [clears throat]
let's go ahead and start a super
intelligence team or is that directly
related like I think it is?
>> No, I think it's directly related. Um,
you know, for I think that the Microsoft
Open AAI partnership is going to go down
as one of the most successful uh
partnerships in technology history for
both sides.
um you know Satia did this deal um you
know at a certain time when there was a
lot of risk and huge amount of upside
and I think you know the last 5 years
have turned out amazingly well for
Microsoft um but then Satia made a call
that like you know we've we've also got
to make sure that you know we're
self-sufficient in AI for a company of
our size it's inconceivable that we
could just be dependent on a you know on
a startup on a third party company um to
to provide us with such important IP.
Um, and so, you know, we basically took
the view that we should extend the IP
license through to 2032. We'll continue
to get model drops from OpenAI and and
and all their IP. We'll continue to be
their uh primary compute provider, a
huge scale to the tune of, you know,
billions and billions of dollars. Um,
and also we would remove the clause in
the contract that says that we couldn't
build super intelligence or AGI. Um and
that was actually expressed as a flops
threshold, a flops per second threshold
for a size of a certain training run. So
there's a big limitation on what we were
able to do. Um now that that is no
longer there, um you know, our team is
reforming around this idea of humanist
super intelligence. We are pursuing the
absolute frontier, training omni models
of all sizes, all weight classes to the
absolute max capability. And over the
next two or three years, you'll see us
really try to build out uh one of the
top labs in the world. I we want to
train the absolute best AI models on the
planet. Um and you know, we're we're a
very young lab. We've barely been going
for a year. Um but you know, we've got
some good models on the leaderboards, um
text and image and audio now. And you
know, over the next few years, we'll be
striving to be the absolute best we can.
I was just speaking with the chief
technology officer of a pretty big
technology company and this company has
decided not to build their own large
language models and it sounds a little
bit wild but I think it it makes sense
in a way that there's going to be
obviously like to to build these models
it's extremely expensive uh resource
intensive you don't always get a payoff
like we saw that with Meta and Llama.
I'm not saying that's what's going to
happen with you. Um, and maybe it makes
sense just to, you know, buy off the
shelf or, uh, use open source. And in
fact, that seemed like that was the
strategy that you had for a long time.
It it seems logical. Um, and so I'm
curious like why you would disagree with
that. Why is it so important to build
your own models?
I mean we we're going through a
foundational platform shift um you know
in software um from the operating system
to apps from browsers search engines
mobile social this is the next major
platform and it's going to be bigger
than all of the other platforms put
together so the idea that a $3 trillion
company with $300 billion of revenue and
80% of the S&P 500 on our Azure stack
and M365 stack um you could could depend
on a third party. This it's, you know,
just in perpetuity. It doesn't make
sense. So, we we you know, this is a
company that's been around for 50 years
uh and navigated many of the past
platform shifts incredibly well and
that's the that's the journey that we're
on. We have to be AI self-sufficient.
There's an important mission that uh
Satia set last year and I think that
we're we're now on a path to be able to
do that
>> and so hence the formation of the super
intelligence team.
>> Exactly. So we we're we're launching the
super intelligence team. Um we're going
to be focused on you know sot at all
levels but also pushing the frontier of
research. I mean there are many hard
problems in machine learning which you
know a few months ago we weren't really
focused on continual learning being one
like how do we store um representations
of knowledge in a way that they're
modifiable by different networks and
they kind of accumulate knowledge over
time just as humans do um rather than
having to retrain them from scratch. Um,
so that's just like one of many examples
of more fundamental research questions
that um our super intelligence team is
is is now going to spend time on.
>> Okay. Now, let's go back to the business
side of it. Uh, your episode's going to
air backtoback with an episode uh that
we'll run with Nick Kle, uh, the former
president of global affairs at Meta. And
you know, Meta, of course, they also
have a super intelligence lab. And we
were talking about the economics of it.
And Nick's [snorts] point was very
interesting. He said, "I don't see how
you can hoard super intelligence if you
build it." Um, I think his idea is if
Meta builds it, then Microsoft will
build it and OpenAI will build it. And
we've seen very like fast follows in
many of these labs after they come up
with a state-of-the-art model. And so
the question is, will it commoditize?
Will it be economically viable uh once
two companies build it? What do you
think?
>> Well, it's definitely commoditizing. um
you know the cost per token has come
down a thousandx in the last two years.
It's just a crazy crazy thought, right?
Um so things are getting massively
cheaper and more efficient and um you
know uh you know the top four or five
models are within you know a few tiny
percentage points of each other in terms
of performance. But that doesn't mean
that one can afford to leave that to the
market and just hope that somebody open
sources it. can use their open source
models. Um,
for a company of our scale, we have to
be able to do that. And I think, you
know, Microsoft is a platform of
platforms. You know, like our API is
critical. Many many people depend on it.
And I think if you're a, you know, a
smaller software company or a technology
company of any kind, I think you can
depend on the market, right, which is
very different. So, Amazon, uh, Google,
US, Anthropic,
um, I guess OpenAI, um, are all
providing, you know, APIs to the very
best language models in the world. And
that means that, you know, you you as a
buyer, even if you're a large public
company, are can feel pretty assured
that for the long term, there's going to
be healthy competitive forces driving
down prices and improving quality for
you to be able to use um, you know,
models via the API.
Right. And so I I understand why you'd
want to build it, but again, going back
to this question, it's like, okay, the
econ if it just doesn't seem like there
won't be a price war. I mean, if if you
have a couple of companies that do this.
>> Yeah. Yeah. Well, I mean, I think a
price war is a great thing for consumers
and for businesses. I mean, we're we're
bringing down the cost of intelligence.
I mean, I think that's an amazing story
for humanity. the ability to access
knowledge, uh the ability to use that
knowledge to get stuff done, to write
new programs, to do new scientific
discovery, to get access to AI
companions and emotional support. These
things are going to be zero marginal
cost in a decade. Uh that's a form of
abundance. That's the aspiration of
society and civilization in my opinion.
That's the great that's why I work on AI
is to make intelligence cheap and
abundant. Uh and it will be market
forces that drive down the cost of that.
So I think it's pretty cool.
>> But I I agree super cool. [laughter]
>> And again I I don't this is this these
conversations off me often put me in a
place where I don't want to be which is
like now I'm going to butt the idea that
there's going to be super intelligence
at zero cost which is again like from a
business standpoint.
>> Fair enough.
>> How does that make sense if the marginal
cost is zero?
>> Well I mean look at it. We we're still
going to uh you know charge you know a
significant amount for it. I mean yeah
we have $300 billion of revenue like I
said this is a huge company providing
great value but the point is where we
provide value to our customers our
customers will be happy to pay us for it
right and that means that you know good
integrations inside of M365
great models inside of um you know
GitHub and VS code um we have co-pilot
deploying on LinkedIn co-pilot in gaming
um our consumer product is growing from
strength to strength you know we just
crossed 100 million wow across all our
co-pilot surfaces. So, all the products
are growing great and uh you know
there's there's there's plenty of
revenue to be had in this transition. No
question.
>> Okay. Wow means week over week.
>> Oh, sorry. Yeah. Week no weekly active
user.
>> Oh. Oh, wu. Yeah, I guess I'm used to
and da but wow. Yeah.
>> Wow. Is that why has wow become the term
of art in
>> Oh, sure. Actually, I think we're all
using wow. Yeah. Yeah.
>> No guess as to what happened.
Uh we we've been using Wow since
Forever. I think it shows like a more
sustained engagement. Um yeah.
>> Okay. But not not not the daily
wouldn't. I mean I'm sure there's
>> Yeah. I don't know. It's always fun for
me to figure out why the acronyms are
the way they are. Um it'll remain a
mystery. So you actually do list uh so
you again you wrote about this. you list
a couple forms of intelligence that you
want to pursue and um one of them was a
personal companion or comp an AI
companion for everyone. U couple
questions for you to start on that one.
Let me just start with one.
>> You told me about a year ago that you
think that AI will differentiate on the
basis of personality. Do you still
believe that?
>> Definitely. Yeah. I mean, we we're right
at the very beginning of the emergence
of these very differentiated
personalities because all these models
are going to have great expertise.
They're going to have great capabilities
and they'll be able to take similar
actions like you we've just said, but
people like different personalities.
They like different brands. They like
different celebrities. They have
different values. Um, and those things
are very controllable now. Um, like we
just released in Copilot something
called Real Talk last week and it's
really cool. It is truly a different
experience compared to any other model.
It's more philosophical. It's sassy.
It's cheeky. It's got real personality.
Um, and the the usage is way way higher
uh than the average um, you know,
session of a regular co-pilot. And it's
built in a very very different way
actually. Um, so you know, I think
that's just the first foray into proper
personalization and I think we'll be
able to see a lot more of that coming
down the pipe.
>> Do you think it's good that people will
have a new friend, if you want to call
it a friend, uh, that they can sort of
customize in the way that they want?
There's been worries that, you know,
people are like, what does it mean to
for real friendship then? And are you
going to have not normal expectations
for your friends in real life? Yeah, I
think it does raise the bar. Um, and I
think we have to be cautious about that
because
um, AIs provide high quality accurate
information immediately on demand. They
provide highquality emotional support
increasingly.
Um, and naturally as we get more used to
that, that's going to put us under
pressure as humans to provide that
support to other humans and provide that
knowledge to other humans and be
available to them to get things done.
Um, and I, you know, that that's going
to be an interesting effect. It's going
to change what it means to be human in
quite a fundamental way. Like being
human is going to be more about our
flaws than our capabilities,
>> right? But it also I mean thinking of
the expectation it sets. I had one
entrepreneur talk to me about how well
there's things you would never go to a
human with right now because of norms
like if you're working on a project you
wouldn't like go to a colleague every 5
seconds and say how about this how about
this how about this or what if I tweaked
it this way. Uh but you could do that
with a bot and the bot will be like oh
yeah I'm happy to help you. So is there
any worry that that will spill over uh
into human relationships? what that what
would that mean?
>> I think that's a very interesting point.
I mean, in some ways, um, AI provide us
with a safe space to be wrong. Um, and
you know, it's kind of embarrassing, but
we can ask the same question over and
over again. Um, and in 10 different
ways, and that's how we get smarter. Um,
so I think it's a I think um, yeah, it's
it's it's a good philosophical question
to reflect on these kind of things
because it is going to really change
what it means to be human.
>> All right, Mustafa, one final question
for you. Um,
you say technologies purpose is to help
advance human civilization. It should
help everyone live happier, healthier
lives. It should help us uh, invent a
future where humanity and our
environment truly truly prosper. Um, so
my question for you is, has it lived up
to that promise?
>> I think science and technology has lived
up to that promise. I
>> Yeah, I think so. I think we're in an
incredible place. I mean, you know, um,
we've doubled life expectancy in 250
years. We're curing all kinds of
diseases. We can communicate with one
another on these devices. Um, I think
it's incredible. There's every reason to
be optimistic about technology and and
science and the project of progress. Um,
and I I just genuinely think AIs are
going to provide us all with access to
abundant intelligence, which is going to
make us more productive and more
creative. And I think we're already
starting to see it. So, yeah, I feel
optimistic about that.
>> All right, Mustafa, great to see you.
Thanks so much for coming on the show.
>> Great to see you, man. Thanks for your
time. See you soon.
>> Thank you.