Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet

Channel: Alex Kantrowitz

Published at: 2026-01-23

YouTube video id: bgBfobN2A7A

Source: https://www.youtube.com/watch?v=bgBfobN2A7A

Google DeepMind CEO Demis Hassabis joins us to talk about the path from here to AGI, when Google's AI glasses are coming, and whether the pace of AI progress can keep up at this rate. That's coming up right after this.

Welcome to a special edition of Big Technology Podcast from Davos. I'm Alex Kantrowitz, and I'm joined today by a special guest, Demis Hassabis, the CEO of Google DeepMind. Demis, welcome back to the show.
>> It's great to be here.
>> A year ago, there were real questions about whether AI progress was tailing off. It was in fashion to ask whether LLMs were going to hit a wall, and those questions seem to have been settled; there's been a tremendous amount of progress over the past year. Could you tell us what specifically has happened to get the AI industry from that moment of question last year to where it is today?
>> Well, for us internally, we were never questioning that, just to be clear. I think we've always been seeing great improvements, so we were a bit puzzled by why this question was in the air. Some of it had to do with people worrying about data running out, and there is some truth in that: if all the data has been used, can we create synthetic data that's going to be useful to learn from? But actually, it turns out you can wring more juice out of the existing architectures and data. So there was plenty of room, I think, and we're still seeing that in the pre-training, the post-training, and the thinking paradigms, and also in the way they all fit together. So I think there's still plenty of headroom just with the techniques we already know about, tweaking them and innovating on top of them.
>> All right, here's what a skeptic would say.
>> Yeah.
>> That there have been a lot of tricks put on top of LLMs. I hear often about scaffolding and orchestration, and about AI that can use a tool to search the web but won't remember what it learns: as soon as you close that session, it forgets. Is that just a limitation of the large language model paradigm?
>> Well, look, I'm definitely a subscriber to the idea that maybe we need one or two more big breakthroughs before we get to AGI. And I think they're along the lines of things like continual learning, better memory, and longer context windows, or perhaps more efficient context windows would be the right way to say it: don't store everything, just store the important things. That would be a lot more efficient; that's what the brain does. And better long-term reasoning and planning. Now, it remains to be seen whether just scaling up existing ideas and technologies will be enough to do that, or whether we need one or two more really big, insightful innovations. If you were to push me, I would probably be in the latter camp. But no matter what camp you're in, we're going to need large foundation models as a key component of the final AGI systems; of that I'm sure. So I'm not a subscriber to someone like Yann LeCun, who thinks they're some kind of dead end. The only debate in my mind is whether they're a key component or the only component. I think it's between those two options, and for me, one advantage of having such a deep and rich research bench is that we can go after both with maximum force: scaling up the current paradigms and ideas (and when I say scaling up, that also involves innovation, by the way; pre-training especially, I think, is something we're very strong on) and really new, blue-sky ideas for new architectures, the kinds of things we've invented over the last 10 years as Google and DeepMind, of course including transformers.
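The "more efficient context window" idea above, storing only the important things rather than everything, can be sketched as a toy bounded memory. Everything here (the class name, the hand-assigned salience scores) is illustrative only, not anything DeepMind has described:

```python
# Toy sketch of "don't store everything, just store the important things":
# a bounded memory that keeps only the highest-salience items, instead of
# an ever-growing context. The salience scores are hand-picked stand-ins
# for whatever a real system would use to judge importance.
import heapq

class SalienceMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []      # min-heap of (salience, counter, text)
        self._counter = 0    # tie-breaker so the heap never compares strings

    def add(self, text, salience):
        heapq.heappush(self.items, (salience, self._counter, text))
        self._counter += 1
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)   # evict the least important item

    def recall(self):
        # Return remembered items, most important first.
        return [t for _, _, t in sorted(self.items, reverse=True)]

mem = SalienceMemory(capacity=2)
mem.add("user said hello", salience=0.1)
mem.add("user's name is Alex", salience=0.9)
mem.add("user prefers metric units", salience=0.7)
print(mem.recall())  # the low-salience greeting has been evicted
```

The point of the sketch is the eviction step: capacity stays fixed no matter how much is added, which is the efficiency property Hassabis contrasts with storing everything.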
>> Can something with a lot of hard-coded stuff ever be considered AGI?
>> No, well, it depends what you mean by a lot. I'm very interested in what I would call hybrid systems, or neurosymbolic systems as people sometimes call them. AlphaFold and AlphaGo are examples of that: some of our most important work combines neural networks and deep learning with things like Monte Carlo tree search. So I think that could be possible, and there's some very interesting work we're doing using LLMs with things like evolutionary methods, such as AlphaEvolve, to actually go and discover new knowledge. You may need something beyond what the existing methods do, but I think learning is a critical part of AGI; it's actually almost the defining feature. When we say general, we mean general learning: can it learn new knowledge, and can it learn across any domain? That's the general part. So for me, learning is synonymous with intelligence and always has been.
>> Okay. So if learning is synonymous with intelligence,
>> and these models still don't have the ability to continually learn?
>> No.
>> Like I said earlier, it has goldfish brain. It can search the internet and figure something out, but that doesn't change the model; it will just forget it after the session. Do you have a theory as to how the continual learning problem can be solved, and do you want to share it with us all? [laughter]
>> I can give you some clues. We are working very hard on it. I think we've done the best work on this in the past with things like AlphaZero, which learned from scratch, and versions of AlphaGo; AlphaGo Zero also learned on top of the knowledge it already had. So we've done it in much narrower domains; games are obviously a lot easier than the messy real world. It remains to be seen whether those kinds of techniques will really scale and generalize to the real world and actual real-world problems. But at least the methods we know can do some pretty impressive things, and so now the question, at least in my mind, is whether we can blend that with these big foundation models. Of course, the foundation models are learning during training, but we would love them to learn out in the wild, including things like personalization. I think that's going to happen, and I feel that's a critical part of building a great assistant: it understands you, and it works for you. We released our first versions of that just last week; Personal Intelligence is the first baby step toward that. But you want to do it with more than just having your data in the context window. You want something a bit deeper than that, which, as you say, actually changes the model over time. That's what ideally you would have, and that technique has not been cracked yet.
>> We've brought up AGI a couple of times, so let me put this to you. I was speaking with Sam Altman toward the end of the year, and I said, you seem to be saying two things: we're not at AGI yet, but every time you talk about what GPT models can do, it seems like it fits your definition. And he said that AGI is underdefined, and what he wishes everybody could agree to is that we've sort of whooshed by AGI and are moving toward superintelligence. Do you agree with that?
>> I'm sure he does wish that, but no, absolutely not. I don't think AGI should be turned into a marketing term or used for commercial gain. There has always been a scientific definition of it. My definition is a system that can exhibit all the cognitive capabilities humans can, and I mean all. So that means the highest levels of human creativity, the kind we always celebrate in the scientists and artists we admire. It means not just solving a maths equation or a conjecture, but coming up with a breakthrough conjecture; that's much harder. Not solving some problem in physics or chemistry, even one like AlphaFold and protein folding, but actually coming up with a new theory of physics, something like Einstein did with general relativity. Can a system come up with that? Because of course we can: the smartest humans, with their brain architectures, have been able to do that throughout history. And the same on the art side: not just creating a pastiche of what's known, but actually being Picasso or Mozart and creating a completely new genre of art we had never seen before. Today's systems, in my opinion, are nowhere near that. It doesn't matter how many Erdős problems you solve; it's good that we're doing those things, but I think it's far, far from a true invention, from what someone like a Ramanujan would have been able to do. And you need a system that can potentially do that across all these domains. Then on top of that, I'd add physical intelligence, because of course we can play sports and control our bodies to amazing levels, like the elite sportspeople walking around here today in Davos, and we're still way off that in robotics, as another example. So I think an AGI system would have to be able to do all of those things to really fulfill the original goal of the AI field. And I think we're 5 to 10 years away from that.
>> I think the argument would be that if something can do all those things, it would be considered superintelligence. But you think AGI is a good term?
>> No, of course not, because individual humans can come up with new theories. Einstein did, Feynman did, all the greats, all my scientific heroes; they were able to do that. It's rare, but it's possible with the human brain architecture. So superintelligence is another concept worth talking about, but that would be things that can really go beyond what human intelligence can do. We can't think in 14 dimensions or plug weather satellites into our brains, not yet anyway. Those things are truly beyond human, or superhuman, and that's a whole other debate to have once we get to AGI.
>> I was listening to you recently, and something you said really surprised me. You were asked on the Google DeepMind podcast, which is a great listen, about a system today that is close to AGI. I thought it might be Gemini 3. [laughter]
>> You named Nano Banana.
>> Yes.
>> The image generator.
>> Yes.
>> What?
>> Well, you know, sometimes you have to have these fun names and have fun with them.
>> But how is the image generator close to AGI?
>> Well, of course, look, let's take image generators, but also our video generator Veo, which is the state of the art in video generation; I think that's even more interesting from an AGI perspective. You can think of a video model that can generate 10 or 20 seconds of a realistic scene. It's sort of a model of the physical world; intuitive physics, we'd sometimes call it. It has sort of intuitively understood how liquids and objects behave in the world. And one way to exhibit understanding is to be able to generate it, at least accurately enough to be satisfying to the human eye. Obviously it's not completely accurate from a physics point of view, and we're going to improve that. But these are steps toward this idea of a world model: a system that can understand the world, its mechanics, and its causality. And that, I think, would be essential for AGI, because it would allow these systems to plan long-term in the real world, perhaps over very long time horizons, which of course we as humans can do. You know: I'll spend four years getting a degree so that I have more qualifications so that in 10 years I'll have a better job. These are very long-term plans that we all make quite effortlessly, and at the moment, with these systems, we still don't know how to do that; we can do short-term plans over one time scale. But I think you need these kinds of world models, and if you think about robotics, that's exactly what you want: robots planning in the real world, able to imagine many trajectories from the current situation they're in, in order to complete some task. And then finally, from our point of view, this is why we've built Gemini as multimodal from the beginning, able to deal with video and images, and eventually to converge all of that into one model; that's our plan, and it will be very useful for a universal assistant as well.
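The planning loop described above, imagining many trajectories through a world model and acting on the best one, can be sketched as a toy planner. A real system would use a learned model and sample trajectories; this illustration hand-codes the world as a 1-D line and simply enumerates every short action sequence:

```python
# Sketch of planning with a world model: imagine every short action
# sequence, roll each one through a (here, hand-coded) model of the
# world's dynamics, and execute the first action of the best imagined
# trajectory. The model, actions, and cost are all illustrative.
from itertools import product

def model_step(pos, action):
    # Stand-in world model: predicted next position on a 1-D line.
    return pos + {"left": -1, "right": +1}[action]

def imagined_cost(pos, goal, seq):
    for action in seq:          # imagine the whole trajectory
        pos = model_step(pos, action)
    return abs(goal - pos)      # how far from the goal we would end up

def plan(pos, goal, horizon=3):
    candidates = product(["left", "right"], repeat=horizon)
    best = min(candidates, key=lambda seq: imagined_cost(pos, goal, seq))
    return best[0]              # execute only the first action, then replan

print(plan(pos=0, goal=3))   # goal is to the right, so it plans "right"
print(plan(pos=0, goal=-3))  # goal is to the left, so it plans "left"
```

The key property is that nothing is executed in the real world while planning: all the trial and error happens inside the model, which is exactly the appeal Hassabis describes for robotics.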
>> So let's talk product a little bit. I watched the documentary The Thinking Game, along with 300 million other people. There was something kind of interesting that happened there. Throughout the documentary, you and some colleagues kept pointing your phones at things and asking an assistant what was going on, and I was yelling at the computer, as I usually do, saying, this guy needs glasses; he needs smart glasses to be able to do this. The phone is the wrong form factor. What is your vision for AI glasses, and when is the rollout happening?
>> Yeah, I think you're exactly right, and that was our conclusion. It's very obvious when you dogfood these things internally. As you saw from the film, you're holding up your phone to get it to tell you about the real world, and it's amazing; it works. But it's clearly not the right form factor for a lot of things you want to do: cooking, or roaming around the city asking for directions or recommendations, or even helping the partially sighted; there's a huge use case there, I think, to help with those types of situations. And for that, you need something that's hands-free, and the obvious thing, for those of us who wear glasses like me anyway, is to put it on glasses. But there may well be other devices too. I'm not sure glasses are the final form factor, but they're obviously a clear next form factor. And of course, at Google and Alphabet, we have a long history with glasses; maybe we were a bit too early in the past. My analysis of it, from talking to the people who worked on that project, came down to a couple of things. The form factor was a bit too chunky and clunky, along with the battery life and those kinds of things, which are now more or less solved. But I think what it was missing was a killer app, and I think the killer app is a universal digital assistant that's with you, helping you in your everyday life, available to you on any surface: on your computer, on your browser, on your phone, but also on devices like glasses when you're walking around the city. It needs to be kind of seamless, and to know and understand each of those contexts around you. And I think we're close now, especially with Gemini 3; I feel we've finally got AI that is maybe powerful enough to make that a reality. It's one of the most exciting projects we're working on, I would say, and one of the things I'm personally working on is making smart glasses really work. We've done some great partnerships with Warby Parker and Gentle Monster and Samsung to build these next-generation glasses, and you should start seeing that maybe by the summer.
>> Yeah, Warby Parker did have a filing that said these glasses are coming out pretty soon this year.
>> Yeah, and the prototype design. We're in the prototype phase, so it depends how quickly that advances, but I think it's going to happen very soon, and I think it'll be a new category-defining technology.
>> Given your personal involvement, is it safe to say that this is a pretty important initiative for Google?
>> Well, yes, but it's not just that it's important. Obviously I like spending my own time on important things, but I like to be pushing on the most cutting-edge thing, and that's often the hardest thing: picking interim goals, giving confidence to the team, and also just understanding whether the timing is right. Over the years I've been doing this, the decades now, I've gotten quite good at that, so I try to be at the most cutting-edge parts, where I feel I can make the most difference. So things like glasses, robotics, and world models are what I'm spending time on.
>> Right. Okay, so the timing is right for glasses.
>> Let's talk about ads.
>> Sure.
>> Is the timing right for ads? Let me set it up. There's been some news that Gemini might include ads, and some news that some of your competitors might include ads. The funniest thing I saw about that on social media was someone who said, "These people are nowhere close to AGI. It's not going to be this world-disrupting technology if the business model is advertising." Do you agree?
>> Well, it's interesting. I think those are tells; actions speak louder than words. Going back to the original conversation we were having, with Sam and others claiming AGI is around the corner: why would you bother with ads, then? So that is, I think, a reasonable question to ask. But look, from our point of view, we have no plans at the moment to do ads, if you're talking about the Gemini app specifically. Obviously we're going to watch very carefully the outcome of what ChatGPT says it's going to do. I think it has to be handled very carefully, because the dichotomy I see is this: if you want an assistant that works for you, what is the most important thing? Trust. So trust and security and privacy, because you potentially want to share your life with that assistant, and you want to be confident it's working on your behalf and with your best interests. So you've got to be careful. I think there are ways one could do it, but you've got to be careful that the advertising model doesn't bleed into that and confuse the user as to what the assistant is recommending. I think that's going to be an interesting challenge in that space.
>> And that's what not to do. Sundar, on a recent earnings call, said there are some ideas within Google about the right way to approach this.
>> Sure.
>> How do you approach advertising?
>> Well, we're still brainstorming that, but there are also very interesting other revenue models out there when you think about glasses and devices. So it's going to be interesting to see. I don't think we've reached any strong conclusions on that, but it's an area that needs very careful thought.
>> Just to get a definitive answer from you: I think you've given it, but I'm going to do it one more time. I read before we met, and this is from last year, that Google has told advertisers in recent days that it plans to bring ads to its AI chatbot, Gemini, in 2026.
>> Nope. We have no current plans. That's all I can say.
>> That's pretty definitive. So, all right, let's keep going through some of your competitors. Anthropic: Claude Code and Claude Cowork have caused a tremendous amount of buzz.
>> Yeah.
>> It is amazing to see what some people have done.
>> Mhm.
>> I saw a post from an ex-Amazon executive who said he built a custom CRM in a weekend, or actually a day and a half; let's call it a weekend. What do you think about it, and do you plan to have an answer to it?
>> It's very exciting, and kudos to Anthropic; I think they built a very good model there with Claude Code. We're very happy with the current coding capabilities of Gemini 3. It's very good at certain things like front-end work. I've been using it over Christmas to prototype games, so it's amazing; it's getting me back into programming. I love the whole vibe-coding wave that's happening. I think it will open up the whole productivity space to designers, creatives, and artists who previously would have needed access to teams of programmers; now they can probably do a lot more on their own. I think that's going to be amazing once it's out in the world in a more general way, creating lots of new creative opportunities. We're very happy with our work on code, and we've got more to do there. We've just released Antigravity, our own IDE, which is very, very popular; we can't actually serve all the demand we're seeing there. And we're pushing very hard on the coding and tool-use performance of Gemini. But coding is one thing that I think Anthropic have fully focused on. They don't make image models, multimodal models, or world models; they just do coding and language models, and they're very, very good at that. We're pleased to be partnering on that on the one hand, and it also gives us something to push against, to improve our own models.
>> Let's talk broadly about the AI industry business. I have a theory for how this could all fall apart, and I want to run it by you. It's a three-step process. The first step is that large language model training runs produce limited returns. The second is that there are flash models, like Gemini Flash, that run AI computing as cheaply as search. And step three is that the massive infrastructure commitments that have been made become somewhat useless given those two factors, and there's a cascading collapse. Is that a legitimate worry?
>> I think it's a plausible scenario, but not, in my opinion, the likely one. In my mind there's no doubt AI has already proved out enough, in our work in things like science and AlphaFold and drug discovery, that it's here to stay. It's not as if tomorrow we'll find out AI doesn't work; we've blasted way past that. So I think it's clearly going to be the most transformative technology in human history. There's maybe a question mark about timelines: is it 2 years or 5 years? Either way, that's very soon for something this transformative, and I think we're still in the nascent era of figuring out how to make use of it and deploy it, because the technology is improving so fast. I think there's actually a huge capability overhang: even today's models can do things that maybe even we, the people building them, don't fully know about. So there's a vast amount of product opportunity that we see, and as Google we've only just started to scratch the surface of natively plugging these things into our amazing existing products, let alone building new ones. AI inbox, which we've just started trialing: who wants to do email admin? Wouldn't we all love that to just go away? That's my number one pain point in my working day, and there are so many examples like that just waiting to be addressed: agents in browsers, helping out with YouTube, and obviously we're now powering Search with it. So I think there are enormous opportunities. And if you're talking about the AI bubble, if that's the question...
>> I think it's fine. It seems like that's the question.
>> I'm very happy to answer it, because my view is it's not binary, in a bubble or not in a bubble. I think parts of the AI industry probably are, and for other parts it remains to be seen. So, some of the things: when you see seed rounds of tens of billions of dollars for companies that basically have no product or research, just some people coming together, that seems a bit unsustainable to me in a normal market, a bit frothy. On the other hand, businesses like ours have massive underlying businesses and products where it's very obvious how AI would increase the efficiency or productivity of using those products. And then it remains to be seen how popular the monetization of these new AI-native products, like chatbots and glasses, will be. We'll have to see. I think there will be enormous markets, but they're yet to be proven out. From my perspective, running Google DeepMind, my job is to make sure that whatever happens with an AI bubble, whether it bursts or there isn't one and things continue, we win either way. And I think we're incredibly well positioned as Alphabet in either case: doubling down on existing businesses in the one case, or being at the forefront and the frontier in the bull case.
>> Going back to The Thinking Game, and speaking of the way this will impact the economy, I started to feel bad for the opponents of your technology. Lee Sedol.
>> Okay.
>> Demoralized.
>> Sure.
>> This guy MaNa, who played StarCraft, beat your bot but realized that it's basically over for humans versus machines. Now we're all up against this in some way, as this stuff makes its way into knowledge work.
>> I thought you meant our AI competitors. Them I'm okay with; I don't feel sad about that. So, the relentless progress of AI. You mean the gamers?
>> Gamers, yeah. You made me feel bad for the gamers. But I want to ask about this: we're going to have the same situation with knowledge work. These models that performed admirably against the world's best StarCraft and Go players are now starting to do our work. Are we going to end up in the same position?
>> Well, look, given you brought up games as an example, let's look at what's happened in games. Chess: we've had chess computers better than Garry Kasparov since I was a teenager, in the '90s, right? They weren't general AI systems, but there was Deep Blue. And chess is more popular than ever. No one's interested in watching computers play computers; we're interested in Magnus Carlsen playing the other top chess players in the world. Interestingly, in Go, the best player in the world is South Korean, and he was about 15, I think, when the AlphaGo match happened. He's in his mid-20s now, and by Elo rating he's by far the strongest player there's ever been, because he learned natively, young enough. He's the first generation, you could say, that learned with AlphaGo's knowledge in the knowledge pool, and he may actually be stronger than AlphaGo was back then. And we all still enjoy StarCraft and all the other computer games. We enjoy human endeavor. It's a bit similar to how we still love the 100 meters at the Olympics, even though we have vehicles that can go way faster than Usain Bolt; that's a different thing, right? So I think we have an infinite capacity to adapt and evolve with our technologies. Why is that? Because we are general intelligences. That's the thing: we are AGI systems ourselves; obviously we're not artificial, but we are general systems, capable of inventing science, and we're tool-making animals. That's what separates us humans from the other animals: we're able to make tools, all of modern civilization including computers, and of course AI, the ultimate expression of computers. That has all come from our human minds, which evolved for a hunter-gatherer lifestyle. So it's kind of amazing, and it shows how general we are, that we were able to get to the modern civilization we see around us today, where we're talking about things like AI and science and physics. And I think we'll adapt again. But there is an important question, actually, beyond the economic one about jobs, and that is purpose and meaning, because we all get a lot of our purpose and meaning from the jobs we do; I certainly do from the science I do. So what happens when a lot of that is automated? That's why I've been calling for new great philosophers, actually. It will be a change to the human condition, but I don't think it necessarily has to be worse. It's like the Industrial Revolution, maybe 10x that, and we'll have to adapt again. I think we'll find new meaning, and we already do a lot of things today that are not just for economic gain: art, extreme sports, exploration, many of these things. Maybe we'll have much more sophisticated, esoteric versions of those things in the future.
>> Okay, two minutes left. I have two questions; I don't know if we're going to get to both of them. Let me ask the one I most want to know the answer to. In a recent interview, you said you have a theory that information is the most fundamental unit of the universe. Not energy, not matter: information.
>> Yeah.
>> How?
>> Well, look, I don't know if we're going to cover this in 2 minutes, [laughter] but with energy and matter, I think a lot of people think of them as isomorphic with information. And I think information is really the right way to understand the universe. If you think of biology and living systems, we're information systems that are resisting entropy, right? We're trying to retain our structure, retain our information, in the face of the randomness happening around us. And I think you can look at that on a larger physics scale too, not just in biology: things like mountains and planets and asteroids have all been subject to some kind of selection pressure. Not Darwinian evolution, but some kind of external pressure. And the fact that they've been stable over a long period of time means that information is kind of stable and meaningful. So I think one could view the world in terms of its complexity, its information complexity. The reason I'm thinking about all of this is because of things like AlphaGo and AlphaFold, especially AlphaFold, where we solved essentially all the protein structures known to science. How did we do that? Well, because only a certain number of the almost infinite possibilities of protein structures are stable, and those are the ones you've got to find. So you've got to understand that topology, that information topology, and follow it. Then suddenly problems that seemed intractable, because how can you find the needle in the haystack, actually become very tractable if you understand the energy landscape, or the information landscape, around them. And that's how I think we'll eventually solve most diseases and come up with new drugs, new materials, new superconductors, with AI helping us navigate that information landscape.
>> Demis, before we go, I just want to wrap with this. Well, maybe quickly this first one and then a big question at the end. In The Thinking Game, speaking of health and AI, there's this moment where there's a discussion in the lab about whether to release the results of AlphaFold, and you kind of sit there adamantly and you're like, why are we going through a process? Release it. Release it now.
>> Talk a little bit about the lesson from there.
>> Yeah. Well, look, we started AlphaFold to crack an unbelievably tough scientific challenge, the 50-year grand challenge of protein folding and protein structure prediction. And the reason we worked on it, and the reason we put so much effort into it, is we thought it was a root node problem: if we could solve it and put that out in the world, it could have an amazing, untold impact on things like human health and our understanding of biology. But we as a team, no matter how talented or hardworking we are, would only be able to scratch the surface of a tiny amount of that potential on our own. That's clear. So in that case it was obviously the right thing to do, to maximize the benefit to the world, to put it out there for the massive scientific community to build on top of and use AlphaFold. And it's been incredibly gratifying to see 3 million researchers around the world use it in their important research. I think in future almost every single drug that's discovered will probably have used AlphaFold at some point in the process, which is amazing for us, and really that's what we do all the work for.
>> I also read that moment, tell me if I'm wrong, as something of a metaphor: a small, passionate AI division inside a big company yelling, get this out, cut the red tape.
>> Yeah, potentially, but look, we've had amazing support from Google from the beginning. The reason we joined forces with Google back in 2014 is that Google itself is a scientific, research, engineering, technical company. It always has been, and has that at its core. That's why I think we have the scientific method, that thoughtful, rigorous approach, in everything we do. So of course they're going to love something like AlphaFold.
>> Okay, here's the big question at the end. You built AlphaGo, trained the computer to play Go on human knowledge, and then once it mastered human-level play, you kind of let it loose.
>> Yes.
>> With a program called AlphaZero, and it started doing things that you could never even imagine, making new circuits in ways that surprised you.
>> Eventually, maybe there will come a time when LLMs, or some version of them, reach a mastery of human knowledge in the same way. What is going to happen when you then let that loose and it potentially does the same thing as AlphaZero?
>> I think it'd be very exciting. I mean, that to me would be the AGI moment: then it will discover a new superconductor, a room-temperature superconductor that's possible within the laws of physics but where we just haven't found that needle in the haystack, or a new source of energy, or a new way to build optimal batteries. I think all of those things will become possible, and indeed not just possible, I think they will happen, once we get to a system that has, first of all, human-level knowledge, and then there'll be some techniques, maybe it will have to help invent some of those techniques, but kind of like AlphaZero, that will allow it to go beyond into new, uncharted territory.
>> That idea of it, like, plugging a whole system into its brain, like it's going to be on that level.
>> Exactly.
>> All right, so exciting times. Demis, thanks for coming on the show.
>> Thank you.
>> Thanks, everybody.
[applause]
>> Thank you so much.