How Google DeepMind Operates & Experiments — With Lila Ibrahim and James Manyika

Channel: Alex Kantrowitz

Published at: 2026-02-18

YouTube video id: MkZRak7lVcA

Source: https://www.youtube.com/watch?v=MkZRak7lVcA

How does Google DeepMind operate
and make bets? And what's making Google
more experimental? Let's talk about it
with two Google leaders right after
this. Welcome to Big Technology Podcast,
a show for cool-headed and nuanced
conversation about the tech world and
beyond. We have a great show for you
today because we're going to go deep
inside the way Google's AI and
technology research operations work. We
have two great guests with us today.
Lila Ibrahim is here. She is the chief
operating officer of Google DeepMind.
Lila, welcome.
>> Thank you.
>> And we're also joined by James Manyika.
James is the SVP of research, labs,
technology, and society at
Google. James, welcome.
>> Well, thanks for having me.
>> And of course, this is our concluding
conversation in our series at
Davos. And we do have a live
audience. Live audience, make some
noise. Let them know you're here.
[laughter]
>> All right. So much to get to,
not a lot of time. Let's just start with
the way that Google DeepMind operates.
Demis Hassabis, the CEO of Google DeepMind,
who was recently on the show, has
described DeepMind as sort of a
modern-day Bell Labs. But what does
that mean, Lila? Can you tell us a little
bit about how the research operates? Is it
a lab? An operation? A company? How does it
operate?
>> Yeah. Well, maybe I should start with
our mission because I think everything
is kind of based off of that, which is
to build AI responsibly to benefit
humanity. And so the first thing we do
is set really ambitious research
agendas. We structure them so that
we're looking at what the big
problems are, but not telling people how
to do it. And when you think about how
we first approached that, it's really
about taking inspiration from the golden
era of Bell Labs, but also government
programs like the Apollo program, and
even more recently Pixar. It's all
focused around bringing in really great
talent and creating an environment for
them to succeed and to explore. So the
first thing is that big research agenda:
telling people the area
to focus on, but not how to do their job.
The second thing, really because it's
such a broad agenda, is we want to build
interdisciplinary teams. How do you
create a culture where you can have a
bioethicist next to a computer scientist
and a neuroscientist? Because we think
that's really where the magic happens
and unlocks the work. And this type of
approach has resulted in such
extraordinary efforts. We're also not
afraid to explore and then ask, when is
it time? I think Demis has a remarkable
way of measuring time: time to explore,
are we setting really ambitious goals,
how are we making progress towards
them? And not being shy to say, okay,
now is the time to take a step back and
pause it, or double down. A great example
of that is that over the past few years
we've been doing a lot of work around
one science area, learning science: how
do people learn, and can we improve it?
>> Right.
>> And then this year Demis was like, okay,
Gemini is good enough; it's time to
infuse everything we've done with the
industry around learning science into
Gemini. That was one of our focus
areas, to really advance how Gemini
could serve learners. So there's
something I think quite magical within
GDM, Google DeepMind, about timing.
>> Okay, GDM. I guess we're going to go
with that, like everybody in the tech
industry.
>> I almost caught myself saying it.
>> So let's talk about that. I just want
to talk through process a little bit.
The way that you just described it,
Demis said that Gemini was ready for
learning, and then Google DeepMind
started to pursue it. How much of what
Google DeepMind works on is, you know,
top down versus bottom up? A way that
I've heard OpenAI describe the way that
it works is like a bunch of different
startups within a larger company. Is
that a similar way that Google operates,
or does it come more from the top?
>> Well, I, you know, because our mission
is so ambitious, you know, we're really
trying to understand what are the big
challenges where AI can help us unlock
our understanding of the universe around
us and solve some of humanity's biggest
challenges. And it's broad enough that
we can do things like weather
exploration, trying to improve
weather forecasting. How do we do
AlphaFold and protein structure
prediction, to help us better understand
diseases so we can come up with better
therapeutics? Generative AI: how can we
continue to improve that to make
people's lives better? So
again, we take a very broad portfolio
perspective, but we allow the space for
researchers to explore and that's really
what I meant in the beginning: we've
got to find the right talent. So a
mission-driven culture, and
values-aligned people who want to have
this type of exploration, and the big
impact and scale that we can have by
being part of Google. I would say
some of this is that Demis is quite
remarkable in terms of his thinking in
this space, because he's been doing it
for so long, right? DeepMind
was founded 16 years ago. It's been
kind of a lifelong mission of his, and
yet we have an organization full of
people who are creative, who like to
work in an interdisciplinary
environment, who want to have impact in
this world. So they also come up with
their own approach to things.
>> So a little bit of both.
>> Pardon me? Yeah, a little bit of both.
>> Some top down from Demis, and then some
bottom up.
>> Okay. And
>> Which makes managing that part of the
organizational structure...
>> You want to ask about talent?
[laughter] I will talk with you about
talent for sure. And on that note, how
have things changed? Because, and I'm
going to talk about the tech industry
more broadly here, it seems like there
used to be a moment where a lot of tech
companies gave these talented
people broad leeway to explore things
that might not have immediate results.
Then all of a sudden we got into this
AI race, and many companies brought
their researchers who were working on
long-term projects much closer to the
product. All of a sudden there
was an almost imperative for long-term
research to make immediate product
impact. So has that changed as well
over time? Is that something that's
going on within DeepMind as well?
>> Yeah. I joined
about eight years ago, and we've
definitely been on a journey. But what I
think is so exciting about Google
DeepMind, and I think why so many of our
employees stay so long, is because we
have that breadth of portfolio. There
are some people that want to continue
the deep frontier AI research
that they do, or are more
focused on the science, and we have the
space to do that exploration while also
delivering on the advancements around
generative AI, such as all the progress
we made last year with Gemini.
Okay. Let me take that a step
further. The way that the
transformation within Google has been
described is that instead of having
every product area or product group
chart its own direction on AI,
there's now this central engine room
within the company, which I think is the
AI division, that creates
the AI and then farms it out to these
product areas. So can you talk a little
bit about that process and how that
works?
>> Yeah, and actually I think that's been
one of the exciting things over the past
few years with the combination of Google
Brain and DeepMind: bringing the best
of Google's AI teams and research
together under one roof, where we
could explore such a broad
portfolio. And so we've really been
focused, as you mentioned, on becoming
the AI innovation engine. But I
wouldn't say we farm things out to other
Google teams. We collaborate very
closely with the product areas and their
customers to understand what the needs
are, so that we can build the models
better from the start, and do so in a
very collaborative and responsible way,
such that by the time it goes to
different Google products it's already
been through a lot of that testing
and can be refined for that specific use
case.
>> Okay, one last question.
>> And that's actually helped us. A
result of that, for example, is
Gemini 3. We launched it, and then
it was available to a broad group of
developers and users
>> right away.
>> All right, one last question on this and
then we're going to go to James. And
James, thanks again for being here.
So let me just ask you this. On
our show, we have this hypothesis that
Sundar spent time at McKinsey, and
this is sort of like a McKinsey-style
approach: reorg, centralize, and
then work with the groups. Is there any
truth to that?
Well, you have a former McKinsey person
here who might be able to address the
structure.
>> James,
>> No, I think what you've got going on
is an extraordinary thing, right?
Because on the one hand, you've got the
Gemini program, which underlies all
of this, building the large-scale
models: Gemini itself, Gemini 2.5, 3,
and all of that. And this came
about three years ago, when we put
together the Google Brain team and the
DeepMind team to create the Gemini
program. That program now underlies
all the things across the company. So
you see Gemini show up in Search, in
Google Workspace; it shows up in
all our products, in NotebookLM, and all
of these things. So it's kind of the
foundation, and that's why, as Lila said,
Google DeepMind and the Gemini
program have become the engine room. But
in addition to that, you've got all these
other things going on. There's deep
science going on in the company. I mean,
this idea of tackling
the biggest root-node
problems that open up research and
innovation in so many areas. So you've
got all of that going on too. And then
you've got all these other
special, ambitious projects
working on things like Genie, which
builds world models. You've got work
going on to build special things for
Waymo, to enhance the models
that lead to the Waymo
Driver. So you've
got a lot of these things going on. So I
don't think it's top down as much
as: let's take advantage of the
foundation called the Gemini program.
Make sure that every time we do these
rapid iterations, and you've seen we're
now on a cycle where every six-ish
months there's a new generation of
Gemini, it shows up immediately. As Lila
described, there's no, you know,
shipping delay. The minute the latest
version of Gemini comes out, you're
going to see it in Search, you're going
to see it in the Gemini app itself,
you're going to see it everywhere. So
that's kind of the incredible thing
that's happened over the last
three years.
>> All right, I want to talk about Labs. So
Google Labs: a lot of us who used Google
products in the early days saw this era
of experimentation within Google, and
then Labs went away for a bit. Not that
Labs was the only bit of
experimentation within the company, but
then Labs was revived, and it seems like
we're starting to see many more
experimental projects come out of Google
proper in a way that we hadn't seen in a
long time. So how responsible is Labs
for that? And why is Labs back?
>> Oh, Labs is so much fun. So what
actually happened was three years ago,
in a kind of inspired Sundar moment, he
said, let's reboot Labs; we're in this
AI moment, how do we explore and
experiment and build these
products that are totally AI-first?
So the idea with Labs is, let's take the
most amazing research coming out of
Google DeepMind and Google Research, and
quite frankly any other place in the
company where there's incredible
research, and focus primarily on how
we build experimental AI-first products.
I think what most people probably know
best is what's now
NotebookLM. The way that
started, by the way, is incredible,
because I remember when I first
encountered it.
>> So what is NotebookLM? Tell
the story.
>> So NotebookLM is fun. It started
out as a product called Tailwind. There
were four or five people working on it,
and the idea was, can we build
an AI-native research tool that is
grounded on what you put into it? In
other words, your sources: you
might have books, you might have papers,
you may have drafts, whatever
content you want to ground it on. Put
it in a notebook and be able to engage
with it. That was the conception of the
idea. And in fact, in some ways it got
additional impetus from Steven Johnson,
who's a writer. Steven Johnson
is one of these people who
keeps everything. He has notes from
the '90s and drafts of books and all
kinds of things. He said, I'd love a
product where I can dump all that stuff
in and engage with it. What was I
thinking in 1997? What was that draft I
did? So what NotebookLM has
become is this incredible research tool
grounded on what you've put in. And when
you engage with it and it summarizes or
drafts something, it gives you
citations, and that in some
ways is a key feature of it. If
it says, Alex, you said this, or
your source says this, and summarizes it
in some way, it'll give you citations.
If you want, you can click on the
citations and they take you all the way
back to the original content, right?
>> So it's incredibly useful. Then
a fun thing happened, which was, well,
it's already a very useful tool,
and then we said, well, actually,
sometimes I want to hear my sources
as opposed to just engaging with them.
So we said, okay, the
technology is ready enough; we can
actually add AI Audio Overviews.
>> Which is effectively a podcast you
can have, with two hosts.
>> Actually, the origin of
the idea wasn't even to do that. So
initially the idea was that a few of us,
you know, Jeff Dean, the
legendary Jeff Dean, said, well,
actually, we're reading all these
papers that are coming out at this
incredible pace in the computer science
field; it would be nice to be able to
hear a summary of them verbally while
I'm driving into work or something. Then
I can figure out which papers
I'm going to read. So that
was the original idea. Then they said,
actually, it's
easier to learn stuff when you
hear people talking about it, engaging;
that's why seminars are interesting,
right, as a learning mechanism. So
that's where the idea came from. So we
did these Audio Overviews
in the form of a podcast, a discussion
with two hosts, and now
it's evolved, and that's when the
product just took off.
>> Yeah. Whenever I give a presentation
about AI, that's the party trick: I
build one of these notebooks in front of
the audience, and then I hit play on the
podcast, and for people who haven't seen
it before, it's like a jaw-dropping
moment. And in fact, we've had multiple
people on our YouTube feed and
podcast ask, Alex, did
they train on your voice?
Because it sounds a lot like me. And I
say, no, listen, they always
say, "Let's unpack this," at the
beginning, and you have to understand,
every podcaster says that.
>> [laughter]
>> So, actually, one of the most
fun use cases of a notebook, by the way,
is, because now you can put in things
in all kinds of formats: it can be
papers, it can be YouTube videos, it
can be whatever is on your hard drive.
One of my fun use cases was
when I had to do this thing where I
was seeing all these papers from
literally over 100 countries in
different languages. So I put them all
in and just engaged with the content in
multiple other languages, because
NotebookLM can handle multiple
languages. And now you can do Video
Overviews.
>> I think you can make, not an animated
video, but a video with, like,
graphics
>> with graphics and slides. But I think
this is an example of the kind of thing
that happens in Labs, where we try to
take this incredible research that Lila
and colleagues and others are doing at
Google DeepMind and Google Research and
say, how do we build amazing AI-first
products? Flow is another example. And
if you play
>> I'll tell you a story
about Flow, then I'll let you talk a
little bit more about it. I just did
my first and last mountain climb,
Cotopaxi in Ecuador, and I
wanted to make a video sort of capturing
the moment. But there were a couple
things that happened that I didn't
videotape, because I
decided to spend the climb actually
climbing as opposed to YouTubing,
which is apparently, from what I hear,
rare these days. And there was a moment
where my water bottle fell out of my
backpack and rolled down the glacier and
then kind of disappeared into the
darkness. I wanted to illustrate
that. So I went to Flow, the Google
video generator, and I said, I want to
make an animation, documentary style, to
show this, and slotted that into the
video. So now you can do that. Before, I
would have had to hire an animator; now
you can do it yourself.
>> Yeah. No, it's incredible. But
I think Flow is an example
of the magic that happens in Labs. I
remember a bunch of us got together,
Josh, who runs Labs, and Demis
and a few of us, and said, what if we
put all these tools we now have together
into something that's actually useful?
And in fact, the initial version of it
in some ways was
clunky. Then we said, well, actually,
let's just talk to some actual
filmmakers and get their input. One
of the things that happens in Labs, by
the way, is we try to engage a lot with
creatives and others to help us think
about how we build these tools. So
anyway, that's how Flow came about.
>> Yeah. You can build scene by scene,
prompting into video, and you can have
continuation. I think that's probably
where the name Flow comes from.
>> And what you just said was an
insight that came from filmmakers. In
fact, on the initial version they said,
"No, no, what you've got here is
actually not very useful. I'd like to be
able to build things scene by scene and
be able to stitch them together, be able
to do this." So that's why it's
been helpful. So if you ask what
Labs is, it's a place where we try to
experiment with all these things. At any
one time, we probably have about 30
experiments cooking. So if you go to
the Google Labs site, you'll probably
see about 30 different things.
>> But I have a request for you: broaden
the access, because there are a lot of
projects in there
that seem really interesting to use, but
every time I'm there, it's a waitlist.
>> We'll work on that. We'll
work on that. I mean, so for example,
sometimes we're surprised by what people
find useful. I'll give you an
example. One is Pomelli, which is
a tool for SMBs. Imagine not
your typical techy startup
SMB, but a more
traditional SMB that wants to build
a web presence. You can literally
engage with Pomelli as an SMB and
build a web
presence in incredibly imaginative ways.
So we always have all these things
cooking in Labs. AI Studio is
another example of the kinds of things;
this one is for developers. So we're
trying to think of all these incredible
creatives, whether they're developers,
artists, filmmakers, or musicians, and
create these incredible AI tools for
them.
>> Yeah, there are two that I really
want to get access to, and I think
they're potentially going to be big,
maybe the next NotebookLM. There's CC,
which is an experimental productivity
agent within Google, which looks great.
And then Disco, where
you can build a web app
basically based on links. So if you're
thinking about doing something for
the weekend, you can just open a
bunch of tabs and then it will figure
out what type of app to make for you:
maybe a custom map with dots for each
potential event, and then you can pick
the dates that you want to actually be
in the place that you're thinking about,
and then it will sort of highlight
what's going to be available then. So
this is to both of you. Back in the
day, Google had this concept called 20%
time, where Google employees were
basically empowered to spend 20% of
their time on something that
wasn't core to their job description.
And a lot of big Google products came
out of that; I think Gmail was
a 20% project. So I want to ask you
both about these experimental
projects. Who builds them? And is a
version of 20% time back? Or how does
this, you know, obviously a lot of cool
experiments, how is it happening inside
the company?
Well, I'm happy to start. So I think
effectively that's still
alive. Go back to Labs: if you
think about the things that are in Labs,
I would say something like maybe 80% of
them came out of people actually on
the Labs team. The other 20% came from
20% stuff. I'll give you a good example
on a topic that
>> 20% time still lives within Google.
>> We encourage people to come up with
those things. So I'll give you a good
example in an area that Lila and I
care deeply about, which is education
and learning. Somebody in Google
Research, who was working on something
else, came up with the idea: what if
we created a way for somebody to learn
something their way, however they want
to learn, because it's now possible to
get these tools to help you learn in any
number of ways? That eventually
became Learn Your Way, which is an
experimental product you'll find in
Google Labs.
That was not done by somebody in Labs;
somebody else in another part of the
company had come up with the idea. So we
are constantly getting all these ideas
from across Google about these
incredible things. Another example,
which actually came out of Google
DeepMind and Google Research, is
Co-Scientist, which those teams worked
on, which is a tool for scientists to do
actual scientific discovery. Now you're
going to see that show up in Labs as a
way to test it and get other people to
work on it, but it wasn't, as it were,
built inside Labs. So the notion of
people generating ideas from across the
company is very much alive, and you get
some exciting innovations from that.
>> Lila, do DeepMind researchers have the
ability, if they want to build an
experimental product, to maybe do that?
>> Yeah, I think this is actually just
part of our culture, and that's really
about giving people the chance to
explore, and also taking a very
interdisciplinary approach. It's
actually not just limited to
researchers, which has been quite
exciting. It's about being able to
pull together different perspectives and
trying to solve real challenges. And
sometimes that's even AI tools
to help us accelerate how we're working.
How does our legal team
make the review of research papers
faster and be able to provide feedback?
How do we do more
automated red teaming for our
responsibility team? Or how do we
understand ancient texts? We have a
project because one of our
researchers decided he wanted
to explore not just
intelligence today, but what is it about
knowledge from the past that we might
not know about? So he worked to
come up with a project not just
to be able to date a tablet, but also
to fill in missing gaps and translate
it. And so we now have Project Aeneas,
which is all about ancient texts. So, to
James' point, one of the things that we
have at Google is really smart, curious
people and a culture that supports that
exploration.
>> Yeah. Before we close this segment, let
me talk a little bit about why I'm so
interested in this. Last century, I
think the average company stayed on the
S&P 500 for 67 years. Now
it's like 15 years. And as
this AI moment happens, and
Google's seen this firsthand, right,
things will be moving even faster, and
where ideas come from, the
imperative to experiment and
create new projects, I think that's key
to any company's long-term
sustainability. So it's very
interesting to hear how it operates
within Google.
>> You know, I was going to
comment: I spent some of my
career in venture capital, and I used to
say that that was the most remarkable
place to be, because you'd have these
entrepreneurs with audacious ideas who
wanted to build them. And I think what's
crazy about my
experience at Google is that this is
just part of everyday culture, and it
happens in all parts of the
organization. I think how it comes to
life might be quite different in
Google DeepMind than in other parts of
Google, but the fact is that it's
supported across the entire
organization.
>> Yeah, if
I could add one other piece on this,
Alex. One of the things that I
think is really quite unique about
the research culture at Google, and I'm
going back to your original Bell
Labs question, and this
happens in Google DeepMind and Google
Research, is this idea that we've got to
go from research to reality,
>> right?
>> And I think you see a lot of
these research-originated
breakthrough ideas
then very quickly transition into
real-world impact. I mean, AlphaFold is
a good example, right? An incredible
breakthrough, Nobel Prize worthy and all
of that, but look at what's happened
since then: you now have three and
a half million researchers accessing it
in over 190 countries. You take some
of the breakthroughs in weather
forecasting and prediction; they're
now actually being used in the real
world. We now do flood forecasting,
which is a very incredible research
question, but now it's covering 150
countries with two billion people. So I
think this idea of going from
breakthrough scientific research to
societal impact
is a very unique aspect of
what we do.
>> There's a natural follow-up here that I
have to ask, because if I don't ask it,
the audience is going to be like, why
didn't you ask that? For many years,
Google seemed like, or at least
the perception was that, it was afraid
to ship. Case in point: you created the
Transformer model; ChatGPT is the first
mainstream application built off of
it. In fact, I spoke with Sam
Altman at the end of the year,
and one of the notable things he said in
that interview was that if Google
took us seriously early on, they would
have smashed us, and now they're a
formidable competitor. So has the
imperative to ship become something
that's more important within Google, and
has there been more ambition to bring
these experiments out into the public?
>> I think there is, but I think it's
a natural evolution. I think one
of the things that's important is, you
know, there's an incredible amount of
research breakthroughs going on, and
there's always going to be, at Google, I
think, this productive tension between
is it ready or is it not, and we don't
always get that right. And I
actually think it's
a great tension, because of this idea of
being bold and responsible; I think we
have to live with that tension. So
you've got that going on. But I think
what you also see is a realization that
for many of these experiments and
innovations, there's actually a lot to
learn. This is back to the scientific
method: by having people use it and
experience it, we learn from that.
There's so much red teaming you can do
on a product, and we do a lot of that,
but there's also a lot you can learn
from when people use it, either
usefully or even adversarially; you're
going to learn from that. I
think that's been a bit of the
evolution: that shipping useful
products, and also learning
from that shipping, is very helpful. So
you're seeing us, we like to
talk about this idea of relentless
shipping. We're now on
this cycle with the Gemini models where
every five or six months there's the
latest generation. I think that's part
of what you're seeing going on.
>> Okay. I definitely want to make time to
talk about AI and education, which I
know both of you have really worked on,
but which has been a particular passion
of yours, Lila.
Let's take a break and we'll come
back right after this. And we're back
here on Big Technology Podcast with Lila
Ibrahim, the COO of Google DeepMind, and
James Manyika, SVP of research, labs,
technology, and society at Google. It's
great to have you both. AI and education
has been something that you're
both passionate about and have done a
lot of work on. A recent study that
you did found that 85% of students
18-plus are using AI, and probably the
other 15% aren't telling you. And 81%
of teachers report using AI, which
far surpasses the global average, which
is that 66% of the public uses AI. So
this is making a real impact on
education. Let's just start with
your perspective: is this a net
positive for education? Because I think
the criticisms are out
there, that kids are using it to
cheat and teachers are using it to grade
those cheated papers. What's
happening in
practicality?
Well, I think first of all, this is a
really important area that, as James
mentioned earlier, we're approaching
as we approach everything, which is:
how do we be bold in thinking about how
AI might actually transform how people
learn, and really unlock human
potential, while also being responsible
and thinking about what the risks are,
and making sure that we're investing in
mitigating those? One of the things
that we found in that survey is that
about 80% of the 18-plus learners are
actually finding it helpful for their
education and their learning. So it's
giving them the information they need in
the format that they might need it.
And one of the areas that we really
have been focused on is making sure that
it's not just providing an answer,
but that it will actually take you
through the steps. And this is grounded
in everything we do, which is a
scientific approach. So backing up three
years: we said, let's treat learning
like a first-class science problem. How
do people learn? We have some of that
experience and expertise within
Google, and we also know that the world
is full of people who are studying this.
So we took a very deliberate approach to
collaborate with pedagogy experts and
educators worldwide, and have been doing
a lot of that in what we called LearnLM.
This was the year that we infused
that into Gemini, and then developed
features like Guided Learning in the
Gemini app, where you can go through and
it helps you actually break down the
problem. So it's teaching you how to
learn and how to break down the problem.
And for someone like me, who also
happens to be a parent of teenagers,
I think about this a lot. I have
twin daughters, so I'm constantly
running A/B tests.
>> Yeah, you should [laughter] let one use
AI and make sure the other doesn't and
then see who turns out better.
>> Yeah. You know what's interesting?
Well, I'll take that as input for my
next experiment. But one of my daughters is
dyslexic, and the way the education
system has been built is not for someone
like her. And yet what I have found is
that when she can integrate AI into her
learning process, whether it's breaking
down a math problem or helping her take
her words, which are sometimes
scrambled, and put them into something
more coherent, it's actually giving her
a confidence I have never seen in her
before. And I think back a lot: I also
have a sister with a physical
disability. The tools were not made for
her; the education system was not made
for her. Think about the entire world
and how many students have been left
behind because they just didn't have
access to this technology. So our idea
is: imagine if every student could have
a personalized tutor, and if every
teacher could have a teaching assistant,
where AI is a productivity tool that
could really change the dynamic of how
teachers and students interact. We're
not saying the AI is the magic. The
teacher is still the magic. But it frees
up the teacher to actually do that
human-to-human interaction. And we've
seen some really great progress in a lot
of the work we're doing on productivity
tools for teachers. I was
just in Northern Ireland, and the
teachers there worked with the
government and ran a pilot. The teachers
had little post-it notes, and what they
found was that, on average, they were
saving 10 hours per week per teacher.
Their post-it notes were about how they
were using that time: I'm getting time
back with my family; I can now do lesson
plans for different types of learners
within my 30-plus-student classroom. It
was so encouraging. But there's still a
lot to learn. We're still in the early
stages, and we have to go into this
knowing that
it is high stakes. We're talking about
people's lives and their longevity. But
helping them learn, opening up
opportunities, and then being able to
learn from that and integrate it into
our research is critically important.
>> Yeah. One thing I would add is, I
think one of the things we're learning
is that learning is no different from
other areas of society, right? When a
new technology comes in, you don't just
bolt it onto an existing process and an
existing workflow. You have to almost
reimagine the workflow.
Let me give you an example in learning.
We know there's this issue and concern
around cheating. In a world in which you
have tools like this, I'm not quite sure
you want to do tests and assessment the
old way, for example. So it's actually
quite interesting. Lila described guided
learning, and working with some school
districts, we found that it actually
turns out that when students use guided
learning, they do learn; their mastery
of the subject improves. But this school
district found: actually, you know what,
maybe we should have more tests, because
we know that when students are getting
ready for a test, they actually do use
guided learning. Whereas when they're
just trying to hand in homework at 11:00
p.m. the night before, they don't.
>> Any student watching is going to have a
heart attack here. [laughter]
>> More tests.
>> So what they realized is: oh, let's
do an experiment. What if we actually
have a weekly test?
>> Oh.
>> In other words, let's expand this
window when students are motivated to
turn on guided learning and actually
master the thing, because they're going
to have to take a test. They actually
found that students were learning more.
So that's an example of how maybe we
need to reimagine even what the workflow
and the learning process is, as opposed
to just trying to bolt a technology onto
an existing structure and existing
workflow. So there are a lot of
interesting experiments and innovations
that we're learning a lot from by
talking to teachers and some schools and
school districts. I think we're at the
very early stages of this.
But I think the concerns people have
around cognitive offloading and so
forth, those are real concerns, and we
have to work on that.
>> I do want to talk about that, because
like many things with technology, and
especially AI, I think the concern is
with these uses we're talking about. By
the way, it's amazing that LearnLM will
go step by step and, instead of spitting
out an answer, actually work with the
person using it to help them make
progress. But isn't the issue that some
of the most ambitious people will use
this, and their performance will just go
through the roof? And then it will
create this dichotomy between the people
that use it the right way and those that
use it the wrong way. There was a great
article in the New York Times recently
about how it's not just students, it's
teachers. The headline is "The
Professors Are Using ChatGPT, and Some
Students Aren't Happy About It." And
there's this student at Northeastern who
was reading her professor's slides and
seeing the slides filled with spelling
mistakes and extraneous body parts in
the images, which are telltale signs of
AI. So what do you think about the fact
that this could create an even broader
divergence in society?
>> Actually, it reminds me a lot of when
we introduced computers into classrooms
and into universities. I think there are
actually quite a few lessons from those
days that we're trying to explore and
research. One is what we can do about
that. But one thing we are also
separately trying to do is convene
leaders to talk about how to approach
this from a systems perspective:
bringing together administrators to say,
what is the framework they want to use
within their organizations for
responsible usage of the technology? I
think one of the challenges we have
right now is that it's a little bit of
everything happening, rather than taking
an exploratory approach to say: listen,
AI isn't going away. Equitable access
and literacy are important. Some
students might be using it because they
want to get ahead. Others are afraid
they're going to be perceived as
cheating, so they're not going to use
it. And that, to your point, creates a
separation. And sometimes we see that
based on gender, too, by the way.
>> Oh.
>> So I think what we can do is bring
together leaders to explore how we enter
this next chapter. How do we start to
set the guardrails in a way that
maximizes the benefits while mitigating
the risks? James and I and a few other
colleagues co-hosted an event late last
year to start exploring and sharing best
practices: what are people experimenting
with, what is working and what is not.
We had our researchers there as well. We
also did some hands-on training so that
teachers could actually learn how to use
the tools responsibly. Again, I think
this is more about unlocking
productivity and potential versus
replacement. So we have to work on
making sure the incentive models are in
place as well.
>> That's for sure. Okay, we have 10
minutes left, and there's so much
experimental technology I want to talk
about. So can we just use our remaining
time to go through four of your
cutting-edge technology approaches, or
disciplines, maybe two minutes each or
so? We'll just talk about the state of
them. It's definitely too much to cover
in a short amount of time, but I don't
want to leave here without touching on
them. So first to you, James: the state
of quantum. It seems like it's moving
faster than a lot of people anticipated.
>> Quantum. You know, we have an
incredible Quantum AI team that's doing
extraordinary, pathbreaking work, and I
think the headline on this is that
quantum computing is actually making
more progress than people realize. Keep
in mind that what everybody's aiming for
in quantum is: how do we build a fully
error-corrected quantum computer? And
there are lots of different approaches
to this. I think the dominant approach
most people are taking is the
superconducting qubits approach. That's
what our team is doing, and there are
other teams in the world doing that.
It's a very complex way of doing it, but
people think it's the best shot at it.
But there are other mechanisms. There
are neutral-atom approaches; there's a
whole range of approaches. I think the
progress that's happening is as follows.
The underlying chips are making
incredible progress. Our Willow chip,
for example, hit a big milestone about a
year, year and a half ago. It was able
to do a benchmark computation called RCS
that would take a classical frontier
supercomputer 10 septillion years, a
number with about 25 zeros, and it was
able to do it in under five minutes. It
was also able to correct errors in a
fundamentally breakthrough way. Error
correction has always been the other big
barrier in quantum computing: how can
you reduce the error rate as you scale
up and add qubits? So the real
breakthrough, despite the spectacular
number I told you about, the real
breakthrough, which is what got us the
Breakthrough of the Year prize, was that
for the first time we were able to show
you can do what's called below-threshold
error correction: as you scale up the
system, the error rates actually go
down, which is exactly what you'd want,
as opposed to going up. So that was a
big deal. The other big deal was
actually late last year. All these
benchmarks, including the one I just
told you about, are computations that
are fun and great for benchmarking, but
they're not actually useful for
anything. Last year, though, we were
able to show probably the first useful
computation. This is our Quantum Echoes
result. It was a big enough deal that it
made the cover of Nature, which is
great; our teams are excited about that.
What it showed was an actual useful
computation for figuring out the spin
dynamics of molecules, which could not
have been done any other way, and we
were able to validate the result with
colleagues at Berkeley, who validated it
in a lab with NMR data. So that was the
first example of a useful computation.
So you
put all that together, you realize that
the progress people had thought was
decades away is actually happening much
faster. So I actually think we're going
to start to see useful applications from
quantum computing in the next five or so
years, and that's pretty exciting.
>> Definitely. We're going to spend much
more time on this show thinking about
that, I think. Materials science, I
think, is one of the more overlooked
areas of AI research, where you can
actually find new materials through AI
predictive techniques. So, Lila, talk a
little bit about where that stands
today.
>> Yeah. It goes back to what are
some of the root-node problems where, if
AI can help us unlock a basic
understanding of the universe around us,
it can open an entire field for
ourselves and other researchers to build
upon. AlphaFold is one of those. And
GNoME, the materials science one you've
just mentioned, was really exciting,
because we basically went from 40,000
known stable crystals to 400,000-plus
that are now being tested in research
and in labs. What that really means: if
you think about things like how we build
better batteries for electric vehicles,
or superconductors for supercomputers,
one way we can do that is through new
materials. So I think we're still quite
early in this stage, but we believe this
is something promising that could really
change how we work and live.
>> And what do we get if new materials
are discovered? Is it something that's
maybe t-shirt thinness but winter-coat
warmth? Yeah.
>> Yeah.
>> I mean, looking at the background
behind you, that's all I can think of.
[laughter]
>> So [gasps]
>> Yeah, I think, when you look at
everything around us, like I said, if
you think about even batteries and
electric vehicles: how do you make a
vehicle light, extend its range, or
improve its charging capacity? Being
able to have better batteries, and not
be limited by some of today's physics, I
think things like that are going to be
possible with some of these new
materials.
>> Okay. Now, weather prediction with AI
is actually something Google's working
on pretty diligently,
>> in many different ways. Yeah.
>> Yeah. We actually have a very broad
program around weather, across Google
DeepMind and Google Research. There are
so many things you want to predict with
weather. One is just forecasts: what's
the weather going to be like tomorrow,
next week? There's that kind of work.
GraphCast, which came out of Google
DeepMind, has been an incredible
state-of-the-art model for that. You're
also trying to predict other things in
weather: monsoons, cyclones, trying to
figure out when floods are going to
happen, all these extreme weather
events. So we actually have a very broad
program where we're trying to use the
latest AI innovations to make
predictions. I'll give you an example of
one, actually two quick ones.
>> No, no, do one quick one, because I
have to ask you about Suncatcher. I want
to talk about Suncatcher, unless your
team gives me more time. Let's just do
one example.
>> Well, let me do one example, because
this one actually affects people and
saves lives.
>> So it has always been known that if
you could predict floods with more than
six days' advance notice, you can
actually save lives. In fact, the UN
estimates you can prevent probably half
the damage that happens. So this has
always been one of these challenges: can
you do that? Our team, starting about
two and a half years ago, built a model
to predict these so-called riverine
floods, and we tried it in Bangladesh.
It worked. Now fast-forward to today:
we're making these riverine flood
predictions covering 150 countries, in
places where more than two billion
people live. [clears throat]
>> I think that's extraordinary. So
that's an example of breakthrough
innovation leading all the way to useful
societal impact.
>> We're working with the National
Hurricane Center as well, where we can
predict, 15 days in advance, 50
different routes for hurricanes, and we
actually tracked Hurricane Melissa. So
you start to think about what this type
of insight might mean for crisis
preparedness.
>> Yeah. And then more mundane things
like airplane schedules: if you know a
storm is coming, you can sort of take
care of that in advance. Okay, last
thing. Suncatcher. What is Suncatcher?
>> So this is classic Google moonshot
fashion, where you say: okay, imagine
how we train AI systems today, and then
imagine how we'll be doing it 100 years
from now, given the compute and energy
requirements needed to train models. You
say: 100 years from now, of course we'll
be doing it in space, because the sun
has 100 trillion times more energy and
it's available 24/7. That's probably how
we're going to be doing it in the
future. So why don't we try to build
towards that future? Project Suncatcher
is a moonshot, in classic Google
fashion, where we said: let's start to
build towards that. We've already done
the first few key milestones. We're
going to try to put TPUs, special-
purpose AI chips, in space and do
training runs.
>> We're sending chips to space.
>> Chips to space.
>> This is actually happening.
>> Yeah. So the first milestone: we're
hoping that in 2027 we'll have done a
couple of training runs in space. This
is Project Suncatcher, with the idea of
building towards this future, because
this is probably how we're going to be
doing it. People imagine Dyson spheres
and all these things: of course you want
to harness the energy capacity in your
system, in our case in our solar system
first, and then eventually in the
galaxy, so you're going to do things in
space. There was this idea a former
Googler had that if we're going to get
to AGI, maybe the world is going to have
to be papered with data centers; but if
you put them in space, maybe we can keep
the rest of the Earth for us. So stay
tuned. Our next milestone will be in
2027, when hopefully we'll have done
some training runs.
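The "100 trillion times" figure can be sanity-checked with rough public numbers. Both values below are approximations supplied here for the sketch, not figures from the conversation:

```python
# Rough public figures, supplied as assumptions:
SUN_OUTPUT_W = 3.8e26          # total solar luminosity, ~3.8 x 10^26 watts
HUMAN_ELECTRICITY_W = 3.4e12   # ~30,000 TWh/year, averaged over a year

# Ratio of the Sun's continuous power output to humanity's
# average electricity production.
ratio = SUN_OUTPUT_W / HUMAN_ELECTRICITY_W
print(f"Sun / humanity: {ratio:.1e}")  # on the order of 1e14, ~100 trillion
```

So the order of magnitude in the conversation checks out, at least when measured against electricity production rather than total energy use.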
>> Would either of you go to space if
you had the opportunity?
>> Do you trust the current spaceships?
>> Yeah, they're pretty good. I mean, I
grew up wanting to be an astronaut. I
failed, obviously.
>> Yeah. Oh, [laughter]
>> I did not, and I will not be going to
space. All right.
>> I'm more interested right now in how
we make Earth better. And I think that's
where AI can really make a difference.
>> Yeah. Imagine focusing on this
planet. That's an idea. All right, Lila,
James, thank you so much for coming on
the show. Really appreciate it.
>> Thanks for having us, Alex.
>> All right, everybody. Thank you for
listening and watching, and thank you
again to Qualcomm for having us at your
space here in Davos. This concludes
[music] our series of episodes at Davos.
It's been a great four, five episodes
actually, if you include the one we did
with Demis. [music] And we'll see you
next time on Big Technology Podcast.
Thank you. Thank you.
[applause]
Thank you.