Google's "X" Experimental Unit CEO Astro Teller — The Incredible Promise Of AI & Its Dangers

Channel: Alex Kantrowitz

Published at: 2023-06-29

YouTube video id: 74ZkYreyoCI

Source: https://www.youtube.com/watch?v=74ZkYreyoCI

[Music]
Welcome to the stage: founder of Big Technology, Alex Kantrowitz.
[Applause]
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. We are here with Astro Teller, who's the Captain of Moonshots and the CEO of X at Alphabet. Thanks for having me.

All right. Now, as you can hear, we're doing this in front of a live audience here at Summit at Sea, and folks, I think you just clapped a little bit, but I really want you to be on the recording, so let's hear it. You've got to be loud.

[Applause]

Doesn't it feel so good to be back in person again? For sure. Is this your first time doing something in front of a live audience? I've done it a few times, but nothing as boisterous as this. This is amazing, so thanks, everybody, for coming.
You know, Astro, as I was doing my research for our conversation, which is billed as maybe an optimistic look at AI coming from you, I was looking into your background and saw that your grandfather, Edward Teller, is the father of the thermonuclear bomb. Yes. It's going to be a slow conversation. I'm kidding. So, I guess a bit of a curveball at the beginning, but here we are. Is that weird? And also, when you think about AI, what do you think about in terms of our ability to develop technology that could be quite destructive?
Let me give you two thoughts; you can dig in further. The first one is: I have always been inspired by the idea that getting really bright people into an intense environment to work on something really hard that really mattered could have profound positive impacts for the world. The NASA space program, the Manhattan Project, Bletchley Park in England: there are lots of these examples, and that was one of the things that inspired me when I was young. And we'll probably get back to getting a group of people together to work on something hard in particularly hard ways.

But the other thing is: the nuclear bomb makes a good headline, the mushroom cloud, and we all have emotions attached to that. But the emotions that we've attached, our fears, our frustrations, understandably, about nuclear bombs translated in the 60s and 70s into such a negative narrative about nuclear energy that we as a society completely missed the boat, pun intended. The disaster which is climate change right now would not be happening if we as a society had not let our fears about the first thing translate into an inability to use the upside of nuclear power to save us from what is now arguably the biggest problem in the world. So as we move forward and talk about the technologies of the day, I would encourage us to think about that duality: not that nuclear power has no downsides, but if we're not careful, and we weren't in that case, we miss a lot of the opportunity for the upsides. Thank you, you can give it up.
Okay, it's almost a perfect precursor to a discussion about artificial intelligence, because AI can help our society in countless ways, and in fact it's already in place in certain ways, helping us, but it also has this capacity for destruction. I mean, you have, I think it's an often-cited statistic, one in ten AI researchers saying there's a chance that it could effectively turn civilization into paper clips. So my question to you is: when we create such powerful technology, with the capacity for good but also the capacity for bad, what calculation do you think needs to go through our head before we decide to move forward?
Yeah. Again, this is a very big conversation, so we may have to come at it from a couple of different directions. But artificial intelligence, number one, is algebra on steroids. Just so we're clear, it is a very big field of study, and when you get out a microscope and look down inside the computer, you cannot find the AI in there anywhere; it's just math. And, depending on how you measure it, humans have been working on artificial intelligence by that name for 70 years, and it has been making progress over those 70 years. So it's not like we were at zero and then there was a huge step function. The plane that flew you here flew itself almost the entire way using artificial intelligence, and we all rely on artificial intelligence every day whether we realize it or not. So this is not to say no to your question, but just maybe to set the table: this is not like the lights just got switched on, where we're sitting there by the light switch wondering whether to turn them on. Do I think that things are picking up speed? Yes, but this is part of a much longer narrative. And I think that we need to be really thoughtful about how, as we develop any technology, we can get the most benefit out of that technology and, at the same time, as wisely as possible, see potential downsides from that technology, and then find ways to mitigate them or corral them into places where they won't be a big downside for society.

So you're a professional inventor, effectively, who manages professional inventors.
And I wonder: what is it about us, about humans, that we'll go forward and create things that we know have the potential for great destruction, and kind of hope, kind of be optimistic, that we're going to use them for good? I mean, nuclear energy, nuclear weapons: nuclear energy could help save the planet, but the fact that we have nuclear weapons increases the chance that we'll wipe ourselves out. Same with AI. AI is something that can do a lot of good, has done a lot of good, glad the plane made it here, okay, but also you have these potential downsides. So put us all in the head of an inventor now, and explain why the human spirit pushes forward with things and hopes that we'll do a good job taking care of them, even though there's risk.
So inventing sounds like a monolithic effort, but let me separate it a little bit into two different things. One of them is learning: the discovery of new knowledge. If humanity can't survive the discovery of new knowledge... I mean, I don't believe that; maybe you do, but I believe in humanity. I think it could be bumpy at times, but I believe in humanity, and I believe we can survive discovering new knowledge. I don't want us to need to infantilize ourselves as a species by preventing new knowledge. That's the first half.

Now, the second half of invention is what you do with the knowledge, instantiating the opportunity of a new thing. Let's say electricity: should we put electricity into the things around us? You can say sure, generically, but when you start to do it in specific cases, you can ask things like: who will benefit from this? Who will be harmed by this? You can say almost for certain, with something like either electricity or artificial intelligence, that we can't say for sure all of the ways in which this will play out. Great. So how can we sandbox this discovery, this invention process, so that as we try to instantiate it in products and services, we can put it out in the world before it's done? Not to be irresponsible, but to be responsible, and to say to people: what do you think? How should we change this? What can we learn from this? And if you'll allow the metaphor: for a long, long time at Waymo, the self-driving cars, which actually came from X, we had a person sitting right by the steering wheel, with their hands really near the steering wheel, eight hours a day, just making sure that nothing bad happened. So the car was practicing driving itself, but there was a backup. I think there are lots of ways in which we can learn in the real world and do it responsibly, by engaging the rest of the world in what's happening and getting that feedback, so that we can get these unforeseen consequences out into the light and design around them.
So I'm going to get to the optimistic stuff, I promise, but I want to keep going here just a little bit more. It's very interesting how you talked about how with AI it's just, you know, a series of numbers, effectively; it's math, and we're not going to see God inside the algorithms. You could also say that about nuclear, right? It's just a series of equations that we figured out how to do some crazy stuff with, sometimes crazy awesome, sometimes crazy bad. So is there ever a moment where we say stop? Like, you mentioned we shouldn't worry about surviving the discovery of new knowledge, but is there ever a point where we say stop? I think about the letter that Elon Musk and a bunch of others signed about how we need to stop researching AI, which seemed to be a bit of a pipe dream to me. But is there ever a point where we say this type of stuff we shouldn't keep going with, or is it inevitable that we push ahead?
I can't speak for the whole world. I think the reality is that lots of people who signed that letter, and lots of other people in the world, are going to keep working on it no matter what; the letter is great for headlines for them, but yeah, for sure. I think what's important, the only thing I can control, is what we work on, and I want us to work on what we're working on responsibly, so that we can get as much benefit to humanity as possible. I think the world is overrun with serious problems. At the top of that list I would put climate change; nowhere close, in second place, I would put nuclear weapons, just since we used that as an example; and I would put AI doing something particularly horrible to humanity a very, very distant third. So instead of focusing on all of the bad things, and I'm willing to talk with you more about them, but because this is about netting humanity out to the positive, the downsides and the upsides, I'm interested in why we're not talking about the upsides.

Right, yes, okay. So, I agree, and we're going to do that, by the way. Excellent, I'm looking forward to it.
And is it possible that there will be complexity for humanity as we go through this? A hundred percent. Do I believe that anything particularly extreme in our lifetime is going to happen? I don't. I'm sorry, but I've been working on robots trying to open doorknobs for like the last 30 years, and it's been a slog. So, as someone who's been in the field for 30 years, I'm just a little bit more sanguine than people who started learning about this recently.

I think I need to clarify the question here, because it's not "do we stop"; it's "is there a point where we think about pausing?" That's what I'm asking, really. It's not like I'm sitting here saying, Astro, this stuff is about to turn us into paper clips; that's not the point at all. The point is, philosophically: do we have a point where we say maybe we don't want to develop those things? And if we don't, that's fine, and that's, you know, one answer, but I'm curious if you actually think there is a place where we do say no.

Yes, I'm sure such a line exists. I'd argue that by the time we've gotten to that line, it will already be too late. So this is actually me agreeing with you. I think way before that line we should be saying: what are we doing? How are we doing it? Can we put intelligence into the things in the world around us in ways that benefit humanity? And how, as we're doing that, even if our vision is really well honed to be net positive for humanity, can we be on the lookout for downsides and get ahead of them? Because I don't want to ever get to that line, and I really think that if we get to that line and then some half of humanity says, okay, we're out, the other half of humanity is just going to keep going. So I think we need to be worried and thoughtful and responsible about these things starting now, not starting when we get to that line.

Yeah, it is very interesting, and that's the opinion I share with you: we don't really have, as a species, the capacity to stop, and that's very interesting. Same with the Manhattan Project, same with AI. So, okay, we're going to go ahead and build AI.
Do you think there is a similarly positive impact that artificial intelligence can have, the same way nuclear energy can, in terms of preventing, well, we advertised this session as energy and climate change, so let's start there: is there a way that AI can have that impact? And is it the AI that we've been developing all along, meaning the optimization technology, computer vision, stuff like that, or does this generative AI wave have a role to play in this world as well? And let me add a third question, because this is now a two-parter, so we might as well make it three: what at Google X, or Alphabet's X, is now happening to tackle these issues?

I'm happy to go down all three of those. I think we're going to get lost in the rabbit holes, and we're going to have fun getting lost in them, but you may have to bring me back to some of those topics.
I'm up for it. Awesome. So let's start with this: artificial intelligence is a really big basket of things. Many people may have heard a lot about large language models recently; that is a particular piece. Artificial intelligence has lots of baskets. Machine learning is one of those baskets; deep neural networks are a subset of machine learning; and even within deep neural networks, there are ways of setting them up and training them that are what you've heard referred to in public as these large language models. Those will continue to have a lot of impact on the world, for sure. But I'd rather focus, I think it's actually more productive, to go one or two steps back up, to machine learning in general, and to ask how we are solving problems. We should be falling in love with the problems, not falling in love with the technology.
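The nesting he describes, large language models inside deep neural networks inside machine learning inside AI, can be sketched as a toy data structure. The labels come from the conversation; the structure and functions below are purely illustrative, not anything X ships:

```python
# Toy sketch of the nested "baskets" described above: each field sits inside
# the one that contains it. Purely illustrative.
TAXONOMY = {
    "artificial intelligence": {
        "machine learning": {
            "deep neural networks": {
                "large language models": {},
            },
        },
    },
}

def _contains(tree: dict, target: str) -> bool:
    """True if `target` appears anywhere inside `tree`."""
    return any(name == target or _contains(sub, target) for name, sub in tree.items())

def is_kind_of(child: str, ancestor: str, tree: dict = TAXONOMY) -> bool:
    """True if `child` is a sub-field of `ancestor` in the taxonomy."""
    for name, subtree in tree.items():
        if name == ancestor:
            return _contains(subtree, child)
        if is_kind_of(child, ancestor, subtree):
            return True
    return False

print(is_kind_of("large language models", "machine learning"))  # True
print(is_kind_of("machine learning", "large language models"))  # False
```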
And so, for example, one of the projects that's near and dear to my heart at X addresses what I think is a really big issue in the world: the world's electric grid is the world's most complex machine, by far the most complex machine humanity has built. And the people who run the grid all over the world, in their many pockets, obviously these are good people, they're trying hard, but it is very complicated. There are regulators, there are private companies, there are these semi-public companies, the system operators, the people who own the wires, the people who own the generation, the people who own the distribution, the sort of last mile to people's homes, all often different companies. It is a very complex problem. And so when you go to the people who are supposed to make sure that the load, the need for electrons, and the source of electrons are balanced on a millisecond-by-millisecond basis, they're trying really hard to do that for all of us, and they're barely hanging in there, and that's leaving the grid the way it is. There are now all of these solar fields all over the world that want to jump onto the grid, all these batteries that want to jump onto the grid, all these electric vehicles that want to jump onto the grid, and they have no way of figuring out how to maintain their system. Because here's the crazy bit: they don't know what their system is. If you go to a system operator and say, show me the digital map of where every inverter, every transformer, every wire is in your grid, they will just say: we don't have that.
And that's why it takes them 10 years. When you wonder why there are 10-year waits in most states in the United States to get a solar field onto the grid, it's not because people don't want to put solar fields on the grid. They don't know what will happen to their machine if they plug that solar field into the grid. So what if machine learning could help them learn about their grid, virtualize their grid, and then answer in a minute, instead of in a year or 10 years: what will happen if you plug this solar field onto the grid? It's going to be okay; give that one a yes. Think about the tsunami of renewables that are already waiting. They've literally already been built; they're just sitting there in the dirt, wind farms and solar panels, waiting, because we don't have a virtualization of the grid. That is an example, right now, of what X is doing to try to use machine learning across the energy infrastructure to make the world better.

Thank you.
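The "answer in a minute instead of 10 years" idea can be sketched in miniature: once a grid is represented as data, an interconnection what-if becomes a computation. This is a hypothetical toy, not how Tapestry works; real studies involve full power-flow simulation, and every name and number here is invented:

```python
# Hypothetical sketch: a virtualized slice of a grid answering an
# interconnection what-if. A real answer requires power-flow studies;
# this toy just checks headroom on one feeder. All values are invented.
from dataclasses import dataclass

@dataclass
class Feeder:
    name: str
    capacity_mw: float   # thermal limit of the feeder
    connected_mw: float  # generation already interconnected

def can_interconnect(feeder: Feeder, new_solar_mw: float, margin: float = 0.1) -> bool:
    """Crude headroom check: keep a safety margin below the feeder limit."""
    headroom = feeder.capacity_mw * (1 - margin) - feeder.connected_mw
    return new_solar_mw <= headroom

feeder = Feeder("rural-7", capacity_mw=50.0, connected_mw=38.0)
print(can_interconnect(feeder, 5.0))   # fits within headroom: True
print(can_interconnect(feeder, 12.0))  # exceeds the margin: False
```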
And let's talk about the wave of generative AI, large language models. Where do you see the potential there? And I'm curious whether X has any projects underway, because, look, if I had to put myself in your shoes, I think what you're probably going to say is that this is a little bit overhyped and we can actually do more with the technology we have; I guess that's what you just said, in a sense. But where do you really think the opportunity is for generative AI, and is X working on anything there?
So generative AI, as you've probably seen it play out in the media recently, leans more into things like asking Bard to write you a poem, or going to one of these image producers and saying, hey, make me a picture of a pony that's actually a unicorn, it's at a birthday party, and it's wearing a purple saddle, and then it makes you that picture. That's how generative AI is typically being put forward in the media right now. That's real; that's going to continue to be a thing, but it's the tip of the iceberg.

So think about it this way. We're in the middle of, actually, this has been decades coming, and it will take decades more to move through society, a process of lifting people up, moving them away from the craftsman, mechanical, detail work of designing and making things, lifting them up into guiding computers who help them make things. And this has been going on for a long time. If you're a photographer, you are familiar with and have used Photoshop, for example, and that doesn't ruin your ability to be a photographer; it lifts up your ability to be a photographer. So say you work at a car company and you have a car strut. It's sitting there; it's one of the main pieces that holds the car bits in place around the wheel to the frame. You want it to be really strong when you pull it, but also when you smoosh it together it's got to be really strong, and it has to be really strong in torsion as well, because otherwise it'll snap. But you want it to be cheap to make and as light as possible. So instead of designing what you think would be the best one, what if you went to a system that could try millions of different possible car struts, so many that it started to hill-climb in car-strut design space, and you were watching it and telling it things like how much you value fast to make, cheap to make, low carbon footprint to make, low carbon footprint because it weighs less for driving it around afterwards? So you're guiding it, you're making the decisions, but it's trying millions of things, and it comes out with a car strut at the end which is better than any car strut a human could have designed. We're going to see this sort of thing play out in every discipline in the world over the next 30 or 40 years, and it will take a long time.
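The "hill-climb in design space" idea can be sketched in a few lines: propose random tweaks to a design parameter and keep the ones a human-chosen scoring function prefers. The score formula and all numbers below are invented stand-ins; real generative design evaluates candidates with physics simulation, not a one-line equation:

```python
# Toy hill climb over a single design parameter. Invented score: strength
# grows with the square root of thickness, weight counts linearly against it,
# so the trade-off has a sweet spot (near thickness 25 for this formula).
import random

def score(thickness_mm: float) -> float:
    """Higher is better: an invented strength-vs-weight trade-off."""
    strength = 10 * thickness_mm ** 0.5
    weight_penalty = thickness_mm
    return strength - weight_penalty

def hill_climb(steps: int = 5000) -> float:
    random.seed(0)
    x = random.uniform(1.0, 50.0)                    # arbitrary starting design
    for _ in range(steps):
        candidate = min(50.0, max(1.0, x + random.gauss(0, 1)))
        if score(candidate) > score(x):              # keep only improvements
            x = candidate
    return x

best = hill_climb()
print(round(best, 1), round(score(best), 2))  # settles near the sweet spot
```

The human's role in the story above corresponds to choosing `score`: how much you value cheap, fast, or low-carbon is encoded there, while the machine does the millions of trials.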
X is really interested in some of these spaces where we can help the people of various industries invent and design much faster, so that we get much better designs and much better solutions to the world's biggest problems.

So you're working completely on moonshots, and what you just described sparks a question for me, which is: if this becomes democratized, if this gets into the hands of so many people, then does the path from wild idea, moonshot idea, to production become much quicker, and also much more democratized? We don't have many invention houses like X in the world, so are you worried maybe that you're going to have a little bit more competition?
Exactly the opposite. The world is not going to run out of problems, and the fundamental goal of X is to get a bunch of people together to work on those problems as efficiently as possible. The more people who can work towards solving humanity's problems, the better off we'll be. So I hope it does democratize things; I'm watching it currently start to democratize things, and I'm super excited about that. I would also say: there's democratize in the sense of more people being at the starting gate doing that work, and it is also lifting us, and people like us, up in our ability to reach out even better to partners. And so we're now helping aquaculture experts in Norway in a way that 10 years ago I'm not sure we could even have imagined, and we're helping on the electric grid in Chile in ways I just described to you. And so they're getting this same benefit; they're being lifted up by this technology as we are able to share it with them.

What are you doing with the aquaculture experts?
Aquaculture, what is aquaculture? Let me take a step back. The first project I described to you is called Tapestry; this project that I'm about to describe, on ocean health, we call Tidal. Let me give you the context here. Humanity gets about two and a half, almost three, trillion dollars a year from the oceans, and we are destroying the oceans, as many of you probably know, faster than we're destroying the land or the air. The ocean is this sort of sink for humanity; it is pulling in all of the worst bits that we're putting into the world, and that's why the ocean is dying. We, humanity, are not going to stop using the oceans. We need to get more value per year from the oceans, and we need to do it in a way that not only stops destroying the oceans but regenerates them. There is no possible way to do that unless we find a way to bring automation to all the things that we currently do, and can do, in the oceans.
So if that's the big picture, where are we going to start? Well, open-sea fishing is actually really problematic, as most of you know; we're in the middle of overfishing all of the fish in the world. But aquaculture actually helps us not to overfish the oceans, and because the carbon footprint of a pound of fish is one-eighth the carbon footprint of a pound of beef, we are wildly better off as humanity moving to producing food through aquaculture. But go to a huge pen, even at our partner Mowi, which is a sustainable aquaculture farm in Norway, the largest salmon farmer in the world, and they are very good at what they do. The state of the art, when they wanted to find out, a year ago, even two years ago maybe, how well their fish were doing, how much their fish weighed: in a pen with 250,000 salmon, they would pull 20 salmon out of the water, put them on a scale, weigh them, average that, say, well, that's probably what they weigh, and put them back in the water. If they wanted to find out if the fish were sick, they would pull 20 fish out, check whether they saw any lice on the fish, and put them back in the water. So what we're doing is enabling them, through computer vision and machine learning and automation and specialized sensors, to look at the health of the fish and the weight of the fish. We're helping them to automate the feeding of the fish, which makes it much more sustainable, because the runoff from overfeeding on these fish farms is actually one of the big problems in aquaculture. So we're making the farmer better while we're making the world better, using machine learning.
[Applause]
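The gap between weighing 20 fish and effectively measuring every fish, which is what computer-vision weighing approximates, can be sketched with simulated data. All the weights and numbers below are invented for illustration:

```python
# Why a 20-fish sample out of 250,000 is noisy: the sample mean can easily be
# off by a few hundred grams, while a census of every fish (what computer-vision
# weighing approximates) recovers the true average. Simulated, invented weights.
import random
import statistics

random.seed(42)
pen = [random.gauss(4.5, 1.2) for _ in range(250_000)]   # salmon weights, kg
true_mean = statistics.fmean(pen)                        # the "census" answer

# Five independent 20-fish samples, as in the manual procedure described above.
sample_means = [statistics.fmean(random.sample(pen, 20)) for _ in range(5)]
worst_error = max(abs(m - true_mean) for m in sample_means)
print(round(true_mean, 3), round(worst_error, 3))
```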
So, Google X, is it called Google X anymore? X; it's called X. X is there to effectively insulate Alphabet from the innovator's dilemma, which is, effectively: you're an incumbent, you have a business, and you do everything you can to protect that flagship business, and that might head off your ability to do things that would disrupt your core business. We've talked about some very cool projects: self-driving cars, things with climate and food. But there's been a big story recently, which is that these chatbots coming from places like OpenAI and Microsoft have threatened Google, so now Google has its own and has Bard. But I almost wonder, when we think about insulating Google from the innovator's dilemma: shouldn't X have been front and center building the first ChatGPT, and not letting an OpenAI, for instance, run away with it?

I swear he's not a plant, but good setup. Didn't you say that we shouldn't be afraid of the advancement of knowledge? Yeah, absolutely, absolutely. So, no, I love the question. Let me take a step back and remind you how X functions, what our goal is. Our goal is to invent and launch moonshot technologies that, if we do it right, help tackle some of the biggest problems in the world and lay the foundations for enduring, sustainable businesses.
One of the early things that we did was a thing which, when we graduated it back to Google, was called Google Brain. Google Brain is the origin of much of what you now think of as Bard, and the Transformer paper and other things that you've heard about. So that was an example, in the earlier days of X, of us making something, realizing it would be productive for it to be back in the mothership, and moving it back to Google, and it has gone on to do amazing things. Somewhat more recently, about five years ago, we said: we can feel on the horizon that we're going to get to the place where the ability will be there for us to work in much tighter loops with software developers, like a partner to them, where we can complete code when they start to write it; where, if they write out an English-language description of what they want, it will write out the code for them; where it can even be like a pair programmer and have a conversation with them and help them be better programmers. And so that work happened for about five years at X. About seven, eight months ago, we moved it back to Google, and it was just announced recently as Codey, and that is now sort of the code-making part of what you think of at Google.
[Ship announcement: Sailors at Virgin Voyages, ahoy...]

The timing of that was so freaking perfect. Just finished the answer, and then there we go.

[Ship announcement: ...location and receive important safety information from our crew.]

[Music]

So the real question is: is he going to delete this from the podcast, or is he going to leave it in? What do you think, should he leave it in? Thanks, everybody. Oh, that's right. Can someone shut this off? What, it's required by law? One of our jokes at X is: after we, you know, make time machines work and cure cancer, we're going to figure out how to get AV sorted out. That'll be the hardest thing, after we do the easy stuff like cancer and time travel.
So I want to ask a follow-up question, which is: you mentioned that Brain started in X and then graduated to Google. I want to question the graduation process, because when you do graduate to Google, doesn't it become a little bit harder to say, we've made all this amazing technology, now I'm going to make something that's competitive with the bread and butter? For instance, you know, a chatbot that can serve as something people might want to use instead of search. Once it goes into Google, doesn't it have constraints that it might not have in X, which is that now, all of a sudden, it has to worry about the quality of search and the quality of response, where it might not in a more experimental unit?
Setting anything up to serve more than a billion people a day is super complex. We're talking about, you know, a hundred-plus languages. There is a lot, technically, legally, ethically, just to do this, I mean, practically at all, but also doing it ethically, that X is totally not set up to do. For the same reasons that X may be a particularly good place to do rapid prototyping and learning of new things, we are not the right place to move something from "yeah, we have something special here" to "it's ready, in a really thoughtful, responsible way, to serve a billion people a day." So I hear you, but I would actually say that it would have been irresponsible for us to try to keep that to ourselves. The goal of X is to create really good seed crystals. We don't want to do it all ourselves; we want to get the ball rolling to the point where, you know, sometimes in the form of an Other Bet at Alphabet, like Waymo, and sometimes back at Google, the world says: ah, got it, now I see why that could be really big. Before it's done, but where it can survive on its own, because it's left behind that "what are you talking about, why would we do that, that's definitely not possible" stage. When those things are still on the table, that's what X should be for: to actually work through, and often be wrong about, whether there's something awesome there. And so we try a hundred things, and 99 of them don't work out, and our job is to be wrong about those 99 as efficiently as possible, to move past the ones where we're wrong, with evidence, as quickly as we can, so that the one we can double down on over time can go on to have a really productive future.
So here's my uneducated argument, to switch things up. Yes, it makes sense to have a prototyping lab. But, for instance, with a billion users, they'll be building something like Bard into Google Chat, while OpenAI's ChatGPT started with 100 million users; that's the fastest-growing consumer product in history. It seems like you might even be in a great position to say, hey, we have this chat thing, keep developing it within X, and then call Sundar up and be like, hey man, can we have some of that cloud infrastructure, the same way that OpenAI called Satya and said, can we have some? That's sort of what enabled them to power ChatGPT, and then you don't really have to worry about the competitive elements of coming up against the core business.
Let me use a related example; see if this at least partially satisfies you. One of the things that we've worked on at X for a very long time is factory automation. The world spends pushing 10 trillion dollars a year making stuff, and if you can automate the making of stuff, you can do it faster, you can do it cheaper, you actually have less waste, and it's more reliable because it's been automated. But it is fiendishly complex and expensive to automate the making of almost anything, because until very, very recently there was a lot of bespokeness, artisanalness, to programming robots. And so X worked on that for many years and created a project called Intrinsic. It graduated; it's now an Other Bet at Alphabet. They just announced, a few days ago, that they have a platform called Flowstate, which is a developer's arena in which you can develop new capabilities and new skills for robots, with a seamless ramp from early investigations in simulation to actually landing them in robots in factories doing real work. And that creates more jobs for developers, makes it easier for people all over the world to help make robots more capable, helps the makers of the world, and democratizes the making of things, because right now it costs a billion dollars to seriously automate a factory. So that's an example that started at X. It's not exactly a chatbot, but it is, you know, making robots much more capable, much more flexible, much more nuanced and dynamic in their ability to solve problems on the fly. And we've laid out the early infrastructure but are allowing the rest of the world to build on top of it, which I think is going to be really good, yes, for Alphabet, but for the whole world as well. So that's something that didn't go to Google; it lives in this region post-X, but still inside Alphabet, called an Other Bet.
so do you want Brain back
do you want Google Brain back no no no
no our job is to catch new waves for
Alphabet to catch new waves for the
world what excites us isn't sort of
empire building or getting to sort of
be the one who rang the
bell that is not our goal we want in a
really efficient way to keep this
balance of audacity and humility just
right audacious enough that we'll try
almost anything but then humble enough
that we're honest right after we start
out on each of these investigations that
we're probably wrong and that we need to
spend our money figuring out that we're
wrong rather than trying to prove that
we're right since mostly we are wrong
and by doing that in lots of different
domains what rises to the surface are
these new things like inverse design or
generative design that we were talking
about it's not that I said there will be
generative design seven or eight years
ago it's that we've tried a thousand
things over the last decade and the
generative design stuff is some of the
stuff that's bubbling to the surface in
a way that's looking really promising
over and over again that's what excites
us is catching those New Waves doing it
efficiently and doing it responsibly
so Astro I'm actually curious how you
decide what to fund inside of X because
you know I might think of this as like
you need to predict the future but
that's not the way that you think about
it exactly one of the
core axioms at X is that I do not believe
that anyone certainly including myself
is any better than random at predicting
the future
I know that that's like not the cool
thing to say in Silicon Valley but I
just don't believe that any of us are
better than random at predicting the
future I think we can discover the
future a good bit more efficiently than
maybe other people discover it but
that's very different so
the core kind of map to beginning a
journey at X is we make these three
circles and we talk about ex efforts
living at the intersection of those
three circles the first one is there has
to be a huge problem with the world that
you want to solve if you're proposing
something for X you have to tell us what
that huge problem with the world is you
want to solve if you can't say that
we're not starting on the journey
number two there has to be some radical
proposed solution for how to fix that
problem some science fiction sounding
product or service however unlikely it
is that we could make it that if we made
it would make that huge problem go away
and then there has to be some kind of
core technology opportunity some
breakthrough technology that gives us
the opportunity to start on that Quest
and those three things together are a
moonshot story hypothesis that doesn't
mean you're right in fact you're
almost certainly wrong but if you can
propose those three things it has the
form of a moonshot and then the answer
is great gorgeous moonshot story
hypothesis what is the smallest amount
of money and shortest amount of time you
think it'll take to kill your idea
because your idea has a 99% chance of
being wrong
and there's no way to avoid that because
if that wasn't true it wouldn't be
radical we're only interested in the
over the horizon stuff and so as soon as
we sign up for that we are explicitly
signing up for being wrong most of the
time and that's why the humility has to
kick in so it doesn't work like we will
work in these areas we won't work in
those areas it mostly is what are the
huge problems with the world so maybe
not coincidentally more than half of
what we're doing right now is in the
climate change space but not because I
mandated that but because that's what
people were excited about and that is
legitimately some of the biggest
problems the world is having right now
and then it's actually evidence that
kills off most of these things over time
and the ones that survive that we double
down on so it's that winnowing process
that looks at the end like we planned
each of these things but we didn't plan
these waves to catch it was just a
filtering process that weeded out all
the ideas that were sometimes just bad
but more often just wrong the timing
wasn't right the technology wasn't right
or maybe even the technology was right
and the timing was right and we just
couldn't figure out how to make it great
enough which happens sometimes
it's a very interesting process I'm
curious most people who are listening to
this are not in the position to take
moonshots you must hear this all the
time is there anything from your process
they could put into place inside their
companies that might help them achieve
those 10x moments that you're going for
for sure
um I would actually argue what I just
described is the most efficient you can
be in trying to find something new and
even if you've pre-decided you know
you're working on flying cars or
whatever that is
inside of your effort there are lots of
things that you might be thinking should
it be gas powered should it be batteries
how many propellers are we going to have
there are going to be lots of things for
you to figure out over time like what
the airframe should be shaped like and so there are
lots of risks to take lots of
discoveries to have and for each of them
you can do exactly what I just said
so one of the things that we say at X is
if you're starting out on a journey I
know it's no fun to kill your ideas I
get that
assume for argument's sake that the idea
you have isn't going to make it would you
like to find out now
for one dollar or find out three years
from now for like 10 million dollars
and everyone of course says well I guess
I'd like to find out now for a dollar
great welcome to X thank you for
working on the propellers or the time
machine or whatever it is you're working
on
how are we going to do this are we going
to be intellectually honest or
intellectually dishonest about our
Discovery process you're like oh I guess
intellectually honest Astro like we all
know in our hearts what to do what I
just described is not rocket science
it's not like X invented this that's not
what's going on it's just so hard to do
all of human nature drives us in the
opposite direction from what I'm
describing so what x spends all its time
doing is trying to create the
infrastructure the social norms reward
systems Etc so that people actually do
these things but what I've described for
sure if you work in a startup company if
you work in a huge conglomerate anywhere
in the world this is still the right way
to explore new ideas
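[Editor's note: the one-dollar-now versus ten-million-in-three-years framing above can be made concrete with a little expected-cost arithmetic. The 99% failure rate and the $10 million figure come from the talk; the portfolio size and the $100k kill-test cost are hypothetical numbers chosen purely for illustration.]

```python
# Illustrative sketch of the "kill your idea cheaply" economics.
# From the talk: each moonshot idea has a 99% chance of being wrong.
# Portfolio size and kill-test cost are made-up illustration figures.

FAILURE_RATE = 0.99
N_IDEAS = 1_000
FULL_COST = 10_000_000      # "$10 million, three years" per full investigation
KILL_TEST = 100_000         # cheap early experiment (hypothetical figure)

# Strategy A: fully fund every idea before learning whether it works.
cost_slow = N_IDEAS * FULL_COST

# Strategy B: run the cheap kill test on everything, then fully fund
# only the ideas that survive it.
survivors = N_IDEAS * (1 - FAILURE_RATE)
cost_fast = N_IDEAS * KILL_TEST + survivors * FULL_COST

print(f"fund everything fully: ${cost_slow:,.0f}")
print(f"kill cheaply first:    ${cost_fast:,.0f}")
```

Under these assumptions the kill-cheaply strategy reaches the same roughly ten surviving ideas at about one fiftieth of the cost, which is the intuition behind spending money to find out you're wrong rather than to prove you're right.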
let's check in on a couple of famous X
projects uh waymo right
um I think I'm seeing more progress in
self-driving cars in maybe the past year
than before because when I was in San
Francisco in 2015 2016 people were like
this thing is going to happen next year
we're in 2023 not quite there yet but it
seems like there's a lot of progress
being made what's your what's your
perspective on where we are right now in
self-driving cars and how far are we
from realizing that original vision
for sure it turned out to be harder
than we thought
one of the things that I think caught
waymo by surprise got us by surprise is
that
there are a lot of things that human
drivers do that if we want to follow the
laws we can't do humans are just kind
of in the gray area a lot about how
they choose to drive their cars where
they pick people up where they drop them
off if it's like a transportation-as-a-
service obviously waymo can't do that
and that is incremental hard stuff for us
to
figure out
um so there are a lot of edge cases to
make sure that this is super safe that
it's a fun experience for people
but I also think that
the way it tends to work is with any
exponential ramp-up
it's the same ramp-up every year
a gradual ramp-up if you're paying
attention to it but if we're not paying
attention to it it feels like it went like this
and I think most of you will experience
self-driving cars like this there are
three cities in the United States where
you can get a ride you know with nobody
in the front seat in Los Angeles in
Phoenix and in San Francisco from waymo
and I don't know exactly when but I
pretty much guarantee there will be more
cities over time and so if you live in
one of those cities it probably feels
more real if you don't it feels like
that's not going to be a thing and then
all of a sudden it will be a thing and
that's natural because you know unless
you're really in that industry you
aren't watching really closely as the
number of rides per day per city is
going up from a group like waymo I'm
legitimately pumped to ride in a
waymo for the first time hopefully this
year
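[Editor's note: the "same ramp-up every year but it feels sudden" point can be sketched numerically. The doubling-per-year rate below is a hypothetical stand-in, not a figure from the talk.]

```python
# A steady exponential -- the growth *rate* never changes -- still feels
# like nothing happened for years and then everything happened at once.

volume = 1.0                      # arbitrary starting scale
history = []
for year in range(1, 11):
    volume *= 2                   # same doubling every single year
    history.append(volume)
    print(f"year {year:2d}: {volume:6.0f}x starting volume")

# The final doubling (512x -> 1024x) adds more growth than all nine
# previous years combined, which is why the shift feels abrupt to
# anyone who isn't watching the per-city ride counts closely.
```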
another project that I want to talk to
you about is Project Loon which was
these balloons that are supposed to
beam internet down to everyone no matter
where you are now I'm curious like
people now if they think about
accessible internet they immediately go
to starlink how did loon which was you
know a great idea early on sort of I
don't know if it's right to say but I
think lose ground to Elon Musk and
starlink
first of all I mean so loon was a goal
could we make a worldwide
stratospheric layer of balloons like
cell towers but floating at 65,000 feet
that were talking to each other in an
ad hoc mesh network and beaming LTE or
5G to the ground so that people in rural
communities around the world
particularly the people who don't have a
good internet connection all of a sudden
would have one very inspiring and it took
a long time we built it it was working we were
beaming LTE and 5G to hundreds of
thousands of people in multiple
countries we couldn't figure out how to
get the business to close with the
owners of the spectrum that we
had to work through in those different
countries so we turned it off it made us
very sad starlink is a cool company but
they're solving a different problem
there's a fixed amount of bandwidth you
can land when you beam something from a
satellite in any one region so it works
very well in rural places but you
couldn't have like a huge number of them
parked very close together so that
wasn't really the goal that loon was
trying to solve but let me tell you a
sort of fun Side Story it was crushing
for all of X for us to kill loon it had
gone on to be an Other Bet it was really
one of the inspiring things that had
come from us and so made everyone really
sad for us to stop doing it
and we have this saying at X we're very
focused on moonshot compost whenever we
stop something the people the code the
patents the know-how we try to keep them
all at X they recirculate working on new
projects and so in this particular case
one of the technologies that was
allowing these balloons at 65 000 feet
to communicate between the balloons at
very high bandwidth was lasers
and so when loon ended someone said well
how come
we couldn't put those lasers on the
ground you know like on a pole
and that sounded
almost embarrassingly too simple after
all of the work we had done to get them
up to 65 000 feet
but fast forward five years we have
these things they're about this big
smaller than a traffic light
it shoots a laser
up to 20 kilometers it's eye-safe you
could just put your eye right up against
it and nothing bad would happen it's
unregulated it's near-visible light it's
about one order of magnitude outside of
visible light but in the EM spectrum
that's basically visible light
and it's received by another box that's
you know two feet tall you strap them on
two poles and if you plug the internet like
a fiber optic cable into one you have 20
gigabits per second up to 20 kilometers
away for less than one one thousandth
the cost
of trenching fiber
and we're rolling these out right now in
mainly Africa and India and that project
is now moving more data to real
customers per day than loon moved in its
entire history so go moonshot compost
[Applause]
the project is called Taara if you want
to look it up
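[Editor's note: using only the 20 Gbps figure quoted above, a quick back-of-the-envelope shows how much data one such link could move per day if it ran at full rate continuously. This is an idealized ceiling under that assumption, not a measured number.]

```python
# Back-of-the-envelope daily throughput for one 20 Gbps free-space
# optical link (the rate quoted for the Taara terminals in the talk).

LINK_GBPS = 20
SECONDS_PER_DAY = 86_400

bits_per_day = LINK_GBPS * 1e9 * SECONDS_PER_DAY
terabytes_per_day = bits_per_day / 8 / 1e12   # bits -> bytes -> TB

print(f"~{terabytes_per_day:.0f} TB/day at full, continuous utilization")
```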
pretty cool okay uh let's see we have
maybe two or three more questions to get
through in nine minutes let's see if we
can do it in the break I asked some
people in the audience to shout out
different big problems that maybe X
could get working on and we've talked a
lot about hardware for instance but what
about some medical physical issues or
I don't know societal issues is that
ever something that you'd want to take
on I'll just give one example
so one person in the front here said
community I mean we know that building
community is one of the things
that people struggle with the most right
now I mean we're here with one so it's
nice to see that it persists but many
more people than usual would say that
they're lonely they don't have friends
and stuff
like that is that something that X
would ever take on or is that even too
moonshotty for the moonshot factory no
no not at all uh as long as we could be
proud about the output and there is a
technology solution to the problem we
would be excited to work on it
um you know we have had various
Explorations for example in the social
Justice space
because the Temptation is to think that
something like belonging or systematic
bias in society is just not amenable to
technology helping and maybe that's true
but that's not written in stone
somewhere like shame on us for not at
least trying so we don't have some huge
news to give you on that front but we
certainly haven't given up I'll give you
another one that's sort of like that
education like I will consider it a
failure if x doesn't eventually have a
great moonshot in the education space
we've tried like 30 things it's so
painful
but I also don't want to be kidding
ourselves that something is a solution
if it's not really going to be a
significant phase transition for society
and some things may either just not be
at all amenable to technology or have so
many sort of other complex human issues
around them like pedagogy in education
if you take the teacher out of the
process
again nor is it written in stone that
you can't help kids but it might be
fundamentally different and harder so
there are lots of those kinds of things
where we continue to go back to the well
and technology is what we're good at so
we're not going to sort of do things
that don't have a sort of core
technology piece to them and some of
these hopefully will have great news for
you in the future but you know haven't
solved yet
have you thought about building an app
you can make it blue where people would
post their personal profile and connect
with the people they know in life and
maybe share small updates
like 140 characters just very succinct
well darker blue okay
hey that could be something no I feel
like the world's already got one of
those and um
you know what we're best at are the
things where in the early days
it doesn't look at all reasonable or
possible and usually because we're not
scientists per se it's less that we're
doing kind of basic research it's more
like taking something from column A and
something from industry B and some
observation from sort of field C and
putting them together in a really
unexpected way and again often we're
wrong but if those things come together
then at our best I think we're system
integrators and we're really good at
getting mud on our boots getting out in
the world working with Partners
prototyping quickly trying and learning
and using humility as a superpower to be
wrong admit we're wrong learn from being
wrong and get better faster like I think
that's us at the best
okay and then uh we also had someone
yell out carbon sequestration
I think that was what it was yes okay
he's not correcting me so thoughts on
that yes we have done a bunch of
interesting work in carbon sequestration
in green hydrogen we're doing
Explorations in lots of other parts of
the space
um we're excited about the possibility
for making much lower carbon footprint
cement so I can't tell you that we've
absolutely solved any of these problems
yet but these are spaces that we're
excited about and by the way spaces
where inverse design can be really
helpful because these tend to be large
machines chemistry problems
electromechanical problems and the
techno-economics of these things are
really
unforgiving ultimately the number of
dollars it costs for you to sequester a
ton of carbon is like the only thing
that really matters at the end of the
day green hydrogen of course everyone
will take it but the number of dollars
per kilogram is just going to determine
because it's a commodity whether people
buy it or not and so
trying to come up with something very
different but also have an eye on the
sort of long-term like the Rust Belt
engineering side of this work the
project Finance side of this work what
would this really be at scale because
it's actually kind of easy to make
any of these things look great in the
lab it's having that sanity check on top
what is this really going to be at scale
would this actually change the world you
know we had this project it was one of
my favorites we had a device it was about
as big as like the area where we're
sitting a little bit bigger and it was
taking seawater and producing methanol
you could burn in a gas tank
there are four billion internal
combustion engines in the world like
tears were running down the faces of the
people capturing the first couple
of drops here this project was called
Foghorn that felt like a real save the
world moment it was actually working
and we could not convince ourselves
we were going to get it cheaper than
about $15 a gallon of gas equivalent and that's
just not going to save the world so we
turned it off
so tighter economic conditions interest
rates around five percent do you worry
that now that we've exited zero interest
rate environments that you might not be
able to get as much funding from the
mothership as you had in the past
I think alphabet needs to be very
thoughtful about how it spends every
dollar and we have spent the last 13
years working hard on our efficiency and
on our rigor so
you know if alphabet stopped being
interested in sort of like the 10-year
plus horizon that would be one thing but
Alphabet is very serious about the long
term and as long as Alphabet is serious
about the long term I am sure that
making sure that we spend every
dollar wisely will be very important and
I think that alphabet is still excited
about having a part of itself that goes
and explores these other spaces let me
ask you one last question
um we started by talking about where
invention could go wrong I actually am
kind of curious from your perspective I
know we only have like a minute left but
um why do we continue to try to
invent and build because it does seem
like in some ways that like if we all
put our heads together we could have
enough
on this planet for everyone but we don't
do that so I'm kind of curious like what
you think where are we trying to get to
I'll give you two answers
the first answer is humans are
fundamentally explorers
I think we all have some Pioneer spirit
inside of us wanting to learn wanting to
grow wanting to find new things and do
new things it's a very fundamental part
of who we all are so I don't think
humanity is going to stop doing that
anytime soon because I think it's part
of what makes being a human great
and
don't worry I'm going to end on a
positive note here
there is enough pain and complexity in
being alive enough reasons to think
short term
that people are going to by and large do
what is in their somewhat narrower
self-interests and what solves best for
their pocketbooks
so if we want to save the world if we
want to make the world a radically
better place
we have to find ways for doing the right
thing to be cheaper than doing the wrong
thing especially when it comes to
climate change and the only way that
we're going to get to the place where
doing the right thing is cheaper than
doing the wrong thing when we can dig
the problem out of the ground and burn
it is going to be radical Innovation so
that's why I believe we're working on it
and I think that's why the whole world
is working hard on it right now
[Applause]
Astro thanks so much for coming on my
pleasure thank you for having me thank
you everybody for listening thanks
everyone
thank you so much to our live audience I
want to thank Summit for having us here
thank you Nick wattney for handling the
audio LinkedIn for having me as part of
your podcast Network everybody in the
audience if you find Big Technology
Podcast in your app of choice we have
the product manager of Bard Google's
chatbot coming to the feed
within the next week as well thanks
again for being here thank you Astro and
enjoy your time on the ship