Fantasy and Fact: AI, Sentience and The Dangers Of Hype, from World Summit AI

Channel: Alex Kantrowitz

Published at: 2022-11-09

YouTube video id: FvdN55wniNs

Source: https://www.youtube.com/watch?v=FvdN55wniNs

Let's get started. I'm Alex Kantrowitz, host of the Big Technology podcast. Thank you for that lovely introduction. I'm going to be at the conference doing a handful of conversations, many of which will go on the podcast feed, including this one. I just want to say we have an amazing audience here, and I want the people listening at home to get a sense of the type of crowd we have. So let them hear it. You've got to be loud. Let's hear you!
[Applause]
That's what I'm talking about. All right, we've got energy in the morning. We have unbelievable speakers here, and I want to introduce them. We have Blaise Agüera y Arcas (how'd I do? nice), a VP and research fellow at Google; Francesca Rossi, the global head of AI ethics at IBM; and Joelle Pineau, who runs Meta's AI research. We also have joining us virtually Professor Anil Seth, professor of cognitive neuroscience at the University of Sussex. Shout out to Anil. What's up, man?

Hey, I hope you can hear me over there. Sorry I can't be with you in person.

Yeah, we're sorry you can't be here too, but we hear you loud and clear. Okay, so the topic of our conversation is fact or fantasy: AI, cognition, and the dangers of hype. Now, I thought that was a pretty good topic for a conversation, but Blaise already hates it. So please, why don't you talk about your problem with the way this is framed, and that'll get us going.
So the trouble is that it's a binary, and I don't think AI is either fact or fantasy. We've seen very polarized narratives about this. On the one hand, there are people claiming that AI is conscious and that we should be starting a conversation about robot rights; that's certainly not a view most people in the research community hold. Then there's a sort of opposing camp of people saying that AI is a hoax, or that it's just statistics: that it's not really artificial, it's not really intelligent, that there's no such thing as AI that understands concepts. And that seems a little bit gaslighting to me. Having worked with these systems quite a bit over the last couple of years, especially the new generation of large language models and things like them, what they're doing is very interesting and very impressive. So I think we need to take that seriously, and we need to start thinking about a kind of middle zone of the conversation where we acknowledge that there's something really cool happening here, that it has very material things to tell us about what intelligence is, and that these models can understand concepts. But this is not personhood. We haven't made alien contact or something like that either. These are also models of human culture and human language, and in that sense very much a part of our intelligence, if you like.

And when you say AI, do you mean AI that's conscious? Is that what you're saying?

No, I'm talking about things like large language models and foundation models that are capable of having very convincing chats; I think that's what has captured a lot of people's imaginations. And also things like inverse captioning systems, where you put prompts in and they make very compelling images. These are the sorts of things we'll be seeing much more of over the coming decade.

Francesca, when you hear this, how do you react?
Well, my reaction is that there is a lot of fantasy and a lot of fact about AI in general, independently of this discussion of large language models. Of course, AI is not yet, at least, the AI that we see in science fiction movies; that's not there yet. But there are a lot of facts about it: it is a technology that is very useful and that is improving and evolving every day. Almost every week we have two or three advancements coming out, about generative AI especially at this point. But overall, I would not ask questions that may distract from the research, and from the fact that AI is a powerful technology to use right now. What I see as most important about these foundation models and large language models is their ability to provide tools for very useful applications that right now may not be possible, or would be possible only with much more labeled data, which in some scenarios is not available; rather than asking myself whether some label that I usually attach to human beings can be attached to these models as well.

So why do you think we're talking about this? This has been a conversation that's gone on for a long time, about whether AI is sentient, and for a long time people said, all right, let's actually focus on the research. Now, Blaise's colleague might have had something to do with this, but I'm curious why you think it has come to the fore right now, and whether it's a productive conversation or not.
It's a good question, and let me double-click on something Francesca mentioned. For many years, the rate of progress we saw in AI was mainly due to supervised learning: you present large amounts of data, but you have humans annotating the concepts that you want the machines to predict, and that notion of supervised learning is one where we feel the behavior of the AI is well contained within a set of topics that we have assumed and prescribed. The change we've seen over the last couple of years is really in generative models. The models are still ingesting large amounts of data, but now they're generating new data, new knowledge, new information. They're generating text, images, music, videos, and as that generation becomes richer and richer, a lot more questions arise: what is the space of things this will generate? What behaviors will come out of that generative process? What values are encoded in the material being generated? So in many ways I think that pivot, with so much energy, literally people's energy as well as computational energy and progress, being devoted to generative models is what's completely changing the conversation.

Okay, amazing. I want to talk about those generative models. I also want to say that for the last five minutes of this conversation we're going to open up to questions from the audience, so start thinking of your questions, and I'll give you a heads-up when we're going to run some mics out. In the meantime, let's keep this going. Some of the stuff that we're seeing is absolutely jaw-dropping. Using DALL·E, to me, has been this incredible experience: we keep hearing about AI, but to actually be hands-on with it and see it in action is unbelievable, something that makes you think computers shouldn't be able to do this. I'm typing in a sentence, and here's an amazing picture. And I think about Blake Lemoine at Google, who claimed that LaMDA, your chatbot, was sentient, because some of the conversations the model was generating felt very human-like to him. So now let's go to the professor of neuroscience to talk a little bit about what this might mean. What did you think when you saw that story, and what is your barometer for helping us figure out whether computers are sentient or not?
Well, LaMDA is impressive. As Blaise was pointing out, there's no point in gaslighting away the progress that's been made in machine learning and AI; these are very impressive technologies with great potential and great function. But there's this huge problem of conflating intelligence with consciousness. A lot of the debate I hear is, I think, very well finessed about how far along the road we are to true or general artificial intelligence, and we can discuss how far that is. But that's a completely different thing from sentience or consciousness, and I think that should be the focus of the debate today. Nobody can really give us a consensus definition of consciousness, but we all know what it is: it feels like something to be a conscious system. You can feel pleasure and pain; there's experience going on.

And my barometer is that we need to distinguish intelligence from consciousness. There's something about human psychology, some sort of human exceptionalism going on here, where we tend to associate intelligence with consciousness and think that as systems increase in their level of intelligence, at some point, usually the point at which they begin to approximate human-level intelligence, the lights come on for that system, and suddenly it is not only intelligent, it's also aware. But this is a totally unfounded assumption. Intelligence is doing the right thing at the right time. Consciousness might be present in many systems that aren't especially intelligent; many non-human animals likely have conscious experiences without being very smart by human standards. And the other big assumption that underwrites a lot of this debate is that consciousness, sentience, is what in philosophy is called substrate independent: that it doesn't matter what a system is made out of. We're made out of carbon; AI at the moment is made out of silicon. Does that matter? Well, for some things it doesn't: a computer that plays chess really does play chess. But for other things it really does matter: it doesn't get wet inside a computer simulating the weather, and a robot doesn't digest; you need biology to digest. My suspicion is that the materiality of life really matters for consciousness, and what we're likely to come up with are machines that give us the strong impression of being conscious, but for which we will just not know whether there's anything it is like to be that machine. And that's a very interesting, possibly dangerous and disruptive, position for us to be in.
I want to ask: why does it have to be like that? People talk about love, right? Some people say love is love; some people say love is just chemicals. And of course it's chemicals, but it's also love. So why are we erecting these barriers? What do we need to see from AI, and are we at risk, when we actually see it, of explaining it away scientifically when it might actually be sentient already? I'm not saying we're there now, but maybe it will happen and we'll all just be finding ways to say that we're not there.
That's right. I think we're much more likely to be misled by this anthropomorphic tendency we have to attribute consciousness to things that are, or appear, more similar to us. What we really need, and this is why it's fundamentally a neuroscience question, are strong theories of what is necessary and what is sufficient for consciousness in systems in general. And, hands up, we don't have that yet. In philosophy and neuroscience we have a number of different theories, but there's no good answer. So I think we need to be a little bit restrained about making these dramatic claims, because, as Blaise said, we can see the whole spectrum of different claims. Just yesterday there was a robot giving evidence, "giving evidence" in big scare quotes here, to the House of Lords, which was designed as a provocation, but in the current climate I think it just potentially adds to the confusion. Again, the fact is we don't know what is sufficient for consciousness to happen. Is it going to happen in a system that reaches a certain level of intelligence? Can it happen in a system that's made out of non-biological stuff? These remain important open questions, and we must hold them in mind when judging whether artificial systems are aware or not.
Right. Joelle, I'm curious what you think. How do we know that humans are conscious? Isn't that the place we want to start?
I think, and Anil alluded to this, the lack of a crisp definition is one thing that would be necessary, at least. And to get back to your point, there is a spectrum of ways to talk about these ideas. On this panel you have people who are mostly coming from a scientific, engineering discipline, and from that point of view, and certainly for myself, I like to have a crisp definition that is verifiable and testable, a definition from which I can build hypotheses that are falsifiable and so on; those are the types of methods I would apply to this particular study. So for me, having a crisper definition of consciousness, even in humans, and in the animals and other forms of life Anil alluded to, would be very useful to help us make progress in that conversation. In the absence of that, it's all quite esoteric, which, again, is an opportunity to also bring in lots of other disciplines, whether it's literature, the social sciences, and so on, to weigh in. But as long as you're treating it from the point of view of scientific investigation, having this crisp notion is really helpful to making progress.

Can we take a stab at it here? I couldn't think of four better people to try to come up with a definition of consciousness. It has to be more than "you know it when you see it." Anybody want to try?
Well, I would say that the neuroscientist would be the best person for that, maybe not a computer scientist or an AI expert. But I really agree with Anil, though again I'm not an expert, that intelligence and consciousness should be different concepts. Machines can be more and more intelligent, and I already have a problem with the notion of intelligence, because it's not well defined and has many dimensions; but I can comfortably say that machines are becoming more and more intelligent in the sense of being able to solve problems and to generate data in various forms from existing data. Even if I don't have a well-defined notion of intelligence, though, for consciousness it's even worse, because I don't even know myself what it means to be conscious for a human being, let alone for a machine. But on top of that, I don't think that question deserves a lot of time, because what matters more is what this technology can do for us and whether it has a positive impact on our lives and on the progress of humanity or not. So my main interest, and the time I spend on this technology, is in understanding whether there are legitimate concerns about this shift from problem solving to generative, from supervised to unsupervised; not in the respect of "before it was not conscious and now it's conscious," or "it's sentient."
Okay, let me just jump in and answer the definition question, because I think there is a very good definition. It's been out there for a long time, it's very basic, and it's from the philosopher Thomas Nagel: for a conscious system, there is something it is like to be that system. It feels like something to be a conscious system. It feels like something to be me; it feels like something to be all of you in the audience. But it doesn't feel like anything to be a table or a chair or a laptop computer. Does it feel like anything to be a bat or a bumblebee? Who knows. Does it feel like anything to be an AI? That's the question. This definition is important because it's quite general, and it makes clear that intelligence is not the same thing as consciousness. It makes clear that consciousness is not the same thing as having a self, or as being human in any particular way. It's just the presence of any kind of subjective experience: for us humans, what goes away when we go under general anesthesia or into dreamless sleep, and comes back again. And for a science of consciousness, that's a perfectly good enough definition to be working with. Definitions aren't set in stone at the beginning of scientific discovery; the science happens, and they evolve along with it all the time. The definition of a gene, or of life, wasn't fixed at the beginning; it has evolved along with scientific and philosophical understanding.

Right, okay. In a couple of minutes I will make the case for why we should be focusing on this, but I want to keep this line going.
Blaise, I'm curious what you think. I spoke with your former colleague Blake Lemoine, the Google engineer who said that LaMDA, your chatbot, was sentient, and we went through a couple of examples of why he believed the AI felt something. Let me give one. He wanted to pressure-test the system to see if it would break its own rules, and one of its rules was that it couldn't favor one religion over another. So what he did was go in and abuse the system: he told it how terrible it was, made it feel bad, and it said, please stop, I'll do anything for you to stop. Then Blake asked what religion he should convert to, and it said, well, probably Islam or Christianity. So it had rules it was supposed to follow, and it seems like what Blake is saying is that it felt that pain and broke those rules in order to make it stop. That's the kind of thing that made this conversation feel important to have. You've been sitting here listening patiently.
What's your reaction?

Well, I agree with a lot of the points the other panelists have made. Unsupervised learning is a huge transition from supervised learning. AI in 2018 was a hot dog or no hot dog detector; nobody was having conversations about consciousness with respect to such a thing. We were having conversations about whether such a model understands what a hot dog is, for instance. If it's just a picture, then maybe the answer is no; if it's more multimodal, maybe the answer starts to get more interesting. But the reason we're having this conversation now is language models: models that you can interact with using general-purpose conversation, in which everything is open for discussion, including how do you feel, or what is the relationship between the two interlocutors. And the trouble is, we know how to test for intelligence, or for knowledge or understanding of something. That is something open to investigation from a more scientific standpoint; it's what teachers do all the time with students when they give tests. They don't ask, are you conscious? They ask, do you understand how to do linear algebra, or something like that. And we can do the same kinds of experiments with machine-learned models, with these large foundation models, and I believe the results are showing that they understand more and more of this kind of stuff.

Now, the deep question that Anil was getting at, is there something that it is like to be X, whether X is you or me or LaMDA, is a really difficult one, because you can't experience the interior of another being; all you can have are behavioral interactions with it. And this is why Turing invented the Turing test. That question, which Anil has spent a good deal of his research career trying to address, and has been one of the most successful, I believe, in doing so, is fundamentally not addressable: knowing what it is like to be somebody else. All you can do is interact, measure, and so on, and we can interact with and measure an artificial system as well, but we don't know if there is a thing that it is like to be it. What we do know is that if you believe that, say, this bottle of water has a spirit in it, you can certainly project, you can anthropomorphize, but I don't think it is modeling you back; it's not generating a model of you the way you're generating a model of it. A chatbot like LaMDA is generating a model of its interlocutor; that's part of what it has learned to do in that unsupervised learning process. So from that perspective there's a little more going on than just anthropomorphic projection; there is that sort of hall-of-mirrors effect. Now, I have my own beliefs about what consciousness might be that rely on that operation of mutual modeling, but that still doesn't really answer the question of whether there is a thing that it is like to be the bottle, or the model, or what have you. And I'm not really sure, as Joelle was saying, that that's a question that is accessible to scientific investigation.

Yeah. I want to ask a question of the panel: does anybody here not want general AI to happen? Raise your hand if you're not interested in us getting to general AI. Okay, no one. So I'm curious. We talk a lot about is it or isn't it, and I also wonder, should we? We have some of the best researchers in the world working toward, if we're not there yet, building an AI that can mirror human intelligence or maybe even exceed it. That might be playing God, and I wonder if there's a danger in that for you.
Go ahead.

I'll speak for myself. To be totally honest with you, I'm much more motivated by solving concrete, real-world problems on a much shorter-term horizon, just because I see the potential for AI to be really transformational, whether in healthcare, in education, or in several other fields of our society. So that's my personal motivation. The other thing that I think is important to understand is that in many ways, whether you pick the path of building new AI technology to solve these shorter-term problems or you take an approach that is more geared towards general AI, right now, where we stand, the types of problems we have to solve are very similar: this notion of understanding language, understanding images, understanding behavior, understanding how several learning agents evolve and learn to interact simultaneously. These kinds of problems at the frontier of AI are very similar in both cases. So to say, should we be doing this because we don't know what the outcome of building general AI will be: to me, we have to be careful how we balance that against the progress we need to make to solve the concrete problems we need to tackle as a society right now.

Right, and it's not only the unintended consequences; there's a spiritual element as well, which I think is strange to talk about at a science conference. But does anyone think about that?
Honestly, I think that's a good question. There's a long history of people trying to, as you say, almost play God: building machines, or things, in the image of ourselves. I think of the golem from Jewish folklore, a humanoid thing shaped out of clay from the banks of the Vltava River. This goes back thousands of years, this desire to create things in our own image, and I do think there's something ethically concerning about it, and something kind of deeply spiritual about this desire. But I agree with the previous panelists that a lot of what's going on in AI is happening outside of this focus, which gets blown up and telescoped, this particular goal of building something in the image of ourselves, when a lot of what we're doing in AI is building tools, not colleagues, as the philosopher Daniel Dennett said. We want AI, I think, to complement our skills and abilities as humans, not necessarily replace them. But there will always be this underlying curiosity, which I think is valuable, to understand ourselves, to understand what makes us somewhat distinctive; perhaps not as distinctive as we might like to think from all other animals, but certainly distinctive to some extent. And that, for me, is the scientific value of general AI: it's shedding light on what it means to be human.
Before you go, let's get mics out. If people have questions, raise your hands. By the way, I can't see the audience, so you'll just have to ask. But please, go ahead, and then we'll get to the first audience question.

Quickly: I think one of the traps we fall into when we have the conversation you've just raised is a lack of zooming out, of understanding that we have constructed ourselves, as it were, collectively. Our bodies are the way they are because of all of the technologies we've invented over the last hundred thousand years. That's true of fire and short guts, of drinking milk in adulthood, of not having hair all over our skin because we have clothes, and so on. Locating the spiritual element, or intelligence, or any of these kinds of things in the individual alone, and not thinking about it as a broader collective thing, is, I think, the root of a lot of cognitive dissonance. From that perspective, when we do things like build foundation models, it's not really separate from the intelligence that we have, and that we experience, collectively. I think that frame is an important one, and one we have to start to come to grips with, especially in the West, where we are really obsessed with the idea of the individual isolated from society.
Okay, great. I can see the crowd now, so let's run some mics out. We've got one there; whoever has it on this side of the room, go ahead and ask.

Sure, can you guys hear me? Loud and clear? Great. I was wondering if the panel could touch on legal liability and AI. Is it the average employee at a company making use of AI systems? Is it the creator? What are your thoughts on this?

A legal liability question. Not very on topic, but go ahead, Francesca.

I heard something about liability; what was the question exactly he wanted to ask?

Who's legally liable when people use AI inside a company. You know, I'm going to make an executive decision here and skip that one; sorry, we'll get to it later, but we want on-topic questions. Do we have anybody on this side of the room? Right here.
Hello, everyone. I am Professor Avinash, from India. I do research on Indian spirituality and the ethics of AI, and I really like this panel. I just want to make one small comment: we humans think that we are super intelligent, but that's not the fact; animals can predict earthquakes better than we can. And given the way we are developing new technology, especially AI, and my belief that humans, created by another intelligence, are now creating another intelligence, I think companies and governments are not able to identify, or even initiate, the discussion of what our relationship with this emerging technology, especially AI, will be 20 or 30 years from now. The gentleman asked about legal liability, but I think things will move from legal to ethical liability. So my question is: what are companies, especially the corporations creating AI, doing to develop that relationship between AI and humans?

What are they doing to create that relationship, apart from government?

What are companies doing to create a relationship between companies and humans? Because governments are trying to regulate but can't do it alone. So what are the big tech companies doing?

Okay, go ahead.
Really briefly: I think it's an interesting question to raise. What are the ingredients we need to put in place to create that so-called relationship between AI agents, individuals, and society? We would need a much longer panel to take this on, but let me add one ingredient that I think is important, in my view. In a lot of the work we've done at Meta AI, we lean heavily on open science: making some of that work available, whether it's the models themselves, demonstrations, or research prototypes that people can get their hands on to start to experience the technology. In a sense, it's by making it available to a large number of people outside the companies that we learn how to build our technology better, and that other people can bring a point of view on how to build it. So the open science aspect, in terms of code, models, and demos, has been really important in our work, and one that we do try to push and support. It's not always easy; there are liability questions and ethical questions that come up. But I think it's an important ingredient.

Okay, well, we're way out of time, so maybe we can pick this up later. For now, I just want to thank Joelle, Francesca, Blaise, and Anil. This was amazing. Thank you so much for being here. Let's give them a round of applause.
[Applause]
Thanks so much, Alex, and thank you, panelists. That was a fascinating discussion to get us kicked off, and thank you all for your questions as well.