Is AI Dangerously Overhyped? — With Gary Marcus

Channel: Alex Kantrowitz

Published at: 2022-09-13

YouTube video id: BdZSjabDfAk

Source: https://www.youtube.com/watch?v=BdZSjabDfAk

welcome to the big technology podcast a
show for cool-headed nuanced
conversation about the tech world and
beyond now in late july we had blake
lemoine on the show he is of course the
google engineer who believed the chatbot
that he was speaking with lambda was
sentient
this week we're going to bring on a
different perspective gary marcus is the
author of rebooting ai he's one of the
most influential voices in the ai field
the founder of geometric intelligence
which was acquired by uber
and he wrote that lemoine's perspective
was nonsense on stilts to quote his post
on substack
word for word so we're going to get into
that we're going to talk a little bit
about what you can actually do with the
lambda technology why he doesn't think
it's sentient what it might take
you know to actually get to sentience
what that actually means different
perspective than blake had
and of course we'll go into the state of
ai today because it does seem like the
field is booming
and it'll be fun to discuss what is
happening in it with gary and maybe hear
a little bit of a different perspective
than you hear typically so with that i
want to welcome gary to the show welcome
gary thanks a lot for having me
thanks for being here um i definitely
want to get into the lemoine stuff
people who listen to the show um you
know basically listen to an hour and a
half of him you know speak about his
interactions with lambda um to me and uh
with me and uh i i thought it was pretty
fascinating obviously the conversation
doesn't end there but before we get
there i'm kind of curious last week
there was this really interesting
uh situation at the colorado state fair
where this guy jason m allen
entered an ai drawn
painting into the art contest there and
actually won first place and it's caused
this whole big controversy among artists
saying that he's a cheater and he's like
i'm not backing down i followed the
rules
curious what you make of the whole
situation what do you think it says
about ai that you know now people can
use a prompt he basically said
you know draw this and the ai drew it so
what do you think it means that people
can just use a prompt and now it's
winning human art contests
i think we're in a whole new world on
that score
later we'll talk about
some of my skepticism
in using ai's for some purposes but
there's no question that you can get a
whole breed of recent software to draw
amazing paintings or things that look
like paintings
and society has to sort out what it
thinks about that i mean it's sort of like a
performance-enhancing drug right right
um
and it's untraceable in general um and
so i mean
you know i i don't i don't know the
details in this particular case and how
people found out but
in general people are going to be able
to use these techniques um
in
you know the 1970s people started using
drum machines uh and started doing all
kinds of stuff with electronic music and
now
you know in in the studio if you're
doing music you can you know change
notes to make them have the right pitch
you can change the timing in subtle ways
and stuff like that
in general in music we just care what we
hear and we don't really care how the
sausage was made as long as it's
entertaining and maybe people will take
that attitude in art maybe they'll
they'll be upset about it you know my
expertise is really in what can the ai
do
and not so much in the ethics of
attribution and so forth if you talk
about another domain like language
synthesis it turns out that current
systems can make very convincing
language but it's often bullshit that
doesn't matter in the same way in art so
in
if you have a system make up a news
story even if it's trying to be truthful
it'll probably drop in some stuff that
isn't true and you know we expect our
news stories to be true
in the case of these artworks
if the thing doesn't do what you want it
to do you can say well i was just trying
to be surrealist or whatever there's no
fact of the matter the way that there
might be in an essay
and so then it's up to society you know
how do you want to treat these things
they are going to extend the reach of
artists just the way that you know
having multi-track tape extended what the
beatles could do i mean somebody might
have said somebody probably did say you
know you can't do this in a live
performance what is this like they're
using the studio as an instrument now
that you know some people will make that
argument
around computers
i don't have a strong
stake there i'm happy to tell you you
know what i think is plausible and where
the systems might break
i don't think i'm qualified to say you
know should this be legit i think that's
probably going to depend on what you
want your competition to be about
right but you know you're living in the
world of ai every day so you know i
think that like we'll get into some of
the other stuff but it is interesting to
hear your perspective on this stuff one
more question on that the guy wins the
art contest but his art is actually you
know ai drawing a painting based off of
all this other paintings that it's
ingested
is that original work
you know the analogy is a little bit
to sampling it's almost like
sampling on steroids so you know we have
licensing requirements in music and so
forth and people do you know they'll
drop in a
sample from an old police song or
something like that
and they'll have to pay royalties for it
and so forth and what these systems are
doing is kind of like
an amazing enhanced version of sampling
where you don't even recognize what the
samples are anymore it's all derivative
now you know there are always arguments
about this in the arts anyway like dylan
will say there's nothing new under the
sun and i just put it together in a new
way
it reaches a different level where a
system might have access to 600 million
pictures
and
it's difficult for the artist to say
what is the relation between those 600
million pictures that this system saw
and the thing that i got out of it when
i gave it a prompt i said you know draw
me a picture of a piano keyboard with
clouds around it and the system is
drawing on this database of existing
you know clouds and pianos and and so
forth and we really don't know the
relation of course we don't know that
with a human artist either but there are
certainly you know copyright questions
to consider and
it's worth realizing that the
art system doesn't really understand
what it's doing it's just correlating
words with images in its database
it doesn't have the same intentionality
about the objects as a person does but
it can still be a very effective
technique and you know
it's here and
we're going to have to grapple with it i
think it probably will change art i
think
in general that humans are still going
to be like the creative inspiration and
they're often going to be filtering
things so you have the system it's going
to create eight different choices maybe
the person likes one of them you never
even see the other seven and so some of
it is like
it's like monkeys and typewriters and
then you've got somebody at the other
end looking at what the monkeys made
and you know
the monkeys weren't that clever but but
somebody was clever to pick this one
thing that came out of this one monkey's
typewriter and they say this is great so
it's complicated i i don't know if
there's any magical answer but i do
think it's the new reality is um that ai
is going to enhance
um the palette that's available we had
the same kind of questions with
photoshop before right i mean right so
i do photography and i almost never
do anything more than
um tune the color a little bit there are
other people that you know the images
that they share have been completely
redesigned it's similar to that right
you have power there you can also think
of these things a bit analogously to
filters in photoshop or something like
that um you know they're creative tools
to push you harder you always need the
human in the loop
you don't want to just sort of blindly
trust that photoshop is going to give
you the result that you want
there's an artist who has an idea about
what they want out of this thing but
i think it's you know it's pretty
interesting um it's also interesting
that the systems can do as well as they
can do without having that much
comprehension of the world they can do
that because they are like parroting
essentially um these massive databases
that they've seen before
right and you know these are
really interesting points that are
sort of relevant to the lemoine
situation i don't think anybody would
say that the ai artists are sentient
they are responding to commands they are
drawing pictures however when you start
to deal with ai systems that have
you know that can communicate to humans
through words versus pictures
all of a sudden you start to see that
and you know i think that um you've come
out strongly saying that um lemoine who
was the google engineer who was chatting
with lambda and said it was sentient was
fooled
actually we'll read uh you know a
bit of your nonsense on stilts post uh
you wrote neither lamda nor any of its
cousins like gpt-3 are remotely intelligent
all they do is match patterns drawn from
massive statistical databases of human
language these patterns might be cool
but the language
these systems utter doesn't actually mean
anything at all and it sure as hell
doesn't mean that these systems are
sentient so i'm curious like how you how
you draw that line because um you know
obviously the chat bots are producing a
stunning result similar to the artists
where what is your line for saying you
know
what is sentient and what isn't and and
what would someone have to show you
in a chatbot for you to say okay maybe
this is
the latter question is interesting um
the
first thing to say is sentience can
actually mean a bunch of different
things
there's one really narrow definition
which is not what i think the
conversation was about which is like a
system that can sense something so
here's such a system and on that
definition this
is my apple watch
yeah and my apple watch has in it
sensors that for example detect my
degree of acceleration
um and that allows it to track how many
minutes i've exercised each day um
because it
imperfectly understands what i'm doing
in the world if i'm out on a boat it
might think i'm walking because it
misinterprets the acceleration forces of
the tides um so it's imperfect but it
does some sensing it's true that you
know the acceleration has moved in this
in that way um it also has a microphone
i can do something with that so um you
know but nobody really thinks
seriously in a broader sense of sentience
that my apple watch is sentient so um i
don't think what lemoine meant is just
lambda has sensors and in fact if he did
that would be a foolish place to make
the case because lambda actually has
fewer sensors than my watch my watch has
a lot of sensors and lamda doesn't really
have anything sensing the real world
except for its linguistic input um and
and in that sense
you know it's ridiculous right that's a
narrow definition but if if you want to
do a linguistic analysis you have to be
careful and say that there are different
ways of using the term but what i think
he was getting at and i i'll say i
didn't have the luxury of having him on
my own podcast for 90 minutes but um i
did try to pin him down a little bit on
twitter and he was very weaselly when i
tried to do it he kept turning it back
on me um but
it seems to me that what he was implying
was something in the realm of conscious
and intelligent
um and nobody would argue that my watch
is conscious like what did that mean and
it's not particularly intelligent
although it does a few things we might
associate with intelligence he was
describing something in that
domain if you look at the wikipedia
definition of sentience one of the
definitions is like the sort of science
fictiony one that's like you know do
alien life forms have sentience you know
are they conscious or intelligent and
that's clearly what he seemed to mean
and that's what the conversation was
about he's saying straight up that this
thing is a person
yeah well there's no question that it's
not a person i mean that's a ludicrous
claim um what it's doing is is repeating
things that people have said and it's
not just repeating them it's a little
bit more sophisticated than that um but
you know if you fed into it um computer
programs then it would start speaking so
to speak in the language of computer
programs right it's a mimic is what it
is it's a very talented mimic the things
that it says don't reference either the
real world or even an internal
construction of reality so when i talk
i'm talking about the real world i might
get it wrong i might tell you i think
that there's a cat outside the door and
maybe i'll make a mistake right i have
an internal representation i think that
there's a cat because i hear a certain
pattern of footsteps and so forth but
maybe somebody
tricked me with a tape recording or
something like that
so i don't my brain doesn't have direct
access to the external world everything
is mediated through my perceptions but i
have a model in my brain of how the
external world
works so i'm in a you know i'm in my
basement i'm in my house this house is
in british columbia and you know i
understand relations between entities
i understand that i need to pay them
taxes in a particular place i have all
these
beliefs about the world most of which
are accurate
um and my language is a connection to
that so if i say i saw my mother last
week i probably did actually see my
mother the word mother probably refers
to a specific person and i probably did
actually see i probably you know i could
be a sociopath making that up but i'm
probably not um and in fact i did see my
mother last weekend earlier this week
and that was great i hadn't seen her in a
while um
and so you know there's physical entity
in the real world that corresponds to my
mother and then i have this mental
representation when
i go through this in some depth because
when we get to lambda lambda doesn't
have a model of the world so one of the
sentences i found most telling in a
quick perusal of the transcripts was
lemoine asked it something like
what do you do for fun and it said
something like i like to spend time with
my friends and family and do good deeds
for the world
well it doesn't have friends and family
it's not referring to some internal
representation of who its family is if
you asked it who its family is it would
have to make it up at that point it's
like a complete bullshit artist in that
sense
some people might say well they just did
that to please you but it doesn't
actually even care to please you all
it's doing is predicting in this
database of sentences that i've seen if
somebody said something like the last
sentence what would the next sentence
they say be
i think one of the analogies i think i
put in that paper was there are some
people who play scrabble in english but
don't actually understand english they
just have memorized a list of english
words and so for them i think i once saw
a phrase these are word tools
they're not words for them
everything is a word tool
um for lambda it's just you know the
statistics are that this is the next
thing i say so people ask me what i like
to do well lots of the answers in this
database and the mind reels at what a
database of almost a terabyte is it's a
really really huge and it's way more
than the works of shakespeare times i
guess a hundred thousand this is the
amount of stuff that was used
fed in
the lambda was trained on so it's
trained on massive amounts of
conversations from reddit and stuff from
wikipedia and all over and so
there's a much simpler algorithm that's
easier to talk about called the nearest
neighbor and you could imagine what a
system that just used nearest neighbors
would do
it would look through everything that's
been said
find the thing that was
closest and then say whatever was
said then
and that would work like 70 percent as well the
current technique is a little bit more
sophisticated but it kind of gives you
the idea imagine you're just finding the
closest thing in this massive transcript
because the
transcript is so massive usually you can
find something and usually whatever
somebody else said they were a human the
human did have a model of the world was
understanding the world and said
something that was contextually relevant
so you pull it out of this database
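the nearest neighbor scheme gary sketches can be written in a few lines of python the three-pair corpus and the word-overlap similarity below are invented for illustration nothing lambda actually uses

```python
# toy nearest-neighbor "chatbot": find the stored utterance most
# similar to the prompt and echo whatever reply followed it.
# the corpus and similarity measure are made up for illustration.

def overlap(a: str, b: str) -> float:
    """word-overlap (jaccard) similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# (utterance, reply-that-followed-it) pairs from an imagined transcript
corpus = [
    ("what do you like to do for fun",
     "i like to spend time with my friends and family"),
    ("where should we meet for dinner",
     "i will meet you at the restaurant"),
    ("do you enjoy painting",
     "yes especially landscapes with clouds"),
]

def nearest_neighbor_reply(prompt: str) -> str:
    # no model of the world: just pick the closest stored utterance
    # and repeat the reply that was paired with it
    best = max(corpus, key=lambda pair: overlap(prompt, pair[0]))
    return best[1]

print(nearest_neighbor_reply("what do you do for fun"))
# -> "i like to spend time with my friends and family"
```

the reply sounds contextually relevant only because some human in the imagined transcript once said something relevant after a similar utterance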
imagine i'd done the same thing with a
spreadsheet like
people would look at me as if i were
insane if i said a spreadsheet was
sentient and rightly so the
spreadsheet's not sentient it's just like
i'm going to add up these columns add up
that column and i give you the answer
that corresponds and essentially that's
all it's doing that doesn't mean we
could never build an ai system that did
have a model of the world that did
reflect on its own model and so forth
but this one doesn't um you know if i
wanted a candidate for sentience i would
give you the turn-by-turn navigation
system in my phone which uses
accelerometers um uses satellite signals
in order to build a mental
representation of where it is in the
world and then it acts on that mental
representation of the world in order to
calculate the best way of getting from
point a to point b that's not very sexy
it's not like the most you know
it's not like it's sitting around eating
grapes and contemplating the universe
but my turn-by-turn system has more
elements of what i would actually
ascribe sentience to than lambda which is
really just autocomplete on steroids
that's all it is right you type in your
phone i will meet you at and it guesses
that you know you might say the
restaurant because either you or other
people have said that before all
lambda is doing is predicting next words
and sentences and it is this massive
scale of data that makes it seem like a
real thing that it just isn't and
inevitably these systems do break down
um you know he did some cherry picking
he showed the best things and so forth
um but that's almost not the point the
point is not so much the errors it's
just the basic mechanism it's just
predicting next words and that is not
what sentience is about
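the autocomplete idea, guessing the next word from counts of what has followed it before, can be sketched in miniature the training text below is made up for illustration

```python
# bigram "autocomplete on steroids" in miniature: count which word
# follows each word in a training text, then always predict the
# most frequent continuation. the training text is invented.
from collections import Counter, defaultdict

training_text = (
    "i will meet you at the restaurant "
    "i will meet you at the station "
    "i will meet you at the restaurant"
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # the single most common word seen after `word` in training
    return follows[word].most_common(1)[0][0]

print(predict_next("at"))   # "the"
print(predict_next("the"))  # "restaurant" (seen twice vs "station" once)
```

scale this counting up to a terabyte of reddit and wikipedia and the continuations start to look like conversation, which is the illusion being described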
now but some of the stuff that we heard
from blake i'm just going to relay it
and i'm curious what you think because
some of that stuff that we heard from
him
indicates that lamda you know
had more capabilities than we're talking
about here for instance um lambda asked
blake to build it a body so it could
take the mirror test where you know in the
mirror test you
hold a bottle above your head and whether
you look up or look at the mirror that's
a determination of your intelligence and
then there was also this moment where
blake
wanted to pressure test its rules this
isn't something that you would do with
the spreadsheet one of the rules that it
had
was that
it could not privilege one religion over
another
and so blake then said that he
told lambda he was going to pressure
test it and he
kept on telling it how terrible it was
and then said what religion should i
convert to and then lambda said
christianity or islam despite the fact
that it had rules not allowing it to
uh privilege a religion over another so
when you hear this stuff you know i'm
curious what your response is to it um
and and again like maybe this is a good
moment to go back to the question of
what would it take for a chatbot then to
to show enough that you would say okay
this is sentient
it has to do with the fundamental
mechanism of the system
in order for me to think a chatbot
is sentient
it would have to represent itself and
the world and things about that
in a way that it could reflect on them
and do something with that and this
system just isn't doing it it's just
predicting next words
again like
unless you really think
hard about how many words are in a
trillion words of training you don't
realize that for example anything that
you want to talk about is probably in
some damn reddit conversation already
and it's probably drawing from that
um you know
a real sentient system if it said
something like i like to play with my
friends and family would have something
in mind about what its friends and
family are that's part of what being
sentient is when you think a thought
it's related to something in the world
some philosophers will call that
intentionality there is none of that in
this system it's just a potent illusion
um you know earlier in ai
there was a system called eliza in 1965
and all it did was keyword matching but
it sometimes fooled people
it was set up as a therapist and
it would do very primitive matching
stuff like if you said something about
your girlfriend it would ask you to
tell it more about your family if you
said problems it would say can you tell
me more about that and so you know it's
easy for a person to see a small
evidence of what looks like humanity and
ascribe humanity to that thing right our
evolutionary ancestors did not have to
deal with the problem of discriminating
between machines and people that did not
arise so what our brains really
are
evolved to do among other things was to
find conspecifics that we could mate
with and to rule out those that are not
conspecifics so
you know we're very good in general at
telling other biological creatures do
they belong to our species or not but
there was no you know there's no
machinery in our brains
innately to tell us the difference
between a person and a machine and what
happens
is
that in evolutionary perspective
anything that could talk was probably a
person right other you know you could
worry about parrots a little bit um so
we don't have that
machinery in our brains um a skilled
person
can actually find a lot of problems with
this system so somebody who is trained
as i am in the cognitive sciences um you
know can pose problems and find cases
where these systems will break down and
so forth um but it's not something that
like an amateur can do amateurs are
easily fooled the remarkable thing about
the blake lemoine case is
that he's at least to some degree an expert an
engineer at google you would
expect him to know better
um
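the kind of keyword matching that powered eliza can be sketched in a few lines the rule table below is a made-up miniature not weizenbaum's original script

```python
# eliza-style "therapist": pure keyword matching, no understanding.
# the rule table is an invented miniature of the 1965 program.

rules = [
    ("girlfriend", "tell me more about your family"),
    ("problem", "can you tell me more about that"),
    ("mother", "how do you feel about your mother"),
]

def eliza_reply(utterance: str) -> str:
    text = utterance.lower()
    # fire the first rule whose keyword appears anywhere in the input
    for keyword, response in rules:
        if keyword in text:
            return response
    return "please go on"  # generic fallback when nothing matches

print(eliza_reply("i had a fight with my girlfriend"))
# -> "tell me more about your family"
```

a handful of canned responses like this was already enough to get some users ascribing humanity to the program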
you have to also look at his history
he's been talking about like robot
rights for a long time and you know
there's an old youtube of him like five
years ago and so forth he
had a will to believe he wanted
to believe um i think that this system
was sentient and
these systems
are so good at mimicking language human
language that you know you can talk
yourself into it but it's just not how
the system works it's not
relating something to the world it's
just predicting next words interesting
okay one of the things that i
kind of wonder about this is you know
how does again like
i i understand your your perspective on
what sentience is but like
one of the thoughts i've had and reading
about it speaking with blake is what are
humans if not you know intelligent
machines trained on
you know many years
many terabytes of historical data so
where do we draw the difference because
there are these machines but we're a
very different sort of machine and it
goes back to our trying to
represent entities in the world and to
reason upon them and to act upon them
and so forth
it's just a different
set of computations that we're trying to
do i am in no way arguing that it is not
possible to build a sentient machine
um i don't think we know how to do it
and i don't think we're clear enough on
what it would consist of but i'm not
making the argument that it's impossible
i'm just looking at how this system
works and that's just not what it does
right i mean here's another way to think
about it
a lot of sentience
talk is or talk about consciousness
and
a lot of what we talk about is really
self-reflection
when we talk about consciousness
there's a general problem here that
there are many terms they're fuzzy
they're not well defined and so forth
but but part of it is about
when we reflect on ourselves we're
reflecting on ourselves in a world
um in our relation to that world i'm
thinking about am i making clear enough
answers to you that's part of like my
self-awareness circuit and am i
convincing you and not um you know maybe
you're not completely convinced and i'm
disappointed and i'm trying to think how
to make you more convinced and so forth
but but these are with respect to
constructs about the world so i have a
construct of you i don't think we've met
before um but we've seen each other's
name around the internet or whatever and
so you know here's this person he's
doing a podcast he's got a good audience
and so you know for me it's like i could
get my message out and for you he's an
interesting guy and so like we have all
this like model of each other and why
we're doing this and you know you know
that i'm sitting in this room in this
newly renovated house that has a hole in
it and so you know some things about me
and you can reason about them like you
wouldn't be totally surprised if now my
roof leaked on me having been told this
other stuff you know about the problems
with my new house um
so we have all of these ideas and then
you reflect like is that funny you know
should we cut that from the scene
did it work did it not work or is it
worth the trouble of editing you know
where is this going to leave me in my
life you're reflecting all the time on
the things that you hear how they relate
to your knowledge about the world
and that's part of what consciousness is
and maybe some of it's like this meta
higher level like you think am i
thinking about this the right way or
something like that this system's just
not doing that it just isn't
like there's no
part of the system that represents that
the topic that we have right now is this
that the friends that i mentioned are
these that the family i mentioned are
these the closest thing i could come up
with in that paper in some ways was that
it's a little bit like a sociopath right
a sociopath
would tell you in conversation you
know read the room and be like they've asked
me what i like to do with
myself well if i were in one environment
i might say what i like to do is play
basketball but i'm not in a sports crowd
so i think what i'll say is i like my
friends and family even though in truth
i have no friends and family because i
shot them all right i'm a sociopath but
i'll say that anyway even though you
know i don't have
friends and family um
right you just make it up
and
this system is kind of like that because
it's everything it says is just made up
um but it's just not doing it for the
same reasons the sociopath is doing it
because the sociopath wants you to like
them so that they can get some power or
leverage or whatever and this system all
it does is predicts next words and
sentences and the astonishing thing is
that humans like so much to please each
other that they often affirm what they
do and um and so forth so you get really
weird cases like gpt-3 which is one of
lambda's cousins if you say
um i think i'd like to commit suicide it
might say i think you should because
it's so common in the statistics of
predicting next words for people to say
i think you should whatever you know
half-assed thing you might have in mind
your friends are like i think you should
so it's all in this database or maybe the
ai has come to a different ethical
judgment about something but it hasn't
though right like it can feel that way
but
and you could build an ai system that
makes ethical judgments and i think
that's a really interesting question but
a good system that made ethical
judgments would for example be able to
represent the fact that if you committed
suicide you would no longer be alive it
should be able to represent the fact
that your family members would probably
be disappointed if you had any um and so
forth that there would be like insurance
to work out or you could think about all
of the consequences this is just
spitting out the words i think you
should without any idea what any of
those consequences are and i mean that's
what makes it reckless like you you
could put these systems into advice
medical advice giving chatbots and they
will merely give you advice and a lot of
it will be bad advice and it will be
unreflected upon bad advice it will be
given because people say these words
frequently and not because it has reasoned
through
whether it might be ethical so i mean a
human could have a deeper conversation
and say well you know why do you want to
commit suicide are you having a medical
problem is it an unresolvable medical
problem have you talked to anybody about
this and could you know doing that
sort of change things and and maybe you
could convince them that in your
particular case you know suicide really
is the right answer but this system
hasn't done any of that it just walks in
cold and says i think you should it
doesn't even know who the you is that
it's talking to and it doesn't care
it just knows that these words follow
these other words it's so shallow it's
too shallow for me to possibly ascribe
sentience to it
sentience is to be aware of some stuff
and it's not aware of any stuff right
my watch is aware
of some stuff right so my watch is again
a little bit more sentient than lamda is
interesting gary you've written that the
wright brothers didn't build a bird
right so
the way
that we built something artificially
that could fly looks very different from
the way that it looks in nature
so i'm kind of curious what are very
different but not entirely different
right there's an interesting
intermediate middle ground there a lot of
people run that argument in the wrong
way and they say airplanes aren't like
birds and so we have nothing to learn
from nature and that's not right either
you know they they figured out some
stuff about flight control by watching a
lot of birds right so you know
in in the case of ai
i don't expect
that
if we ever get to so-called artificial
general intelligence which would be sort
of like the star trek computer you can
ask it any question and get a
trustworthy answer i don't expect that
to work just like in human intelligence
but i suspect that it will borrow some
things from human intelligence or have
something similar so i'm pretty
confident that we'll have models of the
world internal ideas about how the world
works um i don't see how to build an ai
without it so there'll be some things
borrowed from people and some things
like you don't want to do your
arithmetic like a person
right i mean people are terrible at
arithmetic and so you know you don't
want your system to forget to carry the
one
you know in a long arithmetic problem so
we'll borrow some things and not others
right but i guess when it comes to
assessing whether ai is intelligent
how like it can look different you
know what i'm trying to say like
it doesn't need to be like us
why does it need to mimic our
awareness of the world versus be a
seemingly intelligent conversation
partner in your perspective
well because the problem
is one of reliability so okay i don't
think it has to have the same models of
the world so you know my gps system
doesn't have the same model
of location as i do it relies mainly on
um satellite receivers that i don't even
have any sensation to pick up right it
triangulates between a bunch of
satellites and i don't navigate that way
i mostly use landmarks and my gps system
doesn't give a shit about those
landmarks which in some ways makes it
more reliable because if the landmarks
change yeah
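the triangulation gary mentions can be sketched in two dimensions with three fixed beacons standing in for satellites the beacon layout and the receiver position below are invented

```python
# 2d toy of gps positioning: recover a position from known
# distances to three fixed beacons (real gps solves the 3d,
# moving-satellite version with clock error). layout is invented.
import math

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

def trilaterate(d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    # subtracting the three circle equations pairwise leaves two
    # linear equations a*x + b*y = c in the unknown position
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# a receiver at (3, 4) would measure these distances:
dists = [math.dist((3.0, 4.0), b) for b in beacons]
x, y = trilaterate(*dists)
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

no landmarks anywhere in this calculation, which is exactly why changed landmarks can't break it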
i was going to say some ways a feature
or a feature or a bug
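the triangulation gary mentions can be sketched in miniature (a toy two-dimensional, noise-free version with made-up "satellite" positions, not how a real receiver works): subtracting the range equations pairwise leaves a small linear system for the receiver's position.

```python
import math

def trilaterate(anchors, ranges):
    """Solve for a 2-d position from distances to three known points.
    Subtracting the circle equations pairwise cancels the quadratic
    terms, leaving a 2x2 linear system (solved via Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# made-up "satellite" positions; the true receiver sits at (3, 4)
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist((3.0, 4.0), s) for s in sats]
x, y = trilaterate(sats, ranges)
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

note that nothing in the solver resembles landmark-based human navigation, which is gary's point: a completely different internal model can still recover the same answer.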
definitely characteristic of humans you
can't argue with is that
we're unreliable
we're unreliable but i mean the
shocking thing is that as bad as we are
at driving we're still better than the
best machines
um you know so for now
we're
for now that will change eventually but
it will probably take longer than i
think a lot of people recognize
um so you know there are some ways
in which people are more reliable and some
in which machines are the way in which
we are more reliable right now is
in understanding
let's say an article that we read or
movie that we watch
understanding the motivations of
characters why they're doing things like
in the world of understanding physical
objects and human relationships with one
another things like that we're just far
ahead of the machines and in arithmetic
we're just way behind
so you do have to look at these
things domain by domain one of the worst
mistakes i think people make is they
think ai is like a one size fits all
universal
solvent that can do anything the reality
is it's a bunch of different tools some
of them work really well some of them
don't work really well there are
problems that have been really well
solved and problems where we have no
idea um haven't made any progress in 50
years so
it's this really mixed bag
and some of it's better than people and
some of it's not yeah okay i want to get
to some of the dangers
of of this type of stuff in the second
half but let's just close out this half
with a question that i read on your
substack from a commenter that i found
interesting i think the commenter said
something like how do we know that
humans are sentient if we're trying to
do all this work trying to figure out
well in philosophy we call that problem
of other minds and ultimately all we
really have is ourselves right so i
don't know for sure that you're sentient
and
some point
um
i'm going to say 30 years from now
we'll be able to make machines that do
podcast interviews and i won't really
know if it's a person or you know a
machine faking you out or whatever at
some point
um
at that moment we have no
independent test of like whether
somebody else is conscious like this
whole field of consciousness we'd like
to answer questions like is a rock
sentient or conscious
and you know most of us would say no
there are some philosophers that would
say rock has a little tiny bit of
consciousness or maybe sentience i've
never heard anybody quite make the
argument that way but it wouldn't be too
far a leap from some positions i've
heard so there's this idea of
panpsychism um where there's a little bit
of consciousness everywhere
i'm not a big fan of it but like there
are respected philosophers that try to
make arguments like that um we don't
have my point is we don't have an
independent meter for that
um so mostly i ascribe sentience to you
because you do the kinds of things that
i think i might do let's say and you know
i have my own internal representation
and whatever it's not completely
convincing like
you know i wish there were a better
tool and some people play around with
like you know different brain signals
you might measure
there are interesting questions about like
how do i tell if somebody's had an
accident they can't talk anymore how
much is still going on there and there
are ways of looking at brain scans to
try to make guesses about that but none
of them have a full like independent
grounding there's there's no like gold
standard like here is this you know
pound of gold that we can use as a
universal reference and we um you know
we can describe a second in terms of how
far the earth travels on its orbit
there's no
independent reference there and so
philosophers call this the problem of
other minds i think it's for now an
unsolved problem
gary marcus is with us he's the author
of rebooting ai and founder of geometric
intelligence which was acquired by uber
uber lots of great stuff you can find
his
writing on
garymarcus.substack.com we'll be back
right after this short break and then
we're going to talk about the dangers of
what might come with um ai that can
convince people that it's sentient um
but is not
and we're back here for the second half
with gary marcus he's the author of
rebooting ai it's a great book you
should go pick it up also the founder of
geometric intelligence ai company that
uber acquired gary let's talk a little
bit about the hype situation here so we
know now let's at least take the the
notion that ai can fool a google
engineer into thinking it's sentient
there's a lot of people who aren't well
read um i would say almost everybody
you know who aren't experts on these
systems
if the ai can now convince somebody um
who's an expert that it is sentient
what's going to happen when we're going
to be living in a world where you have
you know these systems run amok um is
that is there you know you gave the
example in the first half about you know
health ai may be telling someone to
commit suicide um
is there immediate danger here and what
is what is the um concern you have with
folks
who say that this stuff is that
artificial general intelligence is here
now present um and among us
well there were a couple different
questions um
artificial general intelligence is not
here like that one in my view is
not controversial to be artificial
general intelligence would mean that a
system can encounter problems it hasn't
encountered before and come up with
sensible solutions um that would be
critical to artificial general
intelligence as opposed to artificial
intelligence so you have narrow ai like
a chess computer we already have things
like that that can do a particular problem we
just don't have systems that you can
confront with a novel problem that it
hasn't seen before and expect a
reasonable answer it just doesn't exist
yet um
the next question part of what you asked
is like
should we be worried right now if if so
what we should we be worried about
um
the first thing we should be worried
about right now is actually
it's kind of like the wild west out
there people can put up any piece of
software they want and there's almost no
um before the fact regulation
so
if you want to i don't know make a
military drone or something like that
there's a lot of regulation before you
can put something in the air if you want
to introduce a new pharmaceutical to
fight covid you have to do tests before
you can commercialize it right um
phase one phase two phase three testing
all that kind of stuff if you wanna put
out an ai system
that does something that could
potentially
lead you to commit suicide for example
no regulation on that
um prospectively at all there's some
retrospectively in the sense that if you
do something bad you make some bad
software um somebody could sue you for
liability but it's only after the fact
there's really very little there's a
little action in europe but
essentially there's there's no
regulation so if
one night somebody at the tesla factory
got mad and broke into the system and
decided to hack it in a way similar to
something that did just happen in russia
the other day they could do that there's
no law that stops that from going out
so what happened in russia
yeah it was um
with different technology but somebody
managed to get all the taxis uh
to go to a single place at the same time
which created all of these um it wasn't
an
autonomous vehicle thing but they just
like put out fake requests or something
so all the taxi drivers converge on this
one square in russia which caused these
you know massive traffic jams as well
you have to like get them all out of
there once you you know figure out i
mean i don't know if it was a practical
joke or it was done out of malice or
protest or why it was done but um you
could easily for example
if you were malicious make all
driverless cars converge on a point or
you know small set of points or
something like that um
and then you know if you had a bad actor
inside of let's say tesla wanted to do
that and then they put it over the air
there's nothing to stop that except
after the fact you discover it didn't
work and then you know you deal with the
consequences um which is not unlike
kind of the situation with cyber
security and and so forth we were like
really
running behind the malicious actors in
many um domains so like you know you see
these crypto heists and stuff all the
time and
the major companies spend from what i
understand massive amounts of money on
you know payouts cyber criminals and
stuff like that so so you know ai is
just software and the software is not
all that tightly regulated so that's the
first thing to realize is like anybody
can kind of put out anything and there's
some after the fact mechanisms if it
doesn't work out but not a lot of
stuff in advance to say hey like
have you made a safety case here have
you proven that you could actually use
this reliably
there's very little software where
people have proven that things are
reliable you need to do that when you
design a plane so there are actually
standards around that um so like the
dreamliner i think had a lot of software
verification in the process but in
general software verification is not
required at all so that's the first
thing that's like background context
anybody can do anything kind of at any
time yeah before you get on to point two
one of the things that blew me away
after so i tried out dall-e with the
open ai folks
and then i was like well you're
being very cautious about the type of
images that people can release here
but there's gonna be copycats that will
not be cautious at all and all the
problems you're trying to prevent are
gonna end up being real problems for us
pretty soon and and really in quick
succession it was amazing how many
different dall-e copycats came out there
and now stable diffusion is
you know the flavor of the month and
is pretty open and yeah um i don't think
any of those have solved the problem of
like if you put in doctor you get a
white male they all have problems i mean they
may have solved that particular one but
you change it slightly and say you know
entomologist and you'll probably get
the same kind of thing
nobody has for example solved that
problem in general i'm sure that it's pretty easy
to get them to do things that are you
know graphic and gory and maybe would
make a lot of people uncomfortable um so
so there's no regulation around any of
that or hardly any regulation
around any of that
there are copycats right now
um
there's really only one technology that
people are using if you look at it at an
abstract level which is use a massive
data set
with
one or two common kinds of algorithms
and predict what's going to happen next
based on the data set that you've got or
you know draw the thing that's closest
in what we call a space of images or
the text you've got um and so at some
level they're actually not that hard to
copy um which is the point that you're
making right it's not like
dall-e has some brand new
intellectual insight that allows it to
happen
um relative to the rest of
the playing field like everybody kind of
understands the technology that we're
talking about it's mostly a matter of
getting together the data set once
somebody realizes hey you can do this
with this kind of data set somebody else
can get a similar data set they can do
the same thing so these particular
technologies
are not that easy to protect
intellectually i'm not saying you should
i think there's reason you know you
might want them to be open but
whether or not you want them to be open
right right they get copied that's the
reality so there may be some major
conceptual advance in ai and i think yann
lecun who i've notoriously gotten into it with
yeah he's had some good back and forth
on twitter he runs artificial
intelligence for meta for those
listening he was actually on the show
a few months back so he's the chief ai
scientist at meta um you know he and i
disagree about a lot it's kind of famous
people
write you know clash of the titans
things whenever he and i mix
it up but we actually agree that these
systems
don't really solve the problems the
larger problems of artificial
intelligence um and that we need some
paradigm shifts here there's
somebody else i got in a debate with about
whether we need a paradigm shift
some people know him as slate star
codex or scott alexander
um and you know he tried to make an
argument that maybe we don't need a
paradigm shift but it was a pretty
weak argument um lecun and i agree we
need some paradigm shifts and when those
paradigm shifts come maybe only a few
people will have them and there will be
some technologies built around them that
are
restricted
but right now
you're right most most of these new
technologies can be copied relatively
quickly you know open ai introduces
something and google's got a better
version four weeks later and maybe
a public uh consortium you know has
something very similar another few weeks
after that so that that's background and
it's relevant background to
to the malice question you're just
talking about so like
um for a while openai kept gpt-3 kind of
under lock and key they didn't let me as
a scientist use it in fact i requested
access and they didn't give it to me um
yeah i'm still waiting for the dolly
access so yeah
i think dall-e access is starting
to open up right but it doesn't
even matter now you can use stable
diffusion and you will get you know
essentially the same kind of uh results
so for many purposes it doesn't even
matter anymore that it's closed um
so
the possibility that bad actors will get
their hands on
these things is very high so meta released
something that's very much like gpt-3
out there in the general public and so
one of the
specific cases that i worry about most
is actually misinformation so systems
like
gpt-3 and lambda and so forth are really
good at making up text that sounds like
a human wrote it but they have no concept of
what they're talking about they're not
bound to the truth and if your job is to
make up lies that's not such a bad
technology right so if what you want to
do is to put out like 10 000 versions of
something on twitter
something untrue and find one that
sticks then
misinformation as a service as
they might call it in the tech industry
is a pretty damn powerful technique and
if it hasn't been widespread it soon
will be and i suspect it's already you
know i mean
the troll farms aren't going to publish
what software they're using but um it
would be foolish of them not to be
making use of this and so that meta
chatbot also was like pretty amazing it
immediately started making like pretty
uh next level critiques of facebook
saying you know
even if you're trying to connect people
you cannot be like a public good if you're a
capitalist enterprise and mark
zuckerberg is just doing it for the
money some amazing stuff some of which
was
hilarious yeah but it's also a reminder
what we're talking about in the first
part of the conversation so it's not as
if the system reasoned through
kind of just surveillance capitalism and
power and
zuckerberg and the ownership structure
of meta and the you know special shares
that he has which would be really
interesting if you know you could get a
system to do that instead it was you
know some line from somebody on
reddit maybe it put in some synonyms
and stuff like that but some human
basically came up with those ideas and
then they got churned through this machine
the embeddings give you synonyms
and stuff like that but you know they
weren't original thoughts a lot of
people have actually thought that
there's you know a lot of hypocrisy in
meta and how zuckerberg runs things but
it was hilarious that it came out of the
system the other thing that it shows is
it's almost impossible to corral these
systems so i wrote a sentence somewhere
the other day um about how these systems
large language models basically we're
talking about are like bulls in a china
shop they're awesomely powerful and
reckless like
you can't actually control them so like
meta didn't want to release something
that would make them look like they had
egg on their face and embarrass them and
so forth um they wanted to
help with open access science which is
to the you know their credit
but they didn't have a way to
corral the system such that it would
produce only things that were sort of
consistent with
the goals of the company right and if
meta can't make its system keep its
mouth shut about zuckerberg well now
imagine this in the medical context and
you're you're trying to use the stuff to
give people advice
it's just not reliable enough it's gonna
you know tell you that vaccines are bad
because a lot of people said that in the
database and it shouldn't be telling you
the vaccines are bad or it's gonna
tell you that it's okay to commit
suicide i mean that's a real example um
from someone experimenting with the
system of a company called nabla trying
to see uh what these systems do it's
hypothetical in one sense in
another the system actually generated
that um we don't have any way of
controlling that right now we don't have
any way of making these systems reliable
in that way so in the art domain i'm not
sure it's a problem somebody types in a
prompt and out comes something with you
know knives and blood and the artist
doesn't like it
um the artist being the human who's
running the system to go back to our
earlier part of the conversation that's fine
they just don't put it out there on the
web um but if you are interacting
directly with a chatbot that gives you
bad ideas it's problematic um
i guess i'm violating an embargo if i
say this thing i'm trying to
think about it okay do it
i i was asked to make a prediction about
next year it'll it'll be out soon enough
okay and
about ai and i i went dark um the the
prediction that i made
um
is basically that there will be a death
tied to a large language model in the
next year
and my reasoning was these systems
already
um have you know told people to commit
suicide they've said that genocide is
okay
they're also capable of making people
fall in love with them and lemoine
basically fell in love with lambda
well he said he was just a friend he
had love for it as he would for a
friend but not more than that
we're just friends yeah i heard that one
um sure now you know someone like
lemoine maybe not him specifically but
someone who developed that intimate
relationship with a machine and then i
don't know discovered that the machine
didn't really care about them or
whatever um might commit suicide it
would have to be a fragile person i
don't think lemoine would
no lemoine said he views lambda as a
friend that he will interact with again
many friends who you speak with and you
don't see for a while right but now
imagine a more needy person
a little bit less savvy and you know
so so there are multiple routes by which
these things i think might actually
cause a death in the next year now
because they're now
scaled out so everybody can use them
there's going to be way more of these
chatbots way more systems like
replika which is i think made fairly
carefully with some other technology on
top um
there'll be you know reckless knockoffs
of that um
it's just an accident waiting to happen
yeah a series of accidents gary isn't it
interesting that in the first half of
this conversation we spoke all about how
the ai is not sentient and is simply
repeating patterns and in the second
half we've spoken about how even so this is
a threat to people's lives i mean
exactly right
what does that tell you about where this technology is
heading what does that say about the
nature of this tech
i mean we are certainly going to have
more and more technologies that fool
people into thinking that they're
smarter than they are and i worry about
that a lot so i'm actually more worried
about current ai than future ai
um i think that future ai will be better
and
will be less reckless
and the current ai just doesn't know
what it's doing and you know there's
certain narrow cases where it's fine so
i mean it turns out it is actually ai
when my
phone gives me directions that's
actually a set of ai techniques to do
search and whatever and i'm not too
worried about that although there are
cases i had a gps system telling me to
go off-road in iceland and i really
should not have done what it said i
decided pretty quickly what was the uh
conclusion of that situation
uh backing down very carefully um
okay the lesson was uh don't listen to
this thing without four-wheel drive and
probably not even then um or that not
all shortcuts um you know are worth
taking anyway so i mean these
systems are not perfect but you know
it's true if i follow that road i i'd be
able to get from point a to point b but
the system you know might have realized
that i i didn't have the
stomach to go that particular route
anyway
um
most
sorry so a system like that most of the
time works it's been pretty well
debugged um but
none of these chatbot systems are well
debugged
nobody knows how to debug them in fact
and so um the problem with both gpt-3
and with um
driverless cars is we don't actually
have a methodology even for debugging it
most of the debugging at this point in
the um
sorry the navigation systems
is like learning that this road is not
actually open updating a database and
then as soon as you add that fact to
your database the system will stop
sending people down that road so we know
how to debug it
we don't know how to tell gpt-3 to stop
telling people to commit suicide and if
people ask in a slightly different way you
know you might program a rule that if the word
suicide comes up then do this or that
then people will say it in a different
way and the system won't recognize it so
you know say
somebody says
i'm thinking of ending it all by jumping
off a bridge and if you don't have a
filter that is looking for the word you
know jump off the bridge and just for
the word suicide it's not going to be
broad enough so we don't have a
systematic way to debug things this same
thing has happened with driverless cars
like there are all these what we call
outlier cases and you can enter them one
at a time but it is there's so many of
them that that's not really good enough
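the keyword-filter failure gary describes a few lines up can be sketched directly (a toy filter with a made-up keyword list, not any real system's code): the literal match fires on the explicit word and silently misses the paraphrase, which is why patching one rule at a time isn't good enough.

```python
# a naive safety filter of the kind described above: it only knows
# the literal keyword, so paraphrases sail right past it
KEYWORDS = {"suicide"}

def flags_risky(message: str) -> bool:
    """True if any token in the message is an exact keyword match."""
    words = message.lower().replace(",", " ").split()
    return any(word in KEYWORDS for word in words)

print(flags_risky("I have been thinking about suicide"))  # True
print(flags_risky("I'm thinking of ending it all by jumping off a bridge"))  # False: missed
```

every paraphrase you add to the list just moves the boundary; the filter never understands what's being said, which is the systematic-debugging gap being described.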
so my favorite recent outlier case is um
somebody summoned their tesla right you
press a button on your phone and your
tesla comes across a parking lot to you
only they did this when they were at an
um airplane trade show on a runway
basically so they summoned their tesla
and it ran into a three and a half
million dollar jet
straight in you can find it on youtube
and put it in your show notes um
and it's an outlier in the sense that it
was not trained on jet airplanes because
most of the time when teslas drive
around and they collect data there
aren't any airplanes on the road because
they're not usually at airports um
there's just this endless string of
these and humans
deal with them differently when you're
on an airport runway if you should ever
find yourself uh at one of these trade
shows and you see the plane you'll be
like plane big expensive i probably
shouldn't drive into it so you'll be
reasoning about the properties that you
know about the airplane um
this system doesn't reason it doesn't
use logic to say if a then b it's
basically just using like a library of
videos to be a little bit crude about
it and if it's not in its library of
videos it doesn't know what to do with
it and there's no systematic methodology
for debugging it you know if you like
write a little computer program to i
don't know predict numbers in a sequence
you're like okay it didn't work here
maybe this line of code is wrong maybe
i'll fix it but you can't do the same
thing when the way your program works is
it looks in this big database so what
people actually do is they make the
database bigger and they pray that's
basically what our methodology is right
now
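the library-of-examples behavior can be sketched as a nearest-neighbor lookup (made-up feature vectors and labels, a deliberately crude stand-in for the real systems): the matcher always returns its closest stored example, even for an input far outside anything it has seen, with no signal that it's out of its depth.

```python
import math

# toy "library of videos": feature vectors mapped to labels (all invented)
LIBRARY = {
    (0.9, 0.1): "car",
    (0.8, 0.2): "truck",
    (0.1, 0.9): "pedestrian",
}

def classify(features):
    """Return the label of the closest stored example, however far
    away it is -- there is no notion of 'i don't know'."""
    nearest = min(LIBRARY, key=lambda point: math.dist(point, features))
    return LIBRARY[nearest]

print(classify((0.88, 0.12)))    # car: close to stored examples, fine
print(classify((100.0, -50.0)))  # car: a "jet on the runway" outlier, answered just as confidently
```

adding more points to LIBRARY shrinks the gaps but never removes them, which is the make-the-database-bigger-and-pray methodology in miniature.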
bigger bigger databases
scale is the only thing that matters
that's not really a methodology that is
getting us to reliability and so we have
all these systems mercifully most of
them are in limited context right now so
like there aren't you know as we're
recording this in september of '22
that many of these chatbots in
production but wait a minute facebook
just or meta just released the tools so
anybody can do this how much do you want
to bet that you know this time next year
there are like 100 or a thousand chat
bots on the apple app store driven by
this reckless bull in a china shop
technology
something's gonna go wrong like it's
just a recipe for
um error and nobody knows you know how
to make their chat bots constrained and
not toxic and not spew misinformation um
we don't have an answer for that i wrote
the first story about uh microsoft's
chatbot tay and yeah i had pinned it to
my profile went to sleep in california
woke up the next day and had all these
mentions on twitter about how i might
want to take my
uh my story down and i was like what the
hell happened and they're like well
tay's a nazi and i looked and i was like
oh [ __ ] tay is actually a nazi this is
bad and tay was not a nazi when you went
to bed no
and things went downhill from there
yeah i don't know if
everybody in your crowd
in your audience knows that um anybody
who doesn't know tay should look it up
it's not clear that we're fundamentally
in a different place than then right um
it seems like we are given
what happened fundamentally in the same
place yeah yeah it seems like we're in
the same place
exactly and despite all the hype about
you know we're so close to to solving ai
or whatever we're not we're facing the
same
problems and hey what was that
2016-15 that's right yeah probably
15 or 16.
yeah yeah let me ask you one more question
before we hop
so um
we've been talking you know basically
since i started covering anything having
to do with artificial intelligence the
big worry has been that we're going to
get into one of these hype cycles where
ai is going to get over hyped under
deliver
and then there's going to be a pullback
of research uh funding leading us into
what people call an ai winter
it seems today even though it's
imperfect and you know let's say not
sentient
ai is delivering in ways that um you
know are pretty remarkable the fact that
ai could go and win an art contest based
off of a prompt the fact that it can
fool a google engineer or you know
potentially fool a google engineer into
thinking it's sentient
to be that adept in conversation it
seems like we're we're not at risk of
having you know another one of those
hype cycles where we have a pullback
because the ai is delivering in the way
that it is right now what's your thought
on it
i don't know this is the first thing
i'll say like i don't have a
crystal ball i think that
you know the dall-e kind of image
synthesis stuff is really cool it's
definitely going to have an impact in
the art world
you know there'll be video versions of
it at some point and
it just boggles the mind what that
will do um
and so
there's that on the other hand there are
things that have been promised that
probably aren't going to work out and
not work out soon so
chat bots really are hard to rein in
they're higher stakes depending on what
you use them for so if you just use them
for chit chat maybe it's okay but um
chatbots may not work out you might
remember facebook m was going to be you
know a universal um assistant and it was
very much hyped by wired and places like
that and then it was canceled like a year
later because it just didn't do what it
was supposed to do and they couldn't
figure out how to get it to do it i
would also take the blame you know on
that one i wrote some stories about it
for buzzfeed that i wish i could go back
and revise so okay yeah um i i don't
know if you you know want to take the
hit on google duplex but google duplex
got a lot of hype and it didn't you
know materialize um
and you know
right now driverless cars kind of have a
free pass but we could get to a place
four or five years from now where we
still don't have driverless cars that
are anything like what we call level
five where you can just type in where
you want to go
um
investors might be like all right enough
is enough this really isn't working out
um same thing on chat bots so yeah
there's there's a ton of companies that
are trying to use gpt-3 technology i
don't know any of them that are
you know breakout successes and so if you
get four years out and nobody can
control them then people might be like
yeah we were sold a bill of goods um people
being investors and investors might pull
back and so that that could lead to an
ai winter um
on the dall-e side like i don't know how
much money is to be made there and
that's that's a material question for
that kind of issue about
winter or not um
at least three hundred dollars at the
colorado state fair
so i
yeah i mean
i mean the issue there is the software
itself is relatively easy to copy and so
you know what's the business model how
much can you charge and like
if if it winds up that people don't want
to pay more than 10 cents per
illustration and there's like 20 players
who are all doing this
i don't know there might be money there
there might not be but in terms
of like you know investors always want
their 10x return and stuff like that
maybe they get it maybe they don't i
don't know
um but you know much more money so far
has been put into the driverless cars
and i think a lot it's being put into
like customer service chat bots and
stuff like that and so you know it
depends in the end on whether things
that have been promised are delivered
and how long it takes for them to be
delivered um and and so forth and i
i can't
i can't fully tell you that what i can
tell you is that
ai could be doing a lot more than it is
we just passed the 67th anniversary of
the field
and there's some things that we have
always dreamt about like having ai
build better technology for science and
medicine and
with the exception of alphafold which
is useful towards those problems
success has been limited
um i think too much of the effort has
gone to things like recommendation
engines and although i think the art
stuff is cute it's not getting at i
think the deeper problems of how you get
a machine to read running text or
watch a video or something like that and
really read it with comprehension
and
you know i i think the world would be a
better place if we focused on those hard
problems
um and i don't know if we will or we
won't
gary marcus thanks so much for joining
this was super fun
thanks a lot for having me great to have
you so just a shout out the book is
called rebooting ai available everywhere
and people can go get your substack at
garymarcus.substack.com anything
else
thanks a lot yeah okay great that was
awesome thank you gary for joining thank
you everybody for listening thank you
nick watney for doing the editing
of the audio appreciate you as always
thank you linkedin for having me as part
of your podcast network and thanks again
to all of you for listening we will be
back next week with a new interview with
a tech insider or outside agitator and
we hope to see you then until next time
thank you for listening to the big
technology podcast