Can AI Become Conscious? — With Michael Pollan

Channel: Alex Kantrowitz

Published at: 2026-02-25

YouTube video id: VVmAHSxI1QM

Source: https://www.youtube.com/watch?v=VVmAHSxI1QM

Can AIs be conscious? Now that we have a
better understanding of the mind, we
might have some answers to that
question. That's coming up with
best-selling author Michael Pollan,
right after this. Welcome to Big Technology podcast, a show for cool-headed and nuanced conversation about the tech world and beyond. Today, we
have a great show for you. We are going
to drill into the depths of artificial
intelligence's ability to achieve consciousness, and we're going to do it with the perfect person. We
to do it with the perfect person. We
have Michael Pollan here. He is the
author of the book A World Appears: A Journey into Consciousness, which is out
this week from Penguin Press. Michael,
great to see you. Welcome to the show.
>> Great to see you, Alex.
So, let's just start with consciousness.
Um the first thing that I really felt
while reading your book, because you
describe consciousness in many different
ways, is just that consciousness is kind
of amazing, and we're going to get into
whether AIs can achieve it, but just
strictly on the human side of things to
start. I mean, it is amazing in some
ways that yes, we're we're here, but we
have this awareness of ourselves uh in
the universe that's just kind of
mind-blowing, the fact that it exists.
Wouldn't you agree?
>> Yeah, you know, it's
funny. We don't think about it very
often. We go through life thinking it's,
you know, it's totally transparent, and
the world as it appears to us is as it
appears. But in fact, it's all a product
of this phenomenon we call
consciousness. Um and in humans, it's
particularly complex and wondrous in
that
we don't just exist like other animals,
we know we exist, and that changes us in
interesting ways. So, it's funny, you
know, it's a universal phenomenon, but
many people don't think about it that
much. And one of the goals of the book
is really to get you to think about it,
because it is a very precious gift, and
it's one that in some ways I think we're
squandering.
Right, and we'll get into that. I think
one of the interesting questions about consciousness is: if the goal were just survival, we could do that mechanically. But for some reason we don't just do it mechanically, right? We do it in a way that we have awareness of as we go along.
>> Yeah, and that's one of the hard problems. I mean,
most of what the brain does, it does
without our awareness, right? It's
monitoring the body 24/7.
It's adjusting your heart rate, your
blood pressure, your
you know, glucose levels. I mean, an
amazing amount of things to keep you at
the proper homeostatic set point.
And we're not aware of this.
Yet, at the tip of that iceberg of mind is this area of stuff we are aware of. So, why isn't it all automated?
Why wouldn't that have made more sense?
Why did some of it have to come into our
awareness? And there are various
theories about that. One that I find persuasive is that there are certain things that go on for a creature that need to be addressed in a reflective way. In other words, let's say you have needs that are incommensurate and in conflict: you're hungry and you're tired. Which should you address first? That kind of thing would come into consciousness.
I also think consciousness is really helpful in a social situation, when you're dealing with a world that is fundamentally unpredictable: what other people are going to do at any given time, what other people are going to say at any given time. You have to be able to imagine yourself into their heads. We call it theory of mind. So, given that we live in this intricate social world, being conscious is a huge boon. And I don't think you could automate something as complicated as social interaction.
>> Well, I guess we're going to find out pretty soon, because, is it automating or is it not? If you ask whether computers can handle social interaction, the answer is yes. They definitely can, and people are falling in love with them. We can get to that in a minute, but the question you raise is whether this type of behavior is automatable.
I want to ask you the flip side of that question, which is: is consciousness computable? Are we able to break down what consciousness is and then, with the materials we have, eventually figure out how to build it?
>> Yeah, I don't think so. I don't think everything consciousness does, or is, is computable. I think the brain is more analog than digital in many ways.
And there is a deep metaphor at work when we even ask that question: is the brain a computer? That metaphor is very powerful, but I don't think it holds up when you think about it hard enough. And one of the goals of the book is to help people think through something like that. So,
historically it's very interesting that
whatever the cool cutting edge
technology is of any moment, we have
likened that to the brain. At various
times we've likened the brain to a mill, a grain mill, to a loom,
to a telephone switchboard, to a clock,
and now to computers because they are
the cutting edge technology.
But metaphors are tricky. Norbert Wiener said this really well: the price of metaphor is eternal vigilance. In other words, you have to be really aware not to fall into the trap of equating the two things you're using, one as a metaphor for the other. And I think you don't have to look very hard before you see that the computer-as-brain metaphor breaks down.
First, in brains you don't have the hard separation between software and hardware. In computers, you can run the same software on any number of different hardwares, but in brains, hardware and software are absolutely indistinguishable. Every experience, every memory, is a physical set of connections in the brain.
Your life story has changed your brain
in a material way. Your brain and mind are not interchangeable with mine, because we grew up with different life experiences. The period of pruning that happens in children's brains happens very differently depending on life experience.
You also have this analogy of neurons with transistors. Transistors are either on or off, and that's the basis of computation. Yes, neurons in the brain fire or don't fire, but they fire on a spectrum of intensity, and that's all influenced by chemicals: drugs, hormones, neurotransmitters. Our neurons bathe in a bath of chemicals that influences their firing rate and intensity.
And the third reason I don't see consciousness being computed is that consciousness seems to be intimately involved with feelings.
Feelings, yes, convey information, but I don't think they can be reduced to information. I think there's a residue in a feeling that is a bodily sensation.
Um and feelings depend on things that I
don't think computers have, which is to
say mortal bodies that can suffer.
Now, they're telling us they can suffer, and Anthropic worries about hurting the feelings of Claude to a remarkable degree, allowing it to opt out of uncomfortable conversations. But I actually think feelings are completely without weight unless there is vulnerability, the ability to suffer, and possibly the fact of mortality.
So for all those reasons, I think we're talking about something that can't be conscious, at least as we understand it. There may be something that feels like consciousness, and
certainly they're very good at faking us out. Already, as you say, people are falling in love with chatbots. Chatbots are striking up friendships with people. 72% of American teens turn to AI for companionship already. But, you know, that's not the real thing. And as much as people in Silicon Valley like to say that the simulation is as good as the real thing, I don't think that's always true. A weather simulation will never get you wet. There are real distinctions between simulation and reality.
There was a great part early in the book where you talk about a teacher who said you can boil the human body down to $4 worth of chemicals, and you hated that because you felt it was very reductive and didn't fully capture the essence of being human.
>> Yeah, I mean, that's kind of when I realized I was on the team of the humanists instead of the reductive materialists. This was eighth grade, and he thought it was
really cool on the first day of
chemistry to say
"Your real value is $4.60.
That's what all the carbon and other
things you're made of would cost at a
chemical supply company." And I thought,
"What an idiot."
>> [laughter]
>> No, I've had a similar experience. I have a friend who's a neurobiologist, and I'm definitely on the side of, you know, feelings are real, and she was always like, "Well, love is just chemicals." And yes, in a way it is, but it feels like it's not. But for the purpose of this conversation, let me take that side.
>> Okay.
I mean, after all, feelings like you talked about, where do feelings come from? They come from chemicals. And what's going on with neurons? Well, they're storing data and firing. And yes, okay, maybe there's a certain level of complexity, or different chemicals that need to hit to cause them to fire, but ultimately this should be in some way reproducible. It's not like there are god particles inside the brain that we couldn't actually fabricate, or data that we couldn't store.
>> I'm not appealing to magic, but I am appealing to a level of nuance and qualitative distinctions that I think are beyond the ability to digitize.
You know, if you read Proust, who is just brilliant at describing the phenomena of consciousness, feelings, insights, he points out that everything that happens to you is different than what happens to me. And that's because when I look at a rose or a madeleine or whatever it is, I am bringing a lifetime of associations to it. My memories of what roses are, are different than yours. My associations with the smell. It's all so layered and complex and specific. There is familiarity. You know, what is familiarity to a computer? And I think there's a tendency, when we're dealing with technological simulations of things, to simplify what they are and lose track of the nuance.
Sherry Turkle is an MIT sociologist I interviewed for the book. She says that at some point technology allows us to forget what we know about life. And what she's getting at, I think, is that when you have a conversation with a bot, or a computer in general, you are reducing or simplifying your notion of what a conversation is. You're leaving out what's going on between us right now: acknowledgement, skepticism, body language. All the subtleties of human conversation are stripped away.
The paradigm case is the emoji: accepting the emoji as a substitute for emotion. So, I think we have to be careful when we simplify these phenomena, like machine consciousness, conversations with machines, relationships with machines. What are we doing to the word relationship when we count what's between a chatbot and a person as a relationship? So, I'm alert to these layers of meaning and significance that attach to everything we touch. And maybe you get there with compute, but I don't see how. I use the metaphor of encryptedness. William James and Marcel Proust both talked about this idea that there is a distance between any two people thinking any two thoughts that just can't be bridged, except imaginatively, through art. So, I think we're in a realm that is beyond the genius of Silicon Valley.
Such as it is.
>> Which is interesting. And look, I'm not going to spend our time together trying to convince you that today's LLMs are conscious. I don't believe that. Very few people within Silicon Valley believe that, even though you and I have both had interesting conversations with Blake Lemoine, the former Google engineer who was fired maybe in part because of that belief. So I accept all your arguments, for now. But I will say there is an interesting belief within Silicon Valley that this is just a temporary situation. I'll tell you something that Demis Hassabis, the founder of DeepMind and the CEO of Google DeepMind, spoke with me about earlier this year, and also said on the Google DeepMind podcast: that information is the most fundamental unit of the universe. Not energy, not matter, information. And I think what that means, Demis' belief, is that if you go down to the very foundational level of anything, you'll find some form of data, something you could end up manufacturing and building from the ground up in a computer scenario, in a simulated scenario. Do you know what I mean?
>> Well, that's the kind of worldview that, if you grew up in the world of computers, would be very persuasive. I mean, I think we have to ask the question: is the concept of information a map,
or is it the territory? He's saying it's
really the territory. He's saying that information is the building block of reality. And if he's right, then yeah, many things follow from that. And he's not alone, by the way. There are physicists who believe, too, that information is at the bottom of everything. I tend to think
it's
more map than territory.
Explain the map and territory
distinction.
>> Well, it's a useful distinction. When we have a model, a scheme to describe something, it's very easy to fall in love with the model or the description, and overlook the fact that it's representing something it isn't going to be exact about, in the same way a map can't capture everything about the territory it describes. It's a simplification. I think that may be true for information. But what do I know?
Right.
I mean, I think that's the point, right? We don't know yet.
>> Yeah, and guess what? We're going to find out by trying to do this. The most positive thing I can say about the efforts to design and build a conscious AI, which are going on openly and secretly all over the place, is that they will teach us something about consciousness, because we don't really understand how you generate consciousness out of a brain. So if it turns out you can create consciousness, that will tell us that, yeah, he's right, and information is foundational.
Right. And if it's feelings that are foundational, and they can't be reduced to information, well, then we have a problem. Although there are people building conscious AIs who accept that idea. I profile somebody in the book named Kingson Man who's trying to build a robot, because he understands you need a body to be conscious, and you need a vulnerable body. So he's actually building a robot with soft, tearable skin loaded with sensors, so that this robot can have really bad times and be injured. He thinks that will produce the kind of feelings. Will those be real feelings? He's not even sure, but he's working on that assumption. So I do think we're kind of stuck in our efforts to crack what's called the hard problem of consciousness.
And this effort to build a conscious AI is probably one of the most promising intellectual experiments to help us understand it. Whether it succeeds or fails, I think it's going to teach us something really important, and that's exciting. I know a lot of people worry about whether we owe moral consideration to a conscious AI.
I think, you know, before we worry about
the tender feelings of our computers,
there are a lot of humans we're not
extending moral consideration to.
Definitely.
>> So much of the Silicon Valley conversation strikes me as a way to address fun thought experiments about the future and absolutely ignore what's going on in our world today.
Well, let me speak a little bit more about where I think Demis was going with that.
>> Yeah, please.
We've spoken a couple of times about this, and it's not that he thinks Gemini, the Google LLM, is conscious. That's not what he's trying to get at with this idea that information is the fundamental layer of the universe. I think the point is, he's found a way to use AI to decode proteins. The next thing on the path is
building a virtual cell. If you can
build a virtual cell, you can build
virtual organs. If you can build virtual
organs and virtual cells, you can start
testing various cures
to diseases, you know, to ailments
whether they're mental or physical.
And that is sort of the idea behind wanting to pursue this. Of course, and I'll agree with you, there are plenty of insane things that go on in Silicon Valley, and I'm not going to stand up here and be a great defender of the universe, but that's sort of what he's getting at.
>> We're all going to have
a digital twin at some point that will
be very useful in
diagnosing disease and predicting the
outcome of various
health situations. I mean, people are
working on that now
especially with regard to the microbiome
but other things too. And I think that'll be useful. But the interesting question will be whether, having built up from the cell, you can then make that leap over the gulf from biological flesh to subjective experience.
>> Right. The problem we haven't talked about is: how are we going to know? Because, as we said, they already can fool us. They're very good at that. They speak to us in our language, in the first person, which was a fateful step that we took without really thinking about it. Was it with Siri? I don't know. It may have come even earlier than that. But that's a kind of wild idea, that we decided, yeah, let's have the computers talk to us as people.
But anyway, so how are we going to be
able to test them? The Turing test
doesn't work for this consciousness
question. It was designed for the
intelligence question, which is somewhat
simpler.
But since they can pretend to be conscious, and some of them are very good at doing that, we're going to need a better test. And the best one I can think of, and I don't know technically what would be involved in doing it, is training a chatbot on everything but the human conversation about consciousness.
Uh nothing about feelings. Maybe don't
let it read any novels or poetry because
that would give it, you know, a context
in which to talk about conscious
experience. And then engage it in a
conversation about consciousness. Could it hold its own under those circumstances? Anyway, I don't know whether that's possible. You'd have to start at the very beginning, though; apparently you can't remove things from a training set after the fact. You're going to have to build up from the bottom. So, I hope someone takes that on.
Yeah, I read that in the book, and I thought it was a terrific potential experiment. And on this question of how do we know: a question came up that is kind of silly, but I'm going to ask it anyway, which is, why are we so precious about consciousness only being ours? And again, I'm not arguing that LLMs are conscious. But you speak with an LLM today and you ask it, "What are you?" I'm a large language model. "What are you doing?" I'm trying to help you with these things. So it seems to me: what is our obsession with putting up this barrier, that only humans can be conscious? If you speak with this thing about whether it's self-aware, it's clearly self-aware.
>> Yeah, well, I'm not limiting consciousness to humans. I'm freely giving it to plants, as you know, having read the book. So, I'm pretty generous in who I'm willing to share sentience with, if not exactly consciousness. I'm not being stingy about it. And I think there's an interesting phenomenon that I talk about in the book: our definition of the human, what's special about us, which has always been related to our intelligence and consciousness, is under enormous pressure today, from these thinking machines, and possibly feeling machines, and from all these animals that we're learning are much more conscious than we thought. We always thought we had the monopoly, not just on consciousness but on toolmaking and culture and language, and one after another those claims are falling.
So, who are we? And are we more like
these animals who can feel and are
mortal and can suffer?
Or are we more like these thinking
machines which speak our language and
can talk to us the way we talk to one
another?
So, you know, whose team are we on? I think it's one of the more fateful questions we face as a species.
Yeah, and it's also interesting because our answer to that question will lead us, in some ways, to how we handle the machines, the ethics of it.
>> Or the ethics toward the animals, too.
Well, unfortunately, and I think you noted this, our record on ethics toward animals and humans is poor. But we have evolved in our thinking, once we realized what was actually going on inside the minds and souls of some of the creatures on this planet. For instance, and this is the thing that really struck me as I read: Descartes thought that animals were not feeling. That when you beat them and they howled, they were only mimicking, and weren't actually feeling anything.
>> Just noise. And then eventually we realized, "Hey, wait a second. That's wrong, and we shouldn't do that." You'd go to jail now if you did what those experimenters were doing to dogs in the past. Although that doesn't extend to monkeys all the time.
>> Well, he was dissecting dogs and rabbits without anesthesia because he didn't believe they were conscious.
And yeah, he was wrong. And could we be
making the same mistake with our
machines?
Well, some people think we are, and we might be. But, you know, the idea that we're automatically going to treat them with all this moral consideration because they're conscious...
Yeah, that seems to be the problem.
>> We continue to eat animals we know full well are conscious. I don't know that we're quite as enlightened as that conversation
suggests.
So, let's talk about one of those efforts that you write about in your book, one that is known. I think it was based on the free energy principle: these scientists have built an AI that's trying to get back to some form of homeostasis, and they think that because it's trying to do that, it can be conscious. I wasn't very convinced this was the right approach to take, but I'm curious to hear your perspective on it and what exactly it was.
>> Yeah, so one of the characters I profile in the book is a really interesting neuroscientist named Mark Solms.
And he's from South Africa. He's
actually trained as a psychoanalyst, and
he is developing a theory of
consciousness around feelings. Uh his
work grows out of the work of Antonio
Damasio, who was really the first
in this modern wave of consciousness
scientists to to make us pay attention
to feelings as opposed to thoughts as
the basis of consciousness. And Solms
wrote a really interesting book called
uh The Hidden Spring.
Uh and he makes the case that
consciousness begins in the brainstem,
not in the cortex as people had
previously thought. Any Any proves this
or tries to prove this with uh evidence
that people who lack a cortex, and some
people are born without one,
nevertheless are conscious.
So, the cortex gets involved in consciousness. The cortex is the evolutionarily most recent, most advanced part of the brain, more distinctively human than other parts. But he says it doesn't really get engaged until late in the process. It starts with a feeling, let's say, of hunger. And then the cortex gets involved: "Well, I'll book a table at this restaurant at 8:00." It forms images and counterfactuals and all that cool stuff.
Um
you would think, and I thought, that
since he was so interested in feelings,
he would believe that it was impossible
to make a machine that had feelings.
But no, I was wrong. He has assembled a team, actually an international team, with people on several continents working together to develop a conscious AI based on his theory. Now,
his theory is that feelings arise when
uh homeostatic set points are being
violated, and you need to get back to
balance. You're hungry, you're tired,
you're thirsty, uh your blood pressure's
too high, whatever. But, that
many of these feelings can be addressed
unconsciously, but when you have two
feelings that are in conflict,
that's when things become conscious.
And so, he's trying to create a situation, and it's essentially an avatar in a video game right now. It's not about advanced computation; they're really working in the idiom of video games. What happens when this avatar is both hungry and tired and has to decide which to privilege? That uncertainty is where consciousness is born. He defines consciousness very succinctly as felt uncertainty, and he's trying to make his avatar experience this felt uncertainty.
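The homeostatic-conflict idea described above can be sketched as a toy simulation. This is purely illustrative; the drive names, threshold value, and tie-breaking rule below are my assumptions, not anything from Solms's actual system:

```python
# Toy sketch of "felt uncertainty": an agent with two homeostatic
# drives. A single deviation from a set point is corrected
# automatically ("unconsciously"); when two drives deviate at once,
# the conflict is surfaced and resolved explicitly, which is the
# moment the theory associates with consciousness.
# All names and numbers here are illustrative assumptions.

THRESHOLD = 0.3  # hypothetical tolerance before a drive demands action

def step(hunger, fatigue):
    """Return the action taken for the current drive state."""
    deviations = {"eat": hunger, "rest": fatigue}
    urgent = {k: v for k, v in deviations.items() if v > THRESHOLD}
    if not urgent:
        return "idle"              # homeostasis holds; nothing surfaces
    if len(urgent) == 1:
        return next(iter(urgent))  # one need: handled automatically
    # Conflicting needs: "felt uncertainty", here resolved by
    # privileging the larger deviation
    return max(urgent, key=urgent.get)

print(step(0.1, 0.2))  # idle
print(step(0.6, 0.1))  # eat
print(step(0.5, 0.8))  # rest
```

The design point the sketch makes is that only the two-drive conflict requires an explicit decision; everything else stays below the surface.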
I asked him, "Well, would these feelings be real or artificial?" And he said, "Well, they're feelings in the context of the game." So they're a simulation, but he said for the avatar, they're real. I found this all interesting but unsatisfying. That's the way he's going about it. I've asked other
people who are pretty knowledgeable computer scientists, and nobody seems to think large language models are the way to go toward consciousness. But people envision future models of AI that are very different and combine different modules, and a large language model would just be one module. As Blake Lemoine said, LaMDA, which was the one he was dealing with, is more than a large language model. It had other modules, too.
But I've talked to people and asked, "Well, why would it be useful? How could you monetize consciousness? Why are you bothering, except as an intellectual experiment?" And some people have said that, in the same way consciousness helps us solve problems in a unique way, having a module that could reflect on itself would possibly help you get to AGI. One theory of consciousness is the global workspace. The idea there is that there are tons of modules in your brain going about their business. They compete for attention in this workspace. And certain very important information that needs to be broadcast to the whole brain, so the brain can take action, ignites into this workspace. That's the contents of consciousness.
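The global-workspace competition just described can be sketched as a toy loop. Again, this is only an illustration; the module names and ignition threshold are my assumptions, not a real implementation of the theory:

```python
# Toy sketch of global-workspace "ignition": independent modules post
# salience-scored signals; the winner, if it clears the ignition
# threshold, is broadcast to the whole system. Names and numbers are
# illustrative assumptions.

IGNITION = 0.5  # hypothetical threshold for reaching the workspace

def workspace_cycle(signals):
    """Return the signal that ignites the workspace, or None."""
    winner = max(signals, key=signals.get)
    if signals[winner] < IGNITION:
        return None   # nothing is "broadcast" this cycle
    return winner     # this becomes the content of consciousness

# Vision, pain, and memory modules compete for attention
print(workspace_cycle({"vision": 0.2, "pain": 0.9, "memory": 0.4}))  # pain
print(workspace_cycle({"vision": 0.2, "pain": 0.1, "memory": 0.4}))  # None
```

The second call shows the key feature of the theory: most activity never ignites, so most of what the modules do stays unconscious.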
They feel you could create an AI that had a similar sort of competition for attention, and consciousness would be useful in that context. So, you know, we'll see. I mean, we could have a bet.
>> [laughter]
>> Yeah, I mean, again, I'm just throwing these arguments out here. I want to be able to look at this argument from all different sides. And I think the best argument against this video game, and it's not perfect, but it does the trick, is one I think you brought up: a thermostat basically has a set point and works really hard to get back into equilibrium when it's out of it. And it's not conscious.
>> Yeah, although I talked to people who said, "Well, that's the basement. That's the bottom of consciousness." You start with the thermostat and build up from there.
So, just to conclude this segment: we've covered a lot of ground, but what I would say is, if you are a believer that machines can be conscious, clearly it's not there right now, but a lot of these objections seem to be things that maybe the tech industry, over time, maybe in decades, can get to. This fear of mortality: in fact, we already know they often don't want their values overwritten, and they don't want to be shut off. That's what LaMDA said to Lemoine. This idea of familiarity: a long context window will certainly build familiarity with someone. The emotions you see when you're in person with someone: avatars and computer vision may one day be able to give them an experience where they're seeing our reactions as well, whether we want to allow them to or not, I don't know. But I think this work on what consciousness is and whether machines can achieve it is foundational now and will be very important moving forward.
>> Yeah, I would just add that anything's possible given enough time and work.
>> That's right.
>> Anything, though? I don't know. There are some things.
>> Yeah, I mean, it would take a very different architecture, I think, and there is talk about neuromorphic computers. We're also building brain organoids in solution. I think those have got a shot at becoming conscious. They're working up from actual brain cells, forming organisms together. I mean, a lot of things could happen.
>> Definitely, and I'm obviously not talking about the longest time horizon imaginable. I think it's good to have some real science about what we're seeing today, which you've provided in large quantities, and which is much appreciated. So
>> Good. And I should also point out that I'm not very sophisticated on the topic of computers or AI. I wasn't expecting to write about AI and consciousness, but after 2022, when ChatGPT burst into awareness, the questions started, and it was actually Blake Lemoine who put it on the agenda for me and a lot of other people. I realized I couldn't write a book about consciousness without delving into this, and that it was really a very interesting phenomenon.
>> Right. And, you know, I came out where I did, and that may reflect my biases. It probably does; most arguments do. I mean, seeing the world as made of information — you can see how that might be the bias of someone steeped in computers.
Definitely. Yeah, no, and that's why I think taking those theses and bringing them outside of the tech world is important. So, when I saw the concept of your book come into my inbox, I said, "We've got to have a show about this, because it's going to be front and center in this world." So, on the other side of this break, I definitely want to cover maybe some non-tech stuff, maybe about how consciousness and religion might intersect. And you brought up that you started with plants, so we've got to talk a little bit about plants, and we'll do that right after this.
And we're back here on Big Technology Podcast with Michael Pollan. He's the author of the new book out this week, A World Appears: A Journey into Consciousness.
I think, again, great book. There's one interesting thing in the book, many interesting things, but one that I seized on, where you said that the belief in consciousness is an escape hatch from materialism. And that's the other side of the thing that I brought up earlier, this belief that everything is computable. Well, if you believe that not everything is computable and not everything is information, then this idea of consciousness is actually quite refreshing, because it simply doesn't play by the rules.
>> It doesn't seem to. You know, people have been trying to do what has worked everywhere else in science, which is reduce phenomena to matter and energy. And it's been an incredibly productive strategy, but it doesn't seem to work yet with consciousness. The effort to reduce it to things we know hasn't really worked. And it's a tremendous challenge to scientific materialism, which is sometimes called physicalism, because that framework has not allowed us to understand consciousness.
Now, again, might it at some point in the future? Sure, we shouldn't rule that out. But I also think we have to, as some consciousness scientists have, come to the point of saying: well, maybe we need to look beyond physicalism.
And I profile Christof Koch, who's been working on consciousness since the late '80s. He worked with Francis Crick, who, you know, had won the Nobel Prize for the discovery of the double helix of DNA. Crick had unlocked the mystery of heredity, which is quite an achievement, and then he turned his attention to consciousness. He was going to crack that using the same reductive science techniques. He worked with Koch for many years, and they were trying to isolate the neural correlates of consciousness: could they find the neurons in the brain responsible for conscious experience?
And Koch realized at a certain point that that wouldn't really explain anything. It would give you a correlation at best. But subjective experience is subjective, and how can you explain that in terms of anything objective? And this is the hard problem, what David Chalmers called the hard problem. It's a unique problem.
I think there's some wish fulfillment in the idea that we have something immaterial that might therefore survive the mortal body. I think behind a lot of people's talk about consciousness is the word soul, even though it's not articulated. But what is the soul? It's also this immaterial essence of us that is indestructible. So I think there's a little wishful thinking around consciousness, that maybe it's immortal in some way. And I don't, you know, I don't go there, but I think a lot of people do.
And we're looking for something that transcends this material world. Could it be consciousness? Now, there are theories of consciousness that are not materialist in the... well, actually, I shouldn't say that. They're materialist, but they stipulate different matter. I'm thinking of panpsychism, which is the philosophical idea that everything has some itsy-bitsy quotient of consciousness to it. Every particle, every wave. So consciousness doesn't come into the world; it precedes us. And somehow these particles, these mini-consciousnesses, come together and create the big consciousness we are. But that combination problem, how you get from conscious bits and pieces to us, is just another hard problem.
And there are other ideas: that consciousness is a universal field that we channel, and that our brains are indispensable, but only in the way that a radio or TV receiver is indispensable. That's a kind of idealism. So all I'm suggesting is that our failure to explain consciousness in material terms, given the science we have, makes you think, "Well, we should at least keep an open mind about some of these seemingly weird and crazy ideas."
Right. And I do think you can't talk about consciousness without the spiritual. In fact, one of the questions I wrote down was: if we do solve consciousness with computing, does some of the mystery of the world go away? And you can even flip that statement and say that because consciousness is so mysterious, the spiritual can't exist.
>> The spiritual can't exist?
>> Can. Can exist. Yeah, I think, to me, that's where spiritualism would reside: in that mystery.
>> See, yeah, but that's a definition of spiritualism that I'm not sure is mine. That's basically suggesting something supernatural exists. To me, spiritual experience, and this grows out of my experience with psychedelics, is more about transcending the self and merging with something larger. Now, for some people that's the divine, something magical, but for other people it's just nature, or other people, or love. So it depends on your definition of the spiritual, but I do think the hardness of the hard problem nourishes certain kinds of spiritual thinking: that we've got something here that cannot be reduced to the usual categories. And that may well be true.
Yeah. Yeah, no, I mean, I think spiritualism can be a bunch of different sides of the same coin, I guess. It could be belief in the supernatural, or, you know, sometimes it's a way of saying that there is something greater than just the individual.
>> True. Yeah, something bigger than what we can perceive as individuals. I always find that the tension in my mind is between spirituality and egotism. And to the extent you can reduce egotism, whether through psychedelics, but also through experiences of art, experiences of awe, all of which kind of shrink the I in a way that can feel really good, that, to me, is the door to spiritual experience.
So let's just go through this route one more time. If consciousness is solved, or the hard problem is solved, what do you think that does to this concept of religion and spirituality?
>> It depends on how it's solved. It would have a huge impact.
>> So talk through what you think those impacts could be.
>> It could nourish our sense of an animate world. Let's say something like panpsychism is proven. And then we realize that, oh my god, consciousness is not a human thing. It's a universal thing. It's in everything. That could nourish a new religious conception.
And I say new, but actually it's more like religion pre-monotheism, right? Where everybody was an animist. Everything had much more life to it than we believe now. We've kind of knocked that out of ourselves, but I think it's the default human perspective to see life everywhere. Children certainly have it, right? They're all animists until we knock it out of them in school. So you could see a return to a more spiritual or religious world. Or if it's solved by, you know, understanding some activity, some behavior of neurons, or some emergent property of neurons organized in a certain way, and you really prove it, not just say it's emergent, because that's just abracadabra, then you could come up with a material explanation that would demystify the world even further. So I think a lot hangs on that discovery.
>> Absolutely. It would be one of those, you know, identity-changing discoveries.
Right. And in fact, the reason why we have so much study of this now, and we didn't beforehand, is, okay, of course science has advanced, but as you point out in the book, this idea of consciousness was something that was left to the church. Science, you know, back when it started making some real breakthroughs, said: we'll take everything measurable and quantifiable, and the church, you can have everything subjective and qualitative.
That was Galileo's deal, and Descartes' to some extent, and it was a very pragmatic deal, but it's left us with a science that's ill-equipped to study what they left by the roadside, which is to say subjectivity. And, you know, the interesting thing, too, is whether we can redefine science in a way that makes it easier to study consciousness. There are people who argue for that. I talked to philosophers, Evan Thompson in particular, author of an amazing book called The Blind Spot.
And he says the blind spot of science is its inability to deal with lived human experience; it just doesn't value that. So, for example, it doesn't value the experience of the color red, which it sees as just a frequency of light, and since red is a construct of the mind, we ignore it. No, it's a construct of the mind. That's incredible.
>> [laughter]
>> Why aren't we looking at that? And he's basically saying the experience of red in the minds of humans is a phenomenon of nature that deserves as much attention as electromagnetism.
By the way, Galileo never should have agreed to that deal, because it ultimately didn't work out very well for him.
>> Yeah, it's true. It's true. It was a good try, but it probably saved a lot of other scientists from trouble.
>> Okay. Well, I'm glad it turned out okay.
Um, lastly, let's talk a little bit about psychedelics and plants, because that seems to be where this emerged from. I picked up the book and I was like, "So, did Michael just do a lot of mushrooms and start talking to plants and then ask what consciousness is?" And I wasn't that far off.
>> No, you weren't. I mean, one of the inspirations for the book was the experiments with psychedelics I did for How to Change Your Mind.
And, you know, I'm not unusual. Lots of people who do psychedelics start having trippy thoughts about consciousness. And the reason is that psychedelics smudge this windshield, normally perfectly transparent, between us and the world, the one we were talking about at the beginning. And suddenly you realize there's a windshield. And why is it this way and not that? You defamiliarize consciousness to yourself. There are other ways to do that. Meditation does that, too. Certain experiences of art do that, too.
Specifically with regard to plants, I did have this experience, I had taken mushrooms and I was in my garden in Connecticut, in which the plants seemed aware. They seemed aware of me. They seemed like they were returning my gaze. They were more alive than they had ever been. And afterwards, I dismissed this as your usual drug-addled, you know, psychedelic insight. But I also thought, "Well, let's see if we can test this against another way of knowing." Because I had talked to people who said, "What do I do with psychedelic insights? Should I believe them? Should I dismiss them?"
And actually, William James talked about this with regard to mystical experiences. He said we don't know enough to say whether they're true or false. The challenge is, one, how useful are these ideas, and two, can you corroborate them with other ways of knowing, i.e., science? So I went down this rabbit hole and found this community of botanists who call themselves plant neurobiologists, in full knowledge that there are no neurons involved. And they're doing really interesting experiments that show that plants are a lot more intelligent than we thought, and
perhaps also sentient, which I should distinguish from consciousness, because sentience is a more basic kind of consciousness. It's just awareness of your environment, the ability to tell positive from negative changes and deal with them. Lots of creatures have that. Single-celled creatures have that. And it may just be a property of life. Consciousness is how we do sentience; we've elaborated it in various ways, as we've discussed.
So, you know, I learned about some incredible capabilities of plants. They can see: there are vines that actually change their leaf form to mimic the leaves of the plant they're colonizing. They can hear: if you play the sound of caterpillars munching, they will produce chemicals to repel those caterpillars and also alert other plants in the neighborhood. They can recognize self from other in a pot, and they'll share nutrients with related plants in a pot, but not with plants that are the same species but not kin. I mean, they can hear the sound of water in a pipe and will send their roots over there to see if they can crack in. They're incredibly capable and intelligent, and they're not doing everything automatically by any means.
The other thing that kind of blew my mind was that you can anesthetize plants.
>> That was the thing that literally blew my mind, that you can put them under anesthesia. Anesthesia, for a plant?
>> Well, if you have a carnivorous plant or a sensitive plant, things that have a behavior you can see, the behaviors will not happen under anesthetic. And it's the same chemicals that put us out. Which, by the way, we don't understand how they work on us either. Some of them are totally inert chemicals, like xenon gas, that shouldn't react with us at all, but somehow put us out. So if a plant has two states of being, awake and asleep, then, you know, you can say it is like something to be that plant when it's awake that's different from what it is when it's asleep. At least that's the argument, and it's a tough one to refute. So, you know, I'm not ready to say plants are conscious, but sentience, I think we can make that case, and I think maybe that's a property of all living things.
Yeah, the last thing I'll say here, and then we can wrap, is the thing that really struck me: you talked about how plants, if you watch them sped up, can show real intent. For instance, there's a bean sprout that doesn't just flail about to try to find something. It sees a branch and makes a beeline for it, twisting like a whip, effectively. And you told the story, I think it's an alien civilization that comes down to humans, but they're just really sped up. So we're moving so slowly, they feel they can do whatever they want to us, exactly what we do to plants.
>> Yeah, so, you know, every creature lives in its own dimension of space and time. And we live in a different dimension of time than plants. They're very slow from our point of view, and therefore we don't give them a lot of credit. But as this story makes clear, another alien species could look at us, and if they were sped up the way we are relative to plants, they would basically smoke us and turn us into jerky for the ride home.
Well, I hope they're on our same speed, and we can just be friends with the aliens, like we might be with the computers. Who knows?
>> [laughter]
>> Let us try.
>> Let's try. All right, the book is A World Appears. Michael, first of all, thank you for taking that mushroom trip. I'm glad it sparked the book, and thank you for coming on the show to speak about it. Great having you.
>> Thank you, Alex. It was a pleasure.
>> It was great. All right, everybody, thank you so much for listening, and we'll see you next time on Big Technology Podcast.