Brain Computer Interface Frontier: Depression, Coma, AI Merge

Channel: Alex Kantrowitz

Published at: 2025-09-05

YouTube video id: GzFSkN6sRkA

Source: https://www.youtube.com/watch?v=GzFSkN6sRkA

Could mapping the mind through brain
computer interfaces allow us to one day
build a foundational model for the
brain? We'll find out on a special
edition of Big Technology Podcast right
after this. Welcome to Big Technology
Podcast, a show for cool-headed, nuanced conversation about the tech world and
beyond. We are talking a lot about brain
computer interfaces on the show these
days. And there's a reason for it
because the vision extends far beyond allowing people who are paralyzed to move a cursor on a screen. In fact, the technology can be applied to far broader use cases. And we're going to
talk about it today. We're lucky to be
joined by the founders of Precision Neuroscience. We have Michael Mager here. He is the CEO. Michael, great to see you.
>> Thanks for having us.
>> And Ben Rapoport is here, the co-founder and a practicing neurosurgeon, here to tell us all about how this technology works. Ben, great to see you.
>> Great, great to be here. Thanks for
having us.
>> So, let me take a look at the stats that you guys have. All right, we'll just read them off from the beginning so you folks listening at home understand that this is a legit company. Started in 2021, raised $155 million, closed your Series C in December 2024, and you have 85 people working for you. And I'm just going to hold this up
to the camera. And for those listening at home, I will try to describe it. This is the brain computer interface that Precision has built. You can see it here. It is quite flexible, and I think it doesn't damage the brain, which is one of the differences that you have with Neuralink. Of course, we had Noland Arbaugh on the show many months ago now, talking about how this has changed his life. So we're going to talk about how it could change the lives of people like him and many more. Anyway, talk a little bit about what brought you to this
technology and why you think it's so
promising today. Michael, do you want to
start?
>> Yeah. You know, my answer to what brought me to this technology is really Ben. I'm going to introduce Ben a little bit, because he's often too modest. Ben is a neurosurgeon, as you mentioned. He practices at Mount Sinai. He also has a PhD in electrical engineering, and that's not an accidental combination. The brain is an electrical system, and so to interface with the brain, really understanding the electrical nature of the organ itself, as well as the electronics that you need to design to interface and to drive function, is totally core to what we're doing. This is really Ben's life's work. Ben was also one of the co-founders of Neuralink, with Elon Musk and several others, and left to approach this technology in a different way for reasons that we'll get into. But I met Ben. Ben
and I were in college together but
didn't know each other and a mutual
friend put us in touch. Really, my
background is in investing and business
building. And I have partnered with Ben to help translate his intellectual vision for a device that I think is going to really transform what it means to be disabled, and eventually transform medicine, into a practical reality.
>> And Ben, can you... So, Michael mentioned that the brain is basically an electrical, what did you call it? An electrical device?
>> An electrical organ.
>> Organ. So, talk a little bit... we've talked about this on the show a couple of times, but I'd love to hear it from your perspective. Talk a little bit about how the brain is electric. And is that reducing the brain's capacity a little bit? For instance, if it's just electric, the idea would be that you could basically decode what's going on in the brain. But right now the brain is so little understood that there's, I think, some conventional wisdom that part of it is just gray matter that we'll never understand.
>> Well, the brain is definitely not just electrical. But thinking of it as an electrical system helps us to interface with it, and in some ways to heal the brain when it is injured. Thinking of the brain as an electrical system is, in certain ways, an extremely useful model for understanding how the brain works, how it communicates with the body, and how we can communicate with it from the outside world and vice versa. So it's not that it's just an electrical system, but the electrical nature of the brain, and the electrical way in which the brain processes information, makes it special. It makes it unique among biological systems. And the fact that we have a good understanding of how the brain represents the world, and interacts with the world, through electrical signaling makes it possible to develop brain computer interfaces.
>> Okay. And so that kind of brings us to the present, where we're starting to see, I would say, rapid advancements in brain computer interfaces going into people's brains. Noland, who we've had on the show, has a brain computer interface developed by Neuralink, the company you co-founded, that allows the computer to basically read the signals in his brain. When he thinks about moving his arm in one way, the cursor will move in that way. When he has a different thought, he can click. So talk a little bit about... you're not quite doing that at Precision Neuroscience yet. But talk a little bit about why this field is advancing as quickly as it is, and what this device is that you've brought here today, and how it differs.
>> Right. So I would say, first of all, one of the reasons that now is such an exciting time in the field of brain computer interfaces is that a couple of companies, Precision and Neuralink among them, have advanced to clinical stage, meaning that we're well out of the realm of scientific research and on the path to bringing brain computer interfaces into medical reality. And for many of us who've been interested in this field for a couple of decades, that has been an incredible transformation to be a part of, because what was known to be scientifically feasible for many years is now squarely on a path to becoming a medical standard of care, in a way that's going to help a lot of people who have what were previously thought to be untreatable diseases, like paralysis from ALS or stroke or spinal cord injury. So that's one of the reasons there's so much excitement about the field today: this technology is reaching people who need it, like Noland.
>> And you know, when Precision was started, there was a dogma in the field that you needed to penetrate into the brain in order to drive powerful function, like some of what you just mentioned Noland's able to do with his device. Precision was founded with a very different philosophy, which is that safety and performance are not in opposition; they are actually self-reinforcing. Precision has now done 40 implants, more than the rest of the industry combined, in the past two years. And what we are able to achieve in those implants, which so far are temporary... So we have not permanently implanted someone, but we have implanted patients for hours at a time and enabled them to control computer cursors with their thoughts, to control robotic arms with their thoughts. So this is at the cutting edge of performance that has been demonstrated by other systems, and it's achieved without the requirement of puncturing neural tissue and doing some damage to neural tissue, as other systems require.
>> Okay, for listeners and viewers, I just want to say that there is going to come a moment in this conversation where we take it from today to where this goes in the future. I'm talking about treating depression. I'm talking about treating stroke and coma. I really think it's worth sticking around and joining us for the second half. But for those who want to understand the basics of this technology, we're going to do that now, just for a bit. This is definitely one of those episodes where it's worth staying with us, talking about whether maybe there's a way this can end up helping build better AI models through the brain, or understanding the brain through AI. So that's all coming up. But, Michael,
you mentioned that this is a non-invasive device. I'm going to kind of not listen to your instructions and just hold it up so people can see it. This is the device, and there are 1,024 electrodes here. For those who are listening, it is paper thin, or less than paper thin, and the electrodes come up on a head at the top here. Pretty amazing. And the thing you mentioned about it not being invasive: the history of this technology has basically been that if you want to get a read on a brain signal, you need to put some hair-like filaments or prongs into the brain, but this lays on top of the brain.
>> Yeah, that's right. Just to describe the device in a little more detail, it's actually thinner than a piece of Scotch tape. So it's incredibly thin and very conformal to the brain, and it's designed to sit on the surface of the brain without puncturing or damaging neural tissue. The system has 1,024 tiny platinum electrodes, which are basically sensors. We manufacture this system using photolithography, the same technique used for semiconductor chips. This is really the first example of cutting-edge manufacturing techniques being applied to medical technology. We actually own the fab in which we do it; there is no domestic supply chain that's capable of this sort of work. But what we are able to achieve with this sort of super-high-resolution system that sits on the surface of the brain is to record the awake activity of the brain at a resolution that's never been achieved before. And just going back to your question earlier, the brain is an electrical organ, meaning that when we have thoughts, there is actually a physical manifestation of those thoughts. When we intend to move our hands or our fingers, or we recall an idea, there is a physical manifestation of that, and it's electrical. What this system is able to do is record that electrical activity in a way that's never before been possible.
>> And so Noland's device has 1,024 electrodes on it, same as this. But you have, at one time, inserted as many as four of these in a brain. So that's given you much greater signal than just one Neuralink device.
>> So part of the philosophy of Precision was to make the technology highly scalable across a number of domains. And I think this will probably be a theme of the conversation as we take it from where the technology is today to where we might go in the future, into other disease states, or how this technology informs the development of artificial intelligence. But as you said, what you were holding in your hand just a moment ago is one module that contains 1,024 electrodes. Because it's non-damaging to the brain, we have the capability of deploying multiple of these modules onto the brain simultaneously, and we've done that many times. In quite a few patients we've used at least two of these simultaneously, providing 2,048 electrodes on the surface of the human brain. And in one particular case we used four modules, 4,096 electrodes, and there's not really...
>> What does that greater number of signals give you?
>> Well, it provides us a more detailed and more complete picture of what the brain is doing at any given time. Because I think one of the important things to realize about the brain, at least the parts of the brain that we think of as being most relevant, is that almost all of our conscious experience, whether that is movement or sensation or vision or decision-making or memory, happens basically at the surface of the brain. We call that the cortex. I think that's not an intuition that everybody has, because we think of the brain as a three-dimensional structure, which it is, living inside the head. And so people have this intuition that all of the functions of the brain are uniformly distributed in this, you know, 1,500 grams of tissue. But actually, all that processing is not uniformly distributed within that volume. It's almost all very, very close to the surface.
>> And what's deep in the brain then?
>> Well, most of what's deep in the brain is wiring. Not all of it, but I would say most of what's deep to the surface is the wiring that connects the cortical surface to the rest of the body. And then there are areas, like the basal ganglia or the cerebellum or the brain stem, that primarily subserve non-conscious functions: the smoothing out or unconscious learning of movements, breathing, vital functions that we don't think of as part of our conscious experience. So most of conscious human experience is happening very close to the surface, and that certainly includes what we're focused on now, which is movement. Most of the way we interact with the digital world and the physical world is in some way through movement or intended movement. So the fact that we, and the brain computer interface world today, are focused on paralysis as a target disease state is not really an accident. One of the reasons is, first of all, that is a pathway within the brain that is extremely well understood. How the brain goes from intention to move, and activity on the brain surface, to activation of the muscles is one of the best-studied and most well-understood neurological systems. So interfacing with it is kind of a natural thing to do. But also, if you just think about the way we live our lives in 2025: you're sitting here with a laptop in front of you, and you interact with that laptop by touching the keys and moving the cursor on the screen. Those are volitional movements, and they happen in a very small piece of real estate on the surface of your cortex that's only a few square centimeters in size. And so, to get back to your original question of what that does for us: the ability to put a few square centimeters of high-density electrodes on the brain surface allows us to interface with the entire extent of your desired and planned actions in the outside world. So
the key in brain computer interfaces as they exist today, the key to restoring or augmenting an interface between the brain and the digital world, is to get sensors onto the areas of the brain that are responsible for those interactions with the outside world, to span the entire extent of that real estate, and to do it at a level of resolution that is appropriate to the brain's intrinsic signal-processing capability, and yet to do it without damaging that part of the brain at all. And that's really what we've done at Precision. When we started the company, people didn't think that this was possible. As Michael mentioned, there was a dogma that said that in order to interface in a reliable way, you actually had to penetrate into the brain with these needle-like electrodes. And I think part of that dogma in some ways stemmed from a mistaken intuition that there was a need to unlock information deep within the brain, so-called deep within the brain, in order to do that. But in reality, enough information is represented right at the cortical surface that if you build the sensors with a high enough resolution, as we have, and you span the appropriate amount of cortical real estate, you can get incredibly high-fidelity function out of these interfaces. And we've begun to show that, I think, in ways that are surprising even to the experts. So that has been an incredibly exciting development for us over the last couple of years, especially this year.
>> Okay. What surprising ways? Because you're both smiling about the surprising applications.
>> Well, you referenced cursor control and that kind of interaction with the digital world as kind of a benchmark for function in brain computer interfaces, which it is. It's not trivial to control a cursor in multiple dimensions in real time. And yet we've begun to be able to do that in ways that are incredibly reliable, and in ways that patients can learn to achieve cursor control within just a couple of minutes of training data. So...
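The "couple of minutes of training data" mentioned here is usually a supervised calibration: the patient attempts cued movements while the array records, and a decoder is fit from neural features to the cues. A minimal least-squares sketch in Python, where every shape, weight, and data point is synthetic and invented for illustration (this is not Precision's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

n_electrodes, n_samples = 1_024, 3_000   # ~a couple of minutes of feature windows

# Cued 2-D targets the patient attempts to reach during calibration.
targets = rng.uniform(-1, 1, size=(n_samples, 2))

# Synthetic neural features that noisily encode those targets.
true_map = rng.standard_normal((2, n_electrodes))
features = targets @ true_map + 0.5 * rng.standard_normal((n_samples, n_electrodes))

# Ridge-regularized least squares: W maps features -> intended velocity.
lam = 10.0
A = features.T @ features + lam * np.eye(n_electrodes)
W = np.linalg.solve(A, features.T @ targets)     # shape (1024, 2)

# Decoded predictions should correlate strongly with the cued targets.
pred = features @ W
r = np.corrcoef(pred[:, 0], targets[:, 0])[0, 1]
print(f"decode correlation: {r:.2f}")
```

Real systems use richer nonlinear models, but the workflow, a short cued session followed by a regression fit, is the standard shape of BCI calibration.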
>> And you don't have the threat of thread retraction, which is what Noland has experienced, because there are no threads in the brain.
>> Correct.
>> Yep. Exactly. And I think one other way to answer your earlier question of why more resolution matters, why a higher electrode count matters: one analogy we sometimes use is that a BCI, at its core, is a communication device. And if you think about it that way, the bandwidth of a communication device is really what determines how it can function. As an example, a 56k modem is capable of chat, whereas a fiber-optic connection enables Netflix. It's the same thing with BCI: if you have a very low-resolution system, you can do very basic things, let's say click a computer mouse. Our ambition is far greater than that. What we are going to enable in people who are right now paralyzed is seamless control of computers and smartphones and other digital devices, in the way that we all frankly take for granted every day. So think about rich video games, productivity suites like Microsoft Office and Google Workspace. That level of functionality, which we think is really going to be life-changing for people who today have very little therapy available to them, is only possible with a system that has a high degree of bandwidth.
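To put rough numbers on the bandwidth analogy: the raw data rate of an array scales directly with electrode count. The sampling rate and bit depth below are illustrative assumptions, not Precision's actual specifications:

```python
# Back-of-the-envelope raw data rate for an electrode array.
# The 1 kHz sampling rate and 16-bit samples are assumed for
# illustration only.

def raw_data_rate_mbps(n_electrodes: int,
                       sample_rate_hz: float = 1_000.0,
                       bits_per_sample: int = 16) -> float:
    """Raw (uncompressed) data rate in megabits per second."""
    return n_electrodes * sample_rate_hz * bits_per_sample / 1e6

# One 1,024-electrode module vs. four modules (4,096 electrodes).
print(raw_data_rate_mbps(1_024))   # ~16.4 Mbps
print(raw_data_rate_mbps(4_096))   # ~65.5 Mbps
```

Even under these modest assumptions, the raw stream is orders of magnitude beyond a 56k modem, which is why on-implant compression and processing matter.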
>> So they'll be able to do this entirely with their thoughts.
>> That's right.
>> Exactly. And I would emphasize also that the component you were holding up, that's here on the desk in front of you, is only a part of the brain computer interface system. These electrodes form the sensor that touches the brain, that kind of caresses the surface of the brain, but of course they're connected to implantable microelectronics that amplify, digitize, record, and compress the data. And all of that data, and this will probably take us into some of the other areas you alluded to in the beginning, has to be processed. So it's artificial intelligence that allows us, in real time, to decode the electrical information from the brain surface into meaningful command and control signals for a digital interface. The signals don't come off the brain in a way that tells us how to interpret them. There's a kind of translation process that needs to happen in real time.
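The real-time "translation process" described here, from raw voltages to a command signal, can be caricatured in a few lines. Everything below (the feature choice, window size, and random weights) is a toy stand-in, not the actual decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 1_024          # sensors on the array
window = 50                   # samples per decoding window

# Feature: mean power per electrode over the window (a crude stand-in
# for the band-power features real decoders often use).
signals = rng.standard_normal((n_electrodes, window))
features = (signals ** 2).mean(axis=1)          # shape (1024,)

# Decoder weights would be fit from a short calibration session in
# which the patient imagines movements; random here for illustration.
W = rng.standard_normal((2, n_electrodes)) * 0.01

# Linear readout: electrode features -> 2-D cursor velocity.
vx, vy = W @ features
print(f"velocity = ({vx:+.3f}, {vy:+.3f})")
```

In a deployed system this loop runs many times per second, so the mapping has to be both fast and stable against drift in the recorded signals.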
>> Okay. So one more question about the device itself, then we'll move into the more futuristic and theoretical questions. Why haven't you left it in? Because if this is not damaging the brain, we know that other devices have been left in. Noland is moving about the world with a Neuralink in his brain right now. But this is something that has only been used in circumstances where the brain is already open, to try it out or to understand a little bit more about surgery. So why haven't you left it in?
>> That's right. We will be leaving it in, is the short answer to your question. And the reason is that we've taken a slightly different approach to development, which is to make sure that what we've developed is safe and highly functional before beginning permanent implants. For us it's been incredibly important to ensure that the interface works and delivers a level of functionality that we think is essential to guarantee to patients before we start leaving the devices in. So we pursued a phased development approach. Our first 40 patients were temporary implants, designed to validate the quality of the signals and the ability to decode those signals in real time. We're now actually the first modern brain computer interface company to have FDA clearance. So the version of the electrode that you were holding in your hand now has FDA clearance. Among the current leading BCI companies, we're the only company that has a full clearance from the FDA. And as part of leveraging that clearance, we're moving to a next phase in our clinical studies that will allow the system to be left in place for up to 30 days. That phase will help us further validate the quality of our decoding algorithms and deeply understand the nature of the signals as these devices are left in place. As you alluded to, others have had some problems early after implantation, and what we're trying to do is anticipate and avoid such issues as we move to permanent implants. And then it will be after this next phase that we start leaving the devices in patients permanently, as part of a clinical trial.
>> And just for context, I think the fact that we even have this path available to us is a testament to the underlying safety of the system. We did our first patient implant within two years of founding. It took Neuralink eight years. It's taken other competitors even longer than that. And I think that's, one, a product of the focus and the design philosophy that Ben just articulated, but it's also a feature of the fact that our system is reversible, that it doesn't damage the neural tissue, that you can remove it, and there's been no harm to patients. So I think that speaks to some of the underlying characteristics which we think are important today and will be very important in the future for clinical adoption.
>> So there are two timelines here. One is the timeline it's going to take you to get this current device implanted full-time in people's brains, or on top of people's brains to be more accurate, and then read their brain signals in the motor cortex. I'm going to be really unfair. I know that's going to take a tremendous amount of work, and if your company does it successfully, it's probably a good outcome for your company. But let me be unfair and ask you now where the technology can move after that. Because, Ben, you mentioned that most of what we do, or most of the signals of what happens in our brain, happens at the surface. So what happens if you expand beyond the motor cortex? Where else could this technology go? I know Elon Musk and Neuralink are currently working on eyesight, which, even if you were born completely without eyes, could potentially take signal from the world around you and beam it into the visual cortex. So that might be one application. Where else should we look?
It's a great question, and we definitely are thinking about this. It actually dovetails with the prior question, which is what we are learning as we bring this technology in its current stage into the world, and as we work with patients and physicians across the country, even in the early clinical studies. Part of that experience for us has been a process of discovery. When you bring a technology you've been developing into the real world, and you put it into the hands of people with lived experience and experienced, insightful practitioners, you learn all kinds of things that you might not anticipate. And so actually, even though we're focusing on applications in the motor cortex, as part of these studies our electrodes have been placed all over the brain. They've been placed in sensory cortex, in prefrontal cortex areas which are responsible for decision-making. They've been placed in the spinal cord and on the brain stem. And so I would say that at this point, it's very exciting for us, because we have an incredibly large, rich data set that's probably absolutely unique in the world, as far as the regions of the brain that we've interacted with and the things that we're thinking about potentially doing in the future. So that gives you a sense of what kind of data we're starting to gather. In practical terms, we think that stroke probably represents the next expansion of the market for brain computer interfaces. Right now, the disease states that we're designing the interfaces to treat are really forms of severe paralysis, primarily forms of quadriplegia, that result from spinal cord injury in the neck, or brain stem stroke, or neurodegenerative diseases like ALS. Those are very severe forms of paralysis that are either complete or near complete and leave people either mostly or completely unable to use their hands. There are other forms of paralysis which are more common, and those arise, for example, from common forms of stroke. Stroke affects almost a million people in the United States per year. And about a third of those, so several hundred thousand people, recover from their stroke with a persistent deficit that leaves them having difficulty articulating speech, or difficulty using their hands, or difficulty walking. And those people, although they may not be completely paralyzed, live with a severe deficit or disability in their ability to interact with the world. And so we believe that is a next step for brain computer interfaces in the medical world. That's what physicians with experience are asking for, and that's what patients with stroke are asking for. And it's also, we think, very medically and scientifically feasible.
>> And when you apply it to stroke patients, is it that you're able to decode what they want to say and then help them say it? Or, if they lost some movement, actually using electrical signals to help them move again?
>> So we're still talking about a function that is decoding intention and expressing it through digital means. For example, somebody who has a stroke on one side of their brain and can't move a hand, or can't move it well enough to type: that kind of deficit, which is debilitating for people who are trying to return to work, especially if it's in the dominant hand, could be augmented by direct thought-to-digital-world communication. Does that make sense? We're not yet talking about stimulating the brain in a way that restores the hand back to normal, or that provides an orthosis over the hand that moves the hand again. I do think that will come, and we're already talking to partner companies to do that kind of thing. But the therapy from brain computer interfaces is primarily something that reads brain activity in real time and establishes intuitive, smooth communication with the digital world. And I think that hopefully gives an intuition for how this industry is going to develop in coming years. So we're starting with very severely paralyzed people. There are about 400,000 people who have no use of their arms and hands, and so for them, being able to control a computer, a smartphone, a tablet is going to be life-changing.
And we think that's roughly a $2.5 billion market to start with. But as Ben mentioned, there are other issues that are much more common, stroke being one of the principal ones, which is why the number of people who have some sort of motor deficit, maybe not a complete inability to move their arms and hands, but some deficit, is about 12 times that number. So many millions of people in the United States alone. And allowing them to interact with the digital world in a seamless way, we think, is also going to be really transformative. It's going to start with the most severely impacted patients and then move outward. But in terms of restoring movement, what you really need is basically a partner device to help with either stimulating peripheral nerves or creating a prosthetic. The way we think brain computer interfaces are going to develop over time is that the neural data is the hardest to access, the hardest to extract, really. And as a result, we sort of think about it as the operating system. There will be other products and devices that plug into the data that we're able to provide, to enhance people's lives in different ways. Think about it like an API. The companies that are able to record at high fidelity and then transmit the neural data are going to be able to create an ecosystem around them, to do all sorts of things that are right now not possible.
>> Yeah. And just to be clear, we've already had inbound from quite a few.
>> Okay. So before we go to break, let me throw out one example. Potentially, you could be reading signals of depression on someone's brain, and then maybe some ancillary company could use that API data to stimulate the part of the brain that has a deficit of some electrical signals, as a potential cure.
>> I mean, that's already starting in academic research, where there are sort of closed-loop systems helping people who have refractory depression, which doesn't respond well to medication and which is very severe. Right now we have technologies like electroconvulsive therapy, which are very coarse, which effectively incite a seizure in people. But it does have efficacy, which is why it's still used. But if you think about...
>> Imagine being targeted on this stuff, as opposed to just using brute electricity.
>> Exactly. And I think we're headed to that future.
>> You know, you raise a good point, and
an application space that we've thought
a lot about and that many people think
about. I think what you're alluding to
is this concept of a digital biomarker.
It's something that we can do with the
Precision system that's very difficult
to do with a system of penetrating-type
electrodes, for a number of reasons.
But, uh,
>> What's a digital biomarker?
>> A digital biomarker is kind of like a
signal, but instead of a molecular
readout like you get with a blood test,
it's a digital signal that you get by
electrically reading the brain. So think
about it: if you were to place the
Precision electrode array over a region
of the brain that's
relevant to depression, and some of
those are already known and well
established. In a patient who's prone to
depression, you might see, and there's
already very good preliminary data to
suggest that this is the case, you might
see that when they're beginning to enter
a relapse of their depression, there are
particular digital signatures that occur
in that area of the brain and that are
predictive of them entering a relapse.
And that can be lifesaving for people
who may become suicidal due to severe
refractory depression. And so
being able to predict those periods
before they happen, and either deliver
stimulation, or change their
medications, or alert a care team that
it's in the process of happening. And
that kind of concept is actually common
across a number of important disease
states, including epilepsy and
depression and others. So this is a
direction that we've thought a lot about
and have had good discussions with the
industry about. And it's actually not
that dissimilar from some of the
molecular studies that people do
nowadays. In the era of low-cost gene
sequencing, many people have been able
to learn a lot about their own biology
and predict things about themselves that
they wouldn't have otherwise known. So
it may be that one direction for
neurotechnology is something like this.
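The relapse-detection idea described above can be sketched as a toy closed-loop monitor. This is a minimal illustration, not Precision's pipeline: the feature (window power), the baseline, and the threshold are all made-up placeholders, and a real system would trigger stimulation, a medication change, or a care-team alert rather than print a flag.

```python
import math

def band_power(samples):
    # Mean squared amplitude of a window of neural samples
    # (a crude stand-in for band power).
    return sum(s * s for s in samples) / len(samples)

def detect_biomarker(windows, baseline, threshold=2.0):
    # Flag each window whose power exceeds `threshold` times the
    # patient's own baseline; a flag would drive the intervention.
    return [band_power(w) > threshold * baseline for w in windows]

# Illustrative data: two quiet windows, then one with an elevated
# oscillation standing in for a relapse signature.
quiet = [0.1 * math.sin(0.3 * t) for t in range(100)]
elevated = [0.5 * math.sin(0.3 * t) for t in range(100)]
baseline = band_power(quiet)

flags = detect_biomarker([quiet, quiet, elevated], baseline)
print(flags)  # only the third window trips the detector
```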
>> Go ahead.
>> No, I think Ben's getting at a really
important theme that we talk about a lot
in the company, which is: why has there
been more progress in other areas of
biology than there has been in
neurology? One of the reasons, I think,
as Ben is alluding to, is that we've
been able to digitize biology in other
areas, the genomics revolution being a
perfect example, and that has led to a
number of breakthroughs and very rapid
progress. It's been tough to convert the
neural signals of the brain into
something that we can apply compute
to.
>> The brain is hard to access. You know,
>> Ben knows this better than we do. Uh
it's it's hard to get there. Uh and once
you do
>> a good design, keeping that thing
protected.
>> That's right. The skull does serve a
purpose. And the biology, once you get
there, is sort of this mushy mess. So
figuring out a way to effectively
digitize this biological system in a way
that is scalable, and again, our system
is scalable across different areas of
the brain at the same time, at super
high resolution, this is something that
we think is going to end up unlocking a
number of breakthroughs which frankly
today are hard to predict.
>> Yeah, I know you wanted to get to
where this takes us in the world of AI
and foundation models and modern machine
learning, and how that connects to
what's happening in brain computer
interfaces. I think this is a good segue
for that. Just a bit of intuition that I
would want to provide: this is something
special about the Precision system. The
electrodes that you were holding in your
hand a minute ago form a regular
lattice, and they have a spatial
relationship with one another that's
kind of like the spatial relationship
among pixels on a screen. It's the same
every time. And so
when those are placed onto the brain of
one patient, the data format that they
read out is the same, and it has
structural elements that are the same
from patient to patient. So it brings
commonalities across the patients we've
studied into very sharp focus. And that
has been a major limitation of the ways
we've interfaced with the brain for
generations, including with the
penetrating electrodes that are used by
other systems, because when you
penetrate into the brain using
needle-like electrodes, the spatial
relationship of what you're recording
from is a little bit random
>> right
>> and so that relationship is not
preserved from patient to patient, and
it makes it very difficult to apply
learnings from one patient to the next.
In the Precision system, one of the
inherent advantages is that the data is
so regular in structure that we're able
to compress it. We're able to learn
across patients, across populations, and
leverage those learnings in the machine
learning algorithms that we develop, and
we have found that to be a massive
advantage.
>> I want to take it one
level deeper and ask about decoding full
brain activity and making the brain less
mysterious. But why don't we do that
right after this? So, let's take a break
and come right back. We're back here on
Big Technology Podcast with Michael
Major and Ben Rapoport, the co-founders
of Precision Neuroscience. Before the
break, we talked a little bit about
unraveling the mysteries of the brain
through electrical signals. And at the
moment, we can all agree the brain is a
mystery, even though technology
companies like yours are getting better
at decoding parts of it and being able
to do something with that
information. But do you ever anticipate
a moment where, well, I started this
show asking whether we could build a
foundational model for the brain, and I
guess that was a way of asking: could we
reach a moment where the totality of
everything happening in the brain is
decoded by technology? And if that
happens, what does that lead to?
>> We talk about this a lot. This is
sort of in the zeitgeist right now in
the tech world, this question of the
whole brain interface. And I'm happy to
discuss it. We have our own view of what
it means to have a whole brain
interface, and I think understanding
what that implies requires a few things.
One is that, as we mentioned earlier,
the distribution of information through
the brain is not uniform. There are
areas of the brain that are much more
relevant to our interaction with the
world than others. So most of the brain
is actually not relevant for
communicating with the outside world or
with artificial intelligence; most of
the brain is taking care of the body,
and not in ways that are particularly
relevant to interfacing with the outside
world.
>> Does that mean controlling blood flow
and stuff like that? Does that happen in
the brain?
>> Let's call it our vital functions and
unconscious functions and things like
that.
>> So that's all being directed by the
brain.
>> A lot of it is, not all of it, but
some of it is. And certainly
>> Fascinating, because I guess we think
that the heart beats and the blood goes
and it has nothing to do with the brain,
but the brain really is the command
center for a lot of this stuff.
>> The brain is involved. It's not
necessarily doing everything moment to
moment. Certainly the heart has its own
intrinsic ability to function. But the
brain is heavily involved in a lot
of what goes on in the body. But my
point is just that, also, a lot of the
brain, as we mentioned before, is white
matter: connections between different
parts of the brain, and from the brain
to, for example, the spinal cord. Those
are incredibly important for how the
brain functions as a biological system.
But with respect to interfacing with the
outside world, those are much less
relevant to what we think of intuitively
as a whole brain interface. So I want to
start with that: even though there's
this notion that you want to have a
whole brain interface, what I think we
really mean by that is that we want to
be able to interface with the parts of
the brain that are responsible for our
conscious interaction with the world,
and that is actually a spatially limited
portion of the brain.
>> But isn't that relatively
unambitious? I mean, I know it's very
ambitious, and this is easy for me to
ask, but to me the thought is: what
about taking my memories and kind of
dragging them from the brain onto a
computer?
>> I totally get that, right? And so the
point I'm trying to make is that your
memories have a spatial location within
the brain, to a significant extent. And
so the problem of how to do that is not
the same as: how do we record the
continuous state of every single cell in
the 1,500 grams of the brain? I
think it's very important to understand
the brain is a physical thing. It has
structure, and how we think about
interfacing with the brain begins in
part with understanding that there are
some parts that are more relevant to
forming that interface than others, and
then how do we get there, and how do we
listen in on or interact with those
areas of the brain. So, as we mentioned
earlier in the conversation, we're
really focused right now on
movement-related areas, the so-called
motor cortex. That's an area that's very
salient to our moment-to-moment
interactions with the outside world. But
the vision-related areas and
sensation-related areas and
hearing-related areas and memory-related
areas and decision-making areas, these
are all surfaces within the brain, and
if you want to build an interface that
encompasses all of those functions, you
have to touch those surfaces. So in some
way
>> Can we talk about two of those,
memory and decisions? Do you anticipate
that at some point the science, or
companies like yours, might be able to
effectively go in and download our
memories? Or maybe go in and, you know,
decision scientists would have a field
day with this, basically decode how we
make decisions just by reading the
signals off the brain?
>> Those are two very different
problems. From the standpoint of
decision-making, I think the answer is
much closer to yes, and we have an
understanding of how that sort of thing
might happen. From the standpoint of
memory, it's a little bit different, and
we can take those two questions one at a
time. I'll just say that for memory,
it's important to understand that the
way the brain stores memory is very
different from the way we think of
memory being stored in the digital
world. Yeah, this is important.
>> Yeah, I think this is important to
understand: with memory as it's stored
in the digital world, the bits have a
physical manifestation that on some
level you could interrogate. When you
store a bit, it's a spin state or
something like that. And so the reading
of those memories is really the reading
of the state of something that, at a
very small physical level, has a
representation in the physical world.
>> Yeah. You click on a file, it will go
into the sort of semiconductor mainframe
and then access where that file has
lived,
>> Right. A transistor state is changed,
or a spin state is changed, or something
in solid state is changed to represent
the bits. So when people talk about bits
and atoms as being intrinsically linked,
they are, right? There is really a
one-to-one representation between a
particular bit that you're trying to
store and some state in the physical
world.
>> So how's the brain different?
>> The brain is different because, and
this really relates to one of the ways
that neuroscience has been so important
in motivating developments in artificial
intelligence, the way that a particular
memory is stored is not really in the
flipping of a state. In order to read
out a memory from the brain, you have to
stimulate the network that represents
that collective memory, and you have to
stimulate it with something that
triggers the recollection, and the
network then either completes or
reproduces it, right? So, as an example,
you can be reminded of a memory by a
particular trigger, or you can trigger
yourself to remember something. But
there's no scan that you can do, or that
we can do, that looks into the brain and
says: there is the face of your family,
or there are the digits of your
combination lock. Do you see the
difference?
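The contrast being drawn here, recall as triggered pattern completion rather than an address lookup, is the intuition behind classic Hopfield-style associative memories. A toy sketch, not anything the guests describe building: a pattern is stored only as pairwise connection weights, and a noisy cue is completed back to the stored pattern with no file address anywhere in the system.

```python
def store(pattern):
    # Hebbian weights for one +/-1 pattern: w[i][j] = x_i * x_j,
    # with no self-connections. The "memory" lives only in these weights.
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, cue, sweeps=5):
    # Pattern completion: repeatedly update each unit from its
    # weighted neighbors until the network settles.
    state = list(cue)
    for _ in range(sweeps):
        for i in range(len(state)):
            field = sum(weights[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if field >= 0 else -1
    return state

memory = [1, -1, 1, 1, -1, -1]   # the stored "memory"
weights = store(memory)
cue = [1, -1, 1, -1, 1, -1]      # a partial, corrupted trigger
print(recall(weights, cue))      # completes back to the stored pattern
```

Reading the weights directly tells you almost nothing about what is stored; you only get the memory back by probing the network with a trigger, which is exactly the disanalogy with disk storage being described.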
>> Weird question then: where do the
memories live when they're not there?
>> They don't really exist as such.
>> Wow.
>> The brain is a system that can
produce the recall of the memories when
appropriately triggered. But it's not
like somebody, by reading the physical
state of the brain with a scan, the way
you could a disk drive, for example, can
find all of those bits of information.
Does that make sense? It's very
different.
>> Where do they exist? So they just
don't exist.
>> It's not that they don't exist, but
the storage mechanism is very different.
In order to retrieve the memory, you
need to trigger it, right? So in order
to get the memory of a combination lock
out of your brain, you need to basically
ask: what's the combination to your
combination lock? And then you'll
retrieve the answer. But there's no file
address that we know to go to in your
brain that contains the digits of the
combination lock.
>> Knowing what you know about the
technology, do you think we'll
eventually find that filing cabinet in
the brain where all this lives?
>> What I'm trying to say is that it
doesn't exist as such, to the best of
our understanding of how the brain
works.
>> So then how could a technology
company go and basically stimulate the
brain to share memories? Maybe there's a
way that, for instance, I'd like to
relive a memory. Or maybe there's
something that my father told me 20
years ago that I forgot, but I really
want to remember what he said word for
word. Can technology one day be used to
stimulate that and then recall those
memories?
>> I think we're far from knowing the
answer to this. Let's put it this way:
it's known that by electrically
stimulating the brain, you can reproduce
certain memories. But at this stage of
our understanding of neuroscience, the
predictability of how we do that is
really not well understood.
>> Okay.
>> But one way that we think about some
of the questions you're getting at is
that we've just never had the tools
before in neuroscience to interrogate
them. We've never had as high-resolution
a picture of the awake human brain as we
now have with the Precision system. The
electrodes that I mentioned earlier, the
1,024 tiny platinum electrodes, most of
them are 50 microns in diameter, which
is about the size of a neuron.
>> Right.
>> And we're starting with a
postage-stamp-sized electrode array over
motor cortex, but the medium-term vision
for Precision is a much larger electrode
array covering much greater areas of the
brain's cortex, with hundreds of
thousands, someday millions, of
electrodes. And I think once we're able
to achieve that and apply the
cutting-edge compute algorithms that are
available today, we're going to learn
answers to questions that right now we
just haven't been able to interrogate
properly.
>> And I think that kind of interface
will allow us to fluidly interact with
the kinds of digital memory that we have
more of an intuition for from our daily
interaction with technology. Certainly,
many of the ways that we think of
expanding our memory-type capability are
in that kind of file-system storage way,
with addresses. And the way we'd
interact with those memories is with the
kinds of brain computer interface
technology that we're already talking
about. So I definitely foresee a future,
a near future, in which we can augment
memory through that kind of fluid
interaction.
>> So we could have like an external
hard drive that the BCI
>> We kind of already have that, you
know, with assistive technology today.
Yeah, I mean, you carry it in your
pocket instead of in the cloud,
>> but a direct link between a device
and the brain would
>> I think we are already building
towards a state of brain computer
interface technology that will
facilitate that. No question about
that.
>> And I think that's important, because
it's based on just an extension of what
we're already doing. You know, right now
we're already cyborgs in some way. We
have this digital extension through a
black box at the end of our hands that
we tap on furiously, and that's how we
access our external hard drives, whether
it's a Dropbox in the cloud or Wikipedia
or whatever. I think the evolution of
the way that human beings are able to
control technology and compute has
changed repeatedly over the course of
the past 40 or 50 years: from mainframe
computers, where we had punch cards, to
desktops and laptops, where we use our
hands to control a keyboard or a mouse.
Now it's mostly phones. Increasingly, I
think this is moving to wearables; think
about AirPods that people forget they
even have in, which makes them sort of
quasi-cyborgs. I think over time it is
likely that a more seamless, more
intuitive way for us to engage with the
digital world is going to emerge. And
this concept of thought-based control of
computers, which right now still sounds
like science fiction, even though we're
doing it in clinics across the country,
and even though it's been done in
academia for a couple of decades, it
still sounds sort of fantastical, but
the moment for real clinical adoption is
very near, right? And this concept of
seeing people control computers with
their thoughts is going to become much
less amazing and much more commonplace.
And I think as that happens, our
attitudes towards how to control
computers are going to change.
>> What about studying decision-making?
>> Yeah. So with decision-making, I
think we picked the more difficult one
first, because
>> Memory is a little more difficult.
Decision-making, I think, we're already
doing in many ways. We have a much
better understanding of certain aspects
of decision-making, because a lot of it
relates to the pre-planning of speech
and motor function, and so we have a
better lens on that now. A lot is
already understood about the neural
systems that serve decision-making, so
predicting decisions before they happen,
at least at the fraction-of-a-second
level, is already possible, and we
already have some understanding of the
areas of the brain that are responsible
for that. So both decision support and
decision prediction are kind of already
on the near-term road map.
>> Stock traders are going to have a field
day with that.
And by the way, I think that has the
potential to be influential in some of
the more cutting-edge foundational
models that are being developed in the
area of artificial intelligence today.
Interestingly, a lot of AI has been
inspired by the architecture of the
brain; neural nets are where we started.
And some of these more abstract ways of
decision-making are likely to help with
breakthroughs towards the next stage of
generative AI. And that's why you're
starting to see some convergence of the
pure software players with folks like
us, who are actually developing hardware
that interacts with biology.
>> What about decoding states of
consciousness? We've talked a couple of
times before we've done this episode,
just so I could try to wrap my head
around what you guys are doing, and one
of the things we spoke about was coma.
Right now, coma is minimally understood.
People just sort of say, okay, they're
lying there in kind of a half sleep. But
what can, for instance, putting a series
of these brain computer interfaces on
the brain of a coma patient potentially
tell us about what's actually happening
inside their mind as they're lying
there?
>> Yeah. This is something that we think
is profound and that we're actively
working on. It's due in part to work
done over the last decade or decade and
a half in the neurology and neuroscience
community to try to understand coma.
This notion of consciousness is
something very deep in philosophy and
neuroscience, and a lot of time and
effort has been spent trying to even
understand what it means. But everybody
has some intuition for what
consciousness means, and that coma
represents a kind of disorder of
consciousness, an almost total lack of
consciousness. But there's a spectrum
between coma and the normal wakefulness
that all of us here in the room
experience, and it's not exactly even a
linear spectrum. We're all familiar with
sleep, and with different phases and
degrees of wakefulness. And people who
have certain forms of brain injury, for
example severe trauma or debilitating
strokes, especially in the early phases
of their disease, and many of the
listeners may be familiar with family
members who've had a very severe injury,
can end up in a coma or a severely
changed state of consciousness. And for
decades, this has been a really vexing
problem: trying to understand, is the
person who's currently unconscious, or
currently in a coma, going to emerge
from that state? And if they do, will
they emerge as the person that we knew?
And in the period in which they're not
wakeful and able to communicate, are
they in there, so to speak? Is that
person in there, thinking and just not
able to communicate, or are they not
there at all in the way that we knew
them? And it
turns out that for some disorders of
consciousness, some people who are
seemingly not able to interact with the
outside world, their eyes are closed,
they are not moving or speaking, some of
those people, it's now known, maybe even
15 percent or more in some cases, have
the ability to think, at least for some
portion of the day, and can even
modulate their neural activity in the
same ways that we do in order to drive
speech or movement. But they can't make
that manifest in the outside world
through vocalized speech or movement,
which are the ways that we normally
communicate and express our
consciousness. And that, if you stop to
think about it, can be troubling on a
deep level, but it also represents an
opportunity: the same brain computer
interface technology that we've been
talking about to restore movement in
people who we know to be totally lucid
but paralyzed, that same technology can
actually serve as a bridge, both to help
diagnose and provide prognostic
information about whether the person who
has this brain injury is likely to
recover from it, and, even in the period
when they're not fully recovered, to
provide a window into what's going on
inside so that they can actually
communicate in some ways with the
outside world.
>> So there's a world, you're saying,
where it might be possible to
effectively talk with coma patients
through BCIs?
>> Not coma patients, because coma means
they don't have a level of consciousness
that can provide that. But there is an
intermediate set of states, called, for
example, minimally conscious state, or
cognitive motor dissociation, which is
the term that's often used for these
states in between coma and full
wakefulness on this spectrum. And those
patients look effectively like patients
who are in a coma, and it's very
difficult to distinguish them in some
cases. So we feel, and we're working on
this, that brain computer interface
technology can provide a tool for
distinguishing those patients from true
coma and, yes, providing a way for them
to communicate.
>> That's unbelievable.
>> And this has important predictive
power in terms of, as Ben mentioned,
determining the chances that they
recover and become themselves again.
Right now, this sort of continuum of
consciousness is most often diagnosed by
nurses, who are just trying to perceive
whether there's any wakefulness or
movement, or an ability to respond. But
that's in the context of loud hospitals
and lots of patients who command their
time, and so the error rates are
extremely high right now. So there's
value in being able to do this in a way
that's much more definitive, and also,
as Ben mentioned, in giving a channel to
people who are in there and are able to
modulate brain activity. You ask them a
series of yes-or-no questions, and they
can actually answer by imagining making
certain movements, allowing them to
express themselves and communicate out
to the world. It's sort of an
unimaginable thing to be in that state
and to be conscious, or at least aware
of what's happening, and to be
completely uncommunicative because you
can't control the muscles in your body.
And so I think this is a bridge that has
the potential to be really important.
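The yes-or-no paradigm just described can be sketched very simply: a detector converts "the patient modulated their motor-cortex activity during the answer window" into yes, and its absence into no. Everything below is an illustrative placeholder, not the clinical protocol: the feature is plain signal variance and the threshold is arbitrary.

```python
def modulation_score(window):
    # Toy feature: variance of the signal in the answer window.
    # Imagined movement changes band power; variance stands in for that.
    mean = sum(window) / len(window)
    return sum((s - mean) ** 2 for s in window) / len(window)

def answer(window, rest_level, factor=3.0):
    # "Yes" if activity in the answer window rises well above the
    # patient's resting baseline, i.e. they imagined the movement.
    return "yes" if modulation_score(window) > factor * rest_level else "no"

rest = [0.0, 0.1, -0.1, 0.05, -0.05]    # resting baseline window
imagined = [0.0, 0.8, -0.9, 0.7, -0.6]  # imagined-movement window
rest_level = modulation_score(rest)

print(answer(imagined, rest_level))  # yes: activity was modulated
print(answer(rest, rest_level))      # no: indistinguishable from rest
```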
>> Crazy. All right, I have two
questions for you before we leave. The
first is, we kind of touched on it a
couple of times, but what happens when
people trust their brains to BCI
companies? Let's say 10 or 30 years down
the road it becomes commonplace to have
some sort of device in your brain. I
know right now you're mostly using this
for disabled people, or exclusively for
folks who need it to be able to
function. But is there a worry that
someday, if the technology becomes too
commonplace, it can be used to hack into
brains, or to write signals in a way
that, you know, you've got a bad actor
in there? Like, there's this study, I
think we're going to talk about it with
Sally D or she's already been on the
show to talk about it, but there were
these rats that were able to hit a
pleasure lever that would send an
electrical signal into their brain, and
they would basically do that all day and
not do anything else. So what do you
think about the ability to write, and
should we be afraid of it?
>> I feel like that button for rats is
like Instagram, right? Look, there are
definitely parallels in the human
experience today. No doubt.
>> Yeah. I mean, these are incredibly
important questions. The issue of neural
data privacy, neural data security,
there's nothing more inherent to who we
are than our brain activity. I'd make a
few points about this, and it's
obviously an evolving space. One is,
we're a healthcare company. We're
developing medical technology in the
context of the FDA regulatory regime as
well.
>> I'm not worried about your device
right now being used to hack brains.
>> But I think these concepts are being
considered now. It's early. We're
developing a tool for paralyzed people
to live a higher quality of life. But
over time, I think these issues will
certainly emerge. And the FDA has
actually taken a very proactive role in
helping define the regulatory regime,
not just for today but for tomorrow as
well. They've helped create something
called a collaborative community, which
they've done in a few different areas of
emerging medical technology. It's a way
to convene a lot of the different
stakeholders in one place to map out
clinical practice guidelines,
reimbursement, as well as some of the
ethical considerations, which are
different depending on the technology.
So this is academics, patient advocates,
industry, clinicians, hospital
administrators, and payers like
insurance companies. And one of the work
streams of this collaborative community,
it's called the Implantable Brain
Computer Interface Collaborative
Community, a little bit of a mouthful,
is specifically focused on data privacy,
data security, and ethical
considerations. So I think that's an
incredibly powerful forum in which to
start mapping out what this looks like
in coming years.
>> Let me take that in a different
direction, just because, as you can
tell, as futuristic as brain computer
interfaces are, we're pretty
practical-minded as a company, really
trying to bring this technology into the
real world to impact people and become
part of the standard of care. But having
said that, I think we're profoundly
influenced by the science fiction of our
childhood and of modern times, and
there's a long track record of the
science fiction of today helping to
influence the science reality of
tomorrow. So we take our responsibility
in this regard very seriously, and we
take these thought experiments very
seriously. And as Michael mentioned, I
think we and others in the space are
trying to ensure that this all develops
in as responsible a way as possible,
understanding that it can be hard to
predict what happens, and it can
certainly be hard to legislate around
all future eventualities. But we
definitely have our eye on it.
>> Okay, let me end with this, on the
theme of science fiction. We've talked
today a lot about transposing the brain
into compute, taking thoughts from the
brain and bringing them into technology.
What about the other way around? I'm
curious if you think there is a way for
AI in some way to merge with the human
brain. And then, as a corollary to that,
knowing what you know about human
consciousness, do you think it's
possible at all for AI to achieve
consciousness or self-awareness?
>> Yes.
>> Say more.
>> Yes. You should have us back on the
show for another episode on whole brain
interfaces merging with artificial
intelligence, and AI consciousness. I
mean, that's a whole other couple of
hours.
>> But you're a neurosurgeon. Do you
believe AI can achieve consciousness?
>> I do. Yeah.
>> How?
>> I mean, to me, I don't even see that
as such a difficult or troubling
question.
>> Okay. So we will have to do another
hour then, is what you're saying. And
then what about this idea of AI and
human brains merging? Any thoughts on
that?
>> But it's already happening, right? In
some ways, that's exactly what we're
developing. We see the brain computer
interfaces of today, as you alluded to
and as Michael mentioned, as in some
ways the foundational layer of a merger
between the brain and artificial
intelligence. Right now it has some very
practical manifestations, which is
effectively to become a different kind
of user interface. As Michael mentioned,
we have ways today that we've become
accustomed to of interacting with the
digital world, and it's usually with
voice or hand control. But the
technology that we're building is to
enable direct brain-to-digital-world
control. Right now, almost of necessity,
because so much technology is built
around voice, gestural, and hand motor
control, what we're doing is kind of a
two-step bridge between neural intent
and a conversion to, for example, typing
on a keyboard, moving a cursor, or
speaking commands to a computer. But
that's just an artifact of the way the
user interfaces of today are built. We
already know that the latency between
your brain and your hand, the ability to
think something and type it, is around
25 milliseconds. So that puts a
biological hard limit on how fast you
can interact with the digital world,
even though you're thinking faster than
that, right? With a brain computer
interface, the latency of the system is
right now in the single-digit
milliseconds, and it will be even faster
than that.
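As a rough back-of-the-envelope sketch of the latency gap described here: the roughly 25-millisecond motor figure and the single-digit-millisecond BCI figure come from the conversation, while the specific 5 ms value and the interactions-per-second framing below are assumptions for illustration.

```python
# Back-of-the-envelope comparison of the two latencies discussed.
# Figures are the speakers' estimates, not measurements.

MOTOR_LATENCY_MS = 25  # approx. thought-to-keypress latency via the hand
BCI_LATENCY_MS = 5     # "single-digit milliseconds"; 5 ms chosen as an assumption

speedup = MOTOR_LATENCY_MS / BCI_LATENCY_MS
motor_cycles_per_sec = 1000 / MOTOR_LATENCY_MS
bci_cycles_per_sec = 1000 / BCI_LATENCY_MS

print(f"Latency speedup:      {speedup:.0f}x")
print(f"Motor pathway limit:  {motor_cycles_per_sec:.0f} interactions/s")
print(f"BCI pathway limit:    {bci_cycles_per_sec:.0f} interactions/s")
```

On these assumed numbers, the interface responds about five times faster than the biological motor loop, which is one way to read the "feels like magic" reaction described next.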
>> That's why Nolan is kind of like, he
has superhuman abilities when he...
>> Exactly right. Many participants in
these clinical trials have described it
in similar terms, as if the neural
interface is predicting what they're
thinking. This gets to your question
earlier about decision-making: where and
when is the decision made, and when can
a neural interface infer or predict it?
But I'm using this specifically to point
you in the direction of what the user
interface of the future, where brain
computer interfaces are pervasive, looks
like. It doesn't look like a keyboard or
voice activation or gestural control. It
looks like something that is almost
predicting or intuiting what you're
thinking. And it's actually
counterintuitive to how we interact with
the world, because we're built and wired
and have an intuition around
interactions with the world that are
built around this 25-or-so-millisecond
latency. Anything that happens faster
than that, you don't realize has any
latency at all. And so when you see it
happen faster, it almost seems like
magic. But that's what an inherent brain
computer interface user interface looks
like. That's happening, and that's just
the first step in what a kind of merger
with artificial intelligence looks like.
>> Okay, Michael, let's give you the last
word. I want you to give us a realistic
timeline of what the next couple of
years are going to look like in this
technology, and then what size of a
business it could be if it works
according to plan.
>> Yeah, I'll answer both those questions,
thank you. But just to follow up on what
Ben was saying, in layman's terms: we
already are augmented by AI. It's just
slow.
>> Okay,
>> We have to type. And I think that there
will be an ability to access information
much more quickly and also much more
intuitively, with context, which is part
of the reason that companies like Meta
and Google are developing wearables.
Context is going to really improve the
functionality of these systems and how
we use them. And I think that's going to
be part of brain computer interfaces as
well.
In terms of timelines and market size, I
think there is growing recognition that
this is going to be a big industry.
Morgan Stanley wrote a report last year
that estimated a $400 billion TAM, and
that will build first in some of the
medical applications that we described.
We expect within five years there to be
a Precision system on the market, and
maybe one or two others, making a big
clinical impact for people who are
severely paralyzed, and then expanding
from the roughly $2.5 billion a year
market that severe paralysis represents
to something that's 10 to 15 times that
as the applications for the technology
become wider. I think what we have at
Precision that's unique within the
context of brain computer interfaces is
the ability to commercialize a temporary
version of the system. Ben mentioned
that we have our first FDA clearance,
which is a tremendous milestone for
Precision, but it's also a sign of this
industry's progress towards real
commercial and clinical impact. That is
not instead of our permanent implant;
that is in addition to, in parallel
with, our permanent implant. The
constraint of a temporary implant is
that it can be implanted for up to 30
days, but there are a number of
applications for a 30-day implant, some
of which Ben described in the disorders
of consciousness, which we think are
going to create businesses that are
several hundred million dollars in
annual revenue and which do a tremendous
amount of good in terms of human health.
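For what that expansion estimate implies in dollar terms: the $2.5 billion base and the 10-to-15x multiple are Michael's figures, and the sketch below simply works out the resulting range.

```python
# Working out the market range described: a ~$2.5B/yr severe-paralysis
# market expanding 10x to 15x as applications widen.

base_market_bn = 2.5                  # severe-paralysis market, $B per year
multiple_low, multiple_high = 10, 15  # stated expansion multiples

expanded_low = base_market_bn * multiple_low    # lower bound, $B per year
expanded_high = base_market_bn * multiple_high  # upper bound, $B per year

print(f"Expanded market: ${expanded_low:.1f}B to ${expanded_high:.1f}B per year")
```

That is, the expansion described would put the wider market at roughly $25 billion to $37.5 billion per year, still well short of the $400 billion TAM figure cited from the Morgan Stanley report.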
>> All right. Well, folks, if you stayed
till the end, I promised you we were
going to get into some weird and good
stuff, and I think we delivered. So,
thank you for staying with us all the
way until this late moment in the
interview. I said that 2025 was going to
be the year of brain computer
interfaces, and I think the conversation
you just heard really shows that it's
unfolding exactly that way. So, Michael
and Ben, so great to see you. Always fun
to talk, and I hope you do come back and
give us that extra hour on whole brain
mapping and whether AI and the human
brain can merge.
>> Sounds good. Looking forward to it.
>> All right. Thank you so much for being
here and thank you everybody for
listening and for watching. We'll see
you next time on Big Technology Podcast.