Are We Too Obsessed With AI Predictions? — With Carissa Véliz

Channel: Alex Kantrowitz

Published at: 2026-04-22

YouTube video id: TWbg2HYucZU

Source: https://www.youtube.com/watch?v=TWbg2HYucZU

If prediction is the basis for today's
cutting-edge AI, shouldn't we examine the
nature of prediction itself? Let's talk
about it with Oxford philosopher Carissa
Véliz right after this. Welcome to Big
Technology Podcast, a show for
cool-headed and nuanced conversation about
the tech world and beyond. We have a
great show for you today. We're here
with Oxford philosopher Carissa Véliz, who
has a new book out this week called
Prophecy: Prediction, Power, and the Fight
for the Future: From Ancient Oracles to
AI, talking all about prediction and
what it means for our society. And
prediction really is everywhere in our
society. Wouldn't you agree, Carissa?
>> Absolutely. Thank you so much for having
me, Alex.
>> You bet. I mean, I was thinking about it
when I saw that your book was coming
out. I said, we have to have this
conversation, because, and we're going to
get into the AI stuff in particular in a
moment, but just from a big-picture
standpoint, everywhere we look
today, we're trying to predict
everything, right? AI, of course, or
generative AI, is in the business of
predicting the next word. We also
have the older versions of machine
learning, which have lots of
different predictive capabilities,
like predicting whether you're likely to
default on a mortgage. And then, of
course, we're in the middle of this
prediction market mania. What is
happening?
>> Exactly. As you say, it's everywhere.
Prediction sounds like the holy grail
for everyone. Everyone wants to know
what's around the corner, because
everyone's anxious about the future.
That's where we will all be spending the
rest of our lives, and whoever can get a
glimpse of the future has a competitive
advantage. But that script alone makes a
lot of assumptions that are very
problematic because it seems to suggest
that the future is written and our task
is to discover what's there to kind of
discover this script that has been
written for us. But actually the future
isn't written. And even though it's
frightening,
the most important events in your life,
in your personal life, but also in your
business life and in our lives as a
society are the ones that are the most
unpredictable. So it's very easy to see
what's ahead when the road is straight.
It's the curves that are really hard to
see and in some cases impossible and
those are the ones that will change your
life. So, okay, we
have this world where algorithms are
making all these predictions that could
influence us, could steer us. Are
you saying that we should just do away
with those predictions, or that we should be
mindful of the fact that, you know, there
might be something hidden underneath?
Because there clearly is.
>> Yeah, I think we should be mindful. I'm
not saying that we should do away with
predictions. I use them, and in a way
predictions are part of how we make
decisions, but we should be much more
enlightened about it. I think we're
being incredibly naive, and in some
cases, sure, we shouldn't use
prediction. Let me give you an example.
>> Mhm.
>> So take the justice system or any system
in which we really care about fairness,
in which fairness should be the value
that is more important than efficiency
or than profit. In those cases, it's
very tricky to use predictions because
when you predict that somebody's going
to fail, you affect their lives. So say
you use an algorithm to determine that
someone is unemployable and you don't
give them a job, but because everybody's
using more or less the same algorithm
trained more or less on the same data,
that person will never get a job. And
the company that
runs the algorithm is going to say, "Oh,
see, our algorithm is 99.9% accurate."
But it may be producing that accuracy
by creating the reality that it's
purporting to predict, rather than that
person really being unemployable. And
here's the interesting thing.
Self-fulfilling prophecies are like the
perfect crime because it's like a murder
weapon that disappears upon striking. It
leaves no record. It creates no error
signals. We will never know how that
person would have fared because they
will never get the job and that data
will never get collected. And so it
seems like nothing untoward is happening
when in fact great unfairness may may be
happening and being covered up.
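The selection-bias dynamic she describes can be sketched in a few lines. This is a toy simulation, not any real hiring system: the skill threshold, the "quirky" flag, and the rejection rule are all invented for illustration.

```python
import random

random.seed(0)

# Toy ground truth: a candidate succeeds on the job if skilled enough.
def succeeds(skill):
    return skill > 0.3

# A screening model that also rejects "quirky" candidates, a stand-in
# for whatever arbitrary feature the algorithm penalizes.
def model_approves(skill, quirky):
    return skill > 0.3 and not quirky

candidates = [(random.random(), random.random() < 0.2)
              for _ in range(10_000)]

# Job outcomes are only ever observed for approved candidates.
hired = [(s, q) for s, q in candidates if model_approves(s, q)]
accuracy = sum(succeeds(s) for s, q in hired) / len(hired)
print(f"measured accuracy on hired candidates: {accuracy:.1%}")

# The skilled-but-quirky people it wrongly filtered out generate no
# error signal at all, because their outcomes are never collected.
missed = sum(1 for s, q in candidates if succeeds(s) and q)
print(f"qualified candidates rejected, invisibly: {missed}")
```

The measured accuracy comes out perfect precisely because the rejected group never gets a chance to disprove the prediction, which is the "murder weapon that disappears upon striking."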
>> So you're talking in that example
specifically about AI filtration of
résumés through job sites.
>> Exactly. Yes.
>> Okay. Here's my pushback on that
one.
>> All right.
>> Um,
I think that that example leaves out the
agency of people to bend the job
application process to their will,
to a degree. I think if we
are just at the mercy of job
application portals, then I would say,
sure, this stuff is probably
bad. But isn't there, and I think
this probably applies to all your
arguments, so let's have it out,
>> Let's have it out.
>> isn't there the ability of people to
just say, I don't want to be at
the mercy of this algorithmic job
portal, I'm going to write straight to
the hiring manager and make the case
myself, and sell myself? And step
outside of this sort of algorithmic
filtration thing, which the hiring manager
knows will miss people, and which,
you know, the sort of
workplace in need understands is
imperfect.
>> Yes and no. So, for example, I've met
someone who is really good at their job.
He's a computer scientist, but every
time he applies for a job through the
normal procedure, he gets filtered out,
and he doesn't know why. There might be
something in his CV that makes him look
quirky, and algorithms don't like quirky.
And then people get to know him, and
he gets offered these high-paying jobs
from the same companies.
However, there are many systems in which
we're not allowing that leeway anymore.
There are many systems in which you
try to find the email of the manager and
you can't find it. More and more, we're
being limited to these automated
processes, and that leeway that is so
important, that you're talking about, is
disappearing a bit. That's one
side. But the other side is, you might
have people who are brilliant at what
they do, but they don't have that kind
of personality of looking for the
manager. They might be a particular
kind of nerd, right? Who may be a genius
at, I don't know, programming, or a
genius at writing, but they're
socially not as savvy, to try to break
the system. And society wants that
talent. We're missing out on important
talent when we streamline everything.
>> But isn't that, to a degree,
encouraging passivity? Think about
the example of, well, their email
address might not be listed. And, sorry to the
hiring managers, because if people listen
to this, your email inboxes are going to get
blown up, but I don't actually
feel that bad about it. For
instance, we're talking again about
algorithmic hiring. Going through
these processes is arduous. I think
you could spend half the time guessing
email addresses until you get the right
one. So, to say, let me just throw it
out there, to say these AI,
these algorithmic systems
shouldn't be used because of this
unfairness... maybe you actually have a
better advantage if you do try to break
out of that system and be active a
little bit, and decide not to be at
their whim. People have agency at
the end of the day.
>> Again, you're assuming that you
can break out. But even if you're right
that you can break out, the other side
of the coin is that you're actually
incentivizing something like stalking.
So the guys that will end up
getting those jobs are the ones who are
most insistent, who are most willing to
break the rules sometimes. And one of the
concerns I have, I don't know what it's
like in your world, but in my world, in
academia, I think we have a serious
problem of fraud, of people who are very
well known and who have been very
successful and who have fudged their
data or who have committed other kinds
of academic fraud. And it's precisely
this kind of profile: very active, very
insistent. And I don't
think we should be incentivizing that
either. I think we should get
the best of both worlds. We want the
active people. We want to have a system
that encourages them in the right way.
And we want the people who, I wouldn't
call them passive, but who have other
kinds of talents. You know, some people
are introverted, and they tend to have
different kinds of talents than the
extroverted. And to put all our betting
coins on the extroverts is, I think,
losing a great pool of talent.
>> Okay. First of all, I'm definitely not
encouraging stalking.
>> I think this can be done outside of the
realm of stalking. No, and I also
think that you don't necessarily need
to be a fraudster to go make your case
outside the system.
>> Absolutely not. But it's the kind of
incentive that attracts that kind of
profile sometimes.
>> Yeah. And I think we shouldn't let
this take away from your broader
point here, because I have seen these
systems. I mean, I've been lucky enough
not to have to apply for a job for a
while, but I have friends who have
gone through these processes, and I'm
kind of stunned at what job hiring
software looks like today.
They filter for personality, and
I mean, I understand an employer
wanting some indication of what
somebody's personality is like, but
they do it to a degree where it's like,
you have a great candidate in front of
you, and one little misstep on a
multiple-choice, poorly worded
question filters them right out of the
pool. I think that's actually a bad
thing for employers as well.
>> Yeah, or using AI to read people's
emotions in an interview. There are so
many assumptions and so many glitches in
technology that is very, very
questionable. Another really interesting
example is loan applications.
So, if I'm a bank and you apply for a
loan, and I have clear criteria about
what you need to get X amount for
a loan, those are verifiable
facts. So, if I say, "Alex, you need
$10,000 in your bank account to get this
amount of loan," either you have it or
you don't. If I reject your
loan but you do have the $10,000, you
can prove me wrong, and then we can solve
it. But if you apply and I reject your
application on the basis of a
prediction, there's no way you can
contest that, because predictions are not
facts. At best, they're educated guesses,
and because they're not facts, you
cannot prove them false. And so it's
a way to shroud a lot of injustice
and to lessen accountability.
>> Okay, for the sake of argument, let me
now take the bank's side.
>> Yeah, absolutely.
>> There are great machine
learning companies, like C3 AI
for instance, that will evaluate
mortgage applications,
and they sort of put you in a category,
and forgive me if I don't get this
exactly right, but what my research
points me to is that they'll put you in
a category in terms of likeliness to pay
back a loan. Green: very likely. Yellow:
all right, kind of borderline. Red:
statistically, you probably won't pay it
back. If I'm a bank, my job is to
put money out and recover it, right?
The whole point of being, let's
say, a mortgage officer is to loan that
money out and do it with a high degree
of confidence that you're going to get
it back. And because that exists, the
mortgage system can exist, because we
give people all this money, you know,
that they otherwise wouldn't be able to
obtain, to buy a house. So, if a bank can
use this software to do this job more
effectively through prediction, what's
the problem?
>> So, even though banks are businesses and
we want them to do well, we depend on
them to do well, really. I mean, we can
see the financial crisis in 2008 and
what happens when
>> Sometimes they mess up.
>> when that doesn't happen. It's also
very important: to give a
loan to someone is life-changing, and to
deny a loan to someone is
life-changing. And so there are also
considerations of fairness going on. So,
if you have an algorithm that is not
very accurate and that is not very fair
but is profitable enough, if it were
just about profit, then it would be
fine. And there are some areas in
which, frankly, it is just about profit,
like maybe retail, and
that's fine. But in this area it
also has to do with life opportunities.
When you scratch the surface of those
algorithms, for example, The Markup had
a very long story a few years ago
about how two people who had applied
for a mortgage had been denied. And when
The Markup investigated, their file
looked exactly the same, or very similar,
to that of two other people, who happened to be
white, and it turns out that
they were black. So you start getting
all these correlations
that are very unfair. Whereas when you have
clear and contestable criteria,
there are
two important things. One is that the criterion is
usually causally related to whatever
you want to know. So, if you have $10,000 in your
bank, that means that you've probably
been good enough to save, and that means
that your likelihood of paying back
this amount of loan is high. It's a
causal relation, whereas sometimes
machine learning picks up on spurious
correlations: if you have three credit
cards, you're more likely to pay back,
because it just happens to be that
people with three credit
cards have had better luck paying
back. The other really important
thing is that if you don't fulfill the
requirements, you know what to do to
change the decision. So, if you have only
$9,000 in your bank, you know
that you need a thousand more. And so
you know exactly what to do to get the
kind of answer that you want. When it's
a black-box statistical pattern match,
you have no idea what you need to
do to get the loan. And in some cases,
the best way to get the loan would be to
have a different race. And that seems
not only unfair, but also
irrational in some way.
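The contrast between a contestable criterion and a black-box score can be made concrete. This is a hedged sketch; the 10% savings rule and the model weights are invented for illustration, not real lending rules.

```python
# Criteria-based decision: a verifiable fact plus an actionable reason.
# The 10% savings requirement is illustrative only.
def rule_based_decision(savings, loan_amount):
    required = loan_amount * 0.10
    if savings >= required:
        return True, "approved"
    shortfall = required - savings
    return False, f"declined: you need ${shortfall:,.0f} more in savings"

print(rule_based_decision(9_000, 100_000))
# (False, 'declined: you need $1,000 more in savings')

# Black-box decision: an opaque score over correlated features.
# The applicant gets no criterion to verify and nothing to contest.
def black_box_decision(features, weights=(0.4, -1.3, 0.7)):
    score = sum(w * x for w, x in zip(weights, features))
    return score > 0.5, "declined by model"
```

The first function tells a rejected applicant exactly what would change the outcome; the second, by construction, cannot.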
>> Yeah. So, first of all, it's great
having you here, because recently
especially we've had a lot of
people from industry on the show, and I
always love to feature the critics,
because it's important to hear your
voices and talk things through. So
don't take my pushback here as
me being a stand-in for
industry.
>> No, absolutely. We need to
have this conversation.
>> And the same way I'll ask, you know, sort
of probing questions to the people in
industry, I'm going to ask you some
more. So let's just keep going with
this. By the way, this is all sort of
old-school machine learning, which
is predictive. We're going to talk about
more of the generative AI side of
things in a moment, but let's keep going
with this, because it is rich and
interesting, you know, to talk through.
So, there really is a question here.
The question is,
again, if this system helps a bank
do a better job, shouldn't the answer
be, instead of throwing it out,
investigate it for bias? If it's biased,
fix that bias, and if not, let it run. For
instance, I'll just talk through
this three-credit-card example. Okay,
people with three credit cards, for
whatever reason, are better at paying
the bank back on the loan. Now, it
might seem totally irrelevant,
but at the end of the day, if you have
three, and there's a statistical
correlation that you're more
likely to pay the bank back, then that's
actually maybe an additional loan that
they could make that they wouldn't make
otherwise if they didn't have that data.
So, instead of saying this system is
rotten, throw it out, shouldn't the
right response be to investigate it for bias
and inaccuracies, but overall maybe keep
it?
>> Well,
there is value in that.
>> Okay,
>> for sure. However, even if you
investigate for bias and keep it,
there's still the problem that the
prediction will affect that person's life. If
you don't give someone the loan, they
will do financially worse. And then,
you know, you can claim accuracy, but
accuracy at the price of creating that
reality is not what we're looking for.
It's not the kind of accuracy we're
looking for.
>> You can't give everybody a loan, though.
>> No, you can't give everybody a loan. But
the thing is, when you say, well, let's
investigate for bias or investigate for
inaccuracy, there is a limit to what we
can do because we will never have the
counterfactual. This is not a randomized
controlled trial, right?
And you still have the problem that
without clear criteria you can't make it
a contestable process and you can't give
the person the conditions under which
they would get a different response
which seems like an important thing to
do. We are building systems that are
very cafka-esque that are impossible to
navigate. And I don't know if you've had
this experience in which they are
becoming so alienating and so KFK-esque
that people start having like magical
thinking about the algorithm attributing
it beliefs and trying to figure out what
it wants. And and this is something that
the philosopher Hannah Arens warned
about because um you know back in the
1930s there was something similar with
very opaque bureaucracies that were very
random. And what it does to people is it
creates a sense of alienation, a sense
of of not being able to understand the
rules by which you are ruled. And that
is incredibly toxic for for human
psychology. Uh I'm, you know, it is
interesting because sometimes you do
really get the bad outcome here. Uh I
think this is a real thing. There was a
uh a tweet over the weekend that um
somebody told JetBlue that they have a
um they have a $230 increase in a ticket
after one day. And that's crazy. And
they're just trying to make it to a
funeral. And the JetBlue account says,
"Try clearing your cash and cookies or
booking with incognito window. We're
sorry for your loss." Um, you're right
that sometimes these algorithms, I mean,
there are times where they just clearly
break down and they do become
Kafka-esque or just like really tough to
navigate
>> And so many times, there's no one to
complain to. There's no one who can
understand you, who can fix a mistake.
It's just machinery.
>> That's right. Well, I mean, I
think this really gets to
one of the tougher parts of
this, which is that
there's a lot of AI being used
here, whether it's predictive AI or
whether it's generative AI. A
lot of this stuff will make decisions,
and you just have no idea how the
decisions are being made. I mean, within
AI, there's this large field, probably
not large enough, but it's a
large field, called interpretability.
>> Yeah.
>> And the whole job is
trying to figure out how the generative
AI systems work. And it's like, you're
putting this out there, people are
relying on them, and then, oh, I mean, I
guess, I don't know, along the way
you're trying to figure out
how they work. You're trying to
interpret them. Isn't that
backwards?
>> Yeah, it is. And something really
interesting, it's a bit of a
metaphor, so I'm not saying it's
exactly the same thing, but we can
really learn a lot from ancient Greece
and ancient Rome. You know, we started this conversation
by pointing out how much we're
relying on prediction. And we've always
relied on prediction, but I think there
are times in history when that goes up
and down. I think this is a peak,
and another peak was ancient Greece and
ancient Rome. And if you were to
interview an ancient Greek person and
say, "What do you think about the Oracle
of Delphi?" they would say, "Well, it's
cutting-edge technology. You know,
it's the best we have to make
decisions." And how does it work? Well,
we're trying to interpret it, right? And
the same thing with astrology. It was a
very technical thing: how to
read the stars, how to measure the
distance between the stars. So, in a
way, we've seen this before. Even though
the technology is different, the
political role is actually quite
similar.
>> Okay. But this one, all right, the
Oracle of Delphi, right? The Oracle of
Delphi didn't know anything.
>> I mean, it's a story, right? But
let's say you have an oracle back in the
day saying there'll be a great famine.
>> They're pulling that out of nowhere.
They don't know
what they're saying. But
>> an AI system can actually predict that
there will be a famine. Let me give you
an example where prediction could be
really good.
>> All right.
>> Google is, you know, say what you will
about Google, one of the things that
they're really working hard on in Google
research is flood prediction. We
know floods kill way too many people,
and that's totally preventable. Now,
do we know
every single thing about how these
machine learning algorithms make these
predictions? Maybe not, similar to
the way that we didn't know
how an oracle was going to make
a prediction. But in the real world,
we can tell whether they're accurate or
not, and they have been accurate, and
they have saved people's lives. That, to
me, is a great form of prediction that
AI can help us with. Now, is this
something, in full disclosure, that Google
holds up and says, look how good our AI
is, look over here while you
don't look at the rest? Yes. But it
doesn't change the fact that it's, I
think, an undisputed good.
>> And this is
part of why it's so important to have
this conversation, which astonishingly we
haven't had before as a society. Because,
sure, there are kinds of predictions that
are very good, like weather prediction. I
look at my app every single day, multiple
times a day, and I will continue to do
so. But then there are other kinds of
predictions that are clearly very
problematic. And the interesting thing
is that there's no formula. There's no
way to say, okay, if you check this box,
this box, and this box, then it's fine.
It's a public debate that we need
to have, and that's why it's so important.
With Google, I haven't looked at the
flood work, but let's say that's
correct. That doesn't mean that every
kind of prediction that Google does is
equally valid. So one very fun, well, not
fun, but interesting, example is
when Google tried to
predict flu and pandemic-type events.
It tried for years and years and years;
it increased the complexity, it increased
the data, and it could never do it, and
eventually it shut down. Partly that was
because it was relying on people's
searches, and when you search for
symptoms, sometimes you search
because you're having the
symptoms, sometimes
because your sibling is having
the symptoms, or because you're worried
you might have them. And so it
was too confusing, and they couldn't
do it. And again, even though there is
no checkbox and no easy way to tell
which predictions are acceptable and
which are unacceptable, one thing to
take into consideration is: is this a
prediction about a thing, like floods
>> Yeah.
>> and physical things, or about something more social?
>> Yeah. Well, let's go back
to the pandemic example. That's
the first time I'm learning about this
Google example, but there are other
versions of, you know, AI-based
prediction that are helpful
when it comes to pandemics. Wastewater
analytics, for instance, is really
interesting. There are
companies, we've had them on the show,
actually, that can, let's just take the
COVID example, see how much COVID or
how much virus there is in the
wastewater, and then they look at the
rate at which it's advancing, and
then they can actually predict a spike.
That could free people, because if
there's no
prediction involved in when the
spike is going to be, the answer
might be, lock down everybody. The
other side of it is, if you can predict
that there's going to be a spike, you
can be selective in when you want to
shut things down versus open them up.
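The wastewater idea is essentially short-horizon trend extrapolation. Here is a toy sketch, with made-up viral-load numbers and a deliberately naive growth model, of how a measured trend grounds a near-term prediction:

```python
# Hypothetical daily wastewater viral-load readings (copies/mL).
loads = [120, 150, 200, 270, 380]

# Average day-over-day growth factor across the recent measurements.
ratios = [b / a for a, b in zip(loads, loads[1:])]
growth = sum(ratios) / len(ratios)

# Naive one-week projection: compound the observed growth rate.
projected = loads[-1] * growth ** 7

# Flag a likely spike if the projection more than doubles today's load.
if projected > 2 * loads[-1]:
    print("likely spike within a week: target interventions now")
```

The closer the horizon, the more the measured trend constrains the forecast, which is exactly the point she makes next about grounding predictions in the present.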
>> Yeah. And one important thing is, the
closer you are to the present, the more
likely your prediction is reasonable. So,
if you hear somebody predicting what's
going to happen in a thousand years,
take it with a big, big pinch of salt, or
in fact just kind of laugh it off.
>> Now, the people that come on this show,
they won't predict even a year into the
future, because this AI world is changing
so fast. But yeah, a thousand is
>> But long-termists in effective
altruism are thinking about the world a
thousand years from now.
>> Yeah.
>> Or, you know, some people are
thinking about the world in 50 years or
25 years. So, the more you ground
yourself in the present, like if you see
the analytics of the wastewater right
now, that is very useful information, and
depending on how much you know about the
virus and how much experience we've had,
you might be able to make some useful
predictions. Now, that doesn't mean that
we will be able to predict the next
pandemic. It might be a virus that we've
never seen before, and we don't know how
it behaves. And one very important
warning: beware of people who will
promise a prediction in
exchange for huge surveillance, because
the price to pay for surveillance
is a police state. It leads to
authoritarianism, and so often we're
willing to surrender our privacy on
promises that are never kept, that are
very problematic even if they
could keep them, and we're sort of selling
our democracy.
>> Okay, but we need an example here. So,
all right,
where is the surveillance happening
that leads, where are these
trade-offs?
>> So,
I used to live in New York City, and I
hadn't been in the city for a while, and
I've noticed how there are many more
cameras than when I used to live here.
>> Right.
>> And many people claim that, well, we need
the surveillance for safety. The more
surveillance we have, the safer we
are.
>> Right.
>> But that is empirically inaccurate. The
safest countries in the world
are not the most surveilled ones.
Spain is one example. It has some of the
lowest
statistics for any kind of crime,
including homicide and violent
crimes, and it's not better surveilled
than the US or the UK. And in fact, the
UK is the country in Europe that is most
surveilled, and it has more crime.
So that's one example, but it's
important because when you have a
protest, and in particular a peaceful
protest,
it's very important to have anonymity.
That is one of the bedrocks of
democracy. And when you have cameras all
over the place, and now with facial
recognition being so easy to use, you
are eroding one of the most important
tools in the toolbox for democracy.
>> Oh, I have so many questions about
this. I mean, first of all, I'll just
say, have you been to China?
>> I've read a lot about China.
>> Okay. I was in Beijing for a day, but
that was enough to see the level of
surveillance there.
>> Yeah. A lot of cameras.
>> Now, there is a feeling
that that society is safe, but it's not a
society I'd want to
live in.
>> Exactly.
>> And, but there is some
sort of spectrum there, where you
probably do want some cameras up. Like,
for instance, a security
camera in some areas, that's good, right?
Without any video footage, you probably
solve fewer crimes. So, isn't it a matter
of finding where on the spectrum
you want to live?
>> Yes. But I think we're getting it very
wrong. I think the practical
question we're asking by having this
amount of surveillance is: how much
surveillance can liberal democracy take?
And I'm afraid that we might find out.
>> And I don't want to find out.
>> Yeah.
>> Because I don't want to live in China
either. And the illusion of a world
without crime
ignores the fact that that would be a
world very full of a very different kind
of crime: authoritarianism.
>> Exactly.
>> Yeah, that is a problem.
>> So, talk a little bit then about
how generative AI comes into
this, because
I mean, I think we kind of hinted at it
at the beginning and then went to this
earlier version of
machine learning that's everywhere. But
there is now this trust in
chatbots, and, yeah, you can
really steer your
life towards different outcomes based
off of what ChatGPT tells you, and it's
probably worth at least thinking
about that before diving in headfirst like I
often do.
>> Absolutely. And maybe just to close the
previous topic: surveillance is
important because the whole machinery of
surveillance
is there to feed the machinery of
prediction. These two machineries are
intimately related, and that's why it
matters.
>> But we're not living in, like,
Minority Report, though.
>> We seem to be walking in that
direction, and I would like for us to
walk in a different direction.
>> I mean, but,
let's just talk it through. We're not
arresting people for crimes they may
commit, are we?
>> No, but we're using predictive
algorithms in the justice system, for
sentencing, for many aspects of the
justice system, and for the reasons that
we explored with insurance or with loans
or with jobs, that's very problematic.
>> Talk a little bit about how those
predictive algorithms are used in the
justice system, and then we're going to
get to this generative AI question, but
you keep taking us there.
>> It depends on the place. They vary a
lot. But some algorithms are used to
assess the risk of a person committing a
crime, and on the basis of that, deciding
whether they might get bail, for example.
>> Parole.
>> Parole.
>> All these things.
>> All these things. Another
case that worries me, that I think people
are less aware of, is
whether, for example, an insurance
company decides to cover a lawsuit,
because they only cover a lawsuit if the
case has a 51% chance of succeeding.
And that makes sense in some ways. You
can see the rationale behind
that. But at the same time, when we make
the justice system about probabilities,
we lose its principled approach.
And you make it very easy for the bad
guys to get away with it, because you
don't have to make it impossible for
people to challenge you, or even very
hard. You just have to make it slightly
unlikely for them to win, and then you
get off scot-free.
>> So there are all kinds of distortions of
justice when we introduce probabilistic
thinking into an area that I think
should be more based on principles.
>> Okay, I got one more for you. I
just want to hear you talk through
why there's a right to privacy if
you protest.
>> I'll tell you what my fear is.
>> All right.
>> And it's good to just talk it
through. Now I'll say something negative about
algorithms. We have a world where
algorithms drive people to extreme
positions. The more extreme you are, the more
likely you are to get play in the
algorithm. And part of that is
anonymity, right? You can say these
things as trial balloons, with
anonymity, and sort of see how people
respond to them, and then double down.
And I think one of the fears with
anonymous protests, and I'm
just talking it through, I'm not
taking a position here, is that they
take some of those online
dynamics and bring them into the
physical world, where, if
you're unidentified, the
temptation to move to the extreme, or the
ability to move to the extreme, gets
greater and greater. I believe in free
speech, but I also think incentives
matter. So talk through what you
think about this one.
>> Absolutely. I have a paper called "Online
Masquerade," which I'm going to send to
you. And the gist of it is that even though
it's very intuitive to think that way,
when you look at the empirical data, it
shows that people who are identified
online tend to be more aggressive, and
they tend to be more followed and
more successful in that aggression. And
we see this in the
public domain: we know important
politicians who put their name on things
and who say very outrageous things, and
it works. So anonymity is not
necessarily leading to more aggression
or more toxicity. The second thing is
that if you're in the public square and
you're protesting and let's say you're
protesting peacefully and there is one
person who is aggressive, or who
does something
illegal, then of course the police can
always arrest them. But we don't need
mass surveillance in order to
have accountability. We
didn't have mass surveillance a few
decades ago,
>> Right. But I'm not saying
mass surveillance. I'm just saying the
anonymity part. If
everybody protests in a mask,
don't you think that
leads to better outcomes than if
they don't?
>> Well, we shouldn't need a
mask, because we shouldn't have this
kind of surveillance that
identifies
>> as a product of the mask.
>> Yeah, exactly.
>> But even if somebody wears a mask and,
you know, they break a glass or whatever,
then have the police arrest them and
take off the mask, you know?
>> Right. But you can't, I mean,
>> Okay, I'll let
that sit. I don't want to spend
the whole day debating this, but it's
interesting to hear you talk
about it. All right. Now, let's finally
talk about the generative AI side.
>> Yes. So if we have these systems of
prediction in our world, I mean, again,
people who are building gen AI tools
care very much about
prediction: predicting the next word,
predicting outcomes. And once they can
predict outcomes, then their agents can
take the next step. Where is that leading?
>> Yes, so some authors make this
distinction between predictive AI and
generative AI, and I am not sure it makes
sense, because both kinds of AI are
essentially predictive. We might use
them differently, and they might look
different, but essentially they're
both machine learning. And what machine
learning broadly does is take some
data and project the data that it
doesn't have based on the data that it does
have, roughly, whether it's predicting the
next word or predicting whether
somebody's going to be a good employee
or not. And in the case of generative
AI, it's fascinating. I don't know where
to start. It's fascinating how it got
trained, I mean, that's one thing, you
know, with copyrighted material, with
personal data; that's one kind of
thing we can just park. But just
notice the way it works: it's a
very sycophantic system, as we know. It
likes to please people, because that's
the way it gets you to be engaged, and so
it will tell you things like, oh, that's a
brilliant idea, and it will continually
validate you. They were
designed to do that. They were designed
to make people feel satisfied, instead of
being designed for something else, for
example, being truth-tracking, which
would be much more useful if, say, you're
a researcher.
And I think sometimes we lose sight of
that. One way to put it: in the
philosophy world there was this
philosopher called Harry Frankfurt who
wrote a book called On Bullshit.
And Frankfurt says that bullshit is very
dangerous for democracy, because the
truth teller and the liar are playing
the same game on opposite sides of the
court. The liar has to know what the
truth is in order to lie, and so cares about
the truth. The bullshitter doesn't care
about the rules of the game. They're not
playing the game at all. And that's very
toxic for democracy, because it's very
hard to have a debate or a conversation
with someone who doesn't care about the
truth, who will say anything just to
get the kind of reaction they want, with
no regard for the truth. And that's
essentially what a large language model
is. It has no regard for the truth. It
wants to please you. If what pleases you
happens to be true, great. But if
it's not true, then it doesn't care one
way or another.
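Her description of machine learning, projecting the data it doesn't have from the data it does have, can be made concrete with a toy sketch. This is purely illustrative: a hypothetical bigram counter, not how any production model is built (real LLMs use neural networks trained on vast corpora), but the "predict the next word from past data" idea is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Project data we don't have from data we do: return the most frequent follower."""
    if word not in follows:
        return None  # never seen this word, nothing to project from
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice after "the"; "mat" and "fish" once each)
```

The same mechanism, scaled up, is what both "predictive" and "generative" systems share: the model has no notion of truth, only of what tended to come next in its data.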
>> But is that true? Because
the labs have done a lot of work to
ground these models in truth. And in
fact, if it were a bullshitter the
way you describe, there would
be very little economic value, but we
can see now that there's real economic
value.
>> We don't know whether there's real
economic value. The jury is still
out on that. But
>> yes, the labs have done more.
>> You don't think? I guess it
seems to me we're past the
point where there's a real
question here. Now, maybe it's not going
to be broad economic value in a way that
makes the boom appear justified, but
if you look at places
like coding, there are areas where
we can see today that there is definite,
real economic value.
>> I don't know. I'm not saying there isn't;
I don't know, because sometimes these
systems create mistakes that
are very expensive to fix, and it's not
easy to calculate whether
we are getting economic value. There
was a paper recently in
the Harvard Business Review that
suggested that even when people think
they're being more productive with AI,
when you have researchers look at it,
they're being less productive, because
they're spending a lot of time
fixing what the AI gets wrong and not
noticing that. So I don't
know. Maybe we do, but it's
not crystal clear to me.
>> Okay. But it's nice to have somebody
with a different perspective here. We
shouldn't have the same people all
believing the same thing.
>> No, of course.
And I grant that I don't know;
I'm not just saying something.
I just don't know. But even
if they do, where were we?
>> This is really the key
question about gen AI, where your argument
is that it's a bullshitter. But I
will just throw out there
>> and this is something I really do
believe
>> that these companies are spending lots of
hours and dollars trying to
ground these models in reality, because
if you do that, they become much more
useful, and they've become much better at
it over time.
>> Absolutely. But the interesting thing is
that the way they've become much better has
been by getting away from this
probabilistic and statistical thinking.
So for example, when you start chatting
with a chatbot and it realizes that
what you're looking for is something,
say, in a manual, in a PDF, then it refers
to the PDF, and that's how it grounds
itself in reality. Or when it
realizes you want a calculation, it
plugs into a calculator, because these
systems cannot calculate, as we know. And
so it's interesting that the way to
make it better is to move away from this
probabilistic thinking. So part of my
criticism is not about AI, or any kind of
AI, but about prediction: about how
we're using prediction and how naive
we've been about using it. And I
think if these systems had been designed
differently from the start, they would
have needed fewer patches. And how do we
think about this going forward, so that
we design systems from the start to be
truth-tracking rather than fundamentally
about engaging people for profit?
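The grounding moves she describes, referring to a PDF or plugging into a calculator, are forms of tool routing. Here is a minimal sketch; everything in it (the regex, the function names, the fallback string) is invented for illustration, and real systems let the model itself decide when to call a tool rather than using a hard-coded pattern match.

```python
import re
import operator

# Exact arithmetic operations, delegated to instead of "predicted" by a model.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def route(prompt):
    """Hypothetical router: if the prompt is plain arithmetic, use the calculator;
    otherwise fall back to the (probabilistic) language model."""
    m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", prompt)
    if m:
        a, op, b = m.groups()
        return OPS[op](int(a), int(b))  # exact, not statistical
    return "(fall back to the language model)"

# The calculator is exact but context-blind: it returns 2 - 5 = -3 even when
# the framing (say, bunnies in a box that can reproduce) makes plain
# subtraction the wrong tool for the question.
print(route("2 - 5"))      # -3
print(route("who won?"))   # falls back to the model
```

The design point is that the gain in reliability comes precisely from stepping outside probabilistic prediction for the sub-task, which is her argument in miniature.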
>> But how impressive is it that they know,
okay, actually my knowledge
stops here, I should use the calculator,
or I should go look in the PDF?
And I would say the argument that the
model makers would make is that
you can't have the tool calling before
you have the base model, and it took a
couple of years for these base models to
get smart enough to know when to call
those tools.
>> That sounds great, and I'm on board.
Okay.
>> But in practice, they're still not quite
there. I'll give you a
very recent example; it's weeks old.
If you ask one of these chatbots, I
have a box and I'm going to put two
bunnies in it, and then
five months later I take five bunnies
out, how many bunnies are there? It
will say minus three bunnies.
>> So they still don't have enough
understanding to always figure out what
they need, right? In this case, they
might have gone to a calculator, and that
wasn't appropriate, because they
don't understand that bunnies can
reproduce. So yes, with nuances.
>> Yeah, there are
examples of people asking the most
advanced models how many P's are
there in "strawperry," and because they're used to
being asked how many R's there are in
"strawberry," they get it wrong. So
>> exactly.
>> Let's just end this segment; we
do need to go to a break. But let's
end it with
your broad thesis here, and
you tell me if this is
the right way to encapsulate it. We
live in a world where there's a lot of
prediction, more prediction around us
all the time: prediction in the AI
models, prediction that's influencing
the jobs we get, whether we get a loan,
all these things. And rather than just
take this notion of prediction for
granted, we should probably pay
attention to the nature of those
predictions themselves. Is that sort of
what you're saying?
>> Yeah, exactly. Because predictions can
be weapons of power. They can be power
plays in disguise and we need to be less
naive and smarter about them.
>> Okay. Well, that is all being put on
steroids in these prediction markets
because oftentimes you'll see a
prediction in a prediction market. And
the question is, is that somebody
manifesting an outcome? Is it someone
with direct knowledge of an outcome or
is it actually just a market for what
might happen? We'll cover that when we
come back right after this. And we're
back here on Big Technology Podcast with
Carissa Véliz. She's an Oxford philosopher and
the author of Prophecy: Prediction
Power and the Fight for the Future: From
Ancient Oracles to AI. Great title.
So what do you think about prediction
markets, Carissa?
>> They scare me.
>> Okay, after our first-half
conversation, I'm not surprised. What
particularly about them scares you?
>> So the argument for having them is that
they can be a source of knowledge,
right? When people bet with their own
money, and they lose if they get it wrong,
they'll try to get it right. And
when you have a lot of people placing
bets, in theory, we can harness the
wisdom of the crowds. All of which
sounds great. But it assumes that
prediction is a kind of quest for
knowledge. And it doesn't consider that
sometimes prediction is a quest for
power. So for example, if you want to
influence public perception and you have
enough money, you can bet heavily on
something or someone to make it look
more popular. And we already have
examples of politicians betting on
themselves.
>> It's a great use of campaign funds.
>> Yeah. Exactly.
>> Make it look inevitable. That's what
every campaign tries to do.
>> Exactly. And when these
prediction markets start having deals with
newspapers, in which the newspapers report
on the prediction as if it were
a fact, then it gets to be a really smart
way to invest your campaign funds.
Another example that is concerning:
there are many ways to make a prediction
come true, and one of those ways is to
make it come true after the fact. I
don't know if you read that there was a
case in which an Israeli journalist had
reported on a strike in
the conflict, and some people bullied
him to try to get him to change his story,
because they stood to win $900,000 from
a bet.
>> Yeah. It's like fantasy sports.
>> Yeah.
Another concerning case:
six anonymous accounts earned $1.2 to $2
million on a prediction market betting
on the attack on Iran. And some of
those wallets were funded hours before,
which suggests that they might have had
insider information. And if they had
insider information, did that conflict
of interest lead to a different kind of
decision? Yet another concerning case:
an adversary might be
using those prediction markets to inform
their own tactics, and so the markets might
change the conflict itself. And even when
there isn't any bad player, even
when it's just people who are
well-intentioned, I worry that many
people thinking there's going to be
a war makes it much more likely for
there to be a war, because the other
country interprets it as a threat, and
then they escalate, and then we escalate
in response, and suddenly it's a spiral
that nobody wants. But our
expectations shape the future.
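Both halves of her worry, that honest crowds can aggregate knowledge and that moneyed players can distort the aggregate, show up in a tiny simulation. All the numbers and the noise model here are invented for illustration; this sketches the statistics, not any real market's mechanics.

```python
import random

random.seed(0)
true_value = 70.0  # the unknown quantity bettors are trying to estimate

# Wisdom of the crowds: each bettor sees the truth plus independent noise,
# and averaging many independent estimates cancels that noise out.
estimates = [true_value + random.gauss(0, 15) for _ in range(10_000)]
crowd = sum(estimates) / len(estimates)
print(round(crowd, 1))  # lands very close to 70.0

# Power play: one wealthy actor floods the market with strategic bets,
# dragging the aggregate toward the story they want told.
manipulated = estimates + [95.0] * 5_000
skewed = sum(manipulated) / len(manipulated)
print(round(skewed, 1))  # pulled well above the truth
```

The aggregation only works when bets are independent quests for accuracy; once prediction becomes a quest for influence, the same mechanism launders a power play into an apparent consensus.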
>> Why do you think people are so
enthralled by these markets? I mean,
they're having a moment because
they've been accurate, and I think
better than the polls in some cases.
But just the outsized attention and
interest in them is very
interesting. What do you think is
behind it?
>> I don't know.
>> You're a philosopher.
>> I'm a philosopher.
>> I don't know. These are hypotheses,
but one hypothesis is that we have truly
become so accustomed to thinking in
these betting
terms that we are exporting that kind of
mentality to more and more spheres of
life, and I think that's a very bad
thing. It also has to do with
gamifying life. And there's something
very disturbing to me about standing to
earn money from a bet
in which, if you win, somebody is going to
suffer greatly, like in the case of
a war or something like that.
You might say, well, you know,
prediction markets aren't that different
from the stock market, right? It's also a
kind of bet. But in the stock market, when
you invest in a company, you're actually
contributing capital to that company, in
a way that is an important contribution
to society. Whereas the prediction
market is just a bet. And the only value
they might have is if they're accurate;
but accurate at what price, and accurate
in what sense, and accurate when? There's
a lot of noise. And even
if in one instance you might say, "Oh,
in this case, the prediction markets
were more accurate," well, what does
that mean? What does that really mean?
And it doesn't nullify all of the other
problems. We don't want to gamify
everything. But I think maybe
another reason why they're so popular is
that there is this general sense
that we're living through times of high
uncertainty. And that, you know,
leads people to be anxious. I can feel
it as well. But I would like to invite
people, when they feel that anxiety about
uncertainty, to realize that uncertainty
is actually good news, because it means
that the future is not written, and that
means that we can intercede in it, that
we can influence it, and that is great
news. If you knew exactly what was going
to happen tomorrow, you'd probably be living
in a police state.
>> Yeah. But even if there's a
prediction market out there, you could
probably also intercede. I don't think
you have to give up. Same thing
with political polls, right? You
could say the same thing about political
polls as about prediction markets,
that they become self-fulfilling
prophecies, because they do in many
cases.
>> Yeah. And why do we do political polls?
In a way, we do it for entertainment, and
is that good enough? Because I'm not
sure it is. Another reason might be
to be well
informed: you might want to be well
informed because, depending on how things
are going, you might vote one way or
another, right? Tactical voting. But I'm
not sure we should incentivize people to
be tactical voters. The ideal democracy,
I think, is one in which people vote
according to their conscience. That
says more about what they want, and
it is more democratic, it seems to me,
than when we push people to think
tactically.
>> Yeah. I don't think I'm going to stand
on the table for political polls.
>> Okay.
>> They kind of annoy me also.
>> Fair.
>> All right. Let's end with
this. You have a perspective
that
you've got to use comedy in this era,
and that it's sort of a counterweight to
some of these ills that you see. Talk a
little bit more about that.
>> Yeah, it's really funny, because my first
book, Privacy Is Power, is
kind of gloomy in a way, because at the
time everybody was so excited about tech
and not seeing surveillance, and I felt
that we needed a warning.
But now it seems like
we're in such a gloomy space, in which so
many people are making horrible
predictions about the future. And I talk
with my students, and sometimes I don't
know if young people can even imagine a
bright world; and if they can't even
imagine it, how are we going to get to
that kind of bright future? So I wanted
to emphasize the good things that we
have. And two very good things that we
have, very important resources
and tools, are, first, the analog world.
Sometimes we forget about it. We are so
dazzled by the digital that we forget
the world of things: your favorite
coffee shop and your favorite bar, and
the people you love, and your dog, and
the ecological world, and
trees and rivers. We should ground ourselves
there and cherish and protect that.
But the second thing is humor. And humor
is quite important, not only because it's
a way to make life more fun and get
through the hardest parts of life
better, but also because it's a very important
tool in the toolkit of democracy. When
you lose your sense of humor,
you're probably also losing some amount
of freedom and democracy. For
example, Milan Kundera, the novelist,
wrote a novel called The Joke, making
exactly this point, given his
experience with communism.
And so one way to
confront all these gloomy predictions
is, first, noticing that they're
predictions. Predictions are not facts;
they can be defied. And thinking, okay, is
that the future I want? And if not, what
am I going to do to create the future
that I want to live in? But secondly, to
treat it with a little less
seriousness. I'm not saying be mean
or anything, but just laugh a
little bit about the absurdity of life.
>> And humor is also a kind of
intelligence. It's a kind of noticing
the absurd and noticing what's off. And
one example I give in the book is that
of Seinfeld, because it's also
about defying predictions. When
something's funny, it surprises you in a
certain way. Part of what makes a joke
funny is that you're expecting something
and then you get something else, and that
makes it funny. And Seinfeld was
brilliant at this.
And the show is a very interesting case,
because it's exactly the opposite of
what an algorithm would select. The
show was incredibly unsuccessful as a
pilot. Focus groups thought that it was
weak, and people didn't like it. It
wasn't what people wanted to watch. So
if we had had algorithms back then
selecting what people want to watch,
Seinfeld would not have been one of
those cases. But there was one executive
at NBC who really believed in the show
and championed it. And the first few
seasons
were only modestly successful; it had a
niche following, but it was still small.
And then it took off. And part of why it
took off is that it changed people's
sensibilities. It changed our sense of
humor. And that's part of what great
comedy or great art or great literature
does to us. It makes us look at the
world differently. And when we use
prediction too heavily, when we only
predict what's going to be successful
based on what has been successful in the
past, we miss out on the
innovations that would make us look at
the world anew.
>> Yeah. And to your point, the one thing
that LLMs do worst is humor. They
cannot make jokes.
>> Yeah.
>> And it's because, I think, as you
point out, they're just giving the
average of averages and not throwing
curveballs.
>> Exactly. And also because there's no one
there. There's no one being irreverent
toward power. And part of comedy is
that; it's like the court jester. What
makes it so funny is that, you know, they
are challenging the king in a way.
>> And they are the king.
>> And they are the king. Yeah.
>> The book is Prophecy: Prediction
Power and the Fight for the Future: From
Ancient Oracles to AI. Carissa Véliz, so
great to have you. Thank you for coming
on the show. This was fun.
>> Thank you so much for having me, Alex.
It was great.
>> Awesome. All right, everybody. Thank you
so much for listening and watching.
We'll see you next time on Big
Technology Podcast.