Silicon Valley's Effective Altruist vs. Accelerationist Religious War

Channel: Alex Kantrowitz

Published at: 2023-12-04

YouTube video id: AIDMnH3jKi0

Source: https://www.youtube.com/watch?v=AIDMnH3jKi0

Let's dig into Silicon Valley's religious war between the effective altruists and the accelerationists, in a deep discussion with a fantastic group of guests. That's coming up right after this.

Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. This week we're going to get into the EA and accelerationist war that may or may not have been at the center of the OpenAI debacle, but is certainly starting to play a bigger role in Silicon Valley, and we need to get into it. We have an unbelievable group of guests joining us today, all of whom have written about this. First, returning, actually in short order after her first appearance, and a crowd favorite: Molly White is here. She is the author of Molly White's newsletter, which you can find at newsletter.mollywhite.net, and she's also a crypto researcher and critic. Molly, welcome back to the show.

Thanks for having me back.

Great to have you here. We also have Ryan Broderick. He's the author of Garbage Day on Substack, a great newsletter that looks at internet culture. I would say you're probably the most immersed reporter on internet culture working today. Ryan, welcome.

Thank you, thank you for having me. And yeah, I'm excited to talk about my new religion that I worship now, so I'm very excited.

I'm excited to hear which one you picked. We also have Deepa Seetharaman, a reporter covering AI at the Wall Street Journal, who just wrote a terrific long story about the role of EA in the explosion at OpenAI. Deepa, welcome.

Thank you, thanks for having me.
Great to have you all here. All right, so let's get right to it. The terms effective altruism and accelerationism have been thrown around a wild amount without most people knowing exactly what they mean, and I think that on this show in particular, getting the definitions and talking about who these organizations are and what they believe in is pretty crucial. So Molly, in your story you did a great job defining exactly what these two groups believe in and who they are. Can you take us quickly through what the divide is between EA and the effective accelerationists? I know that your perspective here, and we'll get into it, is that we shouldn't just be focusing on these two groups, but just for definitional purposes, let's talk about who they are.

Sure. So you could probably spend an hour talking about each one, but especially when it comes to AI, the two groups have become very prominent in terms of their philosophies around AI. So,
briefly, effective altruism is a group of people who believe in doing as much good as possible, with a sort of data-driven, analytical approach. Some of them have very recently started thinking about whether it's really the best idea to be focusing on people who are alive today versus the possible thousands, millions, trillions of people who could be alive in the long-term future. And when they start thinking about that, they start thinking about existential risks, things that could threaten those groups of people, and one thing they have been focusing on a lot lately is this idea of artificial intelligence as an existential risk: the idea that if you were to create a superintelligent artificial general intelligence, it could kill all humans, it could pose this enormous risk to people. So that's part of the effective altruist concern, and it's why they have largely adopted this idea that although we should be developing artificial intelligence, we should be doing so in a slower, more methodical way that tries to account for these risks and tries to create an AI that is aligned with humanity, for whatever that means.

The effective accelerationists, on the other hand, believe that we should just go all out, no brakes, and develop as quickly as we can without any real regard for the risks, and that it will all just sort of work itself out, because of a philosophy that they root in the thermodynamic bias of the universe: the universe is not biased toward destroying itself, and so therefore everything will be fine. Those are, in general, the competing ideologies that have recently been at the forefront of the AI debate, although they are hardly the only groups involved.
Ryan, can you talk a little bit about the formation of these groups, or how they manifest online? I understand there are big message board communities that basically put forth these ideologies and debate them, and it's pretty interesting the way they organize and build online.

Yeah, I've tried very hard not to have to explain any of this for many years now, because it's just so tedious and embarrassing for everyone involved. Sorry, here we go. Basically, in the mid-2000s there was the rise of a very niche but influential message board and blog called LessWrong. LessWrong is kind of like the thinking man's 4chan, I guess, with a real preoccupation with the emergence of AI. The possibly apocryphal story about LessWrong that I think is very funny, and that I try to spread as much as possible, is that they became obsessed with the idea of an AI arising in the future that would kill anyone who didn't help it to be born. They sort of created this version of cyber hell that they became so obsessed with that discussion of it had to be banned from the board for a while. Good message board drama stuff; I highly recommend going back and checking out some of the conversations. But LessWrong was the birthplace of several digital philosophies, online philosophies. The big one is obviously effective altruism, but it was also the birthplace of the very beginnings of neoreaction, which was the initial philosophy that Steve Bannon subscribed to. A lot of techno-feudal thought was born on LessWrong: this idea of using automation to replace democracy, replace countries, turn them into machine-operated city-states. And you'll see sly references to this idea still kicking around Silicon Valley, because it's a very seductive idea.
Effective accelerationism didn't so much start on LessWrong. I sort of compare it to the way you had the alt-right, who then became embarrassing to Gen Z, who then created the groypers. Effective accelerationism is sort of the same idea: the effective altruists, the LessWrong guys, they're doomer boomer cringe dudes, so we've got to create this new idea to galvanize younger people around. That's sort of how I view the breakdown.

But isn't it more than just a different iteration of the same ideology? Especially regarding AI, you have one group that's very much into slowing down AI progress, worried about that risk, and then you have another group that's like, no, put the gas pedal down as hard as you can. Or am I not seeing it right?
I have not found much difference in what these two groups want, only the time frame in which they want it to happen. In my opinion, I have not seen enough difference between the two. In fact, effective accelerationism was more or less a meme until last week; it wasn't really even a thing. It was sort of just a way to make older Silicon Valley guys look cringe, or more cringe. It's now becoming a little more nuanced and a little more coherent, but it mirrors almost exactly the splintering of the far right post-Trump, but for AI, essentially.

Yeah, but you had Mark Andreessen, right? Whether someone labeled him an effective accelerationist or not, he's sort of carrying the banner of this group, and what he's saying is holding weight within Silicon Valley: don't stop, and basically continue to build as fast as you can, and don't worry about regulation or safety or anything like that. So that is a divide, though.

It is definitely a divide. I am curious whether Mark Andreessen is thinking this through in a philosophical sense or if he's just latching onto it because it's exciting.

Well, I was just going to say, when Mark did his Techno-Optimist Manifesto, when was that, a couple months ago, didn't he say that any type of deceleration of AI would cost lives, and so if you're slowing down AI development you're basically a murderer?

He did say that, yes.

Well, I mean, there's also a real belief there. Effective altruists definitely think that AGI or AI is going to usher in this new, potentially golden era, and so do the effective accelerationists. So I do agree that it's a difference in time scale, but where the EAs are like, well, we need to get everything perfect and right, however we define right and perfect, first, the accelerationists are like, now, now, now, and there's a moral imperative to go now, now, now.

So Deepa,
can you talk about how this practically split within the boardrooms and executive offices of some of the bigger tech companies? Especially OpenAI, but this has also played out with Anthropic, which is effectively a split away from OpenAI. Talk about how these ideologies actually manifested in different companies and different paces of innovation.

I think it's really interesting. It's basically a force that shapes everything: either you are pro-EA or you're anti-EA or you're mixed on EA, but no one I've met has been neutral on EA or just doesn't think about it that much. It's a force whether you like it or not, and I think the way it has coursed through these companies is pretty interesting. I mean, OpenAI
had a lot of EA-aligned researchers and founders at the very beginning, and a lot of the early employees were EA-aligned. Dario Amodei, who's now the CEO of Anthropic, was running research at OpenAI for a long time, and he is very EA-aligned. For this story, by the way, we asked all the companies whether they agreed with the characterization of them as EA, and all of the companies told us they are not EA-aligned.

Well, sure, they're not going to say they are now.

I mean, goodness. Just a little data point. But the way they're perceived is definitely EA, right? And so at OpenAI you have two different cultures forming. One is more EA-adjacent: worrying about existential risk, worrying about an era where humans are treated by machines the way humans currently treat animals, which is something Ilya Sutskever has said internally to various employees. Then you also have this very practical side: a lot of people who came from Facebook, trust and safety type alums, who think, technology at scale can do really crazy things, let's try to figure out how those principles might apply to generative AI. Those two cultures theoretically don't need to be in conflict, but they are at times, especially with the resource disparity. So if you
look at OpenAI, they had, and have, this team dedicated to superalignment, which asks: how do we align superintelligent AI systems that, as far as I know, don't really exist yet, and what should that alignment look like? One of the projects we reported on was an effort to create an AI scientist, not a human scientist, an AI scientist that would study alignment. They have all this really interesting work going on; Ilya Sutskever is part of that team, so it has a lot of internal cachet. And then they have these more, I don't know, hand-to-hand combat trust and safety folks who are working on things like the next election. OpenAI a couple months ago hired somebody on their policy team focused on the 2024 election. So it's on two tracks: we have the superalignment team building safety systems for systems that don't exist yet, based on values they haven't defined, but we're also planning for the election, for which we've hired just one person so far, an election a lot of us have been calling the AI election for a while because we're worried about the impact AI could have on it. A lot of the people and sources we talked to saw that resource disparity as unnecessary; why can't we do both, was a question I heard a lot. But it's also reflective of EA-style principles: even if people at the company don't think they're EA, they're influenced by EA such that the long-term stuff gets more resources than the short-term stuff. That's just one example of how it cuts across.

Yeah, from your story:
"Frustrated employees said attention to AGI and alignment has left fewer resources to solve more immediate issues such as developer abuse, fraud, and nefarious AI uses that could affect the 2024 election. They say the resource disparity reflects the influence of EA." So Molly, I'm going to turn it to you. I'm kind of curious about some of these things EA is thinking about, whether that's the risk from AI or how to be maximally effective. It seems like an interesting way to try to solve problems. I'm curious whether you think there's merit to their ideas, and what you think the impact is as they get applied within tech companies.

Yeah, so the thing about
effective altruism is that on the surface it seems very reasonable. Most people are altruistic to some degree, and you want your altruism to be effective; that is hard to argue with. But you end up in this rabbit hole around trying to define exactly what "effective" means, and there's all this philosophizing behind it. It's very heavily tied to rationalist, utilitarian schools of thought, where you're trying to develop these impartial equations around what the most effective use of your money and your time is. And people take that in very different directions. There are people within the EA movement who donate massive portions of their income to pandemic prevention or malaria bed nets in Africa, things that are probably doing a lot of good; in fact, the EA community has done a lot of good in some charitable movements. So it's really hard to criticize that. But then you also have this side of it that has spun out into this almost extremist "we must focus on existential risk over any other problem, including very serious problems that are affecting humans today; we must value the lives of future humans above the lives of humans today." So it's one of those things where it's really hard to criticize the movement, because you'll often get people who will say, look at all the good things. The problem is that when you take effective altruism and you put it into practice, you end up with a vast milieu of implementations, some of which are enormously problematic. So it's hard to criticize EA on the face of it, because you could be talking about ten different things, but in the same vein, you should not necessarily be looking at EA and going, oh, it's just effective philanthropy, that's great, because it is much, much more than that.

I mean, Molly, you've been studying these
organizations, some organizations where people inherently tied to EA rise very high or sometimes run them. Sam Bankman-Fried, who we've talked about on this show, was tied to EA. We also have folks extremely high up within OpenAI and Anthropic tied to EA. How has it been so effective at turning out leaders in the tech world who believe in this stuff?

Well, I
think it goes in both directions to some extent. Some people were, to some extent, shaped by EA: Sam Bankman-Fried encountered it very early in his life and, at least by his telling, made decisions about his life based on the philosophy. Whereas there are other people who have sort of adopted it after much of their careers. Elon Musk, for example, has somewhat embraced effective altruism, but it doesn't seem like it was really a defining feature, more something he learned about and said, oh yeah, that sounds about right. And I think the same is true of Mark Andreessen and effective accelerationism, for that matter: he was not thinking about effective accelerationism, which was not even a thing back when he was developing web browsers; this is something he is adopting now, post hoc. I argued this in an essay that I wrote, but I think effective accelerationism and, to an extent, effective altruism are really just a rebranding of a lot of the same philosophies that have existed in Silicon Valley for a long time, where people really want to feel good about themselves, like they're pursuing a higher cause, not just the pursuit of massive wealth. These philosophies are very convenient because they allow people to define themselves as hero figures who are accumulating wealth, but not because they're greedy, not for their own purposes; it's for a higher cause. In the case of the altruists, it's because they're going to donate it all at some point in the future, maybe. In the case of the accelerationists, they're creating and accumulating all this wealth in pursuit of a technological goal that is completely beyond criticism, because anything other than that is akin to murder. So I think it's a philosophy of convenience in a lot of cases.

Also, I just want to
step in before we get too far away: they're not opposites in the sense that the EAs are communists or something. They're both pretty aligned on the idea of making as much money as humanly possible, they're both supported by a lot of anti-democratic thinkers, and they're both sort of center-right to maybe just full-on right-wing. They're not total opposites; most of the difference is aesthetic, I would argue, but then also, yeah, resources internally. But they're not opposed to each other. One is not a workers' party.

Right. And I would add something that I wrote about, which is that recently we've been seeing this framing of the AI debate as being between the effective altruists and the effective accelerationists, as though those are the two sides of the debate, when in reality that is a very small portion, and a very extreme and very loud portion, of the debate. There is this vast history of people who have been studying artificial intelligence, machine learning, ethics, all these different things, for decades, who are like, hello, we have a lot to say here. We are not the shiny, memey effective accelerationists, but we have been thinking about this.
Right, let's bring in Deepa here, because the headline of your piece did paint a different picture.

Yeah, well, I think effective altruism is so polarizing that it does split the industry. One group that has also served as a foil for the EA people is what's called the AI ethics group: the people who think we should probably focus on current problems, things like bias and misinformation and all kinds of things. Those are the people who are actually getting squeezed at OpenAI. A lot of the trust and safety folks I spoke to felt like they were more in that bucket, or at least that the OpenAI team that did trust and safety was more in that bucket, and they were pushing really hard against this movement, but it just didn't have the same clout. And then you also see this other group, the effective accelerationists, who again think it's murder to slow down AI development. But the AI ethics people are like, these are not our people either. So they're just kind of stuck in the middle, trying to find ways to minimize harm, and not a lot of people are listening to them at all the companies.

But how did this factor
in the Altman episode? Because the narrative is that he did get pushed out by the EA group, and he was more aligned with the acceleration folks. In fact, I think this is from your story: he called effective altruism an incredibly flawed movement with very weird emergent behavior.

Good take.

I think it played out in a couple of ways. First, it's probably important to note that the board, in their statement, as lacking in detail as it was, said that they didn't fire Sam for anything but a lack of candor. They didn't fire him because he wasn't AI-safety enough. But our understanding is that there were a lot of AI safety debates; there was constant tension about the pull to commercialize versus the pull to slow down and think about the long-term risks, while the trust and safety people are going, look at me, trying to get everyone's attention and not really succeeding either. Sam, I think, got into some disputes with board members about these issues.

And because the board was EA-heavy?

Yeah, well, two of the board members were associated with EA organizations: Helen Toner, who used to work at Open Philanthropy, which is a very EA organization, and Tasha McCauley. And also Ilya, though if you talk to people who know Ilya, he says, I'm not EA, I'm AI safety, which is, I guess, a subgroup of this.
Right. So partly this is a hard conversation, because it's not like being a registered Democrat or Republican; it's a big range. More like religions, right. The names are all very similar and overlapping; AI safety is different from AI ethics, apparently.

It's like types of milk: 2%, whole, soy. They're all kind of different.

Yeah, but they're all milk, right?

They're all very pro-milk, yeah.

If you think everybody's so similar, do you think there was an ideological dispute there? Because it seems like in your piece you thought there was.

Yeah, no, I have no trouble believing that a bunch of nerds read too many blog posts and destroyed a company over it, and that it's having ripple effects across Silicon Valley. I am extremely dismissive of this stuff, and I think it's very silly, actually, but I do think that the people who believe it take it very seriously. I sort of view AI as a bottleneck, philosophically, for most of these groups. Really, EA people should be super into vaccinations, preparing for the COVID of 2024 or whatever, but they've determined that AI is the ultimate risk to humanity, because it's exciting and interesting and it makes for a good post. And that is sort of my take across the board with these groups: they all require the buy-in that AI is important and not just autocorrect.

Right. Okay, so Deepa, sorry,
you were trying to take us through exactly what happened between these two groups.

Right. So Sam would have conversations with different people at the organization, and there were a lot of debates inside OpenAI about AI safety and where the line is and how to navigate it. But it wasn't so much his stance that bothered the board as the response: how he treated the board and how he engaged in these conversations. Again, the board hasn't provided a ton of detail here, so it's very easy to wildly speculate, but I think it's safe to say that they felt he was lying, or that he was misleading them in the way he responded to different things. A lot of this is still very up in the air. I'm working hard to try to get examples, but it has been a little bit frustrating, because if the board is going to say, hey, Sam lacked candor in these conversations, then what happened? Give me a specific example of how he responded, and what conversations around AI safety specifically made him respond in a way that made the board take the extraordinary step of firing him. There are still a lot of open questions there.

Yeah. And right now what we're kind of seeing is people using the framing that this was a battle between EA and the accelerationists. People are saying the accelerationists won and EA has lost, and EA has taken a couple of setbacks here with the OpenAI situation and the Sam Bankman-Fried situation. So Molly, how would you say they are responding, and what do you think this means for that movement?

Well, I think it's kind
of the same way that crypto responded to the SBF collapse, where they can just say, oh, that wasn't us. There's a lot of reframing and redefining of terms when things seem to go poorly for a movement. I wrote this piece about effective altruism and effective accelerationism, and the vast response I've received from people who define themselves as effective altruists is, oh, that's a totally different group of people, that's not what EA is all about: trying to sort of downplay what has happened recently, or doing the no-true-Scotsman thing of, those aren't real effective altruists. But like I said earlier, I think it's really hard to cast such a broad net around a movement that has a very broad range of people in it. I do think you're right, though, that the takeaway a lot of people got was that the effective accelerationists have won and the effective altruists have not, and that clearly, because of this, the winning philosophy is just to develop AI with no brakes and go completely pedal to the metal on it. Which I think is concerning, especially because it removes a whole important part of that conversation. Not only are there more people than effective altruists and effective accelerationists at OpenAI having these conversations; there's a lot more to AI than OpenAI. And so the idea that the effective accelerationists will now be dictating the future of this entire field risks becoming a self-fulfilling prophecy, where we give these people far more attention, and far more weight to their opinions, than probably ought to happen.

Molly, you've
written about this, but one of the things we've also seen is that both groups believe that AGI, or really scary doom-type AI applications, are right around the corner. If you're looking at the technology, you're like, what? Even this big Q* revelation that came out over the weekend about OpenAI seems to be not a complete nothing burger, but not the ten-alarm fire people are making it out to be. So why do they believe this?

It speaks a little bit to Ryan's point that there isn't actually that much of a difference between these two groups. They do diverge on a very important point, but they both have this mythology in their heads about a godlike AI being that is just around the corner. And I think it's important to look at the historical record when it comes to that: we have seen predictions of gods being created since, I mean, about as long as we have a historical record.

Molly,
Bitcoin could go to a million, so hyperbitcoinization could happen. AGI and Bitcoin are going to the moon.

Yeah, but we heard this with the "Sparks of Artificial General Intelligence" paper. We've seen this for years and years in the computing sector, and far longer than that on the religious side of things. So it's one of those things where people get very tied up in the mythology and the religion behind it, and they don't necessarily pay all that much attention to the technology. And people begin to really change their own definitions of what artificial general intelligence means, what sentience means, what humanity means, in order to make these things seem more plausible or more true than they really are.

I also don't think it's an accident that we're hearing about all of this stuff right as every major company has rolled out an AI widget for their service and it's sort of been a flop. We're in an extremely low, kind of boring moment for AI in between releases; the hype is dying down, as it does every time we get a new version of these tools, and now all of a sudden there are two competing doom cults around this idea. That doesn't seem totally like an accident to me.

Yeah,
Deepa, on a practical level, how is this playing out in corporate boards and within AI companies? Is it going to continue to spiral?

One thing that I've now heard from a few different people is that the reports about a letter being sent to the board about Q* are not true, and that it didn't precipitate the board's decision. Part of the thing is that there's a vacuum of information about what actually propelled the board to do whatever it did, so part of what we're describing is the debate and speculation happening in its absence.

Right. But more broadly in Silicon Valley, how is this going to evolve? Not AI itself, the EA versus accelerationist divide.
I was talking to somebody about this
last week who said that they felt that
the open AI Saga and all the speculation
about EA um was more damaging for EA
than S bankman freed because with SPF
you can look at him and be like that's
one guy and this is like a movement this
like a lot of people acting in unison
apparently irrationally right like that
is the the the attitude and that's the
Fe that's the the vibe right now and I
think there're a lot of Truth to that I
think that you know EA people already
are viewed as a really insular really
kind of Clubby organization that they
only ever site EA Papers written by
other EA people and they only ever want
to work with EA people this isn't
obviously everybody this is stereotype
but that I wonder if the double hit to
their integrity and their image is going
to force like a further contraction or
where they just s of like like go deeper
but the reality is also that over the
last year this movement has had a lot of
influence not just on a practical level
like on the ground at AI companies even
though that's significant like we
haven't talked about hiring yet and a
lot of EA organizations start on student
campuses and that's the pipeline if you
want to hire like an AI researcher
at this point you're gonna hire from a
serious University like Berkeley or
Stanford or Oxford and a lot of them are
in this movement so there's a gigantic
overlap so you're probably going to hire
a bunch of EA people and you're going to
have to make them happy because they
could leave at any time right
there's huge competition for people who can
actually build these
systems so there's that but then you
know so they'll still have a lot of pull
just from the hiring perspective they
also have a lot of pull on a policy side
I mean if you look at some of the
comments from the EU about AI risks they
talk about existential risk like Sunak's
entire AI safety summit a lot of
the speakers were EA
aligned or EA um the White House even
like you know Dario Amodei and like
other people who are broadly viewed as
EA people they went to the White House
and talked about their views like they
have a seat in the room so I don't think
it's just going to like go away at all
because they've already been there and
and they're very serious and they keep
you know they're in the room they can
they continue to hold some kind of
influence right I got to ask you what's
going to happen to anthropic I mean
anthropic was seen as this like
counterweight to open AI but run by you
know a lot of I mean I know they're like
quote unquote not associated with EA but
they have big EA influence there they
left open AI because of safety you have
Amazon that just invested 1.25 billion
in that company um Google also invested
billion like more than a billion um are
we going to see the same stuff happen in
anthropic as we did within open AI I
mean what's the future of that company
now because it's even more closely tied
to this movement yeah well it's also as
as I understand it a little bit more uh
homogeneous like there are a lot more EA
people there right there's a culture fit
test like when you get hired uh where
you're
asked that as far as I understand I
don't have like the questions in front
of me but a lot of the
questions sound like they're basically
trying to test whether or not you'd be
EA enough to be at anthropic that's how
it's been described to me
um good good Lord and they you know they
have a philosopher on staff who really
everybody does that right but it's Amanda
Askell so she's the former wife I think
of uh Will MacAskill who is right the
founder of EA so there's like a big
tight connection there
uh but you know will
they be split up by it probably not just
because they are all coming from the
starting point that EA is good it's just
maybe the execution is a problem or we
are misunderstood and it's a public
issue some of this stuff kind of
reminds me a little of like what it was
like covering Facebook if you remember
Alex like a lot of there are a lot of
people internally at Facebook that were
like well no Facebook's really good we
just need to get better at managing the
bad stuff and like making sure people
can see how good we are there's
definitely that population inside the
company and it it gives me the same Echo
right so it seems like we're we're
largely EA skeptical in this
conversation um I first of all if
someone has a Counterpoint they want to
make um my email is alex@bigtechnology.com
so hit me up there I'm happy to listen
to it maybe we can uh bring you on but
also like I'm curious just from the
group here like what would someone who's
steeped in EA or effective
accelerationism say is like uh you know
is there any like rebuttal to
some of these points that we're making
that we should be
considering I can think of like a
hundred but yeah I mean
Molly's point was correct
which is that like if you really
talk to a person who believes in EA like
they sound very reasonable and
most of the things they're going to
point to are not Sam Bankman-Fried
committing financial crimes right like
it's mainly just that when you
start to take that long-term view uh
things start to become normalized that
possibly wouldn't be uh if you weren't
looking so far in the future that's why
a lot of EA guys end up showing their
ass on Twitter like posting weird stuff
about race science and like demographic
shifts and stuff and with the
accelerationists like I mean I
hesitate even calling it a philosophy I
think it's still very much just a
meme like it's the Dogecoin to EA's
Bitcoin right like it's not really
anything yet it could be and in fact
like I think that if there's really any
Legacy from the last week of drama
inside of open AI it's that the
accelerationists all found each other
and like now know how to talk to each
other and now we'll start to see sort of
like those memes becoming more serious
and becoming like you know possibly a
real Counterpoint as opposed to a bunch
of different factions wanting different
things you do like with their bios like
Gary Tan the head of Y Combinator
like right in his bio he calls himself
an effective
accelerationist yeah yeah I think you
know I think a victory of the past weeks
week or two weeks or so is just the
attention that uh has been given to the
effective
accelerationism idea again I agree it's
not really a philosophy I wouldn't even
really call it a movement I I think it
is mostly a rebrand of you know move
fast and break things which has been
Silicon Valley's you know mantra since early
Facebook um but with the added sort of
religious almost uh philosophy behind it
and the the very effective altruist uh
style of speaking and writing in this
very sort of esoteric and long- winded
way um it's I think it's
uh it's it's probably it's something
that I don't give a lot of weight in
terms of its own sort of um Foundation
the the quality I guess of the ideas
behind it but I do think that the
effectiveness of its proponents in terms
of spreading it uh sort of meme-ifying it
making it appealing to especially
younger people who are just getting into
Tech and trying to find a way to think
about their place within this sort of
huge capitalist structure uh is
something that probably should be taken
seriously um as well as the tendency of
some of these so-called movements you
know effective altruism being one
of them to almost radicalize the people
within them as they normalize these sort
of thought exercises that can like
lead down a very concerning path Ryan
I'm kind of curious like you read the
message boards I mean don't message
boards and online forums have
a tendency to radicalize and also like
make the most intense ideas rise to the
top is that is this just kind of like a
classic case of that I've never heard of
that ever happening before what are you
what could you possibly be referring to
yeah I mean no it does I mean that's
that's definitely the case um I mean we
didn't even really go further into this
because it's so dense but the reason I I
was even writing about effective
accelerationism this week is because I
got sent a tip about a group that was
causing trouble on TikTok for my
readers and when I started poking around
it turned out that they were a group I
had written about previously of crypto
accelerationists that had rebranded into
Ai accelerationists and were part of the
New York downtown scene trying to get
Peter Thiel's money like these are the
same people that have been kicking
around for five years trying to bring
about some kind of white nationalist
apocalypse via Automation and they went
from cryptocurrency to Ai and it it's
the same people doing the same stuff
sharing the same memes that they've been
sharing since
2017 um and I think when you talk about
the radicalization of these message
boards of these online communities that
are talking about these philosophies
what it is is that you don't know who
you're talking to and you don't know
what their goals are or where their
funding is coming from or what they're
influenced by so you know it's very
common to be reading like a Twitter
thread and all of a sudden like if you
don't know I mean Molly or Deepa might be
able to do this but I don't think the
normal person can be like oh yeah there's
the EA person there's the rationalist
there's the neoreactionary there's
like the techno-feudalist Mencius Moldbug
fan or whatever like normal people don't
have the time to care about this [ __ ]
and unfortunately like people in Silicon
Valley really care about it and so it's
it's very thorny um and we don't really
know where this stuff is traveling or
how it's mutating until it's in front of
our faces and one of the things that I
wrote down U before we started is why do
we need these movements like why can't
we just have folks that are building
cool things and you know saying they're
trying to build cool stuff like chat GPT
is a cool product like do you you don't
really need a um like a religious I do
think there was like a Schism around
clippy in the 90s there were the pro
clipp ists and the anti- clipp ists and
this was very similar I think they
exactly but there were always been there
have been movements around technology
like who should technology be for what
should it look like and this is kind of
what we had asked of Facebook right like
why didn't you guys think about the
harms beforehand connecting people is de facto good
right but maybe it's not and
we ask that of the tech companies and so
I think the defense that you'd hear from
EA groups is like look like these are
real problems and we're going to focus
on the problems it's just got to be a
debate about like what
problems are most you know valuable to
focus on yes I mean in retrospect if
uh AI does Wipe us all out like next
week this podcast is going to look dumb
so but I yeah put me out of work give me
some UBI but I do think that like you're
right that this stuff goes all the way
back I mean the Protestant Reformation
one version of it is to blame it on the
accessibility of literature and the
printing press and the ability to read
the Bible and go wait a minute I don't
agree with this right so like every
technological Revolution if we're really
in one right now does typically involve
some sort of political social religious
upheaval right I just think the tech
guys are so excited about the idea that
they're in a revolution that they may
have like invented a religion before
they've been able to prove if we're in
one or not yeah it's interesting and I
think there's also this this um sort of
urge among some of the wealthier people
in Silicon Valley to try to ascribe a
greater meaning to the work that they do
and these types of philosophies are very
appealing to those people you know Mark
Andreessen being one of the most prominent
who wrote this he literally called it a
manifesto um you know I think I think
that that just sort of exposes that you
know he is sort of an embarrassed
billionaire to some who wants to Define
his legacy as more than just amassing
wealth off of venture capital um and so
by adopting this religion and becoming
one of its uh preachers you know I think
that's a way to do that for some of
these people I think that's why you see
people in those positions become drawn
to this and I also think they realize
that it is a very effective way to
spread their own influence by becoming
the you know leaders of these sort of
movements they realize that you know
they can preach to a very willing
audience who might not otherwise listen
if they're just saying you know here's
this new tech thing that I'm interested
in don't you think it's cool so just
stitching everything together my
takeaway here is that we've really just
seen kind of round one that like the
story doesn't end with open AI it almost
begins here with the accelerationist
starting to find each other in EA you
know maybe reforming but not going
away and we'll see I mean hopefully
there's room for others right that's
that's the real question I I actually to
your point I will say I think that the
philosophies that we've seen sort of
emerge and the rebranding of these
philosophies that we've seen happen over
the last month or so have a good chance
of outliving the generative AI fad if
we're in one like we may lose interest
in Midjourney next week but I think
these people are going to be around and
sort of thinking about this stuff in
this way for a lot longer I think that's
very true I mean if you look at crypto a
lot of those people have moved into
effective accelerationism without even
blinking and you know crypto was
the last interesting thing and
now it's artificial intelligence or AGI
or whatever they want to focus on but
you know they they're continuing the
same beliefs and and behaviors with just
whatever is in front of them great all
right so let's uh let's round this one
out I just want to give everybody um
who's listening a chance to um find our
great guests online so Molly White uh oh
it's called Citation Needed now okay I
remember you rebranded it
newsletter.mollywhite.net
and then Ryan Broderick's uh Garbage Day
garbageday.email
and you can find Deepa's work on
wsj.com
okay well thank you Molly Deepa and Ryan
this has been a great conversation
really appreciate you coming on and
helping us break this down and I also
like really love that you know you
didn't take it at face value that this is a
one versus one but kind of broadened it
out and talked about what we're really
looking at here which is were we
supposed to fight each other oh no no I
see the philosophy yeah
exactly so thank you so much and we will
one versus one each other at a different
point I'm sure for sure yeah we will
host it we'll host it um yeah so thank
you so much thank you to our guests
thank you to our listeners um and thank
you to everyone who helps make this
podcast possible but we'll be back on
Friday with another Show Breaking Down
the week's news and continue with our
coverage of these religions of open AI
of the AI field every week Wednesdays
and Fridays all right we'll see you next
time thank you for listening and uh have
a good one we'll see you on Friday on
big technology
podcast