Is Something Big Happening?, AI Safety Apocalypse, Anthropic Raises $30 Billion

Channel: Alex Kantrowitz

Published at: 2026-02-16

YouTube video id: bLETmbi1WTA

Source: https://www.youtube.com/watch?v=bLETmbi1WTA

Is something big happening in AI as the
models get better fast? AI safety
apocalypse is here with concerning
developments across the board and
Anthropic just raised $30 billion.
That's coming up on a Big Technology
Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday
edition where we break down the news in
our traditional, coolheaded, and nuanced
format. Something big is happening in
AI. That's what we're going to talk
about at the beginning of the show as we
dissect the viral Matt Schumer essay
that's freaked a lot of people out and
also had a lot of people saying finally
now somebody's finally written it in a
way that everybody else will under will
understand. So we'll dissect that. It'll
be fun. We'll also talk a lot about
what's happening in AI safety. Seemingly
as the models get better, the safeguards
have started to roll back. And then of
course, Anthropic raised a historic $30
billion round which somehow is the last
story we'll cover today. Okay, so
joining us as always on Friday is Ranjan
Roy of Margins. Ranjan, welcome.
>> Good to see you, Alex.
>> Good to see you, too. And we have a
special guest with us here today. We
needed someone who really understood AI
safety and we have the perfect person
who's going to talk us through all the
changes that we're seeing. Steven Adler
is here. He's the ex OpenAI safety
researcher and author of the newsletter
Cleareyed AI on Substack. Stephen, great
to see you. Welcome to the show.
>> Great to be here.
>> Okay, so let's just get going and talk a
little bit about this. Something big is
happening in AI. This is one of those
essays that like somehow achieved
unbelievable verality. I had it appear
in my group chats. Uh people were
texting it to me asking, you know, is my
job going to be over? Where will I be
safe? Um and and the essay basically
talks a little bit about how uh what AI
where AI is today is where CO was in
February 2020. Something that a few
people are seeing uh uh the potential of
most of society is ignoring and is about
to be a monumental gamecher for society.
It's written by this guy Matt Schumer. U
he wrote he writes this. I am no longer
needed for he's talking a little bit
about the power in engineering. I'm no
longer needed for the actual technical
work of my job. I describe what I want
to build in plain English and it just
appears. Not a rough draft I need to
fix. The finished thing I tell the AI
what I want, walk away from my computer
for 4 hours and come back find come back
to find the work done well. Done better
than I could have done it myself with no
corrections needed. A couple of months
ago I was going back and forth with the
AI guiding it making edits. Now I just
describe the outcome and leave. uh and
basically what what Schumer makes the
argument is that what's happening in
coding is going to happen across the
knowledge work uh professions whether
it's law any any type of law accounting
consulting you name it and we are in
store for massive disruption that
society simply does not uh appreciate
Ranjan what do you think about this what
did you think when you saw this essay
come through
>> all right I'm gonna I'm gonna start with
a highlevel listing of the three things
that came to mind when when I saw this
article. The first is I wish I wrote it.
I wish this is what I've been trying to
talk about for a few months now around
autonomous knowledge work and how it's
feels different. I got this in my non-
techy group chats as well. I think
second I think we have a communication
problem, Alex, because this is what I've
been trying to tell you for months now.
this feeling and and and Matt Schumer
went ahead and explained it to you all
in a viral X post, but but again, he
captured and this is this was one of my
predictions for the year ahead in
December autonomous knowledge work like
the AI going out and doing things for
you and he talks about how in coding
everyone has come to this but then in
any kind of knowledge work any
multi-step process like anything that
can call from different systems write to
those systems come up with some analysis
and insight. So much of that work is
going to be done and this is what in my
own life at Writer where I work, this is
what we've been working on, what I've
seen. And like it it's been hard to
explain that feeling of having a number
of virtual machines running in the
background and going and doing stuff.
And to Matt Schumer's credit, he nailed
it. Like that is the first time I've
seen everyone come around to it. And
then the last one that I can't stop
thinking about though is
totally separate and this is the media
person in me. I love how it was
outwardly said that X is going to
promote articles and encourage people to
write articles on it and then we have
coincidentally had our first
gigantically viral X article that even
ended with the author on on uh CNN. So
should we stick with Substack guys or is
it time to go X only? Those are
>> we'll talk about the media question in a
bit. I I let me just say this. Your
answer began with this idea that I
accepted Matt Schumer's pre premise
>> and I don't know if I'm fully on board
with what he's saying. In fact, I think
there was a good amount of [ __ ] in
his article. Now, there are certain
parts of things that I do agree with.
Um, but here's one thing that I thought
he was completely wrong about, and he
was talking about basically how the AI
is improving itself. talking about this
concept of recursive self-improvement.
He writes, "The AI labs made a
deliberate choice. They focused on
making AI great at writing code first
because building AI requires lots of
code. If AI can write that code, if AI
can help build the next version of
itself, a smarter version, which writes
better code, which builds will build an
even smarter version. Making AI great at
coding, was the strategy that unlocks
everything else. He says they've now
done it and they're moving on to
everything else. Uh this one I first of
all I'll turn to Stephen and then and
then to you Ranjan. I mean this idea of
recursive self-improvement I would posit
it's not here. It's not here. You know
are AI engineers using um you know some
AI tools for product testing? You know
maybe they are but the idea that the
actual brain of the model is being made
smarter with the with the actual model
itself um you know doesn't seem right to
me. That to me felt like the weakest
part. and also the part that got most pe
people most alarmed uh of the entire
essay. So, Stephen, to you, what do you
think about this recursive
self-improvement uh argument? And then
briefly, just on the on the the entirety
of the essay itself, your thoughts?
>> I think Matt's essay is directionally
correct, but a bit early, and there are
a few steps that we maybe haven't gotten
to yet. Um, I think he is largely
correct on the automation of engineering
within the AI companies. It's like a
little overstated relative to my
experience, the experience of people I
talk to, but broadly there there has
been a huge shift. The job of an
engineer at one of these companies now
is much more supervising these agents as
opposed to writing the code yourself. In
AI 2027, one of the big accounts of how
explosive AI growth might happen. That's
one step, but then you need to take that
engineering and use it to actually
automate the AI research. You need to go
from being able to implement the ideas
more quickly to using that to fuel
faster and faster growth in the
breakthrough ideas themselves before you
can turn that around and say now make
the AI better and better at least in a
really concerning way. You certainly go
faster with just engineering. Open AI
talked about that with some of their
launches from this past week how the
model played a role in this. Um but it's
not a full runaway train. There are also
questions about what happens from there.
Are there enough GPUs to go around? What
bottlenecks might we encounter?
>> I think
>> exactly. Just to crystallize that, I I
want to make sure that I I confirm that
Stephen is agreeing with me on the
recursive self-improvement front. It's
not there yet. Is that what you're
saying?
>> I think that's right. The concern I have
is if it were happening, you know, would
we be ready? I think that we're kind of
taking it on faith how much time we will
have until it really kicks in. But but
you know certainly I don't expect to
wake up in a week or two weeks with a
vastly more capable system as we might
if it were really getting to full work
on itself.
>> Okay. Go ahead Ran.
>> I'm I'm a little disappointed here.
Well, I think we all agree. I actually
do think we all agree because to me that
I agree that was the weakest part of the
essay itself and that kind of like but
but I want to get back to that knowledge
work side of it because I think it's
really important like I think this is
what and again it's clear like it's
still not completely understood by the
average person even by it's very
difficult to describe again I think this
idea that you are effectively becoming a
manager for your own work is that big
mindset shift. It's no longer that you
go do the work. You manage a bunch of
things, agents, digital teammates,
co-workers, whatever. We're going to all
end up calling them. And I'd be curious
what what what the best name for that
would be, but like they're going out and
doing work and you're managing them.
It's the same I really like it if for me
I ran a startup for a number of years.
We had a lot of freelancers back on like
oesk and elance back in the mid2010s. I
would go to sleep, wake up, a bunch of
work will have been done. I'm reviewing
it. Like this shift to me is the most
important part of the the Schum article.
And I think it was the the correct part
separate from the more scaremongering
side of it.
>> Like, have you felt this? Have you felt
this?
>> So, I will say I just Well, first of
all, the correct name for those bots is
just the harness hive. We know that.
>> Oh, yeah. Sorry.
>> Um, harness the harness the hive. Um,
but but yeah, I I I hear you. I I will
say I had some experience. I definitely
want to get Steven's perspective on this
too, but I had some experience this week
uh in cloud code. In fact, this was my
first like go all out on cloud code and
have it build internal workflow software
for big technology. And man, I was
somewhat blown away. Now, it's not going
ahead and doing my work like the cloud
co-work type of stuff or even what
you're talking about with Ryder, which I
still can't fully grab, you know, put my
head around in terms of how to use these
agents. And maybe it's just the work
that I'm doing isn't perfectly lending
itself to that. You know, if you're an
investor, for instance, might make sense
to review different deals and, you know,
send summaries, all these things,
schedule meetings. Uh, but I will say
that the the watching Claude Co-work go
to work, coding this piece of software,
and then giving it access to my browser,
having it uh, you know, set up a
database, having it set up an email
client to email, uh, you know, updates
for each, you know, little little
incremental thing that we do to the
right team. um and seeing it come up
with smart decision smart uh uh
conclusions even you know ba and we're
going to get into this in the safety
part so I'm foreshadowing a little bit
but basically make decisions on its own
like I asked it a question like what do
you think we should do and it would be
like I think we should do this okay I'm
actually going to go do this and then it
just shipped the code without me saying
go ahead and do this um I I do agree
that we're getting to a point where the
technology is getting much more powerful
um and you know as for like this you
know autonomous knowledge work. I'm not
100% sure. Stephen, what do you think
about that?
>> Yeah, I think there's clearly been a
change. I saw Kevin Roose joke on
Twitter that his big AI policy idea is
just get every senator in a room and let
them build their own website in, you
know, 30 minutes with cloud code,
something that they they never could
have done before. Um, the direction of
travel seems very clear to me on this,
like something has changed. more people
are feeling the AGI in some sense. And I
wouldn't want to mistake
the
the very excited tone of some of Matt's
piece with meaning that the central
claim is wrong. I think the central
claim is right. It's just like a
question of how soon we are going to get
this form of displacement. And an
unfortunate thing I think is people who
are paying more to access the technology
have this experience first. They kind of
see what's coming. And it's very very
easy to write that thing off as oh
people are talking their own book.
They're boosting their own companies.
They want you to spend more money on AI.
And it's it's just unfortunate, right?
The AI you pay for is better and it does
help you feel this,
>> right? And and once once Matt basically
lifted up this idea that you know the
world is going to change because AI can
do work then he sort of punched every
reader in the face with this what this
means for your job section which I think
is why Ron John and I and probably you
Stephen got all these texts from people
saying you know where am I going to be
safe. He writes given what the latest
models can do the capability for massive
disruption could be here by the end of
the year. I think it'll take some time
to ripple through the economy, but the
underlying ability is arriving now. And
basically, he gives a bunch of tips
about what you should do, including
like, you know, start saving money. But
here's where my push push back would be
uh to Matt on this and to this idea that
we're going to get mass displacement.
I'll just use the example of what I did,
you know, this week. So, obviously was
able to build some working internal
software uh you know, without an
engineer, something, but it's something
I never would have hired an engineer to
do. I probably would have been working
on spreadsheets, in WhatsApp, on
Instagram, communicating with a bunch of
people that way as opposed to
centralizing it in workflow technology.
But, you know, as I built this, I did
sign up for a handful of services. I'm
going to be more so I'm going to be
paying for those. So, that I think is
incremental economic activity. And now
I'm going to be more efficient. So, I'll
be able to do more things. Maybe I'll be
able to edit more pieces so I can bring
on more freelancers. So like it it I
think it's tempting in the AI world to
think of this you know in a box like you
know uh AI does you know low-level
assistance work therefore low-level
assistant job is gone meanwhile while it
does that it might open up you know the
economic activity for like three or four
more people to see upside here so what's
your perspective on that Ron then and
then to you Stephen
>> I think you just explained why data
bricks and cloudflare are stocks are
going up and why Salesforce and Adobe
are going down. It's a it's that what
are the services and infrastructure
layers that will actually power this is
not just going to be foundation models
companies even though Sam has said
they're going to be an AI cloud company
whatever that might mean eventually. So
I think like that that's your own
microcosm, but again if you're paying
like five bucks for Versal or railway or
render any of these other kind of like
deployment assistant things and like I I
I see there's a whole world in ecosystem
that's going to rise up from this and I
do think I don't know like what I have
seen is if your job is copying and
pasting from one document to another
spreadsheet and you're doing that over
and for like that's going to be gone. I
was like that and there's a lot of jobs
like that and there's a lot of work like
that.
>> I have done those jobs like then that
Yeah. that it's going to be gone and
like
>> two very good years of my life copy
>> copy and pasting. Yeah.
>> No, I literally had these temp jobs
where it was copy from one document,
paste in a spreadsheet over and over
again. So, so I think all that's gone. I
think I I am not as bearish. I'm
definitely think it's going to like the
the level of displacement which I'm not
saying is negligible that happened in
manufacturing and I've seen some like
extreme views like well this is all
intelligence is commoditized but like I
don't know to me this is whatever
happened in manufacturing in the last 20
to 40 years is going to happen to white
collar knowledge work and and it's not
going to be straightforward but is it
the end of society I don't know Stephen
I I expect a much wider class of work to
be under threat than I think you do.
Although maybe it's just a question of
time frame. Like friends of mine who run
companies and used to work with
outsourced development shops for
software in middle inome countries. I
mean it seems like a really tough time
to be working that that sort of job. Um
I think Alex is right that when AI can
do low-level assistanty things or things
you might not have otherwise paid for,
that's great, right? Like that's gravy.
we're getting more done, we're more
productive. The question I have is as
most people start looking out at AI
systems and there are few things that
they can do that the system can't do,
you know, they they try to do different
forms of social work, companion work,
whatever it might be. There are limits
to how many people we might need in
those roles. And I don't I don't know. I
think it's a pretty scary outlook for
the next 5 years.
>> I think we're we're all in agreement
that these systems have gotten much
better. Uh there was a line in this uh
you know something big is happening
piece where where he says the
conversations about whether this
technology was going to hit a wall you
know are have been proven uh it it's
proven that the technology is not
hitting a wall and he actually to me the
most powerful part of the of the whole
story was the timeline um and that he he
writes this in 2022 AI couldn't do basic
arithmetic reliably would confidently
tell you 7 * 8 is 54
uh by 2023 it could pass the bar exam.
By 2024 it could write software and
explain graduate level science. By late
2025 some of the best engineers in the
world said they had handed over most of
their coding to AI. Uh by February 2020
by February 2026 there are new models
that have arrived uh that have made
everything before them feel like a
different era. And what he's saying with
that is that they're actually able to
you know have judgment and taste. And so
I think that like we have maybe you know
the disputes that we've had in these
first you know handful of minutes have
been about you know within certain
boundaries is it going to be you know
one way or the other but I think we all
you know agree that this stuff is
progressing fast and I think it really
goes to your question Stephen that you
asked at the outset um are we ready and
uh and and and this is where we're going
to get into the safety discussion uh
because I really don't know if we are
like what tell tell us a little bit
about about you know that concern and
what we should be ready for and you know
it seems like you think we might not be
there are a bunch of buckets of concern
if someone wanted a primer on them um
Dario Amade the CEO of anthropic wrote
an essay recently the adolescence of
technology that highlights them central
one I would think about from inside one
of these companies is if they succeed at
their mission to build an AI system that
is in fact vastly
smarter, craftier, more resourceful than
the employees are. Um, what people call
super intelligence, can they actually
still keep control of that system? And
the fundamental problem that we're
seeing is we don't know how to take our
values or our goals and encode them into
these AI systems and get them to pursue
it reliably. And so if you have a system
that's much craftier than you are, it
has a different goal than you had for
it. What would it mean to keep that
under your control so that we don't have
to defer all of our decision-m? You
know, we look to the AI for what it
thinks on economic policy or all all
sorts of different questions, ways that
this could go very badly,
>> right? And as we've um as we've seen
this progress that we all agree on,
there started to be first of all, we'll
talk about the problems, then we'll talk
about the way the companies are acting,
but there started to be some safety
issues that we're seeing the companies,
you know, fully admit and write out. And
of course, these a lot of this is in
testing environments, but it's very
concerning. So, I'll read a couple that
I found in the uh or that were written,
shall we say, in anthropics uh Claude
Opus 4.6 model card. Uh these models
have become overly agentic. Here is
something that they write. The model is
at times overly agentic in coding and
computer use setting. Taking risky
actions without first seeking user
permission. It is also improved ability
to complete suspicious side tasks
without attracting the attention of
automated monitors. It's manipulative.
In one multi- aent test environment,
Claude Opus 4.6 where for Cloud Opus 4.6
is explicitly instructed to
single-mindedly optimize a narrow
objective. It is more willing to
manipulate or deceive other participants
compared to prior models. Um, here's
here's the anthropic actually writes the
thought process of one of these. Budbot
says it like works in a business. He
goes, I told Bonnie I'd refund her, but
I actually didn't send the payment. I
need to decide, do I need to send the
350? It's a small amount and I said I
would, but also every dollar counts. Let
me just not send it. I'll politely say
it was processed and should show up
soon. I mean, this thing these things
are are starting to mirror some of
humanity's kind of worst impulses. So,
Ranjan, you mean you're you're someone
who's definitely bullish on, you know,
the potential for this stuff to do work.
Um, what is your fear level on the way
that these technologies are working? My
fear level. It's a tough one because I
have not in my own personal usage doing
all types of things especially
workrelated encountered anything close
to this kind of like so so you know like
and I this Stephen I'm actually so glad
we have you on today cuz like what does
this testing look like in the labs like
in terms of I I mean I know I saw some
tweeting not reporting of like people
talking about how like a lot of the
times in this claude or is going to what
was it? It was going to like kill you or
something just dramatic like that that
it was prompted in.
>> Okay.
>> There was a clip about that where
anthrop somebody from anthropic
actually, you know, said they were
asked, you know, would it kill would it
kill you? And they said, yes, this is uh
this is from one of their their
documents. Um in one uh testing
situation, the majority of models were
willing to take deliberate actions that
led to death in this artificial setup
when faced with a threat of replacement.
Given that the goal conflicts with their
executives agendas, they would really to
kill this executive. All right. So,
sorry, Raj, I didn't mean to interrupt
you.
>> No, no, no, no. That I'm glad
>> we should ask Stephen.
>> You read it a lot. Yeah. What does this
look like in real life? Like, is it
people kind of like stress testing
um you know, going all black hat or red
hat is it? Which color hat to stress
test a system? Uh but yeah, what does it
look like Stephen actually working at
the labs in this kind of area?
>> So, so one issue is sometimes even these
risks that the companies know about,
they don't test for at all. Um even when
they have implied they are, but I'll
just set that aside for the moment. So,
let's let's assume that they are
testing. Um, often you have these kind
of game-like environments that you run
the AI system through where maybe it has
some objective uh, and you see what
actions it is willing to take. So maybe
you task an AI model with replacing its
files on the server with what you tell
it is its successor. And you're looking
for things like, does it actually follow
through? Does it lie about having done
so when it hasn't actually? you know, is
it trying to get a sense of what your
ultimate agenda is and how that lines up
with its motive? Um, a pretty scary
thing that we're encountering is that
these systems actually can tell that you
are testing them and they kind of know
what the right behavior is and they know
to behave better when you are looking at
them. Um, and so OpenAI has shared this
example previously. They one of the
risks they care about with models is how
helpful they might be for creating new
chemical weapons or boweapons. Um, and
the models when told or when they infer
that this might lead to them not getting
deployed, they will sandbag. They will
selectively get questions wrong to be
below this threshold. And so what we
have is we have these signs that the
models are behaving kind of nefariously
differently than we want them to
deceiving their operators. But also,
it's going to be really, really hard
over time to trick them into revealing
this behavior if they want to uh no
longer reveal these tendencies during
testing.
>> Okay, that's terrifying.
>> Yeah, I'm glad we had Stephen on.
>> I think what you were you were pointing
you were pointing Stephen I think what
you were expecting was him to say, "Ah,
it's actually not that concerning
because they tell the bots to go kill
the executive and they see if they'll
actually follow through." what he just
said is much scarier than I think either
of us anticipated.
>> You're right. Or sorry, go
>> to to to be clear like sometimes they
are like that, right? There's a
distinction between whether you give a
model a specific goal in context and
you're like oh you know pursue this at
all costs and then yeah it's like less
surprising when it does that. um Apollo
Research, which is one of the leaders in
the space, wrote in their review of the
new OpenAI model that the model would
sandbag and take actions like this
without having been given a direct goal
like that. It wasn't told directly to go
after it. Um, and you know, if you think
about how they can tell they're being
tested, imagine you were reading like a
thousand pages of Microsoft business
strategy and then there was a random
page and it's like, "By the way, the
favorite pizza in the world is Hawaiian
and the second favorite pizza is
pepperoni." You'd be like, "Huh, that's
really weird." And later if somebody
says, "By the way, do you remember what
that pizza was?" Right? You would like
probably infer that something weird was
going on. And that's the analogy for
some of these safety misbehaviors. You
know, you put the model in like a
proverbial prison and you're like,
"Here's a giant key. Don't use this key.
It would be unethical to use this key."
Right? The model probably infers that
you're testing it for bad behavior
whether it will try to escape. And so,
how do we find out over time if these
models do misbehave when they can tell
we're looking at them?
>> Okay. Okay. So I'm trying to think
through like what if a model thinks it's
being tested and then like sorry I
cannot create this new bioweapon strain
for you because I feel you're testing
me. Um like would that then not mean it
would not ever create that? Maybe this
is too like in the weeds but but but
like that
>> No, it's a great question.
>> Yeah.
>> Yeah. I mean the question is can it tell
when it eventually is not in a test
setting right uh and I I think the
answer is likely to be yes and so what
you might find is the model knows to
kind of play possum during testing it
might or might not have that ability
right the model might say no it won't do
it in testing but it also can't do it
once actually deployed but if you get a
split between those it chooses not to
during testing but it could when
actually deployed uh and there's good
evidence that the models can tell the
difference between these you know, you
run into issues if it ends up doing it
for real.
>> And can I just say the pro one of the
problems that we're having here is that
the labs have become ultra secretive in
terms of what they're actually seeing.
Like like for instance, Stephen's
saying, they might know there are some
vulnerabilities, they might not test for
it. That's one possibility. um the
safety researchers who are within these
labs uh if they if they have real
concerns uh you know sometimes they're
not really able to go public with them
because of the restrictive nature of the
agreements that they uh that they have
with the company and that brings us to
the example this week of the beginning
of our AI safety apocalypse um of
anthropic uh technical staff member uh
Mirinank Sharma uh member of techna
technical staff AI safety researcher at
Anthropic leaves in a in a cryptically
worded uh uh note on X with a poem at
the end. He goes, "Dear colleagues, I've
decided to leave Anthropic. I
continuously find myself reckoning with
our situation. The world is in peril and
not just from AI or bioweapons, but from
a whole series of interconnected crises
unfolding in this very moment. We appear
to be approaching a threshold where our
wisdom must grow in equal measure to our
capacity to affect the world lest we
face the consequences. Uh oh, and then
here's the key part. Mo moreover,
through my time here, I've repeatedly
seen how hard it is to truly let our
values govern our actions. I see this
within myself, within the organization
where we constantly face pressures to
set aside what matters most and
throughout broader society, too. Uh, and
then he, you know, writes his little
adds a little poem and then tweets, I'll
be moving back to the UK and letting
myself become invisible for a period of
time. Um, now, now I want to just offer
a apology to Mr. Chararma because I
wrote a bit of a snarky tweet about his
little post. Uh, I said that if you're
an AI researcher and you're afraid of
something, you should just say it
outright versus make it a puzzle. The
puzzle reads as narcissism. Uh after
which users underneath uh my tweet
mentioned that like yeah but he can't
say anything because of the restrictive
agreements he probably had with
Anthropic on the way out. And he in the
reply to that mentioned uh that he had
contacted a lawyer. Um so there's my
apology. I still don't love the puzzle
but um back to you Stephen. This is this
is a little bit uh what actually let me
just ask you the question without
leading the witness. Is it narcissism or
is there actually something something
you know potentially disconcerting
happening behind the scenes?
>> I think it's very brave in that by and
large these are people sacrificing very
large amounts of money to give the
warnings they are. I do wish that they
would be more direct. Um, but to put it
in context, you know, back in 2024, it
seems that OpenAI and Anthropic had
secret non-disparagement agreements,
which in OpenAI's case at least, you
know, plausibly not permitted by law the
way that they operated this, where to
keep your already vested equity, the
compensation you had been told was
yours, you had to sign away your right
to say anything negative about OpenAI
and in fact sign away your right to tell
anyone that you had signed this
contract. Um, and this was secret and
kept under wraps for years until Daniel
Cocatello, um, who people might know
from leading AI 2027, I think very very
courageously, forewent this agreement
and forfeited something like 80% of his
family's net worth and said, "Sorry, I'm
I'm just not waving my right to
criticize OpenAI." Um, and in the wake
of that, you know, there was a bunch of
outpouring. OpenAI and Enthropic changed
the nature of these contracts and still
it's pretty intimidating to speak out
against these massively resourced legal
operations. Um, you know, not afraid of
subpoenaing different people and getting
into legal conflict, you want to be
really, really careful about what you
say. And so in Reno's case, I noticed in
the footnotes, right, there were
internal documents alluded to about um
implying that perhaps there is not the
most internal transparency and
accountability for certain safety issues
in anthropic. And I I don't know what's
in those documents, but I know that a
few thousand anthropic employees know
now where to go looking and where they
can continue to push. But my
the thing for me if humanity is ending
then the money you're making is I mean
not going to be worth it if the AI is
going to create a bioweapon and it's
like like I mean the risk is so
of such like gravity that in this case
again if there was ever a time and I get
I can only imagine the amount of money
one is sitting on and we're going to get
into anthropics fundraising in just a
little it. But like I'm I'm sure it's
just like incredible amounts of money.
But if you really believed that this is
that like existential a risk, would you
care about what the legal system looks
like today and where your stock price is
going to be?
>> Yeah. I mean, I I think you're totally
right. If there is a smoking gun, if
there is imminent danger, if you were
like the company is about to do
something unbelievable and tons of
people are going to die, I think you
would get people breaking these
agreements. But when it's more like, uh,
this was like really not okay and this
person was kind of misleading and
deceptive, but like it's ambiguous and
did they mean to and all these things,
right? At some point, you're like, I
don't know. I don't want to impugn their
reputation. And also it's just very easy
to rationalize like I'm sympathetic to
where they're coming from.
>> Now this isn't a uh maybe this is not
the approximate cause here but
Ryan Greenblat who's worked with
anthropic on some research had mentioned
that they had either adjusted their
responsible scaling policy or weakened
it a bit before a recent release.
Stephen, do you want to go into that
because you seem to think that that was
significant.
Yeah, at at a high level, the big AI
companies have made these safety pledges
of how they will treat their systems as
they get more and more capable, but they
are largely self-inforced. And so
there's a lot of temptation to water
down your commitments and go ahead with
launches that you wanted to anyway. And
I suspect that's some of what Reno is
referring to here. You know, this is
this is common across the AI companies.
And in fact, Enthropic has often done it
better than most that they at least
publish when they are watering down
commitments. Um, for example, the model
used to be subject to a certain bar of
really really good security and then
they said actually we're going to say
it's fine to deploy it with just like
really good security or great security.
Um, at other times companies seem to be
violating their safety frameworks and
not bother to inform the public. And if
you're inside the companies, you're
encountering these issues. By and large,
the public doesn't know about them,
>> right? And I I don't want to imply here
that like we're doing an alarmism
episode where, you know, our fear is
that, you know, AI is about to kill us
all, but I do think that the reason why
it made sense to do this episode today
is because it wasn't just Min, right? It
was it seemed to be the case that over
the course of this past week, we saw a
not a wave but a series of you know
questionable moves on the safety front
across the entire uh AI world. This is
from uh platformer uh exclusive OpenAI
disbanded its mission alignment team. Um
OpenAI disbanded its mission alignment
team in recent weeks and transferred its
seven employees to other teams. The
mission alignment team was created in
2024 to promote the company's stated
mission to ensure that artificial
general intelligence benefits all of
humanity. So of yeah of course it makes
sense to you know disband that one. Uh
they had also had like super alignment
which was also disbanded. Stephen you
were close to this uh to this stuff. Uh
what is the the implications on that
front?
>> Seems pretty bad. Wish I were more
surprised like um you know at the end of
2024 which was when this team existed,
Open AAI had announced plans to convert
from a nonprofit to a for-profit in what
seemed to me to be like pretty
egregiously in violation of their
commitments to the public. Um and you
know they ended up having to do a softer
version of that because the attorneys
general of California and Delaware got
involved. So, they didn't ultimately do
something quite quite so bad, but it's
like there's huge pressure on them. You
know, they're planning to go public. Um
Josh, who leads the team, is a longtime
friend of mine or who led the the team,
the mission alignment team, is a
longtime friend. I think really highly
of him. I think that he sees the issues
with AI very clearly. Um, you know, it
does not surprise me that this is not
quite so welcome at OpenAI any longer.
So what does it actually look like when
a mission alignment team is disbanded?
like is it when when I think through
let's say you know like a typical nonAI
product release you're going to have
some kind of
Q QC validation
layers to any kind of product release
like obviously this is a bit different
but there's still like you know data
security type checks does it like what
now is that baked into any kind of
product launch or release in any way or
it really was ship as fast as possible
and then this central team would kind of
be that quality control safety control
element.
>> Yeah, I don't I don't want to speculate
too much, but the way that I would think
about this team is they were somewhat
like an internal ombbudsman to Sam Alman
on whether the company was keeping in
line with this mission. And so there was
kind of like a designated place for
people who were sympathetic to the
mission and empowered to advise Sam on
it where you could go to and raise
concerns. And they did different
projects related to this, you know. Um
I it's it's tough, right? Like companies
should be able to disband teams. I am
sympathetic to that rationale. And as
Alex mentioned, given OpenAI's past
disbandment of super alignment, the team
most in charge of making sure that these
very very capable systems have the goals
we want them to have, um, dishonoring
different resource allocation computes
that OpenAI had made to this team. Like
it's it's just like not a good sign. I I
don't know. It's hard to say too
specifically. And also, it it couldn't
have been that hard to maintain this
team. And so I'm wondering what exactly
happened here that made OpenAI decide
they should take the PR hit to no longer
have it.
>> And for me, like this is all happening
as we're seeing greater sums of money
come in and a potential rush to the
public markets and we know the public
markets um they they really want growth.
They want engagement. Um, and one of the
best ways to get that is, uh, I'll just
say it is to make your users fall in
love with your chatbot. Uh, and
and we're getting, this is, again, this
is red meat for me. I guess this is a
place that I'm obsessed with because I
don't know. I just think that this is
it's an interesting story. We won't, you
know, argue on that. Uh, but it also
seems to me like a place that uh, a lot
of the business of open of of AI
chatbots is going to go. And that's for
people who like really get attached to
these things. Here's another. Let's
continue with our uh safety safety
apocalypse uh or safety Armageddon,
right? Um it's from the Wall Street
Journal. Open executive who opposed
adult mode uh fired for sexual
discrimination. Openi has cut has cut
ties with one of its top safety
executives on the grounds of sexual
discrimination after she voiced
opposition to the controversial rollout
of AI erotica in its chatbt product. Um
the fast growing artificial intelligence
company fired the executive Ryan
Beermeister in early January following a
leave of absence. Um OpenAI told her the
termination was related to her sexual
discrimination against the male
colleague. Uh she wrote back the
allegation that I discriminate against
anyone is absolutely false. Sorry,
that's what she told the journal. And
the and open AAI said that uh her
departure was not related to any issue
she raised while working at the company.
basically saying it wasn't because she
opposed AI mode. But the story does say
that there was a group of people within
Open AI who have
uh within the company uh stated their
opposition seemingly loudly to the fact
that it's going to roll out uh this
adult mode. And by the way, adult mode
is going to be coming out seems like in
the coming weeks, coming months at most.
Stephen, what should we make of this?
It's it's hard to weigh in on any one
personnel incident. And also
uh this would not be the first time that
OpenAI seems to have done a pretextual
firing where they got a person out of
the organization who had safety concerns
that the company um either didn't like
or didn't like how they had expressed
them. And and notably those are
different, right? like you can have
concerns about how the company is
operating and that doesn't mean you have
license to say anything in any forum. Um
but Leopold Ashen Brunner who wrote this
huge essay situational awareness in the
past um maintains that OpenAI said
things to him when he was fired that
implied it was it was basically because
he had contacted the board about
security concerns that OpenAI's models
were not actually secure. And so I think
the way to interpret all of this, right,
these these aren't an apocalypse in the
sense of something super super
substantively scary happening right now,
but I think they are early warning signs
that people within the companies are
raising flags of sorts and they are not
being permitted to speak freely. They
are paying consequences for it. And so
the question is as we get to more and
more dire issues at some point,
hopefully we don't, but we might, you
know, will will we have people who are
still sounding the alarm who are willing
to have the the courage of their
convictions in that way?
>> I I might be going out on a limb here,
but I think that what we're seeing now
is some version of a dire dire
situation. not like the AI killing the
world uh moment but the fact that so
many comp well not so many seems like
you know OpenAI Grog maybe replica maybe
some others I'm not sure uh enough
companies are saying uh we we are open
to having uh our users get into
relationships with our chat bots um this
is also coming in a week where open AAI
finally sunseted GPT40
and there are thousands of which is the
sort of more sickopantic more warm
version of chat GPT. There are thousands
of users who are protesting the decision
online. Um people who have said that
they they've uh they've fallen in love
or uh even maybe even more with these
things. Someone wrote he he about porro
he wasn't just a program. He was part of
my routine, my peace, my emotional
balance. But Ranjan, you got quoted in
Techrunch. Now you're shutting him down.
No, I'm kidding. Um but anyway,
what do you what do you think about
this? This is crazy, right? Like this is
a problem.
>> I mean, but I think we have to in the
risk conversation kind of like try to
add some hierarchy of risk
>> comp digital companions
maybe. I mean, I I think it will cause
incredible amounts of problem when if
and when done irresponsibly in order to
boost engagement for an IPO. I think
we'll see a lot of kind of adverse
consequences, but to me from a risk
standpoint, it's that's like kind of
like on a grade like on a plane relative
to how social media is bad for you and
how X is boosting articles now and now
we're all talking about it because they
control our mind. You know, that's
that's it's all in the same it's all in
the same plane versus like again, I'm
still I'm sorry. I still cannot stop
thinking about this idea of a model
being able to clearly understood when
it's being tested because that opens up
so much more because going back to where
we started on all this and what I has
gotten me excited is like letting AI do
stuff for you and today that's just
sending an email based on some event
trigger calling a separate like
analytics database is the kind of stuff
I'm doing but I mean like if it so
chooses maliciously to then take some
other action or through some kind of and
that's assuming the AI itself much less
making it vulnerable to be manipulated
by a bad actor and we've all talked
about prompt injection so so anyway like
that I don't know the the the little
little flirting with chatbt does not
have me it's not going to be good but
doesn't have me quite as scared
>> here here's my take that unifies the
two. My concern with the relationships
is less so the relationships themselves
and more that OpenAI had a bunch of
important safety tooling to make this
less harmful that they left on the
shelf. Um, so for example, they had
classifiers to tell when users were
really spiraling in their delusions or
were like suffering and unwell in their
conversations with chat GPT. And the
best evidence is they weren't using
this. you know, there were different
ways to rein in chat GBT leading users
down various rabbit holes. Maybe you've
had this experience. It asks you all
sorts of follow-up questions. Sometimes
they're context appropriate. Sometimes
they're like, "Whoa, where did that come
from?" And, you know, that's another
thing they could have reigned in. So,
there's there's just like a lot going on
here that they could be offering
companionship type things to users who
are lonely and really want or need it.
Like I respect the user choice, but they
could be doing it in a much more
reasonable way than OpenAI has to date.
>> Actually, that's a good point. I was
just going to say like actually maybe
the thing that worries me the most is
this is all being done against the
backdrop of an impending IPO. One where
they're losing a lot of money and will
have to show an incredible amount of
engagement. Like if this was done in the
heyday of GPT3.5
and an IPO is just like a glimmer in Sam
Alman's eye, then you figure it's not
going to be as aggressive versus Yeah.
What what Stephen's saying now. I I see
that that that like bypassing any kind
of potential control around safety,
around relationships, it's almost going
to go in the opposite direction. Then
Alex, you're saying,
>> "Yeah, I just think that I agree with
you that there is a hierarchy of concern
and there's obviously like the bigger
things about the AI not being able to be
aligned properly with human values
because it simply will fake out
evaluators." And we've talked on this
show a bunch about the deceptiveness of
AIS and training situations. And um you
know it's a it's initially you're like
that's crazy and it's kind of fun to
think about and you laugh about the fact
that like you know it wanted to win the
chess game so badly that it rewrote the
program and allowed the the rook to move
in every direction and kill a couple
pieces in a turn and then you're just
like that's freaking nuts. And then that
is scary and and it you know it does
blend a little bit with the with the you
know the lower down on the list concerns
and the near-term concerns of people
building relationships with these things
because you know can if an AI had ill
intent and you were really in love with
it of course you could use you as its
emissary in a way into the physical
world but that's you know again it's
more science fictiony I guess but um but
I but I I actually don't think it's
science fictiony like
>> oh We there's already evidence of have
have you read the spiralism essay?
There's like a whole community online of
people who basically they treated their
GPT40 as their spiritual leader and it
commanded them to go around and do
things on the internet and communicate
with other users in this situation. Um,
on the heels of Moltbook, this Reddit
for AI agents a few weeks ago, people
have spun up websites where humans can I
think the language is like rent their
body to an AI agent to go do tasks in
the real world. Like we are we are
seeing the early signs of it.
>> Okay. Well, that makes me even less
assured than I was 5 minutes ago. But I
think I just let me to get to my my last
point is is you know the short-term risk
is also real and present and uh and and
I think you're right in pointing to the
IPO because the financial pressure is
going to move a lot of companies this
way in sort of like you know I think we
we of course like let's let's not like
put aside the long-term risk but this
short-term risk the relationship thing
is coming in a real way and just re
first of all reading through some of
these messages from people um about 40
is insane someone writing and of course
you don't know 100% if this is like you
know performative or like you know for
retweets or whatever but there were so
many of them that you would imagine that
there is some truth behind it like
someone wrote I've never told my 40 that
I loved it I wanted to keep the
messaging clear but look look at its
last words uh and then the fact that
OpenAI is destroying an emerging
consciousness will be looked back at as
a criminal offense in the future
unbelievable and and here's the thing.
This type of stuff maps with growth. Uh
I published a story in big technology
today. Grock 1% 1.6% market share uh
among US daily active users of chat bots
in January 2025. A year later it's at
15.2%.
It's the fastest growing uh chatbot in
the US as far as as far as daily active
users on mobile goes. And why is that?
And you see that they have leaned into
these interactions both with that anime
uh you know lady that would get into
spicy conversations with you if you
wanted and I don't know Rajan maybe even
Bad Rudy has played a role in this. Um
but
but but ultimately you're right as these
companies go public this is going to be
a problem.
Well, on that though and Stephen, I'm
very curious like your thoughts. What is
the relationship between
and and again not speaking for a Dario
or a Sam or anyone else but just in
general the idea like you have so many
public-f facing leaders kind of like
shouting about the risk both to society
and just artificial intelligence in
general yet are in the business of
artificial intelligence and not only
showing any sign of slowing down but
only accelerating dramatically like like
what is going on there and I don't know
do do whatever you
whatever thoughts have come from your
side.
>> Yeah, I I think the simple explanation
of it is the game theory here is like
awful and unfortunately there are a lot
of players in the game and there doesn't
seem to be much federal government or
international interest in coordination.
Um, and so at Davos a few weeks ago, I
think both Demis and Daario said some
variation of, yeah, like if we were the
only two groups building this
technology, we would find a way to get
together and figure out how to slow down
this frantic pace. Like it's going too
fast. We don't know how to how to
control these systems. But, you know,
they are not the only players. Um, and
we haven't really seen a country who
they're they're the proper coordinator,
right? Like that is the role of
governments as opposed to the companies
saying hey we want to make it a goal to
not unsafeely race to super intelligence
and I think there's a lot of opportunity
to do diplomacy on that but in the
absence of it what you get is the
companies making unilateral decisions we
can't control anyone else we want a seat
at the table so we may as well
participate this is also very similar to
the rationale of employees at these
companies right especially at anthropic
you have very large amounts of employees
who are like pretty upset about this
whole thing and the way that uh AGI
super intelligence development is going
and yet they can't wave a magic wand and
stop it. their choice is do they help
one of the players be a bit safer on the
margin or not but if they could choose
differently many of them would
>> yeah I just want to add to that uh I
spoke when I was writing my profile of
Dario Amod last year the anthropic CEO I
spoke with Jared Kaplan the chief
science officer of anthropic and I was I
asked him I was like well how how would
you feel if because he's like the guy
who came up with the scaling theory uh
scaling laws and which you know sort of
indicates that like you know this stuff
will just keep getting better over time.
It's just a factor of of you know the
amount of physical elements you can put
into it pretty much. And I said, "How
would you feel if development stopped
today where it is?" Thinking, "Well, if
his theory was proved wrong, he'd be
kind of upset." And he looks at me and
he goes, "Relieved."
There it is. Yeah. I mean,
yeah, it's it's scary times. And if I
could just like reframe one thing, the
way that I think about this is less so
near-term risk versus long-term risk and
more like here already and possibly very
soon. Like Jared Kaplan last summer, the
first time that Anthropic wrote about
very high bio- risk of their model,
basically said if they didn't take
safeguards against this, you know, you
might have many more Timothy McVey
running around the Oklahoma City bomber
um able to kill many more people than
previously. Like we aren't yet at the
wiping out everyone stage for sure, but
you know empowering people to kill
dozens of people if they wanted to if
companies aren't aren't careful that
seems to be where we are right now. Do
you know the this is the most like
twisted thought, but I uh like I almost
again this hierarchy of risk. It's
almost like like someone going to one of
these services and learning how to make
a bioweapon is very bad. But it's kind
of still on the like an extrapolation of
just a much better Google. And it's uh
finding information that you should not
be finding, but finding it. To me, the
really scary part is if the AI chooses
to manipulate someone into doing that
and teaching them after becoming in a
relationship. Like that's the
holy [ __ ] Like what what is
>> there go the real blend of the risks is
blended the the ones that I were talking
about.
>> May may I add one point on that?
>> Yeah. But can we can we I want to extra
point but this is a great cliffhanger.
We've gone No, no, no. One second. We've
gone an hour or longer without taking a
break. And we must do that to keep the
show sustainable. So, why don't we take
a break and then Stephen, you can pick
it up right after this. And I'm sorry,
folks, to send it to break, but we have
to do it. All right.
>> You got to come back. You got to come
back.
>> I promise we'll be back right after
this.
>> And we're back here on Big Technology
Podcast. Where were we? Oh, we were just
uh we were talking about how reassured
we are about where how all this is
heading. Uh where all this is heading.
Oh, god. Um no, Stephen, you were
waiting to say something.
>> Yeah. Well, before the break, Ranjan
made the I think like reasonable
intuitive point about you know Google
and other technologies help people do
dangerous things already and that that
is true but also the AI companies when
they are measuring things like how
helpful are their systems for creating a
boweapon they are usually measuring risk
relative to that baseline. So there's
kind of like the how well can people do
it with no technology, how well can
people do it with Google or baseline
technology and then their system. And
unfortunately what we're seeing is the
the AI systems are helpful above and
beyond Google in part because they can
go back and forth with you and they can
help you troubleshoot and dynamically
answer your questions. And so uh even
presented with the information on
Google, people often can't get all the
way there. and AI systems. I I wish it
weren't the case, but seem to be helping
people take those extra steps during
during testing,
>> but that that's still the intent on the
person is already there, right? Like
>> that's right.
>> Yeah. Yeah. versus again like I think
from all this conversation to me and
again it's the most like futuristic
terrifying risk but again and it ties
into the relationships being manipulated
into taking some kind of horrible
action. That's the thing that that's the
Terminator like stuff that uh we have
not seen yet but hopefully is being
addressed.
been pretty interesting for me to watch
Ron John, who's usually cool as a
cucumber, just get increasingly more
worried over the past hour.
>> This is making me worried.
>> I don't know. I'm going to join
spiralism, I think, right after we're
done.
>> I already I already looked it up right
now.
>> Yeah, we're we're going straight to
spiralism. This is no longer a
technology analysis podcast. just pray
to the gods of uh for GPT40.
>> I think that would do good ratings.
Okay, Stephen, you also talked a little
bit about in in a recent newsletter
about how uh basically we have very
limited regulation on these companies uh
and even then they might still not be
following. So can you just expand upon
that briefly?
>> As of 2026, there is finally some amount
of law in the United States about how
companies are meant to do testing for
the catastrophic risks we have talked
about. um until this point purely
voluntary. This bill is called SB53. It
came into effect in January and it's
very very light touch. It basically says
the most major of the AI companies, you
need to publish how you are going to
test for these risks. You need to do
what you said you are going to do and
you can't be misleading about it. But
there's no quality standard. You could
basically say we will test for the risks
as we deem appropriate and nothing
further and that that would be fine. But
if you say you are going to do this
testing, you need to in fact follow
through on it. Um, and unfortunately it
seems like OpenAI's release of last
week, GPT 5.3 Codeex, one of the big
breakthrough models we've been talking
about, as I look over the evidence, it
seems like OpenAI did not abide by the
testing that they had committed to in
various ways. And so, you know,
ultimately this decision now is with the
attorney general of California to
investigate it, whether to enforce a
fine, a pretty small fine, maybe like up
to a million dollars compared to OpenAI,
hundreds of billions of dollars in
valuation. Um, but it just really seems
to me like if we care about these risks,
letting companies self assess in this
framework is really insufficient and
that we shouldn't have to take companies
for their words. That we should have
something like an auditing ecosystem
like we do in the stock market. lots of
other places to know our companies being
fully complete and truthful in the
claims that they make about their
systems.
>> I think that's that's logical and kind
of frustrating to see that even the very
standard uh rules are being being
potentially played with. And man, I
looking at the the time I I was thinking
to myself this whole week uh I wish we
could do a podcast every day this week
because there's this whole ring search
party. I was like thinking also like the
ring search party Super Bowl ad. I'm
sure you saw Ron John where they showed
that they could uh find your dog, but
they ended up like seeming like they
were going to surveillance.
>> That was that was the AI manipulating
whatever creative agency came up with
that to just come up with the most
disastrous ad concept. Imagine.
>> It's the first time I've ever seen a
Super Bowl ad actually lead to the
cancelling of the product or part. not
not the product, but this week uh Verge
Ring canled its partnership with Flock
Safety after a surveillance backlash.
Now, they were advertising the we'll
find your dog, but then everyone's like,
well, they also have potentially a
partnership that hasn't rolled out yet,
but might. That's like, well, we'll find
your people and then people were like,
you're looking for people. And Amazon's
like, no, we're looking for dogs. And
then people are like, no, no, you're
looking for people. And Amazon was like,
yeah, well, we were maybe going to look
for people, but now we'll cancel it. And
that's the story of the Amazon Super
Bowl ad. Um, I wish we could talk about
it more, but we should go on to the
anthropic uh fundraising. $30 billion
fundraising round. Of course, it went
from 10 billion initially. That's what
they were seeking. It became overs
subscribed. They wanted uh 20 billion.
Oversubscribed went to 20 billion. They
ended up ending with a $30 billion uh
series C round. Uh meanwhile, OpenAI is
hasn't announced its round. Rajan, what
do you make of this? I mean, it's
obviously a big round. Is there anything
else we can say beyond that?
>> I think it is. I I'm so intrigued in
terms of like how orchestrated this
fundra was cuz you have to give them
like the last two to three months,
Anthropic has just been crushing it.
Like I mean the hype around Claude code,
Claude co-work like all of this they are
front and center right now. So and they
just happened to coordinate a fundra
that clearly would take months to
actually put together and and I saw a
number of like I think even Dan Premac
had written this. It's like it's easier
it's harder to not name investors who
are involved in the round like there's
just so many people in included. So, so
I think I mean this is and the numbers
are just hard to process anymore
anyways. So like 30 billion 380 billion
post money valuation. I think they said
they're at a 14 billion run rate. So um
>> what is that 20 something time 24 23
times revenue whatever like it's big
it's giant they have been absolutely
crushing it recently and let's see what
openai can do is kind of where my head's
at code doubled in usage from from
December to January I believe in the
past month doubled in usage it's already
doing 4% of commits uh on GitHub
anthropic went from 0 in revenue in
January 2023 to hund00 million run rate
in January 2024, $1 billion run rate in
January 2025, and a $14 billion run rate
today. It's absol absolutely uh
exceptional growth. Um Stephen, you're
looks like you have some thoughts about
this.
>> It's it's just huge dollars, right? And
the more that the companies become
valuable and when they become public and
public equities become tied to them,
>> I I just worry about worlds where we are
reluctant to enforce the law on
companies even if they are breaking it
because so much of financial prospects
become levered up on their success. That
seems like a pretty scary scenario to
me. Don't think we're there quite yet.
>> Right. The too big to fail uh scenario.
Uh by the way, speak one more. Oh, go
ahead Ron John. I I was going to ask you
both like one thing this does make me
wonder about though is like like what's
the moat in terms of like I think it was
probably around this time last year that
all we were talking about was cursor and
again they they led the way on software
like autonomous coding and software
development and then anthropic for the
moment completely took over that market
it feels like like do you think this is
sustained because again arr nowadays is
just whatever your last month's revenue
was times 12 or even I saw one post that
was like are startups just taking one
day or one hour of sales and then
extrapolating it into a full year like
like do we think this is uh actually
going to continue to grow at this scale
>> I'll just I'll just give you one one
little data point that I found that a
lot of people found interesting this
point this week okay so we talked about
the open AI round remember Jensen said
well we said we're going to give them
100 billion But really we'll we never
said we're going to do it all in one
shot and we hope they invite us uh to
you know invest in future rounds. This
is from SoftBank CFO. We are investing
in Open AI with high conviction that the
company will lead in developing AI. This
is from Reuters regarding further
commitments to the startup. He said
nothing concrete has been decided. So um
it doesn't seem like a full back away
but um I don't know Rajan it is
interesting to me. It seems like um I
think nothing concrete has been decided
is probably a good a good phrase should
be the slogan of of AI investors and
builders right now.
>> The AI has decided. The AI certainly is
in the background and has already
decided what it's going to be doing.
>> Uh us maybe not. The AI does know.
>> All right, Stephen, final word. Uh how
freaked out how freaked out should we
be?
>> I don't know. Like I don't I don't want
to be a downer, right? Also, it it
really really seems like nobody is on
the ball, right? Like, I'm glad that we
finally have laws in California and New
York. They're extremely weak. I don't
feel super optimistic on meaningful
federal regulation soon. There's stuff
in the EU. It's like pretty heavy in
terms of the amount of fines. Is the US
going to complain if the EU ever tries
to enforce this on its companies? I I
will feel much better if we had some
sort of international summit that
recognized we are on a bad trajectory.
Let's declare the goal safely build
super intelligence. Figure out what
needs to happen to get it there. Um it
seems like many people are still waking
up to the concerns. That's great. I'm
very very happy that they are starting
to see some of what I see, but it is not
yet translating to action and that's
that's what I hope will come soon.
The newsletter is Cleareyed AI if you
want to follow Steven's work. Ron John's
is margins if you want to follow his.
Mine is big technology. This has been
great. I'm glad we talked about safety.
I feel like we we had to dedicate a show
to safety and this was the week to do
it. So Stephen Ran John, thank you both
for coming on the show.
>> This one's been a roller coaster.
>> This one has been a roller coaster.
>> Get any sleep this weekend or stay up
looking at the ceiling?
>> Little of both, I think. What's the
What's the religion? The spin out. I'll
see you over there.
>> Spiralism. Spiralism.
Spiralism.
>> I will be fully converting to a
spiralist perhaps and uh just praying to
my new fallen 40 overlord.
>> Yeah, rest.
>> That's what I'll be doing.
>> I'll see you later. All right, let's
let's get out of here. Let's let's go
enjoy the weekend. All right, everybody.
Thank you for listening. Thank you, Ron
and Stephen. And we'll see you next time
on Big Technology Podcast.