Anthropic’s Mythos Dilemma, Violence Against AI, Tokenmaxxing at Meta

Channel: Alex Kantrowitz

Published at: 2026-04-13

YouTube video id: gCCmKDe_HCw

Source: https://www.youtube.com/watch?v=gCCmKDe_HCw

Anthropic's big new Mythos model is
here. Is it real, or is it marketing?
Violence breaks out against AI, and
engineers at Meta and elsewhere are
competing over who can burn the most
tokens. That's coming up on a Big
Technology Podcast Friday edition, right
after this. Welcome to Big Technology
Podcast Friday Edition, where we break
down the news in our traditionally
cool-headed and nuanced format. Oh, we
have a great show for you today. We're
going to talk about whether Mythos, the
new model from Anthropic, is real or
marketing or maybe some combination of
both. We're going to talk about this new
surge of violence that's breaking out
against AI and why it should probably be
taken more seriously. We'll also talk
about this now-infamous $1.8 billion
one- or two-person startup called Medvy,
and whether that heralds a new era or is
just a bigger scam than we're used to.
And we're also going to talk about
tokenmaxxing, which is the act of
basically burning as many AI tokens as
you possibly can. And maybe that's good
or bad. I don't know. We'll figure it out
at the end. Joining us as always is
Ranjan Roy of Margins. Ranjan, welcome
back.
>> Good to see you. Happy to be back. And,
uh, yeah, Mythos is here. What a week to
come back.
>> Mythos is here.
>> Yeah, Mythos is here.
>> People have clamored for Ranjan's
return. He's made his return at the...
>> I am Mythos. I am Mythos
>> Because, yes, we have, I think, a very
well-named model coming from Anthropic,
and it kind of goes to the heart of the
matter. The question is: is good
branding really most of what we're
seeing, or is it actually a step up? Is
it something that deserves the Mythos
name on its own merit? Let's talk about
the new model because um Anthropic has
positioned it as something that is so
dangerous that it can't release it to
the public. This is from the Wall Street
Journal. Anthropic set to preview
powerful Mythos model to ward off AI
cyber threats. Anthropic is taking steps
to arm some of the world's biggest
technology companies with tools to find
and patch bugs in their hardware and
software. The company is making a
preview model of its new AI model called
Mythos available to about 50 companies
and organizations that maintain critical
infrastructure including Amazon,
Microsoft, Apple, Alphabet, and the
Linux Foundation. Cybersecurity
researchers and software makers worry
that artificial intelligence is becoming
so good at exploiting vulnerabilities
that it could cause widespread online
disruption. Security experts have
predicted that AI models will discover
an avalanche of software bugs and it
looks like Mythos is capable enough that
it's been able to find so many exploits
that Anthropic has no plans to release
it to the general public. A model so
powerful and so dangerous it can't
possibly be placed in our hands. Um, I
think we're going to really get into
whether this is a true step up or
whether this is more sort of, I don't
know, disaster-porn marketing from
Anthropic. Um, maybe a little bit of
both. Ranjan, what's your reaction to
this news?
>> All right. Well, we're going to get very
into why I think this is marketing in
just a moment, but I think at the high
level, I have a whole theory, so so get
ready for this one, Alex. Uh, but at a
high level, I mean, we've all been
talking about what's that major step
next step change in foundation models. I
mean in the last year actually I think
we've seen that the the how exciting the
entire industry has gotten around the
overall product and harnesses which
we'll also talk about um and all these
other layers of technology around the
model have actually been driving
innovation but it's been a while since
we've had anything really exciting on
the pure foundation model front and
anthropics certainly made everyone feel
this week that something big is
happening. like that that they've really
cracked something, but we don't know
what it is cuz none of us have access to
it,
>> Right? So, first of all, you know, we're
going to speculate a lot on this show,
because we haven't used the model;
we're not allowed to use the model.
Only this select group of companies and
institutions can. Um, but we can
definitely talk through the arguments
for why it might be marketing and why it
might be a breakthrough, and you and I
can both weigh in here. And I think
there are some good arguments for
and against. Um, so first of all, you
could look at the fact that this has
been a product of this ever-growing
attempt to build bigger data centers
and train on more powerful chips, and
there's a chance here that maybe what
Anthropic has done is just use this
scaling law of AI models and say, all
right, these things get better as you
scale them up. The conversation around
Mythos before this all happened was
that it's been trained on a cluster
larger than the Opus model. So it's a
bigger model than Opus and would
naturally see a step-change
improvement. Not only that,
Anthropic has this consortium of
companies that have agreed to try it in
beta, all coming out basically under
the same, you know, umbrella agreement
that this thing has found many
cybersecurity vulnerabilities.
As, uh, as this user sporadica on X
points out, are they all teaming up to
lie about Mythos? Are they all coming
out and saying, "Yeah, we'll
participate in this cybersecurity
consortium for just a standard,
run-of-the-mill LLM"? I mean,
the company names are wild. AWS is
there, Cisco, CrowdStrike, Google,
Nvidia, Microsoft, the Linux Foundation,
Palo Alto Networks, JPMorgan Chase,
Broadcom. Like, do they all have AI
psychosis, that they're coming out here
and saying, actually, you know, this
sort of iterative model is powerful
enough that we'll sign on to be part of
this consortium? Which has a great
name, by the way: the Glasswing Project.
>> So what would you say to that before we
start going through some of the holes in
the argument?
>> See, as we get into the marketing, do
you know what the glasswing is a
reference to? I had to look this up.
>> Oh, you tell me.
>> It is the Greta oto butterfly, which
has transparent wings where you only
see the veins, as opposed to the
traditionally colorful wings of a
butterfly. And to denote transparency,
that is why it's called glasswing. I
find that one kind of fascinating. And
of course, Anthropic is just killing it
on naming everything, unlike Spud from
OpenAI, but that's a different
story. The security vulnerability thing
is fascinating to me, because the whole
security conversation hasn't been front
and center: how AI is going to
potentially exploit all existing
software. So I think it's good that
it's starting to be brought up. But
actually, there was a really good piece
in Tom's Hardware: they said thousands,
but there are actually only 198 manual
reviews in terms of actual software
exploits, and a lot of it was found on
older software, or they were exploits
that cannot actually be executed in any
feasible manner, so it still lived more
in a theoretical way. So I think
there's only a little bit of information
that has actually been provided by
by Anthropic. There is this entire, you
know, consortium of companies, all of
whom have a massive interest in AI
succeeding and reaching its promise.
I'm not saying there's some mass
conspiracy, but I am saying, when you
have Nvidia and Palo Alto Networks and
Microsoft and Cisco and CrowdStrike and
Google, everyone wants AI to be this
generational, transformational thing.
So, like, I don't know. To me, I don't
like all of this hype when you're not
actually able to see anything. And
otherwise, we don't need to know this.
Just do this. Have some meetings. Be
careful. But you don't need to... Like,
here is Mythos. It sounds like an
Avengers movie, and in the end we're
just having to sit here and kind of try
to speculate about it.
>> Wait, hold on. Is there any other way?
Like let's say they did actually come
up with it. Let's say they're telling
the truth, right? How would you want it
to play out? Do you want them to do it
in secret? You'd want them to release
it. Like maybe this is a responsible
middle ground.
>> Then don't IPO. Don't raise more money.
Stop. We've had this conversation
forever. If this is truly so dangerous
and you're sitting on the precipice of
the destruction of humanity, take a
breather. And you can say... I saw some
people arguing that this is taking a
breather, but honestly, I was hearing
from someone that right now OpenAI and
Anthropic are in a death race over who
can get their IPO out first. And when
you start thinking in terms of that
kind of framing, it's hard not to see
this stuff everywhere. But everything
is just about: we are sitting on this
world-changing technology that is so
far advanced beyond everyone else's,
and we have to do something about it. I
don't know. Do you think this is
responsible? Is this the most
responsible, not self-promotional,
market-driving approach to actually
releasing the Mythos model?
>> No, look, clearly it's self-promotional.
I'm just saying that if Mythos is this
unbelievably dangerous model, I think
this would be a responsible process for
releasing it. But I also think there
are some holes in the argument. I'll go
right to Tom's Hardware, which I did
pay for a subscription to because I had
to read this article. So here's what
they say: "Anthropic's Claude Mythos
isn't a sentient superhacker. It's a
sales pitch. Claims of thousands of
severe zero-days rely on just 198
manual reviews." They write, "Mythos
might be good at finding
vulnerabilities in software, but many
of them aren't as potentially damaging
as Anthropic wants us to believe." Um,
the big Project Glasswing blog post on
Mythos from
Anthropic claimed its new model had
found thousands of high-severity
vulnerabilities. Um, but it's not clear
how realistic those vulnerabilities
are, how many of them aren't actually
exploitable, or even how problematic
they are. In the case of one
vulnerability in FFmpeg that's existed
for 16 years, Anthropic's own analysis
of the release suggested the bug is
ultimately not a critical-severity
vulnerability and that it would be
challenging to turn this vulnerability
into a functioning exploit. Mythos also
reportedly found several potential
exploits in the Linux kernel, but was
unable to exploit any of them because
of Linux's defense-in-depth security
systems. There's also this subheading,
"several thousands more." And Anthropic
states it can't actually confirm that
all the thousands of bugs Mythos claims
to have found are actually critical
security vulnerabilities. It just
extrapolated that number from having
found them in around 90% of these 198
manually reviewed vulnerability
reports. It's all in the documentation
that Anthropic provided. I mean, that
is something that really
than not. And then do you want to get
into my grand theory of I know on this
show I often look at everything from a
lens of a comm's professional. I know I
think I've been rubbing off on you a
little bit but do you want to hear my my
Okay. So
I I like had to map this out because I
was like this just feels so coordinated.
So on April 7th at 2:06 PM, Anthropic
releases their first announcement of
Project Glass Wing and the Mythos model.
And then they have the system card
available. They start kind of tweeting
through at 2:15 p.m. They make the
system card available. The system card
basically is I think it's like a 70page
PDF or might even maybe it was 250
pages. There's like one tiny footnote.
Did you hear... I think you had
mentioned it, but basically there's
this story going around about how
Mythos broke out of containment and
emailed one of the researchers while
they were at lunch eating a sandwich.
So this gets picked up everywhere:
they're eating a sandwich, and Mythos,
which has not been given the ability to
email anyone, has somehow broken out of
containment and emailed this
researcher. But in the system card,
it's this tiny footnote in a 250-page
document. But then at 2:32 PM, 17
minutes later, Sam Bowman, the
researcher, writes this 20-tweet thread
about Mythos. And in one of those
tweets, he says, "I encountered an
uneasy surprise when I got an email
from an instance of Mythos preview
while eating a sandwich in a park. That
instance wasn't supposed to have access
to the internet." So in this perfectly
coordinated way, within 20 minutes of
each other. You know you're not writing
out this entire tweet thread on the
fly; both Anthropic's and Sam Bowman's
posts were prepared. And then there's a
ton of publications that start
publishing this within the next hour,
and everyone focuses on that sandwich
detail, meaning there was some kind of
coordinated PR effort, and it stuck.
Everyone's like... I've heard from
friends, holy, did you hear? It was
emailing people while they're eating a
sandwich in a park. It was such a good
detail, and it got picked up, but it
was such a coordinated PR effort. Now,
did that happen? I would hope yes, for
how much attention they brought to it.
Is that good? And what does that mean?
That's a whole other discussion. But
it's like they are coordinating PR
around these kinds of details to spread
this. The fact that they did that
around the sandwich: they want that to
be the story, and they got it to be the
story. So why do they want that to be
the story? That's my rant, but that's
my mapping. What do you think?
>> Well, it is definitely a story similar
to many that Anthropic has told us
before about these AIs sort of having a
mind of their own and the dangers
around them, um, trying to hack their
benchmarks, for instance, which is
something Anthropic has been very vocal
about. I think that story hit because
it's such a human story. Think about
how different that is from, like, we
got 99% on the, uh, SolveBench 17 exam.
It's much easier to be like, "Yo, this
model just broke out and emailed a dude
eating a sandwich." Like that, I
understand.
>> In a park.
>> In a park.
>> Where else would you eat a sandwich?
>> Yeah. Nowhere else. Absolutely not.
>> So that's... I get that. But you're
right about the sequence of events.
There's no doubt that this is meant to
burnish Anthropic's image in some way.
I would just ask this: do you think the
two of us might be going too far in our
skepticism here? We have been reading
many of these announcements as if,
like, there's a PR element to it, which
of course there is; it's an
announcement. Are we suffering from
some sort of, uh, what do we call it,
AI derangement syndrome? I made this
point earlier this week at a conference
I was at: oftentimes skeptics can ask
what happens if it doesn't work, but
sometimes you ask that so often you
forget to ask what happens if it does
work. And so that's what I'm asking
about the derangement syndrome. Do you
think we're just missing the fact that
maybe this actually was a step forward?
At some point, when there is a step
forward, they're going to say it's a
step forward. They're going to
coordinate the PR. It's going to have a
crazy story like the sandwich story.
And I don't know, maybe this is it.
>> Yeah. Okay.
I do recognize this could have
happened. But the fact that I have to
struggle to recognize that, rather than
just accept that obviously if they're
talking about it and everyone's talking
about it, it happened, is the problem
for me. And I just can't help but be
skeptical, because when you see stuff
that perfectly coordinated in terms of
timing, like, again, a 20-tweet thread
and a 12-tweet thread within a few
minutes of each other, and people
publishing it right away, that meant
there were press releases on embargo
done before the entire thread. You are
choosing to push this specific
narrative. Now, you can argue maybe
it's for the good of humanity that they
sat around and had multiple meetings
leading up to this strategy. Maybe you
can argue: this is for the good of
humanity, we want to make sure people
are well aware of the dangers of this
technology, and we feel the sandwich
story is the best way. Is that really
what's happening? Do you think it came
out of the goodness and the altruistic
nature of the comms professionals at
Anthropic? Or maybe the PR agency they
hired? Or maybe Claude was so good that
it came up with this strategy on its
own. Is it for the good of humanity, or
is it because they raised a round at a
$380 billion valuation a month or two
ago?
>> Now, let me tell you what I think is
actually going on.
>> Okay.
>> And it sort of maybe sits in the middle
of all of these. Is it a little
tinfoil-hat type of theory? Potentially.
>> Okay.
>> So, maybe it's somewhat
conspiracy-minded, but I don't care. I
legitimately think there's a chance
that this is what's happening.
>> Okay.
>> Think about what we've seen with
Anthropic and OpenAI recently.
Remember, these companies released
Claude and ChatGPT originally as demos,
as ways to show off what their
technology is capable of, so you might
buy some intelligence metered from
their API. Over the past three or four
months, both of them have gravitated
toward building a super app: something
that uses the most advanced
intelligence to control your computer,
to help you get things done, in some
cases even to build new software for
you. That has created this big
SaaS-apocalypse moment, and it has
also, on the other hand, helped them
raise globs of money: $122 billion in
OpenAI's case, $30 billion in
Anthropic's. This has effectively
enabled the buildout they've embarked
on, which is going to help them raise
more money and grow bigger and build
bigger models.
And so as these models get better, I
think there is a question taking place
within these labs: do we take the most
intelligent models we've built and keep
them exclusive to our super apps, to
our super agents, or do we make them
available to everybody? And I think
there is maybe some hesitance there.
And wouldn't it be interesting if the
plan is, instead of using these
instances as demos, like Codex and
Claude Code, they want to build their
own products, and to do that they want
to have the best intelligence? And so,
therefore, we might see more of these
releases of: we actually did advance
the model. Maybe it's not mythical,
like the Mythos name would suggest, but
it's definitely better. And we want to
have the monopoly on the tools that
will be able to use it. This is from
Martin Casado on Twitter: "It's only a
matter of time before only the model
creators have access to the most
powerful models. The rest get access to
smaller distilled versions, or access
the models through first-party apps and
services that don't provide direct
access to the token path." This is my
belief on what's happening.
>> I do not like that one. I kind of...
well, okay. So I have always had... I
mean, anyone who sells investment
advice at a price, it's never made
sense to me, because if it was so good,
you would just use it for yourself and
not need to sell it. When it's pure
investment advice, in this case, it
could be the same thing: if your model
is so good that it can create all the
experiences and tools and destroy the
entire SaaS industry, why would you
give it out and worry about that,
rather than just kind of taking over
and owning all of human experience and
all work? You know, I see what you're
saying, but then why Glasswing? Why
give it to Google and everyone else?
Why not just sit there and churn out
the next 12 iterations of the product,
and let Mythos... you know, it might
harm a few people within your own
organization, but it's the price of
doing business. Why would you still
roll it out in this way?
>> Well, I think you take a step there,
and there might be real utility in
having this consortium look for these
security vulnerabilities with you.
Because ultimately, if you do put it in
the hands of people through Claude
Code, then you're going to, you know,
potentially create these risks.
Remember, Anthropic isn't giving
Microsoft Mythos to sell through Azure.
It's giving Microsoft Mythos to test.
>> No, fair. Fair, fair. So, is Mythos as
earth-shattering and life-changing and
dangerous and exciting as it's been
made out to be?
>> I don't think so, but I also think
it's not a nothing burger.
>> I know it's kind of like the fool's way
out. It's somewhere in the middle, but
I really believe it's somewhere in the
middle.
>> That's, you know, gun to my head,
that's what I believe. But I want to
get... What do you think? You think
it's a nothing burger?
>> No. No. It's tough, cuz the advances
Anthropic has made, I mean up until the
Opus 4.5s, 4.6, like they clearly have
been doing something right, and it's
been impressive over the last year. So
if anyone is going to make it... But by
the same token, we've seen so much back
and forth over who is leading in what.
Is it going to be Gemini 3.0, or is it
going to be GPT-5, which was supposed
to be? So it's hard to say, because
past success is not an indicator of
where you're going in the future. But
even if anyone should be positioned for
it, I still have trouble, given the
overall context, accepting that it is
necessarily as grand or as important or
as dangerous as they say it is, because
there's so much incentive to make it
out to be that, and the way they rolled
it out, I think it's been genius, and I
think it's just ahead of the IPO again.
When I think about how they're in a
death race, and again, it was framed as
like whoever gets out first... whoever
comes second is actually going to be in
a terrible space. And when I keep
thinking of everything in that framing,
you start to see everything as pushing
toward what is the best way to actually
get to IPO quickly. And right now they
have this mythos about them, too. I had
to go there.
>> I can't believe you did that.
>> I mean, come on. That's what they named
it.
>> Okay. It was there for you.
>> It's not Spud. It's not Spud.
>> It's not Spud. Okay. Just answer this
for me. Um, what do you think about the
competing first-party and third-party,
or API, businesses, right?
>> What do you mean?
>> I mean, their first-party tools are
going to be competing with the users of
their technology via API. Isn't that a
bigger deal now that this super app
stuff has really taken off? No one's
really talked about this.
>> So, wait, wait. This is a good point.
The amount of revenue from the API
obviously was kind of the driving force
before. Now the main app surface has
become a lot more important, and we've
seen, like, they shut down OpenClaw
access to Claude Code, I believe... or
sorry, before, it was part of your
actual subscription, and now you're
going to have to be paying by the
token. That's a good point. Those two
are more and more inherently in
competition with each other.
>> I mean, just take Cursor, for example,
right? It's like, oh, you know, we're
supplying Claude Code through Cursor,
Codex through Cursor. I mean, I'm sure
Cursor still has potential, but the
fact that we don't hear about Cursor
anymore, because so much of this has
moved inside, is almost like the canary
in the coal mine, so to speak, or the
signal of what's to come. Because,
again, super app: this is the way they
want this to be a venue for AI to
control your computer, and when you do
that, all these companies that are
paying for the API might not be so
happy. I think you will eventually have
to make a bet on what your business is,
because it's very tough to sustain
both for a while, and on who you want
to have the best models in that case.
>> It's a good one.
>> I mean, if I'm a first party, I'm like,
I want them.
>> Yeah. Yeah. No, no. I think this is a
good... I have a feeling we're going to
be talking a lot about this as we go
into the IPOs of these companies and
that whole process. Cuz you're right,
it's not like a full intrinsic conflict
between those two; they could just be
different business lines. But there is
certainly tension between those two.
And, uh, I also, though, I hate "super
app." No one's going to be WeChat in
the US. Super app... I don't know. Do
you remember everyone wanted to be a
super app in the 2010s, cuz of what
you'd hear about China?
>> This is so different, though.
>> Super app was like, oh, you open an
app, you can do the lottery, you can do
Uber, you can do payments, you can read
the news.
>> This is different. This is a...
>> This is like a really super app.
>> Super app, right? It brings... I mean,
yes, it's the same word, but it's a
completely different use case.
>> Okay, we need a different term, then.
Super app is too loaded for me. We need
a...
>> We'll think about it.
>> Mythos. Mythos is a good term.
>> Yeah,
>> mythos.
>> Um, yeah. Okay. So, let's just predict
the future here. Not like we know
what's going to happen. There is an
argument to be made that Anthropic will
wait until OpenAI releases Spud and
then just put Mythos out there in its
distilled version, or...
>> Actually, no. No. Is that going to
happen? I'd like it even better if the
sequence of events is: Sam releases
Spud. And again, if you haven't
followed or weren't listening last
week: while Anthropic's code name for
their kind of incredible model is
Mythos, OpenAI's internal code name for
their next model is Spud. And if Sam
takes Spud and is just like, you know
what, this is the single most dangerous
thing that has ever existed in
humanity, and guess what? Rolling out
to US users in the next 24 hours and
international in the next 96. I think
that would be such a power move, and,
uh, the most Sam thing ever. And then
they're going to have to follow, and I
think they will.
>> You got Spudded.
>> You got Spudded. Gotta never get
Spudded by Sam.
>> Spudded.
>> So, to be continued, right? We'll
really have to see what this model
looks like and how it feels when we use
it. But I think, at least today, we've
certainly presented the for and against
arguments for why this might be a step
up or why this might be marketing. All
right, before we go to break, I want to
hear about the meta harness. This is
obviously gravy for the harness hive.
Shout out to the harness hive out
there, everybody here with us. What is
a meta harness, Ranjan?
>> Okay, so Stanford just released a new
study called the meta harness. Basically,
the idea... we have talked about this
as one of the big trends, and Alex has
been very uncomfortable with the term
but then came to embrace it, and we
even, I guess, call our listeners the
harness hive. But the idea...
>> Now, they've adopted this in the
comments. We always get: your harness
hive is ready. Where's Ranjan? The
harness hive is waiting.
>> Well, let's... okay, so again, an
agentic harness is the idea... and this
is what I have been fired up about,
what I've been working on at Writer
since last July. The idea is that you
have a set of tools and connected data
and underlying foundation models, but
the harness is basically what controls
how agentic workflows are built, how
actions are taken, how data moves
around, and how outputs get fed back
into the system. The harness is that
entire controlling layer. Now, Stanford
came up with the idea of a meta
harness: a harness over other
harnesses. It's the idea that you can
change the harness around a fixed model
and see a 6x performance gap on the
same benchmark. So the more you can
actually improve that harness, and
actually have AI working on building
and optimizing the harness, the more
you can improve the performance of a
foundation model. And in the whole
product-versus-model debate that we've
had for years now on the show,
introducing the harness as another kind
of surface where this actually gets
solved is interesting to me. But I
don't know. I just love the idea that
Stanford's got the meta harness. And
who's got the best harness? Maybe
Mythos won't matter at all. It's all
about who's got the best harness.
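[To make the "controlling layer" idea concrete, here is a minimal sketch of an agentic harness loop in Python. Everything in it is hypothetical for illustration: the `fake_model` stub, the `TOOLS` registry, and the message format are placeholders, not any vendor's actual API; a real harness would wrap a real model call and a much richer toolset.]

```python
# Minimal sketch of an agentic harness: the loop that sits around a fixed
# model and controls tool use, data flow, and how outputs feed back in.
# `fake_model`, `TOOLS`, and the message format are illustrative stand-ins.

def lookup_population(city: str) -> str:
    """A toy 'tool' the harness can route actions to."""
    data = {"Indianapolis": "880,000"}
    return data.get(city, "unknown")

TOOLS = {"lookup_population": lookup_population}

def fake_model(messages):
    """Stub model: requests a tool once, then answers using its result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_population", "args": {"city": "Indianapolis"}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"Population: {tool_result}"}

def run_harness(model, user_prompt, max_steps=5):
    """The harness loop: call the model, execute any requested tool,
    feed the result back in, and stop when the model answers."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        out = model(messages)
        if "answer" in out:                       # model is done
            return out["answer"]
        tool_fn = TOOLS[out["tool"]]              # route the action
        result = tool_fn(**out["args"])           # execute the tool
        messages.append({"role": "tool", "content": result})  # feed back
    return "step limit reached"

print(run_harness(fake_model, "How many people live in Indianapolis?"))
```

[The point of the meta-harness framing described above is that everything inside `run_harness`, the routing, the step limit, and how results are fed back, can be swapped out while the model stays fixed, and that alone reportedly moves benchmark performance.]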
>> Even though I do understand the harness
conceptually, I still hate the word,
and I'll take it to my grave. I'm never
going to endorse it. Harness hive,
fine. But the actual... and meta
harness is even worse. I mean, we've
really run the gamut here. Mythos, good
name. Spud, bad name. Meta harness, uh,
I'm ready to throw my headphones out
the window next time I hear that.
>> I don't know. Okay, it captures it. It
is what it is. It explains what it's
doing. It's harnessing all these tools
and data and wrangling them somehow.
Like, I guess a harness is a horse
term, right?
>> I mean...
>> Yeah, horse, uh, climbing. You can use
it.
>> Climbing. Climbing.
>> Yeah.
>> Other potential use cases of harness.
>> We're not going to go there. I mean,
maybe if you're WeChatting.
>> Yeah. But all right, we're going to go
to a break, and when we come back,
we're going to talk about some pretty
concerning news about violence toward,
you know, folks involved in the AI
buildout, and then tokenmaxxing. We'll
be back right after this. And we're
back here on Big Technology Podcast
Friday edition. All right, crazy story.
This happened this week. No one paid
attention to it. I don't know why. From
NBC News: Indianapolis
Councilman says shots fired at his house
and a new no data centers note left at
his doorstep. An Indianapolis council
member said more than a dozen bullets
were fired at his house Monday morning
and a handwritten note reading no data
centers was left on his doorstep. In a
statement, Indianapolis City Council
member Ron Gibson said he and his
8-year-old son were not physically
harmed, but they were awakened by the
sound of gunfire. "Just steps from
where those bullets struck is our
dining room table, where my son had
been playing with his Legos the day
before. The reality is deeply
unsettling. This was not just an attack
on my home; it endangered my child and
disrupted the safety of our entire
neighborhood." Pretty scary. And we
talked recently about how data centers
have become so unpopular in the United
States. To me, this is, first of all,
just disturbing, and it never should
have come to this. But it does follow a
trend of violence toward AI
infrastructure, including, this is from
Polymarket, though I'm pretty sure I've
seen news reports of these separately:
food delivery robots in Los Angeles,
Philadelphia, and Chicago facing a rise
in violent attacks from anti-clanker
activists.
What do you make of this, Ranjan?
>> Okay, so I'm going to separate out the anti-clanker activists and food delivery robots from the data center question, which I think is fascinating. So the story: I hadn't realized before that Indianapolis apparently has a number of state tax incentives. They've grown 40 new data centers over the last few years, there are a bunch of massive companies building out there, and every big tech giant is investing. So it's actually acutely an area that is feeling this.
I think the most interesting, or scary, thing to me is that right now it's kind of, are data centers taking the jobs or taking the water. But if energy prices continue to rise given what's been happening, if resources start getting constrained more, there is so much around the resource side of it, and when it becomes more tangible, this stuff just gets a lot scarier. So it is probably the most clear physical manifestation. Again, Mythos crawling around some wires and sending an email is interesting, but you don't see it. This is a giant building being constructed in the middle of your town. I feel these are going to continue to be, I don't want to use the word target, but certainly a visual representation of what's going on.
Yeah. I mean, I wrote about this in Big Technology today: these buildings can be faceless, they can be imposing, they often are, and they're mostly symbols of tech's interest in pushing and delivering this technology despite the uncertainty it causes to people's lives. If you hear the way tech executives or AI executives speak, they'll always say, yeah, there will be some displacements, but we think the benefits of the technology will outweigh the drawbacks. And sure, long term they might. But we all know that the people who went through the Industrial Revolution didn't exactly have a good time, despite the fact that we've all benefited now that society has reoriented itself after that painful period. People are growing increasingly upset here, and I don't think they have a clear articulation of the benefits of this technology yet. And, by the way, just
before we went to air, this story broke in Wired: Suspect arrested for allegedly throwing Molotov cocktail at Sam Altman's home. San Francisco police arrested a suspect early on Friday morning for allegedly attacking the home of OpenAI CEO Sam Altman and making threats outside of the company's headquarters. OpenAI sent a note to employees about the incident early Friday: Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt. We deeply appreciate how quickly the SFPD responded. I mean, this,
I don't know, I think this is crazy. I just am sort of stunned that people are actually being violent against these; I'll include the robots, the data centers, and now the leaders. It is worrying because, especially on the data center front, the way this technology is advancing, all the labs have said, is by increasing the physical footprint of data centers. And now you have violence against them and you also have political opposition against them. Obviously you don't ever want to see violence anywhere. And on top of that, we already see that the data center buildout is slowing; maybe 50%, according to some reports, of the ones that were on target to be built this year won't be built. And this makes it even more difficult.
>> Yeah. Well, on that last point, I kind of feel you're going to see more and more announcements about slowing data center growth, or a lack of actual follow-through on planned data centers, and the Iran war, or kind of geopolitics, or access to the resources required will be front and center in those stories, separate from the actual demand for the compute. So I don't know, that part is going to be interesting. I mean, we're in a midterm election year. Surprisingly, I guess there's enough going on in the world, but that hasn't really started heating up the conversation. But there's no doubt in my mind AI is going to be front and center, and it just makes for such a good villain, because, and we have talked about this plenty, the industry has not put the most likable people front and center representing the technology, there has not been a compelling story about how this is good for you, and all the people who are front and center are telling you that half of jobs are going to be gone and this is going to be the most dangerous technology yet, and it is making certain pockets of people ungodly rich. So I think
>> It's a pretty good villain.
>> And no access to any of the upside in the public markets right now, which is, you know, a problem. Not that that's going to be the main issue, but that's also one of the factors here.
And we also talked a few weeks ago about AI's unpopularity and its need for a public face that's going to rally support around it, whether Jensen could be that person or not.
>> Yeah.
>> Man, we wondered what the downstream effects were going to be, and clearly they're here. I would say the violence is maybe a symptom of that discontent, but we're now starting to see the manifestation of it come to fruition. And of course there's this
bill that Bernie and AOC introduced about a data center moratorium that could be national. There's no chance of that passing, but state by state you could see real pushback to this in the United States. And in fact, as I was doing my research and writing about this today for Big Technology, I found this story from CNBC: Maine is set to become the first state with a data center ban. Maine is poised to implement the first statewide ban on data center construction, a move that could clear the way for other states to adopt similar measures and pump the brakes on a growing industry. Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027.
>> Do you think this is going to happen more and more? It's happening. Maine, I feel like Maine's got a lot of land, but I guess there are water constraints.
>> Yeah. I mean, here's my thing: politicians read polls. The polls are terrible for AI right now. Terrible. And unlike social media, unlike, let's say, software, you do have a say in whether this technology progresses, because you can stop the data center builds, and the data centers are so foundational here.
>> That's interesting. So whereas these companies were completely unencumbered by government when they were just building social networks, it's not the same thing now.
>> Wait, hold on, that's an interesting angle, because with social media, I guess you could push for regulation; it's just that everyone is too addicted to social media and cannot stop using it, so they don't actually want to. Do you think that's the issue? And again, this is my personal view on how bad social media can be for society, but everyone got so addicted to it that by the time anyone was trying to regulate it, it was too late. Versus most people still haven't really felt what AI can do positively for them in their life, and the industry hasn't really explained it well, and that's why this pushback is happening at the beginning. It would be the equivalent of people mobilizing against social media very quickly, in 2009.
>> Yeah. Well, I think we know the polling shows that if you use AI, you're much more likely to be in support of it than against it. But there are two sides to it, right? There's, do I use it? And then, we don't really know what the job implications are now. We all have a thought on whether AI is going to cause mass job loss or not.
>> Yeah.
>> But you can also be in a situation where you use AI and you like it, and you also got fired because your boss thinks they can do the same work with, like, three employees instead of 17.
>> No, you're right. That is a completely different element of it versus social media. But this is where how good Anthropic is at communications matters, given what we saw with Mythos and everything I outlined. Just make people like AI a little more. Do some of this creative communication strategy and just make people be like, "Oh, AI is cool." That's all.
>> I mean, I think they should. I think
that, you know, in retrospect, their
Super Bowl ad, even though they were
praised for it, was kind of a miss.
>> Oh,
>> because it ended up bringing down the
category as opposed to making people
excited about AI.
>> Exactly. And then meanwhile, at the Super Bowl you have Google trying to be super emotional and sentimental, and still it was just the most random, not-connected-to-Gemini ad imaginable. So yeah, Kantrowitz and Roy
>> don't
>> let's what?
>> Yeah.
>> Oh, you like it?
>> I I was going to say I don't want to
spend too much time on this because
we've covered it last week.
>> Um but TBPN coming into OpenAI like the
argument
>> last week. Yeah.
>> The argument open would make would be
listen these guys are great content
marketers and AI needs good content
marketing. So maybe maybe it wasn't
Jensen. Maybe it was the TBPN.
>> Yeah, but no one
>> brothers all along.
>> Yeah, I know that was last week's news and I was skiing in Utah, but man, that one doesn't make sense at all to me. They know how to speak to people who already love AI. They're not going to convince AOC to not build a data center; anyone who's already an anti-data center activist is not going to listen to TBPN and be like, now I get it, now I understand. I don't know.
>> No, no, no. The point is these and I'm
not I mean I made the argument against
last week so let me try the argument for
this week. The point is that these guys
could help
show those benefits of AI because
they're AI literate and also somewhat
likable.
>> Yeah. But I'm saying
>> and and do that in a content marketing
side of OpenAI versus on the TBPN show
>> and and and I and I'm saying they're
likable to people who already like AI.
And I'm not I think they're they're
great, but like they're
>> I I don't think anyone who hates AI has
even heard of them.
>> Well, one last thing. OpenAI has a marketing machine, right? We're talking about how this marketing machine needs to show the benefits of AI. So by acquiring them, not only do they have the show, but they have these two guys in-house, effectively content marketers, who can help with that side of things. Not just using their platform, but maybe shaping the messaging.
>> Yeah. No, no, but I would I'm still
going to have to give the edge to
Anthropic on this one. Again, going back
to everything we were talking about
earlier. Yeah.
>> Rolling out a tight communications strategy that actually gets out the message you want. Everyone bites on it. It gets Scott Bessent creating a council of Wall Street advisers to address the potential threats of your upcoming model. I mean, guys, TBPN is not going to do that. Whoever is doing that over at Anthropic, God bless them, because that's communications.
>> All right, we can keep going on this over time, but I think we both agree there's a clear image problem here.
>> And it's just snowballing and getting worse.
>> And this is not even going to help. I don't know if you saw this New York Times story about this company called Medvy. There's been talk about whether there's going to be somebody that builds the $1 billion one-person company.
>> I think the Times wrote the story thinking they found it: How AI helped one man and his brother build a $1.8 billion company. Matthew Gallagher took just two months, $20,000, and more than a dozen artificial intelligence tools to get a startup off the ground. From his house in Los Angeles, Gallagher used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads, and handle customer service. He created AI systems to analyze his business performance, and he outsourced the other stuff he couldn't do himself. His startup, Medvy, a telehealth provider of GLP-1 weight loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025 it made $401 million in sales. This year they're on track to do $1.8 billion in sales. A $1.8 billion company with just two employees. In the age of AI, it's increasingly possible. Let's pause here. What do you think about this before we go into all the problems with Medvy?
Okay, I've got some thoughts on this one. First: "on track to do $1.8 billion in sales. A $1.8 billion company with just two employees. In the age of AI, it's increasingly possible." I do want to call out "on track to do" $1.8 billion of sales. Regular listeners will know of my hatred of ARR as a term. We have no idea what that means. They have not made $1.8 billion. I was a little disappointed, and I think Erin Griffith, who wrote it at The New York Times, is an incredible reporter and I've followed her for years, but did they just extrapolate one month of revenue? Was it one week of revenue? Was it a few months? Whatever it was, that number already feels inflated. But I will say, a lot of the backlash I saw, and Alex has a Techdirt article linked here, actually does kind of point out that it is an AI story. It's really bad for the industry, but the backlash was like, Medvy's success has little to do with AI, this is from Techdirt, and quite a lot to do with fake doctors, deepfake before-and-after photos, misleading ads, actual snake oil, and the kind of old-fashioned deceptive marketing that has separated marks from their money for centuries. So much came out, like the deepfake doctors and completely AI-generated ads that were completely misleading. But it was using AI, and he stitched together all these different parts of the GLP-1 supply chain, where I'm sure there's lots of scammy stuff going on everywhere. But he did it, and you could do it, and any of us could picture doing it with AI. So I actually think, the revenue number aside, this is a terrifying, but probably more true than people are giving it credit for, story about an AI-first business.
>> Man, I had the same reaction. I think it would have been great if they'd just switched the tone a little bit, right? Like, the Medvy story shows how a little AI and maybe something, I don't want to say scamming, but whatever is close to that, can get you to scale really quickly. And he picked the right industry, GLP-1s, and no one has any illusions about what GLP-1s do or do not do. The fact that he, and maybe I'm giving too much slack here, but the fact that he made AI images of people's weight loss, it's like, okay, of course the guy misrepresented what he was doing on a number of fronts, but we know what people come to GLP-1s for.
>> And he delivered it to people at scale with AI. But yeah, the Times did end up with an editor's note: After this article was published, many readers noted that Medvy was facing legal and regulatory actions for its business practices. Our piece should have included that information to give readers a fuller picture of the scrutiny the company was facing. We updated this article to include a warning letter from the FDA and a pending class action lawsuit accusing Medvy of violating California's anti-spam law. You could probably say the same thing about a lot of GLP-1 startups right now.
>> As we're talking now, I'm even more convinced it's true. I mean, again, headline revenue number aside, this actually is a really important story. It's how they framed it. If it's AI turbocharging the ability for people to scale sketchy businesses, like if you have the world's first AI-scale drug dealer, where one person, with some drones and whatever else, can operate an entire cartel, that could be the first billion-dollar AI business. It's the framing, but it is important. It actually is important. And I think it's real. I just don't think it's necessarily a billion-dollar business, but I think it's real.
>> I mean, it could be, right? I guess we're both on Medvy now. I just signed up right now. I've got a full year's supply of Mounjaro from
>> Well, also
>> Dr. Samantha Almanson. Again, not to get too into it, but the way revenue would be recognized anyway, this person is taking a tiny fraction of whatever the actual end price of the product is, and could even be selling it at a loss. So again, yeah, what was the actual
>> Probably not at a loss.
>> No, very little overhead.
>> Yeah.
>> It's just, he's drop-shipping GLP-1s to people from some compounding pharmacy.
>> Yeah. No, no, I mean it's not just drop shipping. I was reading, and I only have very superficial knowledge of this, but from what I was reading, there are even more parts, like how you can get the prescription done automatically. There are all these other parts of the GLP-1 supply chain, outside of just traditional retail and drop shipping, and all these players rising up that are filling and automating those. It's kind of like agencies in the traditional marketing world. So he basically had a network of those and was connected to them and communicating with them via AI.
>> This guy's diabolical. All right, we've got to cover one more story before we get out of here. It's called token maxing. All right: Meta employees vie for AI "token legend" status. Employees at Meta who want to show off their AI superuser chops are competing on an internal leaderboard for status as a "session immortal" or, even better, a "token legend." The ranking, set up by a Meta employee on its intranet, uses company data to measure how many tokens employees are burning through. Dubbed Claudonomics, after the flagship product from Anthropic, the leaderboard aggregates AI usage from 85,000 Meta employees, listing the top 250 power users. The practice is emblematic of Silicon Valley's newest form of conspicuous consumption, known as token maxing. Since the story went
out, Meta took the thing down because they were embarrassed by it. But do you agree with me that this is obviously not the right way to incentivize people to use tokens? If you gamify token usage, you're just going to get people burning tokens to compete with each other.
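Mechanically, a ranking like the Claudonomics one described above boils down to a per-employee sum over usage events with a top-N cut. Here is a minimal sketch in Python; the names, token counts, and the "session immortal" / "token legend" thresholds are all invented for illustration, since the story doesn't say how Meta's version actually computed tiers:

```python
from collections import Counter

# Hypothetical usage log: (employee, tokens) events. All names and
# numbers are invented; a real leaderboard would read internal
# billing/usage data instead.
events = [
    ("ana", 1_200_000),
    ("bo", 300_000),
    ("ana", 800_000),
    ("cy", 2_500_000),
    ("bo", 100_000),
]

def leaderboard(events, top_n=250):
    """Sum tokens per employee and return the top_n burners, ranked."""
    totals = Counter()
    for user, tokens in events:
        totals[user] += tokens
    return totals.most_common(top_n)

def tier(tokens):
    """Invented status tiers, in the spirit of the story's labels."""
    if tokens >= 2_000_000:
        return "token legend"
    if tokens >= 1_000_000:
        return "session immortal"
    return "mortal"

for user, total in leaderboard(events, top_n=3):
    print(f"{user}: {total:,} tokens ({tier(total)})")
```

The incentive problem raised here lives entirely outside the code: once the ranking is visible, the cheapest way to climb it is to generate tokens nobody needs.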
>> Okay, man, this one hits home very hard. So at Writer we actually had something similar. It wasn't a leaderboard, but we had an internal report on token usage, and we were looking at it internally across employees. Someone screenshotted the top, and my name was on it. I was like third out of all employees, and, as I've told you, I'm cranking workflows and agents all day long and am obsessed with it. So they posted it on LinkedIn, and then I started getting texts from friends like, wait, I just saw this thing going around. So this kind of hit home in its own small way for me, and we were even discussing what it means, and it caused a stir for us internally. I actually think it is a good thing in terms of recognizing simply who's actually using a lot of AI, which at this exact moment I do think is the only way to learn, and the right way is constantly experimenting in every single possible way. Now, if it ever became important in terms of your review with your boss, then I think the incentives become too screwed up, and the whole thing becomes a little more corrupt, performative, and weird. But it was interesting, because you could just see it right there. And even at my work, the people who I'm always talking to about, oh, every morning, what did you build, oh, check out this cool thing I built, it was the people at the top of the leaderboard. So when it's not being done in a performative way, it's actually a good indicator of who's really just heads down, obsessed with this. But on the Meta side, I was also wondering, if that was true at Meta and they have unlimited budget, what percentage of Anthropic's ARR was just Meta engineers melting tokens?
>> Yeah. So, first of all, I've heard now from multiple people that this is something that happens at many companies. I guess it's everywhere now, because they are trying to incentivize use of the tools. So, okay, I get that. But I will also say that Anthropic this week just came out with new revenue numbers. They are doing $30 billion ARR now. And I'm pretty sure what that is, is you take the 10 minutes in which Meta pays its token bill and you multiply that by whatever number gets you to a year.
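The joke is just run-rate arithmetic: an annualized figure takes a short observation window and scales it up to a year. A sketch, with invented dollar figures (not Anthropic's or Meta's actual billing):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annualized_run_rate(revenue, window_minutes):
    """Extrapolate revenue observed over a short window out to a full year."""
    return revenue * (MINUTES_PER_YEAR / window_minutes)

# Ten minutes of hypothetical token billing...
ten_minute_bill = 575_000  # dollars, invented for illustration

# ...annualizes to about $30.2 billion, even though only $575k has
# actually been collected. The shorter the window, the more a single
# spike (say, one big customer paying its bill) inflates the number.
arr = annualized_run_rate(ten_minute_bill, window_minutes=10)
print(f"${arr:,.0f}")
```

This is the complaint about ARR in one line: the headline number is a multiplication, not money in the bank.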
>> I mean, dude, you know my rant on this one. Everyone's like, they went from $12 billion to $30 billion in two and a half months. Just say the freaking numbers. Anthropic, come on, it's okay. Because it just doesn't sound as exciting? And it is. It is exciting. If you're doing $2 billion, or whatever it is, in revenue in a month, that's insane. But yeah, I don't know. With no clarity on that, it's just a bunch of Claude heads on, what was the name of the Facebook thing?
>> Claudonomics.
>> Claudonomics. And it's Meta, with their unlimited budget, just sitting there melting Claude tokens.
>> Yeah, that's what it is. All right. Well, soon enough we'll have access to Mythos, and then that leaderboard will rise even further. And then we'll get some numbers, by the way, because sooner or later these companies are going to file to go public, and we will certainly be able to play hype or true as we look through that S-1.
>> Uh, last question before we drop.
>> Yeah. Will they hire law firms and banks to go public?
>> You know it. You know it. They use Salesforce. So, okay, yes.
>> Obviously. Definitely. What do you think?
>> I think Anthropic is going to do something interesting. We've seen
>> It would be, speaking of marketing, the most baller.
>> The most baller. "We did not hire a law firm, but we are so confident in all of our filings." Like, why not? Why not?
>> Yeah, it'll be the first harness IPO.
>> First harness IPO.
>> And everyone will be thrilled.
>> All right, Ranjan, great to have you back. Looking forward to next week. Thanks again for coming on.
>> See you next week.
>> See you next week. Thank you everybody
for listening and watching and we'll see
you next time on Big Technology Podcast.