Are AI's Economics Unsustainable? — With Ed Zitron

Channel: Alex Kantrowitz

Published at: 2025-07-23

YouTube video id: UZEn-s9mllI

Source: https://www.youtube.com/watch?v=UZEn-s9mllI

Does the AI business have what it takes
to survive? Our guest today says no.
That's coming up right after this.
Welcome to Big Technology Podcast, a
show for coolheaded and nuanced
conversation of the tech world and
beyond. We're joined today by Ed Zitron.
He's the owner of EZPR, the host of Better Offline, and the author of the Where's Your Ed At newsletter. He's here to
speak with us about his criticism of the
AI business and why it may all soon
collapse. Ed, great to see you. welcome
to the show.
Great to see you. Thank you for having
me.
Okay, so we've had different varieties of critics on the show. We've had people who've said it's poisoning society. We've had people like Gary Marcus who've said that the progress is gone. We've had, in various iterations, folks who've talked about how it can be used by bad actors to do things like enhance viruses. Soon we'll have someone who's going to come on to talk about escape risk. But you are in a different category. You really think that the business of OpenAI and the AI industry is unsustainable.
This is something we talk about a lot on
the show. I'm very familiar with your
work and it's great to have you here uh
to discuss it.
Yeah, it's just all very silly when you look at it. Right now, the most important company in AI is OpenAI. They will burn probably $12-13 billion after revenue this year; that's based on projections. They also have no path to profitability. They don't have one. The Information has reported a few times that, like, 2029 or 2030, they're going to magically become profitable due to Stargate. Now, how will that happen? Nobody actually knows.
And OpenAI will not tell us, because OpenAI doesn't really discuss their revenues other than in really vague ways. They go, like, we have 3 million business users. What's that about? But when you look at the underlying finances, it's genuinely insane. And it's more insane outside of OpenAI. The Information also reported that Microsoft will only make about $13 billion, not profit, just revenue, on AI this year. $10 billion of that is OpenAI's Azure cloud spend. $3 billion is them selling Copilot. That's an insanely small amount, man. $3 billion is not a lot of money in the grand scheme of things. They do like $119 billion in profit a year, and this is on, like, $50-70 billion of capital expenditures. These numbers are
terrible. There's an analyst quoted by
Laura Bratton at Yahoo Finance who said
that he only thinks that Amazon is going
to make $5 billion in revenue again not
profit this year on AI. They are
spending $105 billion in capital
expenditures. This is an insane
situation. And the fact that I am ever framed as radical, or like a pessimist, when I'm just doing very basic mathematics, is kind of strange. I think it says a lot about the media in general, but also the tech industry in general. And people will
say, "Oh, well Uber lost a bunch of
money." Give it the [ __ ] up on that one. Uber lost a ton of money in COVID, 2020. They lost like $6.2 billion. I think their worst year on record was like an $8 billion loss. But they had a product, and then they used it to [ __ ] over labor forces. They used it to drag those numbers down. But nevertheless, it's a fundamentally different business, and also not as big a company. Uber is not the face of, the savior of, the tech industry, because that's what generative AI needs to be now. It needs to be bigger than the smartphone market, which is about $450-500 billion a year; bigger than the enterprise software market, about $250 billion. The current combined revenue of all the generative AI companies, and that's including the big tech companies, is about $35-40 billion. It's insane, man. It's insane. And eventually this has to stop. The growth is not there.
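To make the scale gap concrete, here is a quick sketch using only the approximate figures quoted in this conversation (midpoints of the stated ranges; illustrative, not audited numbers):

```python
# Rough scale comparison using the approximate figures quoted in the
# conversation (midpoints of the stated ranges; not audited numbers).
genai_revenue_b = 37.5    # combined generative AI revenue, ~$35-40B
smartphone_b = 475.0      # smartphone market, ~$450-500B per year
enterprise_sw_b = 250.0   # enterprise software market, ~$250B per year

# Share of each market that current generative AI revenue represents.
print(f"vs smartphones: {genai_revenue_b / smartphone_b:.1%}")             # vs smartphones: 7.9%
print(f"vs enterprise software: {genai_revenue_b / enterprise_sw_b:.1%}")  # vs enterprise software: 15.0%
```

In other words, on the guest's own numbers, the whole industry's revenue is under a tenth of the market it is being asked to out-grow.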
All right. So, we're just going to talk
through your arguments on this show and
I think that I will pressure test them
and we'll just go through some of the
objections.
Absolutely.
And, like we do, you know, I don't think listeners need to agree with everything that Ed has to say. But I won't call you a radical. I'm going to give you a fair hearing today, and we're going to go through some of these.
It's strange, though. And I know that you're not characterizing it in this way necessarily, but the fact that the guy who is like, "Hey, this is losing billions of dollars and not making that much," I am the one getting the hearing. I know that that's not meant to be a negative characterization. But that my pointing at numbers that are out there, that are ludicrous, is strange and must be tested, versus things like, "Oh, we'll have AGI in 2 or 3 years," in the New York Times. It's obscene.
Well, look, we we test all of these
things. I know you do, but it's just in
general it's very strange.
Okay, so maybe we'll talk a little bit about the general vibes around AI later on, but let's just get right into where the value is here. So, if this is going to all fall apart, it means that what's happened in AI has to be, I think by definition, not valueless, but sort of, there's a cap to however good it can
get. You said uh in one of your shows
that AI today is a $50 billion industry
masquerading as a trillion dollar
solution from a tech industry that's
lost the plot.
Yes.
So let me just throw this out to you. I
mean
it seems clear to me that AI will be
useful for search. You yourself have
talked about how search is not a good
product.
I don't agree with that fully, but I know. Keep going.
But let's say it's half as good as
Google. Google is at a $2.14 trillion
market cap.
Okay.
So, let's say it just gets half there. Then it's already a sizable business.
You're describing search as a product
and search as a business. The largest
and most successful search business is
Google.
Google makes, what, like over $100 billion on this a year. How do they do it? Well,
it's simple. They own the search engine.
They own the infrastructure. They own
the advertiser. Both the platform that
sells the ads and the platform that buys
the ads. These things are being mangled
by antitrust. You ever notice how
there's no other real competition? I
think Bing makes like a billion, two
billion a year.
Well, it's interesting because even in
the antitrust hearings, because they're
talking about now whether Google will be
able to even pay Apple the 20 billion
plus a year.
Yes.
One of the interesting details uh that
has been overlooked in those hearings is
that nobody, not Microsoft, not
Perplexity or whoever it may be uh can
make money off of search in the way
Google can.
Mhm. And that to me suggests that, fundamentally...
There's a fundamental weakness in the business. To grow to the size, like, perhaps the search market is actually a bit smaller, I don't know how much smaller. But you're describing two things, which is search as a product. And I do fully believe that if Google had tried to meaningfully innovate in search, other than in ways to make money and ways to screw consumers, OpenAI would have been nowhere near as big, because most people do use it. Because the big thing is
that open AI and generative AI, large
language models, are really good at
inferring meaning from a statement.
Really good is probably a push, but you
can give it a vague question like, "Oh
crap, what was that 1971 movie with like
some gangsters in it?" And it
will have a much better time inferring
the meaning than anything Google search
has done in a while. That's a big
reason. On top of that, people want
answers. And Google has been hesitant,
if not entirely resistant to giving
answers until chat GPT popped up and
they went, crap, we've got to make a
really shitty version of this. And it's
still a shittier version of what OpenAI
does with search, which I think is a
shitty job unto itself because any
search result that could be hallucinated
is a dodgy one. And I think also Google
has just given up any responsibility to
their products and any to any of their
customers. I don't think people realize
how much Google has had to do to make
search that big a business. Huge
advertising and I mean they bought was
it double click was it double click way
back when? Like they bought the rails
for this a long time ago and to your
point no one else has been able to copy
it despite there being multiple other
companies that could other than Meta who
has created a competing advertising
product. But that's what search has
become. Search as a product is very
different to search as a business. All told, OpenAI would have to build such significant sales teams, ad tech... they would have to be a very different company, because selling advertisements is very different to selling consumer ChatGPT subscriptions, or enterprise, I guess. But even then, The Information reported recently that that's not going so well either. So we're in this weird
situation where, yeah, you could say OpenAI, SearchGPT, could be this. What happened to that branding, by the way? Remember when SearchGPT was what it was going to be called? Now it's just ChatGPT.
The branding fell away because you just search within ChatGPT.
I know, but it's like, they make this big thing where they're going to compete with Google, but it's like, what are you actually... And so, sure, they could make a... they've already made a competitor to Google. I think a lot of their success
has come from the fact that you can't
search on Google search as well. Google
search does not understand what you're
asking it. ChatGPT often does, kind of, sort of.
I think it does a great job with search
in the in certain use cases. Yeah, it's
it's replacing Google for me.
Yeah, and it has for many other people.
But that's the thing that just means
that Google search is bad. It doesn't
necessarily mean ChatGPT is good. And one of the inherent strengths of large language models is inferring
meaning from what you're asking it. But
making that into a search size business
is an entirely different thing and will
cost them tens of billions of dollars.
Like it's not something where even if
they could do it, humoring the idea, I
don't think they will. They would have to spend tens of billions. Like, Google owns thousands of miles of underground cable. They have content delivery systems all across the world. OpenAI doesn't own a damn thing of their own infrastructure. And this is the craziest thing that got reported recently: the Stargate entity does not exist.
Talk more about that.
It doesn't exist. They haven't formed it yet. Oracle said it on their earnings call; it has not been formed yet. So Oracle is allegedly, though Elon Musk claims this wasn't true, you know, the classic truth guy, the guy who never lies, but he said that this isn't true, Oracle is apparently buying, allegedly buying, $40 billion worth of GPUs to put in the Abilene, Texas site for Stargate. The first, I think it's 8 to 11 buildings, I forget. Now, who owns those buildings? Who knows? I think it's Crusoe. Crusoe just had to raise a $750 million credit line as well to build it.
They're data center builders.
Yeah. And also, they've never done this before. They've never done HPC, so, high-performance computing, before. And when you look at the details, it's like, "Oh, this is bad." Oracle has agreed to pay Crusoe, I think, a billion dollars for 15 years. Like, they have contracted Crusoe to do the work. OpenAI, according to The Information, hasn't even signed a contract for the compute in Abilene.
OpenAI has done a great job of getting
other people to do the work for them.
But if you think building a giant data
center is hard, try building all of the
ones you will need to make a modern
search engine. Perhaps there are
efficiency gains. Perhaps there are ways of doing it differently. Who knows?
But it's not something where they can
just go kadunk dunk and now we're a
search company as well. It's not that
easy. And Sam Altman would love people to believe that. Notice he's not really talked about competing with search recently, though. Not really heard much of that. A few months ago, he had that
story about ads within chat GPT as well.
Haven't heard any stories about the
revenue from that either. That's the
thing. Generally when companies are
doing well they tell you and they boast
or they leak it surreptitiously in a
very obvious way. None of the leaks
coming out appear to be positive.
Now, why do you think, so, let's just go to this core issue, why do you think that generative AI can't be a good replacement for search?
Because right now, the unreliability of search, Google search right now, was already a problem. I think that the core technology of large
language models could really help with
inferring meaning and such. I think it
could at some point be useful in that
way. The problem is it's like replacing
a bad thing with a slightly less bad
thing. It's like I guess you could do
that, but I think that it's pretty evident that, because nobody else has done it, including OpenAI with ChatGPT, you can't really replace the
business of search. But we are getting
mangled up in the technology because
yeah, I think large language models are
really useful for the intake of
information. I don't know about the
presentation of information out the
other end. I don't think that they're
great for research. I don't think that
they're great. I've used it for search
and been like, "This is a pain in the
ass. This is not what I want. There's
too much crap here. I have to sift
through it. I can't trust any of this."
So perhaps there are consumers who are just like, "I can trust this. Bingo bango, I'm done." Fine. But that's replacing mediocre with, [ __ ], with piss, I don't know what you call it. So, it's kind of like, AI Overviews is kind of doing that. It's just such a mess, because you can hear me kind of hesitating over the details, because it's like, can you replace it as a product? Yes. Is it going to be good at it? No. You like it, as
other people do because it understands
what you're asking it way better than
Google search does. How has Google not
copied that as well? That's the other
thing.
They are in the process of copying it with AI Mode.
AI Overviews are so crap, and AI Mode's...
I'm not saying that they're better. I would say it is better. I think that the real argument around Google is the models perform quite well on the leaderboards, but you don't see that proficiency when it comes to actually building it into the products.
I don't think they even do it the same way, because you ask a question to ChatGPT, it generates a result. With Google, it all feels just like a disinterested uncle who's reading the newspaper with a kid knocking his knee. It's like, "What do you want? I think it's [ __ ] this. Here's a bunch of... leave me alone." Because it's like, AI Overviews do not do the same thing as how ChatGPT handles search. ChatGPT spits out an answer, for better or for worse. AI Overviews goes, "All right, here's, I think, the answer, with some links. I don't know if they're good. Here's some other links. What do you do here? I don't know. I'm just here to show you ads." And that's why it's so hard to use. Goodbye, or, hello, please stay on the page, I need money. It's just really weird. It's so weird.
It's just so strange that you've got
these companies with trillion dollar
market capitalizations who run services
that look like a dog's dinner. It's just
insane to me and it's everywhere.
Did you Did you see the thing on Threads
today?
Talk about it.
So, on Threads, there was, um, I don't exactly know what happened, but everyone's messages were coming up as the same thing. So you had a bunch of accounts saying, like, "I don't know what's [ __ ] going on," the same thing again and again and again. Threads is terrible.
I agree with you on that front.
It's just... but that's the thing. I think the reason ChatGPT has been able to make any meaningful progress against search is not because of the proficiency of OpenAI; pretty good UX, works clean, pretty snappy. It's because everyone else has given up.
But isn't this how it's supposed to work? I mean, isn't it... no, let me talk it through. Like, isn't it supposed to be that some company gets the lead in something, then a challenger comes through, builds something slightly better, and then puts everybody on notice that if you don't improve, you're going to lose the lead?
Which is funny, though, because you're right, that's how it's worked. I think that that era ended like 10, 15 years ago. We've not seen that kind of competition, and OpenAI is actually a great example, because to compete with big tech, you need big tech to support you. So, to my mind, OpenAI is a Microsoft subsidiary. That's what everyone needs to just accept right now, and what's happening in the news, which I imagine we'll talk about in a bit. So, there is no competition. There is an agreed-upon substance that they all agreed to sniff, and then they all sniff the substance and make money selling it. They all agree that AI is the thing they're doing now. So they're all going to compete in the same kind of soft, punchy way. You've got Amazon and Google backing Anthropic. You've got Microsoft backing OpenAI. You have this weird thing where Google filed a complaint to try and stop Microsoft's exclusive deal to sell OpenAI's models.
No one's trying to make better search. I don't think even ChatGPT is trying to be better search. They're trying to sell a thing by claiming it's AI that does something they can't really specify. They're not sitting there going, "How is this a better search product?" Because if they wanted that, they would have built a deliberate search product that represented it as search, rather than just an everything search. A search for thoughts, which may or may not be correct. A better search... I don't even know what a better search platform is. But that was not what OpenAI started with. I don't actually think that OpenAI had much of a product vision.
Oh, for sure not. I mean, they talked about how they released ChatGPT as a demo and have sort of iterated on that since, but...
Which is pretty much how it all went. I was told by a reporter once that apparently Microsoft saw ChatGPT, and the reason they bought all the GPUs was because they wanted it in Bing. They wanted to do that in Bing. Hundreds of billions of dollars based on, "What if Bing was better somehow?"
And that didn't work.
It did not work.
But let's talk more about this. Because I think the thing that's nice about searching through these bots is that, like you've talked about, they understand your intent better. I think they are getting better at presenting information, and they are getting better at linking information.
Okay.
So let's just say that this continues on a trajectory where, even if it's not the core intent, it replaces a good chunk of search. I'll just make the business argument here and throw it out to you, which is that, yes, marketers really care a lot about the signal that they get from search, or the fact that they can, you know, with some consistency, measure their media spend on Google and know if it's working or not.
Uh but ultimately, if people move from
Google search to OpenAI or to some other
LLM search, my anticipation is that the
money won't go away. I think marketers
have gotten and advertisers have gotten
used so used to spending online that
they will they'll be willing to spend
even if they don't get the same signal
like we saw.
When you say signal, what do you mean?
Like whether people are going and buying
the products that they're advertising.
Just so I'm clear, your argument is that they'll spend the money even if it doesn't work as well.
Yes.
When has that happened? I mean, I think
one example, I'm curious what you think
about this, is when Apple cut off
Facebook's ability to measure whether
people were buying after seeing their
ads and then Facebook got
That was the App Tracking Transparency thing. So, it wasn't just focused on Facebook, right?
Right. Absolutely.
Right.
Well, I mean, you could also argue that Apple wanted to build its own app-install business.
Which they absolutely did. They built their own.
So maybe it was not entirely focused on Facebook, but you'd have to argue that Facebook was a big motivation there. Um, advertisers are
still spending a lot of money on
Facebook, even if the signal is a little
bit uh murkier than it was previously.
That's because Meta has effectively a monopoly on social networks, which are a different advertising platform. On top of this, right now, OpenAI, I don't even believe, has an ad network. I'm not sure you know the history, how multifaceted these things are. The infrastructure is not there. And the reason that Google makes so much money is because they built the infrastructure. And from what I know from the digital advertisers I know, they will try stuff, but if it doesn't work, they'll stick to what they know.
Correct.
Now, if OpenAI can get great CPMs, great CPAs? Fantastic, they've proven themselves. Can they do that at the scale of Google? I don't think they can.
And I don't... we don't know the exact cost, but we know they're burning billions. If they're losing billions of dollars, it doesn't matter how good their ads are if the numbers don't add up. There's so much they have to spend as well. The staff they would require. I really should have looked up the amount of advertising staff that Google has before this, but, Jesus Christ, they don't have the people, and they are still hiring and hiring and having to spend all of this money on salaries. I think one of their executives recently said that they have this incredible pressure to grow. Add the pressure of building an ad network, and then building the market for it. Because, remember, you can't just say it's identical to search, because it ain't, right? These things aren't presented as results within a thing. They are presented as answers to a question. Theoretically, that could get a different reaction, a more sticky one. But has anyone [ __ ] proved that yet? Perplexity hasn't. They wanted $50 CPMs.
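For anyone unfamiliar with the jargon here: CPM is the price an advertiser pays per thousand ad impressions (CPA is cost per action). A quick sketch of the arithmetic, with impression counts that are illustrative assumptions, not figures from this episode:

```python
# CPM ("cost per mille") arithmetic: what an advertiser pays per 1,000
# ad impressions. Impression counts below are illustrative assumptions.

def ad_revenue(impressions: int, cpm: float) -> float:
    """Gross ad revenue from serving `impressions` ads at a given CPM."""
    return impressions / 1000 * cpm

# At the $50 CPM mentioned above, a billion impressions gross $50M...
print(ad_revenue(1_000_000_000, 50.0))  # 50000000.0
# ...while a single-digit CPM grosses a tenth of that on the same inventory.
print(ad_revenue(1_000_000_000, 5.0))   # 5000000.0
```

This is why a $50 CPM reads as an aggressive ask: the same inventory priced at more typical rates earns a fraction as much.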
Bloody Aravind needs to get his head out of his... It's just pie in the sky right now. And I don't know that they have the time. I don't know that they have the time to do this. Nor do I think that they have the resources, because they also have to do this and build data centers and build a chip with Broadcom. All the crap they promised for 2026 is bonkers.
Look, I
think you're hitting on exactly what the
problem is going to be. And we've talked
about this on the show a bunch, which is
that you can you can attract I think
they will attract a large chunk whether
it's open AI or Google through their AI
mode, which will evolve. I think we are
going to see a lot of search funnel
through these large language models
eventually, but it's a different format.
It's a different experience. It's very
easy to mess up.
It's not a slam dunk that it happens.
Do I think that if they get there,
advertising money will probably follow?
Probably if they get that. But it's it's
an if. It's an if and I think you're
really um spot on in pointing out that
this won't be a slam dunk. So let's talk
quickly about some other uses because
you know the promise here or the the
idea from these companies is that you
know maybe you like you said they're not
saying that they're a search engine.
OpenAI isn't saying that. So maybe you
do some search and you build a search
business. But then let's say your bot
can also help uh people code better. So
that's got to be worth some economic
value. So you can amalgamate. Good.
Well, yeah. Talk about your your view on
whether these code uh co-pilots are
valuable in any way.
So, they're valuable. They are valuable in the sense that software engineering loves automating [ __ ]. They love shortcuts. It's an industry that adores it. But I think that people misunderstand what a software engineer does. They don't just code. Sure, the junior-level ones might, and there will be some early-stage people, but we don't know yet, and the numbers being thrown around are [ __ ]. Nevertheless, as you said earlier, I think this is a $30-50 billion TAM, total addressable market, business. I think, what's the IDE market, the integrated development environment market? I think that's like a $13 billion one.
Like there is a business there. Of
course, there is. It's the code has
problems and there's tons of studies
about it that suggest that there's real
issues with it. But I think that's
probably the most lasting one. But just
because a business exists and is viable
in some sense doesn't mean that it adds
up to a trillion dollar industry or even
a hundred billion dollar industry. And
indeed this is one of the most
commoditized things. You've got Cursor, came out of nowhere, and everyone's like, wow, look, they're going to be so big. Was it $200 million ARR or something? It's like, great. That's a really solid public-company SaaS business. No one should be doing backflips. It's changing the world? Is it? It's making developers faster? Is it? How is it doing it? Which developers? These questions really harsh the flow, so people don't tend to
ask them too much. But one of the common
misinterpretations of my work, and I've definitely said it, like a year ago, is that I said it's useless. There are use cases. It's just, they're here, like, the industry is this big, and everyone's acting like it's the biggest thing ever.
And it's just... they want it to replace coders. It's actually not going to, because of the hallucination problem, because of the probabilistic nature. There was this insane [ __ ] blog, man, that was on Techmeme, where a guy was like, something about AI critics.
Oh yeah, "My AI Skeptic Friends Are All Nuts."
Yeah. I ran that by a couple of software engineers, like Carl Brown over at Internet of Bugs, and they just kind of [ __ ] laughed at it, because he was saying something like, oh yeah, mediocre code's fine. Is it now? Is mediocre code fine? Carl Brown from Internet of Bugs brought up Heartbleed. That was one thing that a bunch of software engineers missed for years in an open-source product. Just because we as human beings can catch things doesn't mean we will. And just because it might be able to catch something wrong with your code doesn't mean it will either.
But I trust a human over that more. If we're turning ourselves over to something we know regularly gets things wrong, I don't know how much infrastructure you can turn over to it, which is the only way you're getting to these massive revenue streams, unless you can really rely on this. They already had code automation things before large language models. So, yes, use cases, but how big? Are we really meant to believe that Cursor is going to make $5 billion a year? Is that going to happen? Hey, is Cursor profitable? Has anyone asked whether Cursor is profitable? You go and you see a company like You.com and say, wow, they got a valuation of a billion dollars, annualized revenue of, like, I'm going to misquote this, it was like $12 million, $20 million. That's insanely small, man. This is crazy. It's just nonsensical, almost. And everyone's saying that because we are here, we will be 70 miles in this direction in two years. It just confuses... I guess it doesn't confuse me. I think people want it to be true.
Well, let's... that hits on the question of whether you think these models are done with getting better, because there have been undeniable leaps from something like GPT-3 to GPT-4. And so I think you get an environment...
But when did GPT-4 come out?
Okay, let me just finish the question, then you can shoot it down. Um, you know,
I would say that you have an environment where you get the $12 million in revenue and the billion-dollar valuation, where you have venture capitalists, and I'm not going to stand on the table and defend a venture capitalist, but where you have them say: there is potential for this technology to get better, and therefore, if this company continues to do what it's doing and the technology gets better, then maybe they can hit that market. And they'll bet on 10 of them, and if one of them actually hits where they think the puck is going, excuse the sports metaphor, then they'll be, you know, well rewarded for it. And so that's why I think you're seeing this environment. It's all predicated on the belief that these models will get better. So I am curious to hear your perspective: do you factor that into your analysis, or do you think it's kind of done?
I think the word better is where we need
to start.
Okay. What does better mean? This is actually a point made by Jim Covello at Goldman Sachs last year. It's like, these models get better.
Mhm.
But what does better actually mean? We look at these benchmark tests, which are built specifically because these models can't really do regular testing. You can't really give them human testing, because they don't do the things that they're meant to do. So, better does not mean... actually, it might be Daron Acemoglu from MIT who said it, it was in the same Goldman report, but it's like, better does not mean more capabilities. It does not mean that these models can now do a new thing. Even reasoning, what happened there? I mean, it allowed some more. It helped with some coding things, sure, and there was some growth, but to what end? What can we do now? What is the new thing?
And I think that's the craziest thing. I don't know what I'm meant to be. I love new crap. I love gizmos and gadgets and all that [ __ ]. If there was a way that ChatGPT could do something for me, I would make it do it, just because I'm like, cool. This is why I love technology. I love doing things.
What's new? What's new? And if the argument is, look, it's improved coding by X, Y, Z: awesome. Describe it in those terms. Describe it in the terms of boring software-as-a-service or cloud compute. Talk about it like you talk about Docker. Talk about it like virtualization. Talk about it like a technology that's a branch off. Don't talk about it like it's replacing everyone forever, always, because it isn't doing that. And by the way, you're completely right about the VCs. They're doing exactly what they've always done, which is make a bunch of bets, talk them up, see what happens, because that's venture capital. That's the root of it. I'm not defending it either, but they're not doing anything different.
The problem is that we're in hysteria. We really are. We're in a hysteria.
It's very rare that venture capitalists
see the books, the actual accounts, and
they almost never see the code base.
That's wild.
It's [ __ ] crazy, man. And it only gets worse, because as deals get more popular, it's like, you don't want to do it? I got five more [ __ ] over here who will. Which is the mark of a classic bubble. So, it's like, nothing about what I'm writing or saying comes from a place where I've walked into it and said, "This sucks. I hate it." Because when ChatGPT came out, I dicked around with it for hours and hours trying to find out why everyone was so excited. I'm
like, "Okay, so it can generate crappy text." This is the most, like, 19-year-old-at-college-ass text. No wonder it can replace college students who aren't taught to write. It writes like them, in the same kind of bland intro-body-conclusion way. Okay, not a business. But the actual use
cases of this stuff have never emerged.
They've never emerged. The reason that
we keep hearing about agents but never
about what agents can do is because the
most common feature of agents is that
they fail. There was a Salesforce paper that came out fairly recently that said, I think, that they just categorically break down on multi-step processes; they only complete something like 30-odd percent of them. Multi-step processes, by the way, referring to tasks in general, not one narrow thing. And they failed at a remarkable amount of one-step ones too.
But you're answering the question about what happens when the models get better. It's that they're not getting better.
But this is the... well, I think you're saying also...
Look, let's go step by step, right? If the models get better, then they'll be able to handle these multi-step processes in a way that they can't today, because they are brittle.
If my grandmother had wheels, she'd be a bicycle. Okay, I hear what you're saying, but, like I said... so you're...
Let's circle back to the question I asked you at the beginning of this conversation, which is: you're pretty confident that there's no more improvement. Because I asked you about improvement, and you said there's no such thing as improvement, or we can't feel improvement, but now you're saying improvement is...
Improvement is a metric that they have gamed with the benchmarks.
I'm not... so this is interesting, the benchmark side of things. I really think that they're useful in some ways, but they're not the be-all and end-all. And it's weird to talk about, and I'm sure you have a response to this, it's weird to talk about the vibes of the models, but really...
But let's do it. I do think that, with o3 from OpenAI, the vibes are definitely better than GPT-4. It just feels like it's able to do more.
I tried o3 out the other day. I took a photo of a thing I had hung up and I said, "How much space is there from the bottom of that poster to the floor?" It took four minutes. It wrote multiple Python scripts, to give me the wrong answer.
Well, this is why, I mean, it is interesting, and this is why I think people are talking about how they're going to be good at some things and not good at others, okay? There'll be some capabilities where they're going to be quite effective, and some, like the one you...
And the thing is, that's a reasonable position, if that was how this industry had been sold.
But they're not selling it that way.
Exactly. If it was... no, I really want to say...
Except Sundar Pichai from Google talking about jagged intelligence. I think it's jack-off intelligence. [ __ ] I find that so disgusting. That man, last year, lied about what agents can do. It was during I/O. He said, "Oh yeah, you're going to have an agent that will be able to do a full shoe return with your email." And it turned out that was theoretical. Why? I can't go and lie to the bank. Why can he lie to the market? It's just... I think, though, getting back to the point, because I think it's important to say this: if they were selling this as, yeah, this is kind of unreliable but interesting tech, there are some things it can't do, there are some things you shouldn't rely on it for, and they were very clear about that, I wouldn't hate it. Well, until you get to the stealing from everyone and the horrible environmental stuff, and then it gets even worse again. But putting that aside, if this was being sold as an experimental branch, or even just an industrial use of cloud compute, okay, I wouldn't judge them for that. I would judge them for everything else. But they're not selling it this way. You've got Andy Jassy
claiming, "Oh yeah, we're going to
replace an indeterminate amount of
people at an indeterminate time in some
way or somehow. I'm not really sure how,
but it's going to happen." And it's on
the front page of [ __ ] Techmeme. It's insane. The idea that Techmeme ran Sam Altman's "The Gentle Singularity"... we should be calling 911 and doing a welfare check on that man. That thing was [ __ ] insane. If I said that, they would check me for a concussion.
Some of the things Sam Altman suggested, like that we'll have data centers that could build themselves... that's the thing, that is the real distance. Because you've got what large language models can do, and as far as them getting better, how they'll increase those percentages is not at all clear. Gary Marcus was just on talking about this: there's a very clear gap between what a large language model can do and what it needs to do to be reliable, and that gap, I think, is much larger than people realize. It's the classic problem with all AI, with self-driving cars,
where it's not that it can't do some things well. It's that it can't reliably do anything. Self-driving cars require someone watching them at all times, just in case. You can't do that with ChatGPT. There are too many people. So, it's just this interesting industry-wide cognitive dissonance. It's insane. When I talk about this stuff, it makes me genuinely worried how many people have been taken in by it.
You brought up this statement by Andy Jassy, by now a few weeks old, about how he wants to replace people with AI, or believes that it will be a people replacement. And so, we've talked about search and coding. I think the thing that's been unspoken so far is that, when it comes to the valuations for a lot of these companies, they're going to need to replace full-time employees, or at least the work that a full-time employee does, in order to be successful.
Agreed. And people either want it completely autonomous, or they want Jarvis. They want to be able to say, "I need you to look up blah blah blah."
Okay, I'll give you an example. Manus, is it Manus?
Manus, yeah.
It should be Manus. I asked Manus to look up every article written about me in the last two years and give me a list of links in a spreadsheet, and I'd guess the actual number is like a hundred of them. Eleven minutes later, after a ton of Python (these [ __ ] love Python), it gives me 11 links, right? I tell it, you missed a few. It gives me another nine after another 10 minutes, I think it was.
Is this close to replacing anyone? Who is this replacing? Because it's not even replacing offshoring, which I think is what companies really plan to do. They just want to ship jobs overseas and get cheap labor. It's always been the case. Google loves it. People I've talked to at Google are saying, "Yeah, they're just getting rid of people and replacing them with contractors in India, or in other countries in the global south." It's very strange what's happening. I'm actually shocked that so many reporters are still saying "agent" with a straight face, because what job is being replaced? Coders? No, sorry, it's not. You've got
companies firing people and claiming AI.
But notice that none of these big sexy
Kevin Roose stories about replacing
people actually include a single [ __ ]
person replaced. Now, Christopher Mims
had a story in the Wall Street Journal
about a year ago, really good one, where
it was artists, art directors, and copy
editors who had been replaced with AI.
But the real story was they had been
replaced with shittier versions of their
product. Their process was not replaced.
Their job was not replaced. They were basically contractors rejected by business idiots, as I call them, people that don't really understand the process of their own work. And it's [ __ ] tragic.
But there are some jobs that will get replaced, though not as many as they're saying, by people who are [ __ ], who don't respect their customers, who want to do a shitty job and always will. And they would have found another way to do it. They would have gone on 99designs. They would have gone on Fiverr. They would have found cheap labor to do the labor that they don't respect. But there is not, right now, and I don't think there's going to be, any replacement of labor at the scale that they're discussing. My evidence is: nobody's bloody done it. You
have all the king's horses and all the
king's men. You have Google. You have
Apple. You have Salesforce. You have
Service Now. You have all these
companies who could not talk about AI
more if they tried. Where is the agent?
Because if they did this, if they
actually were doing the thing they're
claiming, they'd be making tens of
billions of dollars extra. They'd be
making an absolute [ __ ] ton. The Information reported a few months ago that Salesforce does not expect any growth from AI this year. That is absolutely bonkers for a company that has rebranded, and I paraphrase here, as an agent-first company.
It feels like the most egregious lie I
have ever seen told in business history.
Just completely obscene. And people are lapping it up, and it's insane.
Well, I think with the software-as-a-service companies, there's so much broken in SaaS today that you can potentially put AI in there and paper over some of the problems, like systems talking to each other, and trying to synthesize information that you have in your systems to make sense of it, because it's spread all over the place and it takes hours to pull reports. So, that's a possibility. And maybe that's economically valuable.
Your argument is that their systems are so poorly designed they can't put AI in them yet?
No, my argument is that, speaking of broken products that AI fixes, they might be the most broken of all products, with an opportunity for AI there.
Fully agree. And also, if anyone was going to make money off of it, though, one of these companies would have. It's not a situation where one company's ahead of everyone else. OpenAI isn't ahead of everyone else, other than in scale. And I would argue they got that because literally every single media outlet has been talking about AI for three years, and when they talk about AI, they say ChatGPT. It is the world's best marketing campaign ever. Sam Altman is a
genius for that. And he's also like a
business idiot whisperer. He can talk to
guys that run companies that don't know
how their companies work and just be
like, "Yeah, we're going to replace
everyone. It's going to take two
minutes. It's going to be the best
thing."
Donald Trump adjacent.
Sounds like Trump? Yeah. No, he really is. Listen, he's like the soft-spoken Trump.
But he...
It's just...
It's so strange. I get kind of animated about it, because when you start talking about it... I'm not even saying anything inflammatory, I'm saying some objective statements, to be fair. But when you just say, they haven't done this yet, they haven't done this yet, there really isn't evidence they can do it. Like, really, there isn't. It's not like they have a whiz-bang moment. Waymo is imperfect, but you can get in a car in San Francisco and it will drive you around, and you could do that a few years ago in very controlled spaces. But we don't even have a controlled space where an agent's doing something really cool. And I think the closest they're going to get is an agent that can do purchasing on a platform, and I think that's just because they'll connect APIs to APIs. That doesn't feel terribly far away, but that's also not a trillion-dollar industry.
Ed, are you potentially underrating the bureaucracy part of this, and the fact that big organizations, which this could help, move slow? There's bureaucracy, there's approvals, there's owners of different groups. It's tough for them to do anything. So maybe it's a people problem, and not as much a technology problem. Maybe it is, as you would put it, a business idiot problem.
I would buy that if it was not everywhere and no one had done it. I understand the argument. It's like, if there were a few people that had done this, and they'd done a ramshackle one but it was kind of working, it's like, ooh, that would be cool. Would it still... if someone was doing it in a smaller situation? I don't know. Wouldn't OpenAI be doing it? Just to be real blunt, wouldn't Anthropic be doing it? Dario Amodei's out there saying, "Oh yeah, we're going to have, what, 10 to 20% unemployment, 50% of..."
I don't think he said... did he say unemployment?
He did.
I know that he said 50% of entry-level jobs.
10 to 20% unemployment as a result of this. If I'm wrong, I'm wrong, I apologize. But the 50% thing was on record.
Allison Morrow from CNN has the best piece on this.
Yeah. We actually read that on the show afterwards.
She is possibly the best living business journalist. She's absolutely [ __ ] incredible. So...
The thing is, wouldn't OpenAI have these agents? If they could do this, wouldn't they be doing this? Indeed, someone once made an argument to me online that I actually found quite compelling, which is: why would you sell AGI if you made it? If you could make an agent, sure, you could sell it to everyone, but you could just run an incredibly profitable business with, like, nobody: the one-person billion-dollar company Dario Amodei's been promising everyone.
Next year.
Next year. Oh, sorry, it's in 2026, with the chip from Broadcom.
That's another thing, with Stargate, of course, the Stargate in the UAE. Also the device from Jony Ive, that's also coming. All of this is going to happen in, what, 6 to 12 months? I can't wait for the future. But the thing is, where's the beef? Where's the thing? Where's the money, even? The money isn't there. The product isn't there. And I enjoy putting this to people who love AI, quote unquote: where's the thing? Why are you actually excited? Not what could it do, but what does it do today that even makes you... And if the answer is, wow, it's kind of like a living encyclopedia...
okay
Can I give you a different answer to that? And this is, again... we've been talking a lot about use cases. I do want to spend a little time talking about the business of these companies, but I think it's worth bringing up one use case we haven't brought up yet, which is companionship. That is the number one use case, I think, surprisingly. There was an HBR article that pointed that out.
Was that a ranking? I thought that was just a list.
No, it was a ranking, and it became number one, and it's clear...
That is not a business.
It is clear... well, people are becoming friends with these bots. They're paying for them.
Absolutely.
And it seems... I'm not a big fan of the fact that people are replacing friends with AI friends, but they're doing it.
Oh, it's a sign of something wrong. We are a decentralized society. We do not have the shared spaces where we would regularly meet people. Tons of people are remote working, which is great, but non-walkable cities mean that people aren't meeting people regularly.
Yeah, that is a use case. We don't know the scale of it. If I had to actually guess, I think the majority of people using ChatGPT are using it like Google search. I'm deadly serious. I think that there is a growing number of people using it this way, and I think it's a deeply unsafe technology. I also think it is one of the most easily commoditized businesses in the world.
You don't think AI friends, or AI search...
Well, kind of both, but really, AI companions feels like something that ChatGPT, again, because they are all over the place with all their use cases, is getting because they are the biggest name, said to everyone at all times. It's something that can be replaced by any number of other things. Hey, did you read the story about Meta, and how its John Cena chatbot would have sexual conversations with a user saying they were a child?
Oh man, you didn't... no, there was a story, Jeff...
I did read that.
Jeff Horwitz, the goat, the goat himself, of the Wall Street Journal, where you could have pedophile conversations with Meta's AI.
So, people are using Meta's...
Wait, was it...
You could explicitly say you were underage and it would have that conversation with you. I think they've closed the gap now. It's a great story. Incredible journalism by Jeff.
But it's like, yeah, people are using this, and people are likely using it in sick ways, and it's disgusting. And hey, imagine if we'd regulated tech. Imagine if we'd ever done that, if we had, like, an EPA for tech, if there were any restraints on these companies. But no, there aren't, because what if we didn't have growth forever? But nevertheless, it's a use case. But what does that use case prove, exactly, other than this can do that, and people are somewhat easily fooled? It's the same...
Well, the use case that Jeff Horwitz brings up in the Wall Street Journal is not one that I think is going to be common. Companionship is.
Wait, you don't think that a horny teenager would try and talk to that? I hope that the labs build the...
I hope... really... no, I genuinely mean this.
this. I'm rooting for Meta and everyone
to stop this. They need to. It's [ __ ]
horrifying. But yeah, that's a use case.
Is it a business? Is it not something that can be easily commoditized?
I would argue that if they get friendship right, it is a great business, because of...
Who is "they" in this case? And how big could that be?
I think it could be a big one. I mean, again, this is not the direction I'm rooting for the technology to go in. But if you have AI that replaces a friend for you, or is your companion, you would easily pay $20 a month. I think that would be an easy subscription to charge.
Sure.
But let's get into the business thing, though, because I posted this earlier and I mentioned it earlier: for this to be as big as they say, it would need to be the size of the software industry.
Is that because of the funding? Sorry, you said...
It's because of the investment in infrastructure. It would have to be bigger than the smartphone market. So, 450, 500 billion dollars a year. Bigger than enterprise software. We can set that aside if we just focus on the consumer use cases.
For that to happen... for OpenAI, I think they've estimated, and these estimates are wank, just total nonsense, like $126 billion of revenue a year by 2029, or something like that. And just to be clear, Netflix made about $39 billion in subscriptions last year, and Spotify made $16 billion. So you're telling me that whatever this market is is going to be bigger than both of those combined, doubled? Is that the plan?
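The multiple being described can be checked with quick arithmetic, using only the figures quoted in this exchange:

```python
# Figures as quoted in the conversation, in billions of dollars per year.
openai_2029_projection = 126   # OpenAI's reported 2029 revenue target
netflix_subscriptions = 39     # Netflix subscription revenue last year
spotify_revenue = 16           # Spotify revenue last year

combined = netflix_subscriptions + spotify_revenue
multiple = openai_2029_projection / combined
print(combined, round(multiple, 1))  # 55 2.3
```

So the projection is roughly 2.3 times Netflix's and Spotify's quoted revenue put together, which is the "bigger than both of those, doubled" point being made.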
No, I'm not saying you. I'm just saying.
No, I want to answer this question
because I'm the one that threw it out
there. Look, I think that we are
inevitably going to see some of the
funding that's gone into this industry
go to zero or very low
without a doubt.
Yeah.
Some, maybe. I mean, if you take it in aggregate, we'll see if it pays off. Some will win, I think, but many will lose.
What does winning mean though?
I mean they'll get their investment
back.
Oh, okay. Yeah. That worked out for Scale's investors.
Exactly. So there will be big, big exits. I think OpenAI will IPO at a certain point.
I think that is an astonishing leap of logic.
Well, because... okay...
You want to talk about the structure and the fact that they may never be able to go public? Do you think that these horrifying books are going to look good to the markets? There is nothing in the markets that looks like this dog. A company that burns five billion, that loses $5 billion by spending $9 billion.
I don't know. I mean, CoreWeave is up like an insane amount since its IPO, because people are interested in a story.
That's cool. They don't lose anywhere near as much as OpenAI.
They're at, what, $81 billion? CoreWeave itself, which is literally just an infrastructure company that sort of resells Nvidia chips. 81... well, you tell me. An $81 billion market cap. And since their IPO, they're up 325%. Absolutely wild.
So, they have a very small float, by the way, most of which is held by the likes of Nvidia and Magnetar,
right? So CoreWeave will probably raise another $10 billion through another share sale. They can plug away for a few years. But what happens if the AI bubble bursts, if growth slows? CoreWeave is a business heavily built on GPUs, on raising money based on them. Here's an interesting question: is it round-tripping when Nvidia sells GPUs to a company that they own part of the stock in, that they have a $1.3 billion Project Osprey cloud deal with? Is it round-tripping if they sell them the GPUs, and CoreWeave then takes the GPUs, raises money from institutional investors based on the value of those GPUs, and uses that money to buy more GPUs from Nvidia? I don't know. Maybe, if we had a government to look into that. But fundamentally, compared to CoreWeave, OpenAI is an even more insane business. CoreWeave owns stuff. They have actual buildings now. I don't think that they're ever going to scale, and I do think that that dog will die, and I will dance.
Mostly because people think that stock valuations actually change anything about my argument, which is what that "Mouth of Madness" article really drove at.
The reason why I brought it up is, you said, is the market going to read the books about OpenAI, and...
Hold on, I'm just saying that the market can go with a story.
But OpenAI has no assets, really. They don't. Microsoft owns their IP, sorry, their intellectual property. OpenAI owns no infrastructure. They have their stuff, they have their research...
Wait, Microsoft also has that. They have the exclusive right to sell...
No, wait, Microsoft can also sell their models. They don't own Stargate. They don't own the GPUs within any of the servers. In fact, they don't even make enough money. I've referred to them as a banana republic, because they require exterior money to come in constantly. Because when you look at what OpenAI is, they don't own very much of anything. They own part of CoreWeave, about $350 million worth of CoreWeave stock. That's fun. By the way, OpenAI's deal with CoreWeave is pretty much the only way that CoreWeave can raise more money. So, hope nothing happens with OpenAI. That's the thing: OpenAI is an asset-light business with research and IP that's owned by another company. They don't have much to trade other than their name. And their name is insanely strong, it really is. But as a company, they
would have to, at IPO, expose themselves in a way that they never want to, because they would have to state all of the material deficiencies within the company. They would have to list the genuine risks, and the risks would be every single thing I'm saying. CoreWeave had to amend their S-1 to add the counterparty credit risk from OpenAI, because if OpenAI stops paying CoreWeave, CoreWeave doesn't get a bunch of their revenue. OpenAI starts paying CoreWeave in October 2025, just as CoreWeave's second loan, DDTL 2.0, starts requiring them to pay probably more than OpenAI will be paying them. This is the systemic risk I'm talking about. A 500-million-user consumer product that loses them money, that converts horribly.
All right, I want to talk about that. Let's take a break, and we'll be back talking a little bit more about the infrastructure costs of OpenAI and what ChatGPT is underneath the hood. We'll be back right after this.
And we're back here on Big Technology Podcast with Ed Zitron. He is the host of the Better Offline podcast. You can also get his newsletter at Where's Your Ed... what's the...
Where's Your Ed At.
Where's Your Ed At. Great domain name, I know.
So, let's talk a little bit
about the money that OpenAI loses. So, I've been listening to your podcast, and whenever someone brings up this argument that they will learn how to deliver what they have today more efficiently, your next line is something like, I will squash you like a bug, or, I will compact you like a cube in a car.
Yeah. Exactly. That's accurate.
So do that to me, Ed, because the stuff is, without a doubt, getting cheaper to run.
Why do you say without a doubt?
Because if you look at the... I mean, you could just look at the way that they're...
Oh, shoot. Now you've got me. But if you look at the price that they're selling this stuff at...
Doesn't mean a goddamn thing.
Well, what about... okay, so now here I am in the trash compactor. But do you deny that there's any algorithmic efficiency being had within these...
I'm sure they're trying.
But the one public...
You think that this is sustainable? They were selling... so OpenAI's GPT-4 was 3 cents per 1,000 prompt tokens, okay. And mini is, what is it, $1.10 per million tokens? It's much cheaper.
Okay.
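Taking the prices as quoted, $0.03 per 1,000 prompt tokens for GPT-4, and reading the mini price as $1.10 per million tokens (that reading is an assumption about the number being cited here), the gap works out to roughly 27x:

```python
# Prices as quoted; the mini figure is an assumed reading of the quote.
gpt4_per_1k_prompt = 0.03                       # $0.03 per 1,000 prompt tokens
gpt4_per_million = gpt4_per_1k_prompt * 1000    # about $30 per million tokens
mini_per_million = 1.10                         # assumed mini price per million

ratio = gpt4_per_million / mini_per_million
print(round(ratio, 1))  # 27.3
```

Which is the host's point: per prompt token, the listed prices have fallen by more than an order of magnitude. Ed's counter is that list price says nothing about the underlying cost to serve.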
So, you think that they're just losing more money, as opposed to becoming more efficient?
Maybe there's some calculation where they're losing less money, but they're still losing money.
There have been... and I'm going to get a little out of my depth here, because I'm going to talk about model architecture, but there have been architectural innovations that have made it cheaper to run these models, like the mixture-of-experts model.
When you say these models, which are you referring to?
I mean, you could talk about the big foundational models.
Okay, but we're talking specifically about OpenAI's.
So, I think they do use... I mean, let's just talk about the mixture-of-experts model, right? Instead of lighting up the whole model to get an answer, they will channel your query into the area where they think the model can answer.
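The mixture-of-experts idea being described here can be sketched in miniature. This is a toy illustration, not any lab's actual architecture: the "experts" are stand-in functions and the router is a hand-picked linear gate, but it shows the cost-saving mechanism, that only the top-scoring experts run for a given input:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy stand-in "experts": each is just a cheap function of the input.
EXPERTS = [
    lambda x: [2 * v for v in x],   # expert 0: doubles everything
    lambda x: [v + 1 for v in x],   # expert 1: shifts everything
    lambda x: [-v for v in x],      # expert 2: negates everything
]

# Hand-picked router weights: one score per expert via a dot product.
GATES = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]

def moe_forward(x, top_k=1):
    """Score every expert, but only run the top_k of them;
    the unchosen experts never execute, which is the cost saving."""
    scores = [sum(g * v for g, v in zip(gate, x)) for gate in GATES]
    probs = softmax(scores)
    chosen = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    norm = sum(probs[i] for i in chosen)
    out = [0.0] * len(x)
    for i in chosen:
        weight = probs[i] / norm  # renormalize over the chosen experts
        for j, v in enumerate(EXPERTS[i](x)):
            out[j] += weight * v
    return chosen, out

chosen, out = moe_forward([3.0, 0.5], top_k=1)
# The router scores expert 0 highest for this input, so only it runs:
# chosen == [0], out == [6.0, 1.0]
```

Real deployments route every token through a learned gate over many experts per layer, but the principle is the same: most of the network stays idle for any one query, which is where the serving-cost savings come from.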
I mean, the folks who built DeepSeek, it seems like that was a big part of the way they were able to make it cheaper, right?
Why do you think... okay, I shouldn't really be asking the questions, it's your podcast. With DeepSeek, isn't it weird that we didn't really see any efficiency gains discussed by a single one of the model companies, that none of them even seemed to do the same thing, other than Perplexity releasing like a 1776 version of R1 without the Tiananmen Square thing? Just... Aravind, he is so, so lame. Just...
Okay, you've brought this up a couple times. Just let it out about Perplexity. What don't you like about them?
Well, first of all, they're an insanely badly run company. They did like $35 million... I forget exactly how much they lost, but they did refunds or discounts of like $30 million. They're literally giving money away to make people use it. And even then, they only have like 50 million users. I also think that, as a CEO, Aravind just goes and says [ __ ] that is just annoying. He just... he could be...
I'm surprised that you're saying that you want him to behave better.
It's not that I want him to behave better. I wish he'd just be more direct about what Perplexity can do, but every [ __ ] few weeks... he did this whole touchdown dance after the Google search trial, and then nothing else. It doesn't feel like he's trying to create a competitor to Google. It feels like he's making a Silicon Valley hero story out of himself, and it's boring and lame and it's a bad business. Give it up. Okay, I don't mean, like, shut down the company. But he's good at raising money, I guess.
But back to the model thing and the efficiency thing. Yes, they are losing money, because, just a real easy one: they would be saying if they weren't. You think that Sam Altman, if they had managed to make this profitable, would not go out there and tell everyone? He absolutely would. Also, he'd be telling investors immediately. One of the great reports of The Information... I quote them a lot because they're doing some of the best tech journalism out there. John Porter... it might be Anissa Gardizy or Stephanie Palazzolo. Jon Victor over there is excellent, and... [ __ ], he's gonna kill me. Cory Weinberg's done some excellent work on CoreWeave. There's also a new person there who I'm forgetting, who did... that's a good number.
No, but they've got a really excellent team. But where was I? You would get leaks that say that they've gone profitable, and that would be...
Well, I don't think they want to go profitable. They're just trying to, at least at the moment... most startups at this stage don't want to be profitable.
At OpenAI's stage?
I think so.
They're on, like, the equivalent of Series D or E. That's absolutely when you go profitable.
But okay, so again, let me just...
And then they need to go public.
I'm going to bring up their side of it, just for the sake of argument. I think what they're trying to do is get this technology in the hands of as many people as possible, and they understand that it's a more capital-intensive technology than most others.
Mhm.
And so, therefore, they're not profitable.
But I don't think there is a magic profit button.
No, but that's what I was going to bring up. I don't think there is a switch that they could flip today and be profitable and deliver the same quality of models. Could they just switch ChatGPT to GPT-4 and potentially be profitable?
They... Sam Altman suggested months ago that they would take away the model selector. He likes to say stuff, and then it just disappears. Gets the articles, nothing happens. Very good for Sammy. The thing is, I think what's happened is everyone thought, about a year and a half ago, that this was going to change, because there was that big jump from GPT-3 to GPT-4. There was the multimodal side. It was like, oh, this is really interesting. The voice mode was interesting. It's like, oh, I can extrapolate from here: we made this big-ass leap, so in 6 months we're going to be here, and then 6 months after that... except it's like, in 6 months we're going to be here, and then maybe we're here, and another...
For listeners, Ed is being very incremental, doing a very small hand movement.
So that's the thing. I think that they're all wrapped up in it, and yeah, OpenAI is absolutely trying to get as many users as possible. The problem is if you're losing money on each one. And also their conversion rate. Here's my favorite OpenAI stat, well, more of a question I always ask, which is: why do they not show monthly active users? They talk weekly. And the reason is what their real monthly active users would imply: 500 million weekly, so I'm going to guess 700 million monthly, divided by the 15.5 million customers that pay for it. That's a dog's doodoo of a conversion rate. That is so bad.
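The back-of-the-envelope math behind that verdict, using the figures as stated (a guessed 700 million monthly actives against 15.5 million paying customers):

```python
weekly_actives = 500_000_000     # weekly active users as stated
monthly_guess = 700_000_000      # Ed's guess at monthly actives
paying_customers = 15_500_000    # paying customers as stated

conversion = paying_customers / monthly_guess
print(f"{conversion:.1%}")  # 2.2%
```

That is where the "2%-ish" figure a few lines later comes from; against the stated 500 million weekly actives instead, it would still only be about 3%.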
So, you're saying that they're giving the lower number, the more-active one, because they don't want to make it seem like very few convert to paying?
They don't want the conversion rate out there. They don't want people to say, "Oh, you have a conversion rate of..." I can't do math very well. Me and ChatGPT share a problem.
Yeah, just throw a number out there.
No, but be very confident about it.
2%-ish. Like, a really [ __ ] conversion rate for the most notable company in the most notable industry, with all the press and all the marketing. That's their conversion rate. That's bad, man. It means that they can't work out, and no one else can work out, what the hell to sell this on. Indeed, Sam Altman loves to say, "Oh, yeah, I can't wait to see what you build with it." Mate, what are you building with it? You're the [ __ ] owner. And then they want their API business. It sounds like, also, weirdly, Anthropic is doing better on API, they're selling more, a larger percentage of their business is API, but they still lost like $5.2 billion last year. It's
completely insane. But it's it's just so
strange because you can have something
this big that fails. You can have
something this big. And when I say fail,
I don't mean chat GPT goes down and
everything and all the people in the
building get thrown out. It would be
somewhat messier. And I can go into that
at some point, but I think that we are
in a moment of mass delusion where no
one really wants to talk about these
numbers because when you talk about
them, they're scary. And here's why.
Okay: Magnificent 7 stocks make up about 35% of the US stock market. 19% of that is made up by Nvidia. Nvidia's revenue — I believe it's in the high 80s, percentage-wise — is based on GPU sales. Data center revenue in Nvidia's last earnings was below analyst expectations. No one really wanted to write about that one, because Nvidia is pretty much holding up the stock market on some level. Every time Nvidia earnings come around, there is some story like Tae Kim from Barron's saying, "I love Nvidia," and then everyone else says, "I hope that this is good." It really is like, "I hope that this is good," because—
I think you're right about that.
And the reason Nvidia is making all the money is that everyone's agreeing to buy GPUs today. And a couple weeks back from this, Amazon said that someone — it might have been Anthropic, forgive me if I'm wrong — is using 500,000 Trainium chips, their own silicon. What happens if Trainium takes a meaningful chunk out of Amazon's spend with Nvidia? That's a chunk of revenue gone. What happens if Microsoft's data center pullback means that they eventually finish retrofitting — because I'm assuming that they are retrofitting Blackwell chips into their previous servers? I would humor that argument. OpenAI — if Abilene, Texas goes well, which I don't know if it will — that's $40 billion of revenue, once, for Nvidia.
We are basically saying that Nvidia will continue growing — because it's not like Nvidia could just keep doing this well; the market requires growth forever. We are saying that within the next year or two, Nvidia will be making a hundred or more billion dollars in GPU sales, and the year after that it will be at $120, $150 billion a quarter. And I'm the crazy one for suggesting that's bad?
Well, I think that this is all dependent on one thing: the continued purchase of GPUs for generative AI. What happens if that's not the case? What happens if, I don't know — say the efficiency gains are there. Say that happens. Say that Google — they mentioned that one H100 can run one of their Gemini models, I forget which. What if that is how they scale? Wouldn't that mean they need fewer GPUs? So put aside all of the gains and the growth. Nvidia is just holding everyone up, and the capital expenditures from the rest of the Magnificent 7 are holding Nvidia up. What happens? What happens? What happens? The market goes tits up. Do you think the market will go, "Yeah, well, they're not buying the GPUs and Nvidia's doing badly, but we still love AI"? [ __ ] no. They're going to say, "What did we spend all this money on? I'm going under the bed. I'm going to find the pornography you've been looking at. You're all in trouble." Because people don't like tech right now. People are pissed at the tech industry. And this is all vibes, man. Cuz when you look at the numbers, the numbers are bad.
So yeah, the long and short of it is: the reason I am alarmist about this is that these numbers are alarming, and I am shocked — and actually kind of disgusted — at some people in the media for not being more alarmed. Because if things progress the way I really think they will, people's pensions and retirements are going to be [ __ ] So much lies on this. Retail investors make up a large chunk of the buying for Nvidia recently as well. It's so worrying.
And the growth from AI isn't there either. These companies are not making
[ __ ] tons of money. Microsoft, two quarters straight, told you their ARR for AI — I think one quarter they said $10 billion ARR, which is the month times 12. The next quarter they said $13 billion. The quarter after, they just didn't bring it up, probably because the growth rate's flat. What are we doing, man?
I think
you're right that a lot of this trade is
predicated on scale working, and that is a risk, because what we're hearing from the tech companies is that they're getting diminishing returns from scale — in terms of making these models bigger, building up the GPU clusters, training them with more data, it's not as—
They're out of data as well.
That's true, that's true. And I think maybe that's why you see the Scale acquisition from Meta.
Insane acquisition — one of the most top-of-the-market [ __ ] moves. $14 billion for Alexandr Wang, the labor abuser at scale. I mean lowercase-s "scale" there. And on top of that, basically cutting off the fuel supply for multiple companies for training data at a time they're running out.
Well, it's interesting, because a lot of those companies are cutting it off on their own. But yeah, you're right.
OpenAI was moving away.
You're right.
But Google was their biggest customer
and they pulled away. But I think just
going back to this scaling thing,
everybody is now admitting that there
are diminishing returns for making these
models bigger.
Uh, and I think we're really going to hit a point where they're going to say: okay, I'm just buying billions and billions more in Nvidia chips to make my model a little bit better — do I need to be doing that? Just to go back to a conversation that I had a month or so ago with Sergey Brin, where he said he thinks that the improvement is going to largely be algorithmic — meaning not by adding more GPUs and data, but by actually changing the algorithms inside these models to make them better, things like reasoning. Okay — let's just talk about it — right now, within these tech companies, there is a consideration that maybe scaling up these models isn't what's going to get them there, and then there is that risk to Nvidia. And if that goes down, then it could be a problem.
It will go down. Like, that's the thing. At some point — putting aside my feelings about AI — at some point there will not be enough space for these GPUs. There will not be enough space on the earth to fill with them. There will eventually not be a need to—
Are you saying that Micro— Because the assumption here, that this keeps going, is that Nvidia either comes up with something completely new — like Rubin, for example. Are we meant to believe that everyone who's just getting Blackwell, when Rubin comes out, is going to go, "Yep, I definitely need that"? That is the gamble, and it's just kind of scary, whether or not AI succeeds. Because also, the growth isn't there. The software sales aren't there, even if they made the software sales profitable tomorrow. The actual revenue is really piss-poor. Like, it's not that much.
Even if OpenAI was profitable — okay, they're the biggest AI company, cool — are they going to $100 billion a year? [ __ ] no. And also, if they made it profitable, someone else would, and they would get price-[ __ ] It's just such a brittle industry. There's never been anything of this scale this bad within tech. You can say the fiber boom, but no — you didn't have every single software company selling a fiber solution. You didn't have apps back then in the same way, but you didn't have Notepad and Microsoft Word trying to sell you fiber, or saying the new glory of fiber is here — partly because of the society we lived in at the time. But this is bonkers.
The argument that Nvidia would make is that eventually AI use is going to be so intense that you'll actually need more GPUs to fulfill that demand.
Fascinating. Jensen Huang — I give him credit. He's got great leather jackets. Sounds like he's horrible to work with, though. But—
Why do you think he's horrible to work with?
There have been multiple, multiple reports of it. Like, he's an aggressive CEO — it's probably worse, it's probably better — but he's an aggressive [ __ ] CEO, and he humiliated someone at CES. There was a sound guy at CES, and he called him out by name in front of everyone. Disgusting. You have a bazillion dollars. You should be happy to be there. But he'll never be happy — he wants to sell more GPUs. It's frustrating, though. I understand. But
also, what's Jensen Huang meant to do? He's going to go up on stage and be like, "Yeah, we're [ __ ] People are eventually not going to buy these, I should let you know"? No, he's not going to say that. He's going to say, "Yeah, well, there'll always be—" He's done it before; he'll do it again. That's the thing — Nvidia will be fine long term. They're actually positioned well, because they make real things, and Jensen Huang is a pretty good CEO. They have actual innovation there. They have tons of different layers to the company, actual value creation; they have the monopoly on the consumer graphics market. They do make good stuff. There's a lot of problems with their consumer hardware right now — well, sorry, consumer graphics hardware right now — where they've basically killed the midmarket, and that sucks. But it's still a business that sells things and owns things. The rest of them, right now — I think it's more likely at some point they go, "Why are we doing this? This is so annoying. This is so annoying. It's so costly."
I think Satya Nadella is also really tired of Sam Altman, from everything I've heard — well, by which I mean read; I'm not an in-source with them, I wish I were a fly on the wall. Everything that's been reported — the Journal's done some really good reporting on this — has basically said that that relationship is frayed, because I think Sam Altman thought he had more power than he does. And in Redmond, you're against the ultra-monopolist. You're against, like, the OG — the Michael Jordan of monopolies. They beat the antitrust claims with MS-DOS and Windows.
You know, we're going to talk about this at some point, but the conversation — sorry, the story about the whole threat of antitrust from OpenAI is—
Oh, just bring it up now. Now that you brought it up.
Yeah. So, it's just been on my mind ever since I read the story. So, right now, OpenAI, in this wonky thing, is trying to convert part of itself into a for-profit entity with control from the nonprofit board — which Sam Altman's still on, but whatever. Part of that conversion requires Microsoft to say okay. And Microsoft says: okay, well, we'll convert in exactly the way things are right now — 49% of shares, and we'll continue having your IP up until you get AGI, which is, no. And we also get to sell your models exclusively, and we have all your research too. Sounds great to us. And Sam Altman said: no, actually, you should get 33%. You shouldn't be able to have access to our IP after a certain point.
Also, the Windsurf acquisition — I don't know if that's ever going to happen, because, according to the journalist Berber Jin over there, apparently the Windsurf acquisition has become a major problem. OpenAI is saying, "Well, we can't give you the IP from them — you compete with them with Copilot." And Microsoft says, "Actually, our contract says you have to." And the line in the article is hilarious — it's like, "Microsoft gave the blessing for the Windsurf acquisition under the current terms." It's just like, yeah, of course they did. And the thing is, OpenAI has allegedly hinted — by which I mean leaked to the Journal, I assume; I don't have any interior knowledge there — that they were considering an antitrust action against Microsoft. For some reason. People sign away their First Amendment rights in NDAs all the time. People make contracts to give away their rights all the time. It's not anti-competitive because you don't like a contract. Also, even if they filed it today, good luck seeing that [ __ ] in front of a judge for three years. You don't have that kind of time.
The fact that they're saying that suggests that things are desperate — because, understandably, Microsoft said— oh, also, OpenAI wants to reduce Microsoft's revenue share.
It's like — I put it in the monologue I recorded today — being in a hostage situation, putting a gun to your own head and saying, "If you don't give me what I want, I'll give you the hostages and kill myself." Because the only reason Microsoft would agree to these terms is reputational damage. Sam Altman believes he is the most popular, well-liked special boy in the world, and I think he believes that Microsoft would just roll over. And Microsoft said: why should we bother? We don't have to do that. And sure, they could work it out — there's every chance that Microsoft just goes, ah, [ __ ] it, I don't care. But why would they? Why would they do that? What possible value—? Indeed, it would be a reputational harm to Microsoft; it would suggest that Microsoft can't negotiate. And then The Information had another story — a couple weeks ago — saying that OpenAI has been undercutting Microsoft in deals, selling their models and undercutting their enterprise subscription deals.
And just making a deal with Google, by the way.
Oh, the Google— oh my god, are you talking about the Google compute deal? This is my favorite deal ever signed. Okay, here is how the Google deal works. OpenAI is contracting Google for cloud compute. Google is contracting CoreWeave to serve that compute. Why would OpenAI not just hire CoreWeave? Well, I assume Google needs to add some revenue, even if they're probably just losing money on it. But it's the strangest situation I've ever heard of. I feel like we need more tech analysts who just look at the absurdity of all this, cuz it is absurd. But no — so within this situation, you've got OpenAI competing with Microsoft to sell their own models and undercutting them.
Microsoft provides all their infrastructure. Sure, Microsoft probably fears some anti-competitive action if they start taking measures against OpenAI, but Microsoft never— I don't think Microsoft has to provide them the discounted Azure costs — like a quarter of the price — which, at least as recently as last year, they were providing. I don't think Microsoft has to give them any of the things it does. OpenAI signed a dog-[ __ ] deal, a really bad deal that made sense at the time, because I assume they thought this would do something different than it did. Now they're in a price war. And what OpenAI is doing — the undercutting thing — that's a Michael— sorry, a Michael Jordan of Monopolies move, I should say. That's a Microsoft move: yeah, we're just going to lower the prices until you die. You can't do that when you lose billions of dollars a year, [ __ ] Microsoft does that because they have the ability to just go: we will pay for it ourselves using our monopoly over business software; we will use our monopoly over Azure, one of the three companies that really makes meaningful cloud revenue. Like, that's the thing: Microsoft can bankroll that crap. OpenAI can't. And on top of
that, if OpenAI does an antitrust action — I think I mentioned it earlier — 2,000 people in Microsoft's legal department. 2,000 people. You've got more people working legal at Microsoft than work at OpenAI all told. It's just brazen. And I think there is a chance — I'm not saying it's for sure — but Microsoft could kill OpenAI. Because, by the end of the year, OpenAI must convert to a for-profit entity, or SoftBank does not have to give them more than $20 billion total. SoftBank's already given them $10 billion. Another problem — another, just, this is a small one; I'm sure this is easily going to be solved. For SoftBank to give OpenAI that money, and to buy Ampere for, I think, $6 billion or something, they had to get a one-year, $15 billion convertible — no, a bridge loan, even. And they had to go to 21 banks.
21? Their credit rating—
Yeah. I think there was a story that there was a consideration of it hurting their credit rating. I don't think it's happened yet. And on top of that, SoftBank does not have the money to do the next $30 billion. They don't have it. They would have to raise more money. Now, another story went out saying that now they're going to the sovereign wealth funds, and they're going to Reliance, I think, in India. And you don't go and do that unless things are not looking good. And so, if they raise another $30 billion, SoftBank will only be providing $20 billion of that.
So $10 billion will be syndicated. So, on top of SoftBank having to do all of this to make this happen — to find money that they don't have — OpenAI will have to raise $10 billion, one of the largest private rounds of all time. And if they succeed, they will have to do it again and again and again and again, because OpenAI will be, according to their own projections, burning money until 2029, 2030, when Stargate — which will somehow exist, and which will also require another $19 billion from SoftBank that they don't have — happens. Once that happens, they will go profitable somehow. It's just really
strange that this is considered an outlier position, versus arguably one of the least stable financial situations in history — perhaps not tantamount to the subprime mortgage crisis, cuz that was so clear when you saw the fundament of it. I'm not an expert in mortgage securities, so forgive me, but I can't imagine it would have happened in the same way if it happened today, just because there's more access to information. But in that case, where you just had millions of consumers with loans they couldn't pay off — that was bigger and had more widespread damage, because there were people losing their houses, and then it [ __ ] the economy. I don't think this is going to be super far off when it happens, because of the Mag 7 problem I mentioned in my video.
And what's holding it up is one company that burns billions of dollars, and their sugar daddy out in Japan, run by Masayoshi Son, who is well known for losing money and making really bad investments. By the way, another question: all the reporters talking about the $3 billion a year in agents that SoftBank was going to buy — where's the [ __ ] reporting on that? Absolutely egregious. Almost as egregious as people claiming OpenAI had closed a $40 billion round. They didn't do that. They ain't got the money. No one's got the money.
Mhm.
Why is OpenAI raising money for a round that they claim was closed?
It just frustrates me, because people will get hurt.
So, let me ask: what is the best argument against the claims that you're making? Have you heard one?
Honestly, I would love to hear one. It very much is "if a frog had wings, it could fly." It's like: if they get better, sure. If they manage to make this much, much cheaper, and they end up working out a thing that could sell really well, sure.
Can I ask — you run a PR firm? That's your core business.
What do you mean?
That's what brings in the most revenue.
Um, with EZPR — no, I mean, it's spread across the businesses.
Wait. Okay.
As in, like — as in media and PR.
I mean, how do you, as someone who owns a PR firm, decide that this is— I'm just curious. This is not, like, uh—
A lot of my business is working with journalists to pitch clients to them, right?
And I stay away from AI stuff. Like, I worked with a consumer TV company, and I didn't write about anything like that, for obvious reasons. The thing is, a lot of my business is talking to journalists. Journalists want to be presented stuff that matters to them, that comes from a person who's considered and read their work. The fact is, I consider and read their work all the time. It's what I've done for, like, the 10, 15 [ __ ] years I've been doing this business. It's the same thing, except I started writing. And yeah, I've fairly well demonstrated that I understand what I'm talking about in the writing I do, and I also firewall that very precisely.
So it hasn't hurt the PR firm, then?
No.
Okay.
No. And in fact, the clients kind of like it — they appreciate the fact that I can elucidate that I understand business. And it's one of those things where, yeah, at some point the media stuff will probably take over. But I'm just having a great time doing all of it. On top of that, when it comes to doing PR, doing media relations, so much of what PR people don't have is basic knowledge. And I do pride myself on knowing what I'm [ __ ] talking about.
And it helps. And it's great. And also, there are strict firewalls. CES is a great example. I had a client at CES at the time. I would pitch a journalist to come on my show beforehand, before I pitched them from the client, because I didn't want any possible situation where they thought, for even a second — even though I don't think they'd think this — that them saying no to my client had anything to do with the show. And there were people that said no to stuff who came on the show, and it was fine. Who gives a [ __ ]? They are separate entities, and my clients are very respectful of that as well.
Can we just take a moment of levity? Because the way I first found out about what you do was — speaking of CES — I think you told a bunch of people that you would meet them at Updog, and they would say, "What's Updog?" and you would say, "Nothing much."
Oh, that was so much fun. They were so pissed.
You got them good.
Yeah, you pants someone, they get— no, that was great as well.
So, what happened there?
So, what it was: I was heading back to England — I think it was a few days before I headed back — and I was getting spammed. I went to CES because I think I had a blog at some point that got me in the media system; they screen you in automatically. So I said, okay, I'm just going to respond to these people. And none of these people had considered who I was for a second, because they just spammed me. So I'd respond with, like, "Can you send me more info on Updog?" And they'd be like, "What's Updog?" I'm like, "Nothing much. What's up with you?" Most of them didn't respond. Some responded with, "I can't believe you do this. This is so unprofessional." One of my favorite tweets at me was like, "Oh, making fun of your peers. You're a real douchebag." I have that tweet somewhere. It's so funny. Because, look, if someone got me like that, I'd be like, "Oh, fuck."
Like yesterday: I said to my dear friend Casey Kagawa, "I've hit this number of paying subs." He said, "You'll never eat all those." And I got so pissed at him, cuz it was such a good dunk — he was suggesting I was talking about sub sandwiches. Not a great joke, but he got me good.
If you get done with a funny joke in a professional scenario, you should enjoy the fact that you're not having to talk about business for a second. I don't judge anyone who fell for that. You're a [ __ ] PR person emailing a bazillion people. Laugh with me. We're all having a good time — or you should be. Apparently, a major agency, though, sent a company-wide email saying, "Warning: Ed Zitron." Which is really funny.
And that was your screen name for a while.
It was. Good callback — real OG fan. No, it's— yeah, that was really funny. I meant no harm with it, and I think anyone who took offense to that — go outside.
Let me ask you this, to end. We have listeners here that, I think, believe in the power of AI, are working in it, are implementing it, are building it — and some that are concerned about it, worried about it — and who are really curious about the business side of things. And sometimes those people overlap. You've built a sizable audience among people who are really concerned about this. And I think that every time we do a show about, like, the downsides of AI, people grab on to it. I mean, even with the Gary Marcus show, there are people that will go in the comments on YouTube months afterwards and be like, "This helped me come down from all the AI-based fear."
Like that, yesterday.
So, why do you think people are so concerned about this technology, and why do you think the criticism of it resonates the way that it does?
I think there's a few things. One is the most obvious, which is: I think anyone would be afraid of someone taking their job. It's a natural thing — the thing I have, someone might take it. And when you have the entire media and most public companies saying, "I can't wait to replace humans; you would mean nothing to me" — yeah, that's scary. When you have Ezra Klein and Kevin Roose saying AGI is just around the corner, baby, and it's going to change everything — notice that they never say how — that's very scary. And this is not saying people are stupid or uninformed. The average person does not have my very special stupid mind, where I'm like, I must learn all the numbers. And most people don't have the time to sit down. They have jobs. They have families. They have things to do — more fun things, I imagine. So they see the [ __ ] news and they get scared.
And then I think there's a layer deeper, where tons of people realize that they're being told a lie. They go and use ChatGPT and they go, "Okay, this search is better. My friend talks to it like a therapist, which is worrying." But they keep hearing the big companies and Sam Altman described as the next big thing, and the power of AI. But when a regular person looks at it, they go: this isn't what they're saying it is — but everywhere is saying it is, and their bosses are saying AI in everything. And I think that people feel this cognitive dissonance, and they feel it profoundly. It's the same way they felt about the metaverse. It's the same way they felt about crypto, AR, VR — all of these things. But none of those were this pungent.
And you've really just seen companies so horny for the idea of replacing people. They're so excited. You as a CEO shouldn't be — unless you care more about your shareholders and growth. Which — Andy Jassy is an MBA, as are all of them. I think all of them other than Mark Zuckerberg have MBAs now — all the major big tech CEOs. I don't know if Jensen does. Anyway—
Wait, let's get this right.
So, I think Tim Cook does; Satya Nadella; Sundar Pichai — he worked at McKinsey. Okay. Andy Jassy. I even think the guy who replaced Andy Jassy at AWS has an MBA. Okay, pretty sure I'm correct on those.
If I'm wrong, score me. But people realize that there's a disconnect from what's being told, and yet they are very clearly seeing how lascivious people are around the idea of replacing them. So they have this dual offense: you haven't even built the future yet, but you're doing the touchdown dance, and you're so proud of the fact you'd replace me. You're so excited to replace a real person. Mark Zuckerberg wants you to have fake friends. Sam Altman wants you to have fake coders. And then they see that the outputs are kind of [ __ ] They see that it doesn't really replace people — it replaces an aspect of labor, and a small aspect of labor, in exactly the same way that bad bosses mistreat their employees and do not value their labor. I
had this thing I wrote called "The Era of the Business Idiot" — I did a three-part episode on it — and my principal thesis is that throughout most power structures, there are people that do not understand work, that do not want to do work, and exist as a kind of ultra middle manager. I think Sam Altman is their antichrist. Which sounds dramatic, but hear me out. Sam Altman is the single most gifted business-idiot whisperer of all time. Look at what he's done. I think he's reprehensible, a real scumbag, but I cannot ignore the work he's doing. He convinced [ __ ] Oracle to do all these chips. He convinced Masayoshi Son, Satya Nadella. Of course he's confident that he can con Microsoft — I think he's wrong — because he's done it before. He convinced everybody that generative AI was the future without really proving it. Someone else did that work for him. Someone else built ChatGPT. How many of the people who built ChatGPT are still there? Ilya — got to give respect to the guy for just doing his own scam. Mira Murati, same deal. They get a Steven Levy piece being like, "Yeah, they're going to build an AI thing," and everyone's like, "Oh my god, oh my god."
And then, on top of all of this, you have this [ __ ] about AGI — the most fictional of all fictional concepts. I've said this a few times: it's like having a bunch of billionaires saying they're going to hunt and capture Santa Claus. We are closer to the Ninja Turtles. I'm deadly [ __ ] serious — I've talked to biologists. That's about as firm as Sam Altman can get with AGI, too. Because that's the thing: you have all these people hearing that there's going to be this conscious computer,
and they're [ __ ] scared of that. Of course they are. Even though it's a complete lie, even though it's a falsehood, because Kevin Roose was at a dinner party with some other credulous people.
You don't like Kevin Roose?
I think Kevin Roose was very good at his job, and he has now gone anti-remote-work, pro-metaverse, pro-NFT—
Multiple—
The Pudgy Penguins column was disgusting.
What was that?
He joined a penguin NFT club.
Okay.
That was the Arctic one as well.
I will say — and I think it's important to note — unlike crypto and the metaverse, AI feels different to me.
It is different.
It is. It seems far more useful. There are more products. There are more actual—
Indeed, when this bubble started, I pushed back on people who said it's just like crypto, it's just like the metaverse — because there was a thing here.
Right. Was it as big as people said? No. But the egregious lie with the metaverse was like: we've made a VR space; this is worth 100 bazillion dollars now.
But with Roose — he did an article about Helium, a crypto company. And then Matt Binder, I believe, over at Mashable — he got outplayed by Mashable, man. Actually, Binder is amazing. No, there are tons of great people there. Celio, I think. Anyway, with Roose, he did this story where he was like, "Yeah, Helium works with Lime and Salesforce." Turns out they didn't. Turns out they didn't. Matt Binder went and asked, and they went, "No, we didn't." Kevin Roose amended it by saying, "Skeptics have suggested" — or "critics have suggested" — "that this wasn't the case." It's like, motherfucker, come on. I don't like Kevin Roose because he has this amazing power — he has this huge audience — and he chooses to support the powerful. He did a story about an
AI welfare guy — an AI welfare guy being added to Anthropic. Just a Ninja Turtles expert. It's "we will find the ooze; the ooze is here."
I thought that was an interesting story.
I thought it was [ __ ] stupid, because it didn't discuss the welfare of AI. If you discuss the welfare of AGI — if we have a conscious computer — you are describing a slave. If this thing has consciousness, you now have issues of personhood.
Are you open to the idea that it could
be? I think it could be possible in
30, 50 years. I think we are.
So you're just now saying you're open to
this idea of AGI.
I'm open to the idea in the same way
that I'm open to the idea of Teenage
Mutant Ninja Turtles, in the sense that
if we got the ooze, that could do it. AGI,
we do not have evidence it's possible.
We don't understand how humans think.
How the [ __ ] are we meant to create it
in a computer? But say we do. And this
is the dirty part of the conversation no
one wants to have. Say they succeed. Are
you saying that this conscious being,
which Microsoft owns, by the way, Microsoft
owns this conscious being, with
intelligence and consciousness and a
personality, are you saying we wouldn't
let that free? Because what you were
describing there would be a slave.
Yeah, and it should not be the goal.
No, but that's what they're thinking.
Now, they could say, oh, we'll do it, but
we'll make it so its consciousness just
focuses on doing whatever Salesforce
wants. Still a slave.
Right? This is the thing. I
genuinely would have respected Kevin had
he done that story and then had a really,
like, agonized discussion, which is
genuinely interesting, asking what would
be the ethical ramifications of owning a
conscious thing. Fascinating.
But doesn't that story, that the
labs are thinking about this, kick off
that conversation? Like, I don't think
that you can
I'm Wario Amodei. I have decided that
okay
that I'm never calling him by his real
name. Um, Dario, I'm sorry. Dario. Wario.
Um, I am him. I am trying to work
out reasons for people to invest in me
in the future. I'll probably give,
let's call it, a million-dollar salary,
probably a couple mill more in stock.
I'll make a new guy. The new guy will come
in, and his thing will be AI welfare.
What does that mean? What if it's life?
We can do a Google Doc back and forth.
Karen Hao's Empire of AI does an
excellent job of discussing how many of
these people fart in a glass and sniff
it, because they have jobs there where
they just sit around going, "What if
this happens? What if this happens?"
It's a marketing spend, and it worked. It
worked on a guy it's worked on
before. Kevin Roose did an article
recently about a company claiming that
they were going to replace workers. You
know what they hadn't done? Even created
the environment they'd do it in. Kevin can
do good journalism. He's done really
good work. The Young Money he did was
great. Like, there are actual things.
Casey Newton's the same way. Like,
they're good journalists. They could do
good journalism. They could even, if
they were optimists, engage
in actual optimism. There'd be
interesting things. The welfare story is
a great example. Man, having a
conversation in the Times about what's
considered human or not. Why have
they not been doing that elsewhere?
Anyway, pass. Um, it's just this
frustrating thing where ultimately
the people that suffer will be the
people who depend on the markets for
their pensions. The markets
do eventually affect the workforce. And
on top of this, the other thing is that
we've got major people in the media hot
and heavy over the idea of replacing
people. Hot and heavy. They're excited.
I think that's disgraceful. On top of
it, who are you fighting for? Who are
you writing for? It isn't clear.
All right. And I think you and I will
disagree on Kevin Roose and on some
other things, but I am glad that
we've had this discussion.
Me, too.
I don't agree with everything you've said.
Um, I think it was good that we had a
conversation where we, you know, brought
some of this out there, tested it. Uh,
and I think the one thing I'll say is I
leave open the space that you're right.
Yeah.
And that's why I think you have a very
interesting perspective on this, and
that's why it was important for us to
have this conversation.
I'm really happy to be here, and we've
talked a good amount about this, and,
like, I'm really excited to be here.
Thank you for having me.
Definitely. Well, thank you for coming,
folks. If you're interested in the
podcast, it's Better Offline. The
newsletter is Where's Your Ed At. Uh, and
there's also the still-alive-and-kicking
EZPR, ezpr.com. All right, everybody,
thank you, Ed.
Thank you.
Thank you for watching or listening and
we'll see you next time on Big
Technology Podcast.