AGI or Bust, OpenAI’s $1 Trillion Gamble, Apple’s Next CEO?

Channel: Alex Kantrowitz

Published at: 2025-10-13

YouTube video id: I4yN5JlSaaY

Source: https://www.youtube.com/watch?v=I4yN5JlSaaY

Are we now at the point where the AI
investment frenzy means it's AGI or
market collapse? OpenAI's $1 trillion
infrastructure investment may impact all
of us. And he's the front runner to be
Apple's next CEO. We'll cover it all
right after this. Welcome to Big
Technology Podcast Friday edition where
we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to cover all of this crazy AI investment and ask ourselves, are we insane for not sounding an alarm and saying this is going to get much worse before it gets better? A lot of money is going into AI, and it's time
to look at where that money is actually
going, what it's premised on, and
whether it is putting too large of a bet
on OpenAI, maybe betting the entire US,
maybe the global economy on one
company's fortune. We'll also talk about
who might be the next Apple CEO and whether it's time for him to just step up right now. Joining us as always
on Friday is Ranjan Roy of Margins.
Ranjan, great to see you.
>> Good to see you. I'll admit it. I am
going to be the next Apple CEO. It's
breaking right now. He might be the
front runner to be the next Apple CEO.
>> If you fix Siri, I think a lot of people will be happy. So, no pressure. It's obviously an easy thing that not many people have tried and is quite simple to do.
>> That was my pitch to Tim. He bought it. He's been listening for a while and he realized the only way to fix Siri is to bring me in.
>> Okay. Well, I look forward to being able to do our next set of episodes with you in the big UFO in Cupertino. That is assuming, of course, that we still have a global economy by the time you take over. And I'm starting to think it's not such a sure thing, because
we've seen two things happen. One is an even greater increase in investment in AI infrastructure over the past, let's say, two weeks. It's been two weeks since we last spoke, and there is so much that's happened. But as that has come up, there has been an increasing chorus from people, even within the industry, starting to say: does this make any sense at all? Is
this going to be a problem? And let us return to David Cahn, the Sequoia partner who wrote, of course, a great piece a year or two ago about the $600 billion question around generative AI and whether there's going to be enough profit to actually justify the investment. He now says that question is quaint, because we are at a stage of the buildout that is much further along. So he says: one thing has become clear. Nothing short of AGI will be enough to justify the investments now being proposed for the coming decade. This is happening even as AI's potential is being realized. ChatGPT has continued its epic rise to north of $12 billion in run-rate revenue. Anthropic has reached $5-plus billion in run-rate revenue in a meteoric rise, and there's a new club of companies scaling quickly from zero to $100 million in
revenue. There's a version of the world, and this is the version that Microsoft and Amazon increasingly seem to be pursuing, where the next frontier is AI adoption. The models have proven themselves to be great, and now it's time to monetize these investments and drive a world-changing technology evolution. But that point of view is by no means widespread. Outside of these giants, a debt-fueled second push is happening. Labs are taking all their profits and capital and plowing them right back into new data centers. And a new breed of companies, namely Oracle, Meta, and CoreWeave, are going all-in, no holds barred. Given the scale of these investments, the only objective that can explain this strategy is
AGI. So I think Cahn is making a really good point here. Two things are happening. One is you have companies like Microsoft, which came on this show last week to talk about how it was basically being more rational with AI spending, and Amazon, you could put them in that bucket. And then you start to see this crazy buildout that OpenAI, Oracle, and others are driving, where the numbers really only make sense if you get to AGI, if you get AI that cures cancer. And so much of that is speculative. And that is where we might be getting into a danger zone. So Ranjan, I'm just going to turn it over to you. What do you think about that?
>> Yeah, I'm really glad that he's making this distinction between those who are kind of just pushing the idea that it's time to monetize these investments and drive a world-changing technology evolution, versus this debt-fueled second push. I've been looking at this a lot over the last couple of weeks around the Oracle deal, the AMD deal. All these deals are focused on capex and not product. Again, there have been some incredible moments in product over the last couple of weeks, Sora included, which we're going to talk about, and I've been waiting to talk to you again. But I think overall, this idea that it has to be AGI, it has to be something that justifies all of this capex investment, is what Oracle, Meta, CoreWeave, all these companies are betting. And there's been absolutely nothing that shows us we're actually headed in this direction. So it starts to feel more uncertain and irrational. Nothing's stopping it right now, but I think it's a good thing that we're talking about it, right?
>> And over
the past couple of weeks, I've been asking myself, am I a lunatic for thinking that some of this infrastructure spending is just not following the data that you're seeing in AI research? It's something we've talked about on the show with people in the industry for, I don't know, how many months now, six months, a year, where we've talked about how the gains that you're seeing from scale are leveling off. And that seems to be as close to consensus as you're going to get in AI, because there'll be some folks, maybe like Dario Amodei of Anthropic, who say that scaling up LLMs is a way to get to AGI, but everybody else is saying we're seeing diminishing marginal returns. So you have that seeming consensus, and on the other side you have investment that's building as if that's not true, building as if you can just scale your way to AGI. Let's go back to David Cahn
here. He really makes this point well: What's surprising me is that this doubling down on capex is happening even as the dream of AGI seems to be cooling off. Two things have happened. First, new model progress has tapered off despite much larger training clusters. Second, as a likely consequence, AI luminaries have started to walk back their AGI timelines. In December, Ilya Sutskever said that pre-training is dead. In June, Sam Altman said AGI will be more of a gentle singularity. And that same month, Andrej Karpathy forecasted a decade of agents rather than AGI in 2027. It's such an amazing divergence between what the people in the field are saying and what Wall Street and the investors are buying. What do you think about this?
>> I like my singularities to be gentle. So I'm still glad Sam's saying that. But no, I agree, because there are kind of two parts of this. One, are these investments in data centers actually going to be required? Is this need for compute going to pan out? Are we going to get to AGI and these very heavy compute processes that solve all of our problems? But then the other part, which I actually think David Cahn didn't really get into, is that it was kind of presented as binary: heavy compute leads to AGI. But the idea that doing these things in a more compute-efficient way is still another, third path, I think. I mean, you were on stage with Google hearing about how it's algorithms, not just raw compute. I think we've been seeing a lot more around that. I saw some paper, I think it was called a tiny recursive model, that can actually achieve very similar results as DeepSeek. The idea is that if you actually do things in a more compute-efficient way, that makes things a lot more cost-efficient for companies, and people will much prefer that to the heavy GPT-5 that's going to think hard and long about every single problem, even very simple queries that you give it, just to drive compute usage. So I think that overall, the only way any of this makes sense is if we realize this vision of heavy-compute AGI, which there are no real signs pointing to, at least that I can see. And as you said, even Sam and Karpathy and Ilya are all saying this as well. It's such a disconnect from the actual investments that are being made.
>> Right. And we'll get into sort of what the logic might be even if we're not going to get AGI simply from growing these data centers and models. But I think we can both agree that it's crazy-making, in a way, what we're seeing right now: the investment and where the research is pointing are completely disconnected.
>> Well, I have a question. What's your current definition of AGI, other than Waymos driving around New York City? But in this context,
>> Obviously, that's the first definition.
>> I mean, that's the official industry standard. But in all these contexts, I'm curious. You mentioned AI curing cancer as kind of one high-level interpretation of this, but how do you look at what could be AGI in this context? I think let's just go with the definition that I think these companies are thinking about, and that is AGI that can do more than 50% of white-collar work today.
>> Okay. I mean, I guess that's the part where I still have trouble squaring this, because I really believe you can probably do 50% of white-collar work without incredibly heavy compute. I'm kind of in that Microsoft and Amazon camp, that the models are good. Obviously, longtime listeners know where I stand on product versus model, but I think there's a world where you can build these very complex workflows and you can do this work, and it doesn't require AGI. It requires the current state of technology. And we'll get into whether the feasibility of these data centers, from an investment and chip standpoint, even makes sense. But I think you don't need that interpretation of AGI to actually make AI realize its potential.
Okay. So what you're doing right now is you're giving the perfect rationale for why this buildout makes sense, because I think what the labs would argue to their investors is: even if we stop today, we have technology that can, with the right orchestration, automate 50% of white-collar work, and therefore this investment is going to be worthwhile. And you actually kind of hear it slip from people like Dario, who says that 50% of entry-level white-collar jobs may be automated within a couple of years. If you don't need a massive technological advance to get there, then that would be the logic for this buildout and would make the investment worthwhile.
>> I'm curious, do you think that level of a case even has to be made to investors? Like, do you think that was a pretty good pitch? I'm halfway in there, but do you think they're even getting to that level right now, or what do you think these conversations look like between
>> I think some of them
>> Some of them, yes. You're going to talk about, like, OpenAI and AMD, and we're going to get into it in a moment.
>> Yeah, the Matt Levine piece was extraordinary.
>> Some of it is like, yes, that case needs to be made. I feel like that was a version of the pitch made to Lightspeed, for instance, because I spoke with the Lightspeed investor who wrote a billion-dollar check into Anthropic, and he basically, for the Dario profile, did the math for me and sort of explained it in a way similar to the way that you just explained it. So I do think that, yes, that's where the conversation gets in some areas, but I also think there are others who are like, let's just do a deal, please. We need the OpenAI brand shine. And that's where it nets out.
>> Yeah, I can see that. I can see that it can go both ways. And on that question of AGI, David Cahn had even pointed out three kind of underlying factors that make things even less likely. And I thought this was interesting, because we all talk about AGI as this kind of vague concept that the labs will get us to. But in reality, the first big thing, which I don't think I hear very much about at all, is that the labs are starving PhD programs of talent. So now that really foundational research moves only toward labs, away from universities and more traditional research, even though that's where all of this started. That's actually, I think, a dynamic that is totally overlooked and could have longer-lasting consequences. But then one of the other ones I liked was that corporate politics tends to favor in-vogue consensus ideas over more radical, unpopular ones. I think it's fair to say that even though the Sams and the Darios still present themselves as kind of renegade, us-against-the-world figures taking on these challenges, these companies are becoming corporations. I mean, by valuation certainly, but you have to start to imagine, and OpenAI's internal politics are the stuff of legend, but overall, the idea that they're going to still be able to operate truly in that kind of intense-innovation way, versus starting to get a little Google-cloudified, I think is another risk to this.
>> Yeah, I think this is a great point that Cahn brings up, which is basically: you're putting so much money toward LLM development, and you're investing as if scaling LLMs is a straight shot to AGI, that if it's not, you're actually going to slow down that pursuit because you're so focused this way. I think it's a great point. You do hear from folks like Demis saying that we need a couple of breakthroughs beyond LLMs to get to AGI, and of course Yann LeCun has been loudly talking about that. I think this is a real risk, though. I think he's totally right. If you're being thrown millions of dollars to join, let's say, Meta to work on LLMs, and you would have otherwise been pursuing a non-LLM solution that's sort of out of the box and maybe had a 10% chance of working, but if it worked would be a big breakthrough, then in aggregate, what's happening now is probably slowing down the AI field, which is really interesting. And let alone what would happen if this actually goes bust.
>> No. And then, on that, you have to imagine what the actual human dynamics underlying this are. This is a small group of people, and a lot of these people have worked together, so you have to imagine the groupthink really has to pervade the way they're approaching or thinking about anything. And obviously I think that's why DeepSeek was such a moment, because it showed that completely separate teams and people actually can play in this realm and have a different way of thinking. But it really just becomes more and more clear that this smaller group of people has the same mindset and thinks the same way, and this is the bet they're putting us all in, and potentially the entire global economy, as you said.
>> And then one last part of this is that he talks about how the incentives inside these organizations drive short-term thinking, on the order of one to three years. I think that's really important as well. Even if you're, well, especially if you're OpenAI, you were founded as a lab on long-term research. Now you have to return what, like a trillion dollars in investment over the next couple of years. You're not going to be like, let's work on the frontier and experimental stuff. You've basically found a method that you like, and you're going for it. So he kind of ends his piece asking what the new compute is for. I'll just read this part: If new compute investments aren't getting us closer to AGI, then what's the point? One argument is that compute is a commodity of the future and that stockpiling this resource is likely to be valuable regardless. I think this sort of goes to Ranjan's point. Setting aside the issue of depreciation, which makes this argument tenuous at best, the bigger question becomes how long financial markets will be willing to underwrite such stockpiling and whether investors even understand that this is what they're doing. My sense is that while researchers are increasingly uncertain about how compute translates into capability improvements, Wall Street hasn't fully woken up to this. So let's
just talk about this, you know, wrap a bow on it. Basically, I think we're both a little bit concerned that even if you get to a place where, with the current systems, you can create economically valuable work, maybe this current buildout is so overenthusiastic, and is so vulnerable to, a, the technology not improving, or, b, efficiency improvements, that there's a nonzero chance that they're effectively, I don't want to say lighting this money on fire. But maybe that's what's happening.
>> I think it's a generous interpretation, almost. In terms of where this takes the economy, I think we definitely need to get into that, because: how much of this money is real? How much of it is being spent, and how? And actually, I listened to this really good podcast talking about the data centers, flying a drone over one and seeing actually hundreds of people working, and it's the size of lower Manhattan, or I think it was like Central Park down to SoHo. So there's stuff being built, which I think is at least a good reminder for me that this isn't all just completely made up. But we haven't had a genuine discussion, an investigation, around the dollars and how they flow. We've been talking about this for years: what investments are just compute, what investments are actual cash, where are things being built. It hasn't really been dug into and analyzed in a traditional financial sense, and maybe this is the moment that that starts.
>> All right. And why don't we start doing some of that right now? So the Financial Times has an article about it, saying that OpenAI's computing deals top $1 trillion, and then sort of asking whether this makes sense, because ultimately someone has to fund all this buildout. Now here's the story. OpenAI has signed $1 trillion in deals this year for computing power to run its AI models, commitments that dwarf its revenue and raise questions about how it can fund them. Here are some of the deals. The deals with Nvidia and AMD could cost up to $500 billion and $300 billion respectively. Oracle's deal with OpenAI could cost another $300 billion. CoreWeave has disclosed computing deals with OpenAI worth more than $22 billion. OpenAI has also launched an initiative with SoftBank, Oracle, and others, known as Stargate, and pledged up to $500 billion in US infrastructure for OpenAI. It's not clear how the Nvidia and AMD deals would fit into the Stargate plans, although I think we do believe they're including that as part of the $500 billion. The deals would give OpenAI access to more than 20 gigawatts of computing capacity, roughly equivalent to the power from 20 nuclear reactors, over the next decade. Each gigawatt of AI compute capacity costs about $50 billion to deploy at today's prices, making the total cost about $1 trillion.
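The arithmetic behind that trillion-dollar figure is simple to check; a minimal sketch using only the numbers as quoted:

```python
# Back-of-envelope check of the FT figures quoted above.
capacity_gw = 20      # gigawatts of contracted compute capacity
cost_per_gw = 50e9    # roughly $50 billion per gigawatt at today's prices

total_cost = capacity_gw * cost_per_gw
print(f"${total_cost / 1e12:.1f} trillion")  # → $1.0 trillion
```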
And then they go to this analyst, Gil Luria at D.A. Davidson: OpenAI is in no position to make any of these commitments. Part of Silicon Valley's fake-it-until-you-make-it ethos is to get people to have skin in the game. Now a lot of companies have a lot of skin in the game on OpenAI. And as we've mentioned in the past, OpenAI is expected to lose $120 billion between now and 2029. So how does this math work, Ranjan?
>> I mean, it doesn't. It doesn't. I
think, from, again, any kind of standard rational analysis, it doesn't. But the thing I keep thinking about is: does this force OpenAI and others who are playing this game to push, again going back to the topic of heavier-compute solutions like Pulse, which we talked about a couple of weeks ago. Have you used it yet, or do you know anyone who has?
>> I have not used it. I do know someone that has: Dan Shipper from Every. He has some good things to say about it. Have you used it?
>> I've not. But again, this idea that it's just going to be sucking up compute all night long just to give you some updates in the morning, and then potentially ads, as we talked about. And I've come to definitely believe that's the direction it's going.
Sora itself, or even, this was one of the big issues that you brought up around the GPT-5 launch, pushing the model and the platform into much, much heavier compute, thinking and reasoning, when it's not required. It feels like everything around how they're building this company is incentivized to push the absolute least efficient solutions possible to actually make their own economics work. So that part, I think, they have to go in that direction, and they're definitely going in that direction. But otherwise, yeah, none of this makes any sense to me.
>> Wait, explain how pushing the least efficient compute projects makes the economics of OpenAI work.
>> Well, someone has to pay for it in the end. So, assuming that you'll start paying your $200 instead of $30 for GPT Pro to get your Pulse updates, and at a certain point, they're going to have to charge us for making our Sora cameos, to actually account for the amount of compute it's requiring. But it's basically that the compute's not going to be leveraged or used if we're just stuck in the current paradigm of what models are needed and what kind of compute is needed. So you have to, again, like what we were talking about earlier: can you create these workflows based on the technology that exists today, and push them further and further, to actually show that we are utilizing this compute, this investment in 20 gigawatts and all these nuclear-reactor equivalents? It's going to be very clear very quickly that it's a bad investment and a bad idea unless they can actually show, just as important as revenue, actual compute utilization right now. And I'm sure internally these are conversations they have to be having, because otherwise you're putting all this money in, and very quickly people will be like, well, that next tranche of money, we don't need to actually release it, because no one's using this stuff.
>> I get it. So for them, they want to incentivize massive compute usage because, as they go to their investors, they're using that compute usage as a proxy for the value of this technology.
>> They're saying, this is why our technology is valuable: we don't have enough compute. And if they're able to tell that story, then they might get more money. So that incentivizes them to use a lot of compute for stuff they don't really need.
>> Exactly. And we as consumers are benefiting right now, because no one actually has to pay for it on the other end. And this kind of makes the 2010s VC-subsidized Uber rides look like a quaint memory, where we're just able to get all this benefit as consumers and generate our Sora videos, but in reality no one's paying for that right now.
>> Right. Do you worry about the debt
that's coming into the picture? So, of course, a lot of this has been funded by VC money. Now it's starting to move toward debt. This is from the FT story: OpenAI, valued at $500 billion this month, is preparing to raise tens of billions of dollars of debt to fund infrastructure. This is also from the Wall Street Journal: Debt is fueling the next wave of the AI boom. And again, this is like the phase two that Cahn talked about. A few smaller companies, most prominently CoreWeave, have been relying on creative financing to vault themselves to the AI forefront for a while. Oracle is also part of this. To make good on its end of the contract with OpenAI, Oracle has to spend on infrastructure before it gets fully paid by OpenAI. Analysts at KeyBanc Capital Markets estimate in a recent note that Oracle would have to borrow $25 billion a year over the next five years to fund these commitments. And of course, just talking between us, a lot of this is based off of revenue predictions that are exponential increases for OpenAI, not just incremental increases. Oracle is already highly leveraged. The company had long-term debt of about $82 billion at the end of August, and its debt-to-equity ratio was about 450%. By contrast, Google parent Alphabet's debt-to-equity ratio was 11.5%, and Microsoft's was about 33%. Don't we get into trouble when these bubbles end up taking on debt and they can't pay
it back?
>> Well, I think from a larger economy standpoint, it's still a bit unclear, because this is still very concentrated. So it's one company here with $82 billion of debt and a 450% debt-to-equity ratio. This isn't, you know, homeowners across the country taking on unreasonable debt against their households. So what that kind of spillover looks like, I think, is still pretty unclear. But my favorite was actually just a couple of hours ahead of this recording: my favorite, SoftBank, announced they're taking a $5 billion margin loan secured against its chip unit, Arm Holdings. They've actually taken out $18.5 billion of margin loans against Arm shares. I mean, this is the stuff of Masayoshi Son, God bless him, but this is the kind of stuff that I feel, when you look back on it, just doesn't feel right or make sense.
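To put those leverage numbers side by side (the implied-equity figure is my own back-of-envelope derivation from the quoted debt and ratio, not a number stated in the segment):

```python
# Debt-to-equity ratios as quoted in the discussion (approximate).
ratios = {"Oracle": 4.50, "Alphabet": 0.115, "Microsoft": 0.33}
oracle_long_term_debt = 82e9  # ~$82B at the end of August, as quoted

# At a 450% debt-to-equity ratio, the equity backing that debt is small.
implied_oracle_equity = oracle_long_term_debt / ratios["Oracle"]
print(f"Implied Oracle equity: ${implied_oracle_equity / 1e9:.0f}B")  # ≈ $18B

for company, de in ratios.items():
    print(f"{company}: {de:.1%} debt-to-equity")
```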
>> Are you worried, I mean, obviously there's a chance of a real equity pullback, but are you worried about something like SoftBank going insolvent from something like this, and what the ripple effect is there?
>> Am I worried about SoftBank going insolvent? Masa will always make his way back, but I don't worry about the direct spillover effects. I really think this has still been a relatively contained group of people that have made the most money off of this. It's been relatively few companies that have truly benefited from this. I think, yes, if there's an equity pullback, what does that mean geopolitically? The world is certainly not in a stable place, and any kind of additional uncertainty does not help maintain any kind of stability. But overall, I don't know, I still haven't seen a compelling argument about how this really spills over, other than just an equity pullback. I mean, there's no growth story for the global economy if this goes away. It's driving the entire growth that we've seen over the last couple of years. But is that just kind of bringing us back to rationality, or is it actually, I don't know. You mentioned, will the global economy still be here next week? So let's hear your take on that.
>> We can get into that. We can get into that. So, well, let's do this, and then we're going to touch on maybe the AMD deal again. But this is from the Financial Times: America is now one big bet on AI. The hundreds of billions of dollars that companies are investing in AI now account for an astonishing 40% share of US GDP growth this year. 40%. AI companies have accounted for 80% of gains in US stocks so far in 2025. This is helping fund US growth as the AI-driven stock market draws in money from all over the world and feeds a boom in consumer spending by the rich. In a way, America has become one big bet on AI. Outside of the AI plays, even the European stock markets have been outperforming the US this decade, and now the gap is starting to spread. So far in 2025, every major sector, from utilities and industrials to healthcare and banks, has fared better in the rest of the world than in the US. Okay. And that suggests that if AI doesn't deliver for the US, the US and its economy will lose the one leg they are now standing on. Your thoughts?
>> Well, I'm still back to you on this one. What does that actually mean? Let's say suddenly there are predictions that the compute is not going to be effectively utilized, we get into GPU depreciation, and someone does the math. What do you think is the worst-case scenario?
>> Probably, I think maybe the global economy blowing up was an overstatement. Okay. Now that we're talking about it in our cool-headed and
>> Not blowing up, disappearing, I think you
>> Yeah. Well, you know, that might have been the drama podcaster in me,
but no, look, I think it would be bad. Obviously, the global economy would be intact, you would think. I think you're really right in pointing out that the debt and the investment are really contained. But if this AI moment were to go away, you would see some really negative economic consequences, in the US especially, and maybe outside. Here, this is from Deutsche Bank: the AI boom is unsustainable unless tech spending goes parabolic, and that's highly unlikely. AI is saving the US economy right now, says a Deutsche Bank analyst. In the absence of tech-related spending, the US would be close to or in a recession this year. And this is from Bain: $2 trillion in annual revenue is what's needed to fund the computing power needed to meet anticipated AI demand by 2030. However, even with AI-related savings, the world is still $800 billion short to keep pace with demand, and the boom is not sustainable.
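Bain's shortfall figure follows directly from the two numbers quoted; a trivial check:

```python
# Bain's estimate as quoted: $2T in annual revenue needed by 2030,
# with the world still about $800B short even after AI-related savings.
revenue_needed = 2.0e12
shortfall = 0.8e12

revenue_in_sight = revenue_needed - shortfall
print(f"Revenue accounted for: ${revenue_in_sight / 1e12:.1f}T "
      f"({revenue_in_sight / revenue_needed:.0%} of the requirement)")
# → Revenue accounted for: $1.2T (60% of the requirement)
```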
Yeah, I don't know. It seems to me that there's going to have to come a point where the rubber meets the road here and something's going to go bad. And I don't know exactly where that's going to be or how widespread it will be. But even with all of AI's promise, and this is what Cahn was getting at in the piece we read at the beginning: there will be a correction here.
I think one thing that was interesting to me is the way the Bain & Company report had it: the $2 trillion in annual revenue is what's needed to fund computing power to meet anticipated AI demand by 2030; however, even with AI-related savings, the world's $800 billion short. It's still putting it in that paradigm of: you need that revenue to cover the computing power that's going to be needed for anticipated AI demand. But how these numbers get calculated and extrapolated blows my mind. It boggles my mind. Is it just a straight-line extrapolation? I mean, it's exponential, but trying to forecast AI demand in 2030, given how much things have changed, and again, I think what I saw, was it Google, it was like 1.8 quadrillion tokens now being leveraged by Gemini. The numbers are pretty spectacular, but still, extrapolating that out for the next five years, trying to make any sense of it, and really trying to put numbers behind it? I don't know, especially when the machine god's going to come, and AGI and superintelligence are going to come. I don't see how you do that with a straight face.
>> Yeah. Well, Ranjan, I think I came in here really down in the dumps looking at these numbers, but you've talked me off the ledge appropriately, so I appreciate that. All right, let's do two small dives into a couple of companies, Oracle and AMD, and then hit the break. First of all, what do you think about this Oracle story? So they're buying, obviously, a ton of
compute. This is from The Information: Oracle became the best-performing megacap stock of 2025 after its executives said last month that the once-sleepy database firm will generate an astounding $381 billion in revenue from renting out specialized cloud servers to OpenAI and other AI developers over the next five fiscal years. But the margins they're getting on those rentals average 16%, and in some cases Oracle is losing considerable sums on rentals of small quantities of both newer and older versions of Nvidia chips. In the three months that ended in August, Oracle lost nearly $100 million from rentals of Nvidia's Blackwell chips. So you have a software company that's used to 70 or 80% margins, and now they have the margins of a retail business. Obviously their sales are up. But does it make sense that their stock is up about 80% this year and their market cap is at $854 billion with a 69 P/E ratio?
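To put those quoted figures side by side, here's a rough sketch using only the round numbers above (this is illustrative arithmetic, not Oracle's actual reported financials):

```python
# Figures quoted above: $854B market cap, 69 P/E, 16% vs ~70% gross margins.
market_cap_b = 854
pe_ratio = 69
implied_earnings_b = market_cap_b / pe_ratio  # earnings the market is paying for

software_margin = 0.70   # typical software-style gross margin
rental_margin = 0.16     # average margin on the GPU rentals, per The Information

# Revenue needed at each margin to generate one dollar of gross profit:
revenue_multiple = software_margin / rental_margin

print(f"Implied earnings: ~${implied_earnings_b:.1f}B")
print(f"A {rental_margin:.0%}-margin business needs ~{revenue_multiple:.1f}x the "
      f"revenue of a {software_margin:.0%}-margin one per gross-profit dollar")
```

The point of the sketch is the multiple at the end: at rental-style margins, Oracle has to sell several times more to earn each gross-profit dollar its old database business produced.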
>> No, I think this is a really good point, and I think everyone should start really getting into these kinds of numbers, because we have not talked much about gross margin on these businesses. We've all said theoretically, for a long time, that generative AI has a different financial profile than traditional software: it's not something with infinite, or near-infinite, economies of scale, but instead has a real cost underlying it. And to see that is actually kind of mind-blowing. Again, as you said, 70% margins down to averaging 16%. And maybe you can argue that this is just because they're getting the business up and running and scaled, but in reality there's no reason the margin profile should change much over time. Maybe it starts to improve, but incrementally: maybe you get to 20, 25%, maybe you squeeze out 30%. But this is a different business than the sleepy database company that was just churning out cash for so long, and it really should call into question what the economics of all these companies are going to look like. It's going to be different, and maybe it'll be fine. Maybe these will be retailer-style businesses that are gigantic and operational, but they're not going to be 70%-margin software businesses.
>> Right. I think Gil Luria, the analyst that we quoted before, was on CNBC making this point, basically saying it doesn't make any sense for Oracle to be more expensive than a Microsoft, where Oracle is basically setting up these data centers and not pulling in margin, while Microsoft is actually setting up the infrastructure and making money off of it, making profit. So I think what we're seeing is that this push towards AI might make sense overall over time, but there is some silliness around the margins, and that will shake out. Of course, there's been some silliness with the OpenAI-AMD deal. M.G. Siegler was here on Monday; we were talking about how maybe it didn't make sense. And now there's a really interesting fake conversation that Matt Levine published that might be what OpenAI and AMD discussed before AMD agreed to potentially give OpenAI 10% of its company. Let's talk about that right after this.
And we're back here on Big Technology Podcast. I think someone said they love how this show has become a detective show for the AI bubble. I love that comment. Let's keep at it. So
of course, OpenAI and AMD had this big deal this week where the two are basically going to spend tens of billions on data center development so OpenAI can use AMD chips for inference. And there was this weird element of the deal where AMD said that if certain milestones are hit, OpenAI will have the opportunity to get 10% of AMD's stock, basically for a penny a share. So Matt Levine at Bloomberg surmised why this might have happened and how it might have come together. Here's the fake conversation. OpenAI: "Well, we were thinking that we would announce the deal, and that would add $78 billion to the value of the company, which should cover it," basically paying for the chips. AMD: "..." OpenAI: "..." AMD: "No, I'm pretty sure you have to pay for the chips." OpenAI: "Why?" AMD: "I don't know. I guess it seems wrong not to." OpenAI: "Okay, well, why don't we pay you cash for the value of the chips and you give us back stock. When we announce the deal, your stock will go up and we'll get our $78 billion back." AMD: "Yeah, I guess that works, though I feel that we should get some value." OpenAI: "Okay, you can have half. You give us stock worth like $35 billion, and then you keep the rest."
Levine says the deal between OpenAI and AMD was obviously going to create a lot of stock market value: the announcement of the deal would predictably increase the market value of AMD, and it's not like it decreases the market value of OpenAI. Why not use that stock market increase to subsidize the deal?
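The mechanics Levine describes reduce to a small round trip, which can be sketched with the round numbers from his imagined negotiation (the 50/50 split is his hypothetical, not a real term sheet):

```python
# Round numbers from Levine's imagined negotiation, in billions of USD.
announcement_pop = 78   # market value the deal announcement adds to AMD
openai_stake = 35       # stock AMD hands back to OpenAI ("you can have half")
amd_keeps = announcement_pop - openai_stake

# OpenAI pays cash for chips and gets stock back; the announcement itself
# creates the market value that funds the round trip for both sides.
print(f"AMD keeps ~${amd_keeps}B of announcement-driven value")
```

The striking part is that nobody's underlying business has to change for both parties to come out ahead on paper; the announcement is the asset.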
What do you think about this, Ranjan?
>> I mean, this is actually what I asked you earlier: what do you think these conversations really look like behind the scenes? I loved this, because I don't think it's completely out of bounds that this is actually some of the conversation happening right now. Like, "Well, obviously when we announce this, it's going to boost your stock. That should definitely cover some element of the overall cost of the deal." And we've been seeing this forever. Well, not forever, for a few years. This is not that different from, imagine, a Google, or Amazon and Anthropic: "We'll give you $5 billion, but it's going to be, like, $3 billion in compute, and it's going to raise your valuation by this much." So this kind of funny-money-esque element has been there for a while; it's just at a much grander scale. And one thing I want to bring up, like what we were talking about a second ago: my life for the last decade has been at the intersection of retail, software, and media, and the average retailer P/E ratio is around 20 on a good day. The average software company, even Oracle right now, is sitting around 60.
When you start to actually try to bring some rationality to this, some discipline and rigor: are these companies going to be more like retailers? Or, I don't know, maybe it's industrial equipment. I've seen a lot around how Stargate is not an AI play, it's an energy play, which is a good business and is interesting, but shouldn't it be valued more like a traditional energy and energy-infrastructure company rather than AI and software? I don't know. I think that's something we're going to see a lot more of: people trying to come up with some new metric for an AI company that's very different from software. And even all these AMD-OpenAI circular ways of approaching financial analysis, I think, start to look ridiculous.
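That multiple comparison is easy to make concrete. A hypothetical company earning $10 billion a year is worth very different amounts under the two rough multiples cited above (illustrative numbers only, not a valuation of any real company):

```python
earnings_b = 10                     # hypothetical annual earnings, in $B
retailer_pe, software_pe = 20, 60   # rough multiples cited on the show

retailer_value = earnings_b * retailer_pe   # valued like a retailer
software_value = earnings_b * software_pe   # valued like software

print(f"Same earnings: ${retailer_value}B as a retailer, "
      f"${software_value}B as software")
```

Which multiple applies is exactly the open question: a 3x gap in market value on identical earnings, depending on whether investors decide these are software businesses or something closer to retail or energy infrastructure.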
>> That's right. I think as long as we have good times, there will be massive multiples attached to AI companies, but the second we see a sign of a slowdown, I think that will deflate exceptionally fast.
>> Yeah, I think you've heard it here, folks: go out and come up with your new financial metrics for this new breed of company. It's not software, it's not quite retail; it's somewhere in between. We need new ways to measure this stuff.
>> And this of course is not investment
advice. So take that.
>> No, we're telling you to go out and do the research.
>> Yeah.
>> Telling our audience. Yeah.
>> Right. All right. Let's finish this very long segment, spanning almost our entire show, with a look at how we both feel about this. What's your scale of concern here, from one being not concerned to 10 being concerned?
>> Very concerned. I'm still going to put it, and regular listeners know that I am often concerned about many things, I'm still putting it at three. 3.5, to be exact, to capture the exact feeling of concern. I really think this is a shakeout. It's kind of a back-to-reality, come-down-to-earth moment. But I don't think there's going to be massive spillover effects from any kind of rational analysis of what's really happening right now; investors just start to come back down a little bit. I think it's going to be okay. What about you?
>> I'm at a five. I do think there's a nonzero chance that progress stalls and a lot of this hype around AI just fades. People realize that it's harder to implement than a lot of the hype was making it out to be, and that it takes longer; the timeline is longer than anticipated, and maybe that just leads to stagnation for a while. I don't know. I'm not convinced that's going to happen, but I definitely appreciate the possibility. All right, let's talk a little bit about OpenAI's announcement this week. They had this dev day, developer day. It seems like everybody just wants to build; have you heard this one before? OpenAI held its annual DevDay on Monday, where the company rolled out its plan to build apps into ChatGPT. The demo showed how programs like Spotify and Figma can be called, or discovered, without leaving the ChatGPT window. With so much of the tech world barreling towards AI integration, OpenAI's demo was the best picture yet of what an AI-first internet might look like, with interfaces like ChatGPT querying information and executing commands directly.
I don't know why I can't get excited about this. Or maybe I do: I feel like I've heard this from Google, from Amazon, from Apple, and now OpenAI again. And I'm not going to lose my mind over the platformization of ChatGPT. Am I underplaying this?
>> No. So I actually tried it. I used the Figma app for ChatGPT a bit, and as a non-UI designer, but someone who's curious, I was like, okay, can I actually start to make mockups and build out my own app interface? In reality, you can make a flowchart in FigJam, and it was okay. It wasn't anything revelatory. But honestly, that whole app-integration side, I think it should work at some point. Though, to me, the fact that I still go to Google Flights, as opposed to Gemini integrated directly into Google Flights, which Google owns, it's the same company: no one has actually shown what this can look like successfully yet. But having used it, I do think it poses an interesting question for the actual app companies themselves, because at what point is the value your database, basically, or is the value your actual UI and interface? If I can go use ChatGPT to create my Spotify playlists for me, then my DJ on Spotify, does that all become useless? And do these companies let ChatGPT actually take the UI layer away from them? They're going to push back. So I don't see this moving that quickly.
>> Did you try it?
>> I haven't tried it yet. I just can't, I really can't get excited about this. I'm sick of hearing this story, and maybe I'm shut off in a way that's bad, because I should be open as a reporter covering this stuff. But I set up Alexa Plus recently, and I was like, "Oh, it's cool, you can call an Uber from your Echo device." And I was just like, I'll just do it on my phone. I don't know. I'm not sold yet that this is going to be the platform of the future.
>> Oh, wait, Alexa Plus. What are your... I did set it up as well. What are your thoughts?
Not enough use to really review it yet. In fact, I have it set up on my phone but have had some trouble getting it on the actual device itself. But I did speak with Panos Panay, the head of devices and services at Amazon, and he said that by the end of October everybody should have it. So, no more early access, and if that doesn't pan out, we'll see what happens. But there's the latest promise, and I'm very excited to air that episode, which is coming up in a couple of weeks. I'll say I'm liking it. It's getting somewhat on par with ChatGPT voice, just in being able to ask questions while I'm cooking. I have an Alexa Echo Show in my kitchen, and asking more detailed questions works pretty well. But my favorite was last week, or I think on Monday: I was asking for NFL scores and looking at them.
>> It completely made up that the Jets won,
>> which I thought was kind of amazing and hilarious in our context. Alex is a Jets fan. The only universe where the Jets can win is one where AI hallucinates their victories.
>> You should just create your own entire Sora parallel universe where the Jets are winning Super Bowls, all of the above.
>> I did see a Sora video, I think in my Jets Discord this week, where the coach was cheering on the players after a loss because they were in line for a better draft pick, and that's really all the organization cares about. It's really too close to home. It's like, we love to lose.
>> That's as far as the AI can go.
>> You know, this is a bit of a diversion, but what's the deal with Bill Belichick and UNC, Mr. Patriots fan?
>> I'm just thinking about Drake Maye and the Bills game last Sunday night. Belichick, yeah, it's such a tough one. I mean, the younger girlfriend, the terrible start. If you became one of the greatest, you know, built yourself up as one of the greatest independent journalists and media personalities around and built your legacy over decades, to just throw it away, what drives someone to do that?
>> I don't know. The parallel would be, yeah, becoming one of the greatest journalists and then going off to become the editor-in-chief of a college newspaper and plagiarizing. That is exactly what Bill Belichick is doing right now.
>> Lord Almighty. All right. Speaking of Sora, let's talk about Sora. I mean, we could go on the Belichick thing forever, but we'll leave that to Pablo. So The Verge has this story. Obviously, you and I didn't get a chance to speak about Sora last week; we had Max Zeff on. So I want to speak with you about Sora briefly: the copyright thing and the usage thing. Sam Altman basically said he was surprised at the reaction, or that OpenAI wasn't fully aware of what the reaction would be. He said: "I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses. It felt different to images than people expected." This is, of course, about copyright and rights. He was surprised that rights holders were up in arms about the fact that Sora had copied their stuff and given them an opt-out as opposed to an opt-in. That seems crazy and not exactly truthful. How do you not anticipate that?
>> Well, okay. I definitely wanted to bring this up last week. While I was on vacation, I was itching to be on the podcast just to talk about Sora, because I made a video of myself in the Mario Brothers movie fighting Bowser on the streets of Brooklyn, and obviously my six-year-old son loved it, but as I'm looking at it, I'm like, this is insane. I cannot believe that this is okay. Now, two days later, OpenAI did introduce stricter content guidelines. And it still blows my mind that Sam Altman acted like he was surprised by this. But I wanted to read the statement from OpenAI, 48 hours after everyone was creating new South Park episodes. It was basically: "People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connections with fans." This is Varun Shetty, the head of media partnerships at OpenAI. He does say: "We'll work with rights holders to block characters from Sora at their request and respond to takedown requests." I think this is nuts, and a big deal, because they are basically saying: it's out there, and you're going to have to come to us with takedown requests, but we're basically okay with this, and we're even kind of pushing you, saying: you create media, but people really want to deepen their connections by putting themselves in your copyrighted material. I don't know where that fits in the overall OpenAI story, but this is their approach to copyright and intellectual property. So, to me, I hear a lot of sensitivity around people and what data they're really going to upload to OpenAI, and I think this was such a reminder that, I don't know, basically whatever personal feelings you're telling your ChatGPT therapist might get auto-published into a Sora post one day,
>> right. And Varun, of course, comes from Meta, and if you ever had to deal with Meta's copyright-infringement process, it sucks. It's almost an insult to copyright holders to have to go through something like that. It's arduous; it does not incentivize you to pursue it, and by the time Meta takes something down, the thing has already spread to the point where it doesn't really make a difference. So of course this doesn't surprise me.
>> Is that sadly bullish for OpenAI, then, in terms of this being the right approach?
Maybe it's good for the business, but ethically I think it's
dubious at best. All right, we have five minutes left. Let's talk quickly about the potential successor for Tim Cook. This is from Bloomberg: Apple puts hardware chief John Ternus in the succession spotlight. When Tim Cook eventually steps down as CEO, it's likely he would remain involved in some capacity, perhaps as board chairman. That would put him on a path taken by other tech leaders, including Jeff Bezos, Larry Ellison, Bill Gates, and Reed Hastings. The big question is who would run Apple on a day-to-day basis. In terms of a formal CEO transition, hardware engineering chief John Ternus remains the leading contender. And Gurman says Apple probably needs more of a technologist than a sales or operations person; the company has struggled to break into new technology categories, even though products like the iPhone 17 are clearly resonating with customers. What do you think about this? I think it's the right move. What's your perspective?
>> I think some new blood is required. Sorry, Tim. I don't know, I still feel, and I'm going to make a call here: they should buy Snapchat and make Evan Spiegel CEO. I said it. I want some product vision at the company again. Keeping the operational guys, Tim, it worked from a shareholder perspective. It did not work from a true product perspective. And they can only squeeze so many $29.99 Apple One subscriptions out of me before this just has to go somewhere else.
>> And you know who would love that?
>> Evan Spiegel would love it. I've heard from multiple people around him that he thinks he's Steve Jobs. So certainly Evan would be all about that. Would Apple do it?
>> I just gave it to you, Evan. I just gave it to you.
>> I doubt Apple would do it.
>> No.
>> How long do you think Tim Cook should stay in the seat? I tend to think he should probably step down sooner rather than later.
>> Yeah. I think it could actually be the most smooth transition. No one's going to hold it against him; it's not like he was fired or pushed out. It's just time to move on, the next phase of the company. It's clear that they have to figure out what's next, and I love Tim, but he's not the guy to figure out what's next for the company. We've seen it, we've seen this for a few years now, while other companies are just driving ahead with innovation and new product development that Apple hasn't matched.
>> Ternus has been at Apple for 24 years, since July 2001. And I actually like the Spiegel idea even more, because I'm of the belief that Apple really needs a non-Steve-era CEO, because so many things inside that company are done just because that's the way Steve Jobs did them: the silos, the secrecy. Obviously it served them very well, but eventually you have to say, all right, let's try something new.
>> Yeah. No, I think, okay, you've heard it here. This is the proposal from Kantrowitz and Roy. It's a long shot, but it's out there.
>> Crazier things have happened. All right.
>> Yeah.
>> Ranjan, thank you so much for coming on. Really great having you, as always.
>> All right. See you next week.
>> See you next week, folks. Rick Heitzmann, the managing director of FirstMark Capital, is going to be on the show on Wednesday to continue our conversation about AI's economics. And then Ranjan and I will be back with you next Friday. Thank you so much for listening, and we'll see you next time on Big Technology Podcast.