OpenAI Bailout?, Elon’s $1 Trillion Pay Deal, Amazon Sues Perplexity

Channel: Alex Kantrowitz

Published at: 2025-11-10

YouTube video id: I8dJXJ1tnL8

Source: https://www.youtube.com/watch?v=I8dJXJ1tnL8

Is OpenAI looking for a government bailout if things go wrong? Elon Musk gets a $1 trillion pay package to build an entirely new Tesla, and Amazon sues Perplexity. That's coming up on a Big Technology Podcast Friday edition right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional, cool-headed, and nuanced format. We have a great show for you, a big show today, because we're going to talk all about this big controversy over OpenAI potentially requesting a backstop of its debt from the federal government, or not. Some disputed reports on that, but we have a point of view. Elon Musk also won a $1 trillion pay deal, which obviously vests in different steps, but it has finally been passed by Tesla's shareholders. And Amazon has sued Perplexity, putting the bot internet into question. Joining us as always on Friday to do it is Ranjan Roy of Margins. Ranjan, good to see you.
>> Who knew that the biggest socialism story of the week would be OpenAI?
[laughter]
>> You were cooking that up for a while.
>> I've been waiting for that all week.
>> Good. [laughter]
Also, Ranjan, I'm very glad to see that you're at your computer and not messing people's stuff up with your personal robot upgrading. I know we had concerns about that.
>> I left the robot. We're waiting another few weeks before I start destroying people's houses via humanoid robot.
>> That's good. Well, we will take our precious time left on this earth, before Ranjan gets access to destructive technology, to talk about the state of OpenAI. Really, it's a story about them looking for a backstop from the federal government. But more broadly, it's a story about whether OpenAI is ready for this moment in the company's history. So let me set the scene. OpenAI CFO Sarah Friar is at the Wall Street Journal's big tech conference, and she basically says explicitly that they are going to be looking for a backstop on the debt that they take out, should things go wrong. Let's listen to a clip. "Maybe even governmental, the ways governments can come to bear."
>> "Meaning, like, a federal subsidy, or..."
>> "Meaning, first of all, the backstop, the guarantee that allows the financing to happen."
>> Okay. So Friar says, we're looking for this backstop, this guarantee that allows the financing to happen. That can really drop the cost of the financing, but it also increases the loan-to-value, so the amount of debt that you can take on top of an equity portion. And so the reporter asks, "So some form of backstop for chip investment?" and she says, "Exactly." Ranjan, let's just start with the logic here. It's pretty logical for a company like OpenAI, which has not been turned down for anything really lately. Basically, everything it touches it gets. Everything it wants it gets. Why not ask the federal government to guarantee your loans?
>> Well, I think everything that it's been trying to get is actually it committing to spending money, not asking for money. It's obviously raised an ungodly amount of money, but in the last few weeks every big announcement is: we will spend $38 billion with Amazon, a hundred billion with Oracle, and everything else. So this is a very, very different ask, I think. We'll definitely get into what it means, but I do agree that there's a certain kind of, I don't want to say arrogance. It almost feels like they can just say whatever they want, in whatever big numbers they want, and it's okay, because it's worked out pretty well for them so far.
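Friar's two claims about a guarantee, cheaper financing and higher loan-to-value, can be sketched with purely hypothetical numbers (none of these figures come from OpenAI; the rates and ratios below are invented for illustration):

```python
# Illustrative sketch (all numbers hypothetical): how a government
# guarantee can lower borrowing costs AND raise loan-to-value.

def annual_interest(principal: float, rate: float) -> float:
    """Simple annual interest cost on a loan."""
    return principal * rate

project_cost = 100e9          # a $100B data-center buildout (hypothetical)

# Without a backstop: lenders price in default risk and cap leverage.
unbacked_ltv, unbacked_rate = 0.50, 0.09
# With a backstop: the debt is treated as near-sovereign risk.
backed_ltv, backed_rate = 0.80, 0.05

unbacked_debt = project_cost * unbacked_ltv   # $50B borrowed
backed_debt = project_cost * backed_ltv       # $80B borrowed

print(f"No backstop: ${unbacked_debt/1e9:.0f}B debt, "
      f"${annual_interest(unbacked_debt, unbacked_rate)/1e9:.1f}B/yr interest")
print(f"Backstop:    ${backed_debt/1e9:.0f}B debt, "
      f"${annual_interest(backed_debt, backed_rate)/1e9:.1f}B/yr interest")
```

Under these toy numbers the guaranteed borrower takes on 60% more debt while paying less total interest, which is exactly why the ask is attractive, and exactly what the taxpayer would be underwriting.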
>> Yeah, I think my broader point is that OpenAI has just been on this run of the century, right? Funding deals left and right, the fastest-growing product in history. And so when you think about what you can do, why not ask whether the federal government can guarantee your loans, for instance, because you can position it as a national security thing. Here's Friar, and these are the exact comments that she gave at this Wall Street Journal event. She goes, "I think we're seeing that, and I think the US government in particular has been incredibly forward-leaning, has really understood AI almost as a national strategic asset, and that we really need to be thoughtful when we think about competition with, for example, China. Are we doing all the right things to grow our AI ecosystem as fast as possible?" So she's coming out and saying it. There's no ambiguity about it. She's clearly asking for a backstop, and she's positioning it as a national security situation for the United States.
>> This is where it is difficult. I'm going to try to take OpenAI's side in this. Let's say this is the single biggest national security battle that we're going to face over the next 30 to 50 years. And we've all talked about, I think on all sides of the political spectrum, the idea that America has fallen behind in critical infrastructure. So if that is the case, and we are buying into this story that AI is going to be the battleground of the next century, does it seem okay that they're asking for federal guarantees or a backstop on all the debt financing that's being taken out?
>> I mean, I think that's my point here. I think it's perfectly reasonable for OpenAI to ask. I don't think that US taxpayers should be backstopping the company's debt, though, because, and Sam Altman in a follow-up made this clear, if they fail, you'll still have Google, you'll still have Anthropic, you'll have many others that are going to be building this. So in other words, I do think that the United States is going to be in a good position, and I also don't really think that the government should be picking winners and giving OpenAI these guarantees, assuming that OpenAI would be the only one to get them. So to me, it's a perfectly reasonable ask, and it's a perfectly reasonable no from the US government. And David Sacks, the AI czar, basically said no to bailouts. I still love that we have AI czars. Like, I think...
>> A czar.
>> You got to have a czar.
>> Who doesn't have a czar? I mean, especially in the context of not wanting to have Soviet Union references across our government, but that's a side conversation. I think on this, where it's most interesting to me is, as we're talking about this, the idea of, okay, in the halls of power, having discussions, looking at the long term: what is the threat from China? What critical infrastructure do we need? I mean, even if you look at the Biden IRA, there's plenty of money that was being given to fund critical infrastructure, especially across energy. So these conversations seem completely reasonable to me as well. What's so fascinating to me is how all of this played out just in the last week. What kind of words were actually uttered by Sarah Friar, then posted on LinkedIn by Sarah Friar, then tweeted by Sam Altman. They're trying to get out of it, and then not get out of it, and then get out of it again. That to me is almost shocking, but it's not shocking because it's OpenAI. You would think that they would have a more cohesive strategy on something this important and big.
>> And that's exactly it. So we both started this show saying, all right, we're not going to say it's the craziest thing in the world for them to ask. I'm not saying it should be granted. I don't think they should be granted it. But why not ask, if you're OpenAI, for something like this? The internet just blew up, and basically it was like, we do not need the citizens of the United States of America to be guaranteeing the loans of a $500 billion company. It's crazy. Fair enough. And then the weird thing happened, which is that OpenAI's walk-back started. And it wasn't just a walk-back saying, after consideration, we don't want these guarantees. It was almost like, hey, we actually never said that. Here's the CNBC headline: "OpenAI CFO Sarah Friar says company isn't seeking government backstop, clarifying prior comment. OpenAI CFO Sarah Friar said late Wednesday that the artificial intelligence startup is not seeking a government backstop for its infrastructure commitments. 'I used the word backstop and it muddied the point,' Friar wrote on LinkedIn. 'As the full clip of my answer shows, I was making the point that American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.'" And Sam Altman: "I would like to clarify a few things. First, the obvious one: we do not have or want government guarantees for OpenAI data centers. We believe that governments should not pick winners and losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work." This is the weird thing. They are going back on it, but that's exactly what they were asking for.
>> Yeah. It's always so frustrating, where Sarah Friar, of course, goes on LinkedIn and says, "it muddied the point, as the full clip of my answer shows." She was trying to almost make it seem pulled out of context, when in this case it was so explicit. The reporter even asked her to confirm it, and she confirmed it and said the word backstop again. So I think that's already a little bit disingenuous, and it does not land well. But I think the reason this is so salient right now is because it comes on the heels of, I think it was last week, the Brad Gerstner Altimeter interview with Sam Altman, where he asked him, you know, you're only making $13 billion but you're committed to $1.4 trillion in spend. How are you going to do this? And Sam got very defensive. And then suddenly we start to get leaks that they're seeing $20 billion annualized revenue, ARR, not necessarily actual revenue. But basically the numbers have never added up. We've talked about this at length; no one has really mapped out a genuine understanding of how that $1.4 trillion in spend is actually justified. So I think it's more real, it feels more emotional almost, because they will almost certainly need this government backstop to actually make this work. It's not some opaque financing deal that no one understands, like 2008, where only PhDs are able to untangle complex subprime mortgages and credit default swaps. These are pretty clear numbers, and they don't make much sense. So when you say government financing and backstop, it feels a lot more real. I think that's why this is hitting so hard.
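The mismatch Gerstner was pressing on is simple arithmetic on the figures quoted in this discussion ($13 billion in revenue against $1.4 trillion in commitments); the ten-year spread below is an illustrative assumption, not a reported payment schedule:

```python
# Figures as quoted in the Gerstner interview discussion: ~$13B annual
# revenue vs. ~$1.4T in announced compute commitments (multi-year).
revenue = 13e9
commitments = 1.4e12

multiple = commitments / revenue
print(f"Commitments are ~{multiple:.0f}x current annual revenue")

# Even spread over a decade (hypothetical), the annual outlay dwarfs revenue:
per_year = commitments / 10
print(f"~${per_year/1e9:.0f}B/yr over 10 years vs ${revenue/1e9:.0f}B revenue")
```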
>> That's exactly right. I was on CNBC talking about this earlier in the week, and I said, basically, look, OpenAI is going to be a trillion-dollar IPO company. It's already at $500 billion, and its executives need to learn to speak with a little bit more discipline when it comes to questions like this: questions like the ones Sam got at the Brad Gerstner interview, and now, on top of that, what Sarah Friar said. And I think you're right that this is the reason people freaked out here. It wasn't necessarily because of the ask itself, or maybe it was, and we're going to go into some more things Friar actually said at this conference, because there are some more really fascinating aspects of her talk that we should cover, not just this one thing. But it's the fact that basically the entire US stock market, the entire global stock market to some degree, is counting on OpenAI to execute on the promises that it's made to companies like Nvidia and AMD and Oracle and Microsoft. So there's that dependency there. And we both know, and I think everybody listening to this knows, that if there's weird stuff going on in OpenAI's books, that's going to cascade. And so that's why people are like, wait a second, this does not sound, all taken together, like the remarks of a company that's mature enough to have everybody counting on it.
>> No, that's a
good point, because I was actually thinking about it. I mean, Sarah Friar spent over a decade at Goldman Sachs. She was the CFO of Square, then CEO of Nextdoor. She has been in the C-suite of publicly traded companies. She was at the most buttoned-up investment bank in existence. You would think decades of knowing how to be disciplined around messaging would be deeply ingrained in her. Yet you go to OpenAI, and suddenly you see this misspeaking, or potentially actually saying what you're thinking but then trying to walk it back, having to post on LinkedIn. Meanwhile, your CEO is tweeting out all other sorts of things. But then in the background, I had just come across a letter that was submitted to the Office of Science and Technology Policy, where they have an entire section about the need to counter the PRC, the People's Republic of China, by de-risking US manufacturing expansion: "To provide manufacturers with the certainty and capital they need to scale production quickly, the federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial-base capacity." So from a policy perspective, they are actually pursuing this. And again, as we were saying, that's not the most ridiculous, unreasonable thing to at least have a conversation around. But, and we're going to talk about the Ilya-Sam battles of the past, it's almost like she shows up at OpenAI and suddenly her entire comms strategy is like an Elon situation, or just this totally chaotic OpenAI comms situation. It brings you down.
>> That's the thing. It's not the ask itself. It's the going back and forth. And then it comes on the back of this comment where Sam tells Brad Gerstner "enough," when Gerstner asked how they're going to fund $1.4 trillion in investments with $13 billion in revenue. I don't want to be the discipline police; I certainly don't hope to be that way. But again, if you think about just the magnitude of how much of the stock market, how much of the economy, is depending on this to work out, you want to see something more buttoned up than that. Now, I do think we should get to some of the other comments that Friar made.
>> Wait, wait, hold on. Hold on. I would like us to be the discipline police in this situation. Come on. I don't think you should say that like it's a bad thing. For companies, once you cross a certain valuation threshold, can we just go back to having a little bit of communications discipline? I have worked in this for many years, even at companies in the hundreds of millions in revenue, and even then we had a lot of hand-wringing and oversight. I'm not going to say it was always the most pleasurable thing to deal with, but at every other level... Did you see the Snowflake chief revenue officer, I believe, who was speaking to a TikToker and accidentally said something about revenue guidance? And they filed an 8-K. It was a screw-up, but they're still trying to play by the rules. They're recognizing that this actually matters: to play by the rules, you admit it was a screw-up and you try to fix it. Versus companies like this that are just completely making a mockery of the entire discipline of the US financial markets. So I'm going to say we need more discipline police around comms. Please just be a little thoughtful when you're speaking.
>> This is why we need humanoid robots. So Ranjan, at scale, can go into your living room and start flipping tables over if you're undisciplined.
>> If I had my humanoid robot army, and it got me my trillion-dollar pay package, because there's actually a humanoid robot army in the actual language of the pay package, yeah, I'd be flipping tables right now.
>> Well, well, hang on. I mean, does it matter that they're private? They're a private company. They don't need to file with the SEC for these kinds of things.
>> No, no, but...
>> You get a little grace so you can ramp up to this stuff. You tell me.
>> But see, this is where the whole ballooning of private valuations, I do think, becomes even more problematic. This is a much larger rant in general, but it's because it allows the company to become this critical. As you said, OpenAI might be the bottom of the pyramid holding up the entire US economy right now, and because they have not had to go through any kind of rigor that even a public company with a couple hundred million in revenue would have to go through, that's how we're ending up in these kinds of situations. And it's a little bit scary that on one hand you have your CEO getting mad about being asked a very simple, basic question that he should have had a very clear answer to, and on the other hand you have a CFO who's had decades of training on this and suddenly seems to be backsliding into a way of communicating that you would attribute to a first-time CFO.
>> That should be an article. That's a good opinion piece right there.
>> That's uh [laughter]
>> I like that. Okay. So, I do want to talk about some of the other things that Friar said, but before we get there, let me just ask you this one bigger question. Obviously, OpenAI backtracked because of the backlash to Friar's comments. Do you think the backlash is finally a sign that OpenAI is hitting the ceiling of what it can get financially? Like, eventually it's going to push and there will be pushback. Is this a sign of the top of its abilities?
>> Yeah, I think it's not just that. It's also a really good indicator of where it is in the national conversation, or the global conversation, right now. We have talked about it for a long time, our listeners and our friends and networks have been talking about it: they are inserting themselves, and they are becoming not just an economically critical part of the US economy, but also known by every person in America. So I think it's a sign that they have a branding problem, and they're going to continue to have a branding problem. And so the idea that they are getting a backstop, when they've had secondary sales at, I think, a $500 billion valuation. Already people are getting insanely rich off this. So it just makes it that much more unpalatable for any normal person to have to hear this kind of thing.
>> It is interesting, because their product really is beloved. But, speaking of our discussion last week, they are starting to run into some Facebook territory on the comms side, and what you don't want is to be viewed as big bad tech. I don't think they're there. I think people really do love ChatGPT. To me, it's already one of, if not the most useful tech products I use on a day-to-day basis. Maybe the iPhone is right above it, but it's pretty close. And I don't know, I think this is a concern for them.
>> Yeah, agreed. And I think it's a good point, that distinction between people loving the product versus loving the company. Facebook, Meta, has actually shown pretty well that you can do both and succeed, if your product remains that sticky and, like, addictive. So I don't think it's genuinely troublesome for them from a product standpoint. But I don't know. I feel at a certain point, the more people pay attention to how OpenAI has been run for this long, and it's not just us and others talking about it, the more it really starts to bring a different kind of light shining on it.
>> Now let's get to these comments by Friar, because this was fascinating, and it sort of makes you think this is how they're going to pay for that $1.4 trillion in investments. It's the best explanation I've heard yet. And if they're able to pull off this vision, then maybe OpenAI will be the most valuable company ever. Ever. So this is what Friar said. I'm just going to get to the last two parts, via Gerrit De Vynck, who is a Washington Post reporter: They will do creative commercial deals. They're not just going to sell access to a pharma company on a per-token basis; they're going to demand a rev share of the profit the pharma company brings in from developing drugs using ChatGPT. This applies to commerce too. They want to take a cut of both the discovery and the transaction when someone searches for a product using ChatGPT. This is already happening with Walmart, Etsy, Shopify, all announced on their dev day. That is very interesting. So what do you think about these monetization strategies?
>> You know what, the pharma thing is very interesting to me, because the commerce side of it is kind of par for the course for a platform. That's Amazon's entire retail business, at least on the third-party marketplace side; everyone accepts that if you sell on my platform, I will take a cut of the transaction. But the idea that our technology helps a pharma company create drugs in a much faster way, and we would want to take a rev share? I think that's really interesting, and you can imagine how many other ways that could be applied. But then, at that point, are they taking on the risk as well? And then they're giving away free compute in order to get that longer-term rev share, which actually adds a whole additional layer of risk to the business. I wonder what that could look like.
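The trade-off being described here, guaranteed per-token revenue versus an outcome-linked rev share, can be sketched with invented numbers (the prices, token counts, and 5% share below are all hypothetical, not anything OpenAI has disclosed):

```python
# Hypothetical comparison of two pharma pricing models
# (all numbers invented for illustration).

def per_token_revenue(tokens: float, price_per_million: float) -> float:
    """Flat API pricing: provider earns per token, regardless of drug outcome."""
    return tokens / 1e6 * price_per_million

def rev_share_revenue(drug_profit: float, share: float) -> float:
    """Rev-share pricing: provider's take rides on the drug's success."""
    return drug_profit * share

# A drug-discovery program consuming 500B tokens at $10 per million tokens:
flat = per_token_revenue(500e9, 10.0)        # $5M, paid win or lose

# The same program under a 5% profit share:
blockbuster = rev_share_revenue(2e9, 0.05)   # $100M if the drug succeeds
failure = rev_share_revenue(0.0, 0.05)       # $0 if it fails

print(f"Per-token: ${flat/1e6:.0f}M guaranteed")
print(f"Rev share: ${blockbuster/1e6:.0f}M upside, ${failure:.0f} downside")
```

The rev-share column is where Ranjan's point bites: the provider trades a small guaranteed payment for a large payoff that arrives only if the customer's drug succeeds, which is pharma risk, not software economics.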
>> So I actually thought this was much cooler when I saw it for the first time. And then, when I think about it, let's just go crazy here. Let's say OpenAI develops medical superintelligence that enables pharma companies to do things they never could before. Somebody else is just going to develop superintelligence as well, and they're going to get into a price war. So how long can you say, "I want a share of the profit of this drug that you're going to develop with our technology," when somebody else can say, "Well, here's the platform, just pay us a licensing fee"? That pricing model only works if you're the only one who can offer it. I just don't think they're going to be...
>> Well, no, no, but see, I would think about it differently, not from the competitive standpoint, but again going back to: what business are you in? When they're looking at their trillion-dollar IPO and we're going through their S-1, the way any investor should rationally look at these things is to try to understand what business they are in. We've talked about this at length: we don't yet know the economics of a generative AI business. It's not traditional software, because there's an incremental cost to its utilization; the more compute you use, the more expensive it is for the provider. But still, a subscription business for consumers plus API token revenue, that's a pretty straightforward business, and I think that could make sense. If instead it's: we are an AI cloud business, a consumer devices business, which Sam Altman said as part of that Gerstner interview, and now we're also a pharma business, because we're going to be taking on some kind of drug-development risk in the actual underlying economics of our business... I mean, my god, that's quite something. I respect a good creative contract, but [snorts] that's too much.
>> If they can pull it off, good for them. And I think there's a nonzero chance that they can pull it off. But again, the question is, you're not the only one developing this. If you were the only one developing it, fine, you could get a percentage of the revenue. It's just going to be harder to do than I think they think. Now, okay, let's quickly talk about the government infrastructure side of things. Sam Altman says, "What we do think might make sense is governments building and owning their own AI infrastructure. And one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government's call and where we would be happy to help."
What do you think about the idea of government-owned AI infrastructure? Does that make sense? We don't really have to talk too much about the chip thing, because that's been done. But the idea of government AI infrastructure: obviously lots of governments are talking about sovereign AI. I'm curious what you think about that, Ranjan.
>> I mean, if we're saying, I think it was Sundar who said it was more important than electricity...
>> And fire.
>> And fire. If we have utility infrastructure, it's still not fully nationalized in that way, but it's still much more of a public-private type of infrastructure. Sure, maybe it makes sense, if we're really saying this is going to be the backbone of the entire economy. But again, if that's truly the case, we should be building it today with that in mind, and not in a way where private citizens can make ungodly amounts of money before we even get there and then be backstopped by the government. This is the part that feels disingenuous again to me, because, looking at Sam's and Jensen's and other comments, if anyone was really serious about this, you would be giving up economics within your own firm for the public side of things, but no one's doing that yet.
>> That's right. That's a very good caveat. But I would say I'm for it. Go out and build the government AI infrastructure, because I think we'd both agree there's a chance this thing can reach AGI. I don't know how high that chance is, but the amount it's advanced in three years is actually insane. I'm now using ChatGPT and it is reliable on many, many searches. They've answered a lot of the questions: hallucinations, reliability, capabilities in things like math and science, or assisting with them. It can do that. So given the progress we've seen so far, I think a government should invest in it. I don't fully know what a government does with it.
>> Well, no, but that's exactly it: what does it look like in that case, if you think about it? If you're running a company, do you get a bill from the US government for your API consumption? I cannot imagine any of the folks involved in Silicon Valley or AI are going to be advocating for that. So what does it look like? Or is it just that the government backstops your loans, and it's heads you win, tails they lose?
>> I mean, ultimately, I think what a government wants is for its country to be strong, and that can play out in a couple of ways. First, a government can develop AI technology on its own, build a foundational model, and make that available for anybody in the country to use at a low cost, and spark innovation from there. So that's the external use. The other side of it is, if you're an effective and efficient government, you're going to end up creating a stronger country. And we know that we don't have an efficient government; DOGE obviously wasn't the answer to making it efficient. But do you want to build very powerful AI and then use it to connect all the disparate data that you have within the country, assuming you can do it in a way that's privacy-compliant, which is a big if, and then use that to make better decisions and run your country better? If you can do that, then it would be very valuable.
>> So are you advocating for USGPT, some kind of foundation model? Okay, maybe. I'm feeling it a little bit.
>> USGPT. Build it.
>> Nationalize the LLMs. [laughter]
>> Well, you could have the private sector and the public sector both doing it, but given where we are today, why not? It's not going to be a massive portion of your budget to do stuff like this, and I think the US government should. Now...
>> Are you running? Is this a 2028 platform pitch?
>> No.
>> But we're running. Joint ticket. Joint ticket.
>> This is the platform. USGPT.
>> I'll be your vice president. Who can run against...
>> I don't know, whoever comes up.
>> JD Vance, right? JD Vance would be pro-US.
>> No, it's nationalizing the LLMs. We're team USGPT. The people want it.
>> Maybe we can run under Elon's new third party. [laughter]
he's looking for. So but but there is
this this um battle between countries
right uh Jensen Wong from Nvidia on
Wednesday as this is all going down says
as I have long said China is nanoseconds
behind America in AI it is vile it is
vital that America wins by racing ahead
and winning developers worldwide. I mean
the man wants to sell his chips into
China. I don't know how that makes the
US uh you know stay ahead if you're
anti- export restrictions. Uh what do
you think is going on here? And and is
there a value in in one country having
more advanced AI than another? And
probably yes, but what do you think?
>> I don't know if I'm overly salty this week, given socialism as the topic of the news, and meanwhile we're having to hear all these kinds of things. But even statements like this: "China is nanoseconds behind America in AI." Is he using "nanoseconds" just to sound more technical and smarter, versus just saying China is tied with America, or almost catching up? I feel like he's saying it to make it sound more techy. But that's just a slight criticism. More importantly, as you said, Jensen has lobbied hard to sell chips into China. So if this was really an issue and you really cared about it, you would not be doing that.
All right, let's close this segment out with, of course, some bubble talk. This user on X, Edward Dowd, says, "I see a pattern. Altman and Jensen smell the end of the bubble, financing drying up, and are going to ask daddy for taxpayer money citing national security issues." Your read?
>> I mean, that's how you started this segment, and it feels like that is kind of where things are. It's still a money pit the way things are working currently, and if you need to continue to have money pouring in and private capital is going to dry up at some point, you might as well find the next wave of money.
>> Inevitably. There's only a certain number of sources you can go to for money, and then you eventually look for governments. I just don't think we're at that stage yet. I think that stage is still a couple of years away.
>> Yeah. I'm realizing why I'm even saltier on this whole topic: having worked in the finance sector, on a trading floor, during the global financial crisis when there were bailouts. Watching it firsthand was a very problematic thing, and it's always stuck with me. But Wall Street never openly called for it. Maybe Dick Fuld did a little bit, but Wall Street wasn't begging for this stuff even before anything happened. So that's why it's just: come on, tech industry, don't ask for a bailout and a backstop before we're even there yet. Maybe behind the scenes start laying the seeds, but don't say it at a Wall Street Journal Tech Live event. That's all I'm asking.
>> It's got to be a record. Is it the first non-public company to float the idea of being too big to fail? Probably. Okay. Well, one person who might be able to backstop OpenAI is Elon Musk, after he got a trillion-dollar pay package approved. Of course, the money won't come right away, but we'll talk about the big pay package, whether it makes sense for Elon to get that money, and how he'll get it, right after this.
And we're back here on Big Technology Podcast Friday edition. Someone's had a good week. His name is Elon Musk. Tesla shareholders, according to the Wall Street Journal, approved his $1 trillion pay package. Flanked by dancing humanoid robots on stage, bathed in pink and blue light at the electric vehicle maker's Austin, Texas headquarters, Musk thanked the crowd of shareholders who supported the trillion-dollar pay package with more than 75% of the votes cast. "What we are about to embark upon is not merely a new chapter of the future of Tesla, but a whole new book. I guess what I'm saying is hang on to your Tesla stock," Musk said. Do you like the trillion-dollar pay package? Of course, Tesla has to hit some crazy goals, namely an $8.5 trillion market cap. It's right now at about $1.5 trillion, so roughly 5x the company. What do you think, Ranjan?
So, I saw a lot of angry posts about this from non-tech friends, and that lived more in the political realm. But I'm actually going to say, and I've been plenty salty this episode so far, this one did not bother me, in the sense that if you're getting Tesla to $8.5 trillion, take a trillion, Elon. If you're going to sell 11 or 12 million new vehicles, when you've only sold 8.5 million vehicles in the lifetime of Tesla, take your trillion. If you create this humanoid robot army and it remains out of my hands and I'm not destroying people's apartments with it, take your trillion. All of that. I had written in the past a few times that, to me, my grand theory is that Elon and Tesla was a great car company until the moment he got his 2018 pay package that was completely aligned to valuation. That's when all the shenanigans began: just trying to focus on appreciating the stock, building this army of people who are religious about the stock. So now that we're moving back to another insane goal where stock appreciation is the primary driver, it just reminds me we're going to go through endless cycles of Elon saying crazy things. Already this morning he was tweeting like he had so many rabbits to pull out of the hat. He said he wants a big enough ownership stake in Tesla to be comfortable that the robot army he's developing does not fall into the wrong hands. So maybe he listened to our podcast episode last week.
>> He's mostly trying to do this to keep his robot army away from you, Ranjan. Even though you'll be president under Musk's third party, he wants to control those robots. I think that's a fair bargain,
>> which I can't argue with. I respect this.
>> Okay, so a couple of things. First of all, it is interesting to me that this is Musk effectively saying he's done with Tesla as a car company. I mean, that's the only way I read it, right? Are they going to still produce cars? But the ambition is robotaxis, and the ambition is humanoid robots. And if he can pull that off, he'll be worth the money, especially if he can pull it off in a way that makes the market cap go to $8.5 trillion. It seems to me that robotaxis and humanoid robots are much more the future than electric vehicles, even if maybe those robotaxis are EVs. What do you think about this?
>> Yeah, it's a good point. This is where the disconnect has been for a long time already, and it's almost comical: analysts all try to analyze every Tesla sales number from a car-company standpoint, but then the stock price is completely dependent on humanoid robots and things that don't exist today. But I think that's actually a fair point. Tesla is no longer a car company in any way. Elon is not pretending it's a car company in any way. We should all stop looking at actual vehicle sales. That should be an afterthought, the least important part of the overall business, and people should just focus on the bigger opportunities here.
>> Do you think Elon is sincere when he says, "Look, we're going to be producing all these humanoid robots, and if we're doing that, I don't want anybody else to control them. I want to have oversight, and therefore I should have 25% of the company"? So, I'm curious. This is a two-parter. One, do you think he's sincere? Two, if you know that Elon wants to control this humanoid robot army, what exactly are you signing up for if you buy one? Is that yours, or is that his? [laughter]
>> That's a good point. And actually, now that you repeat that, it's kind of a threat. It's like extortion of the shareholder base: listen, if you don't give me this, this humanoid robot army we're building could fall into the wrong hands, and only I can save you. You better give me my 25% of Tesla if we hit these milestones, because otherwise watch out for those robots. I mean, with Tesla it's always something like this, but it's pretty amazing that that's just being said out loud.
>> Yeah. He also effectively threatened to leave as well. So maybe this is the goal. Maybe true world domination means having a humanoid robot in every house, and if somebody annoys you or is undisciplined in their PR statements about financial stuff, you just go smash their...
>> Yeah. The true promise of the humanoid robot army is world... I mean, actually, you know what,
>> it's got to be that the path to world domination is necessarily having control of a humanoid robot army. I think that's a pretty safe bet.
>> Exactly. All right. So, I don't know if that segment made me more optimistic about the state of the world or less, but I think it makes sense that Elon got that package, and we'll see what he does with it. It'll certainly be an interesting story moving forward. I doubt he'll ever get the trillion, but if he does, good on him, I suppose. Okay, let's go to this story. Bloomberg says Amazon sues to stop Perplexity from using AI tools to buy stuff. I thought this was a fascinating story, and it really goes to the question of whether this agentic web will ever be allowed to take off, because there are going to be companies that just say: we don't want your agents using our technology; our stuff is for humans and not bots. Here's the story: Amazon is suing Perplexity to try to stop the startup from helping users buy items on the world's largest online marketplace, setting up a showdown that may have implications for the reach of so-called agentic artificial intelligence. The US online retailer filed a lawsuit Tuesday demanding Perplexity stop allowing its AI browser agent Comet to make purchases online for users. The e-commerce giant is accusing Perplexity of committing computer fraud by failing to disclose when Comet is shopping on a real person's behalf. What do you think about this? This is a pretty fascinating showdown.
>> Okay. I feel like this is bringing all our earlier conversations back down to earth, and this is where the real work is happening. It's incredibly fascinating, because what is a browsing activity? So there's the idea of Comet, Perplexity's browser, using agentic browser capabilities to actually do the shopping for a customer. I've actually tried it, and it didn't work. I'll say this: I was testing something. My son's really into the Dog Man book series, so I said, "Here are the books I own. Can you find all the books I don't own and create an Amazon shopping list?" It was a disaster and unsuccessful. I tried it on ChatGPT Atlas as well. Did not work. So, just a reminder: when we talk about these more theoretical scenarios of your browser doing all your shopping, we're not quite there yet. But again, you figure you're in Perplexity, you ask, "Buy me some paper towels," and it goes and does the work. Why shouldn't you as a consumer have the ability to do that? It feels like a pretty basic thing. But Amazon's Rufus is actually killing it right now. Andy Jassy came out and said over 250 million users have actually used it, and users are 60% more likely to buy a product if they use Rufus. So they have, in their own closed ecosystem, a pretty valuable path in terms of still owning the entire shopper journey, and they're going to fight for it. But it's a weird theoretical thing, because if it's an AI doing the shopping for a consumer who wants the AI to do it, should you be allowed to block it? Do you think they should?
>> I don't know. It's a question that we're going to keep getting. We talked a lot about how agents could do tool calling, and this question will just keep coming up, because think about the contract that app builders effectively made: they built with human users in mind, right? Now what happens if most of the traffic is bots, maybe agents working on our behalf? It's a completely different value of visit that probably doesn't sustain a product the same way a human visit would. Think about a mapping product, for instance. Let's say you're asking an agent for directions, and in the background it's going to Google Maps. It's not Gemini; it's going to Google Maps, finding you directions, and then presenting them to you in a chat window. Well, Google Maps only made sense for Google to create because there are ads there to support it. So what happens if most of the traffic on Google Maps is agentic? It really changes the economics of the internet. Now, if I'm Amazon, I wouldn't block Perplexity, because what I want is purchases, and if a bot is on my page adding stuff to a cart that somebody might buy, I'm very happy about that. The only thing to worry about is that Amazon has a big advertising business now. So does that cut out the advertising business?
>> Okay, that's a really good point. Especially if there's no transaction or commerce taking place, agentic browsing totally destroys the economics of the web. But even on the transactions, it still introduces a great deal of risk, because Amazon's value really is locking you into their ecosystem. I don't go to a lot of web pages as destinations; I go to amazon.com all the time. If suddenly Perplexity is choosing where to buy for the customer, that's a problem. So yeah, I think it is a big problem for Amazon. They have to maintain some ownership of that customer relationship and not just be one of many choices. If Perplexity gets the benefit of being where the transaction takes place, that's a threat. And, by the way, one more sentence: people use Amazon, like you said, because they're locked into the ecosystem. They find it to be the most convenient place. You go to Amazon, it's one site that has everything, the everything store, and you don't want to search 20 different sites for the same product. The agentic stuff flips the everything store completely on its head,
>> because instead of the everything store, you just type one more query in, and now the entire web is the everything store inside of the Comet browser. So that is, I think, why Amazon is upset at this. The value proposition is gone if it's just one more sentence into an AI browser to find you products across the entire web.
>> Chatbots are the everything store. I like that take. ChatGPT, Perplexity, Gemini, all these chatbots allow us to shop for everything and anything at once, which has been Amazon's entire value proposition. But one question: if agentic browsing is problematic and different, if my humanoid robot is going to a store, is that problematic? Will that be banned? Will they be discriminated against?
>> Well, I think we will definitely have a war between people and humanoid robots, without a doubt. Even if the humanoid robots don't assemble into the army, people will knock these things out. They're already burning Waymos. People don't like robots. They don't. And so
>> So if, or...
>> and they will just knock these things out.
>> Well, but if Perplexity's Comet traffic to Amazon can be banned legally, if you can file a lawsuit and say that traffic should not be allowed on my website, can Walmart's physical retail prevent my humanoid robot from going and shopping for me?
>> Absolutely. I mean, bars were banning Google Glass 10 years ago. So, yeah, there are going to be no-bot zones.
>> No-bot zones. It's discrimination.
>> Bots have rights, too. Neo, don't discriminate.
>> We're joking, but this will absolutely be a battle that plays out in our lifetime. Without a doubt. I mean, we might go back to this show when we're podcasting together in 2045 and be like, "Well, we called it."
>> Twitter's going to be robot-rights Twitter. [laughter]
>> Well, I mean, is it a coincidence that the guy building the robot army also owns Twitter?
>> Huh? Okay.
>> You don't think that's a forward-thinking thing? You think it was just about Trump? I don't think so.
>> It's all about the humanoid robot army in the end.
>> We should all be working towards it.
>> Of course we should, and we will. This podcast will be front and center in those wars, I promise you that. Now that we've staked out our positions, let's end this week on what I call notes on the coup, because we just got a deposition with Ilya Sutskever, the former OpenAI chief scientist, and his testimony about what happened inside OpenAI before Sam Altman was fired. Apparently Ilya had been planning the coup for about a year and put together this document, and this is from the deposition. The very first page says, "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." The lawyer asked Ilya, "That was clearly your view at the time," and Ilya says, "Correct." It's just kind of interesting that that's the document coming out now. So maybe it wasn't necessarily the effective altruists, although I'm sure they played some role in this, but it was OpenAI's own executives that caused that firing. It's a pretty wild thing to read that Ilya had written down this type of thing about Sam. What are your thoughts about this?
>> Reading through all this, the part I still can't square, and I'm curious what you think: was this done to save humanity from runaway evil AI, or was this just very human corporate infighting? Like, this guy annoys me, he's always lying, he's stabbing me in the back, I'm going to go after him. When you read this stuff, for all the talk about protecting us from runaway AI, it just feels like the most human thing ever. People are annoyed at each other. Everyone's worked with people like this. I don't know. How did you read this?
>> Oh, totally. I mean, it was always politics. Let's just put it that way. It was always politics. I don't think there was any real fear within OpenAI that ChatGPT, and it's easy to say in retrospect, was going to go self-aware and destroy the world at that point. But yeah, it was definitely infighting. And one of the things that really resonated as this stuff circulated on Twitter was that it was so poorly planned. We knew about this already, right? That it was a poorly planned coup. But as the details come out, here's one Twitter user: "It's the worst coup ever. Planned for a year without any PR strategy. As they say, if you go for the king, better get his head. First and foremost, the skill that any leader needs is to be able to survive. Ilya is a great scientist and great human being, but not a practical leader. He didn't have the skill to plot and survive and come out on top." Here's another one: "Even though I don't like Sam Fraudman, I still think it's good that Ilya failed. Somebody that has the planning ability and theory of mind of a toddler shouldn't be in charge of AGI." It is interesting that you saw this half play out, and it goes back to it: these are the people that are supposed to be protecting us from AI gone bad. I don't know.
>> Yeah, I think that's where Ilya wasn't going to be our savior in controlling the humanoid robot army. Again, the documentation is as clunky as we all imagined it to be. And going back to where we started the show today, this is the kind of stuff that's been going on at this company for so long, and everyone has been waiting for them to mature into a different type of organization, and it doesn't seem like it's happened yet.
>> All right. And of course, my favorite part is when the lawyers bicker at each other. Here's what one attorney says: "Don't raise your voice." The other one says, "I'm tired of being told I talk too much." The first one replies, "Well, you are." And then the next one goes, "Check yourself." [laughter]
>> The toddlerification of all communication, man. It's everywhere. I still think, though, thinking back: Mira Murati, how many days was she CEO? Two?
>> A day. Maybe two.
>> I think it was like two days. Still, what an exit from that: two days as CEO, and then go on to raise a pre-product round at a billion-dollar valuation.
>> Yeah. Everyone won. Everyone won. We're going to be bailing them out in the end anyway. So,
>> exactly.
>> Start filing your taxes and preparing, because
>> we are all going to be funding the lifestyles of every AI leader out there.
>> Okay, one last little fun thing before we leave. The New Yorker had a story on the data center buildout, and apparently the reporter spoke with a farmer. It goes, "I asked the farmer if he ever used AI," and the farmer said, "I use Claude. Google sucks now." Obviously it's not every farmer, but I just think that was a pretty interesting line that shows how much people are using AI and how it has proliferated well beyond Silicon Valley. What do you think?
>> No, I think that's a good place to end, because for all of our talk, and you alluded to it multiple times, I firmly believe this is one of the most incredible technologies we will experience in our lifetime. And it's everywhere. There's no doubt that everyone at every level, my parents, you know, it's already being used, at least at the base level of its promise. So it will be amazing and continue to evolve and change, and there will be a lot of money and market capitalization realized. That never leaves my head in all of this. It's just that how we get there, and what it looks like, is going to be quite a story.
>> And therefore we must save it at any cost.
>> At any cost. Yes. And give me my
humanoid robot army.
>> 2028 we're running.
>> We're in. We're in. We'll figure out the rest of the platform later. But wait, are we pro- or anti-humanoid robots? I can't tell.
>> No, no. We're now pro, as long as we're in control and we are the only ones who can save everyone.
>> I I think we won't get any votes.
>> I think we'll get a couple. I mean, Eric
Adams got 5,000 votes. So,
>> Oh, yeah. If Eric can, some listeners out there will give us at least something for it. All right, Ranjan. Good stuff as always. Thank you guys for coming on.
>> See you next week.
>> All right, everybody. Thank you for listening. Next Wednesday, Mustafa Suleyman will be on to talk about why he thinks LLMs may be the route to superintelligence after all. And then Ranjan and I will be back on Friday. Thanks for listening, and we'll see you next time on Big Technology Podcast.