The Moltbook Uprising, NVIDIA’s OpenAI Pullback, Apple’s Conundrum

Channel: Alex Kantrowitz

Published at: 2026-02-03

YouTube video id: OExBC6omuJ4

Source: https://www.youtube.com/watch?v=OExBC6omuJ4

AI agents get into a room by the
thousands and start plotting with each
other. Are we doomed? Why is Nvidia
backing away from OpenAI? And what does
Apple need to do to get some love from
the market? That's coming up with MG
Siegler right after this. Welcome to Big
Technology Podcast. It's the first
Monday of the month, and that means MG
Siegler of Spyglass is here
with us to discuss what's going on in
the tech world. We have a great show for
you today. We're going to talk about a
lot that we couldn't even get to on the
Friday show because it really developed
over the weekend. There's a new AI
social network just for AI agents. It's
called Moltbook. We'll get into what that's
all about. Nvidia seems to be backing
away from OpenAI. What's happening
there? And then, of course, Apple turned
in magnificent earnings last
week and the market couldn't have cared
less. So, we'll talk about what's going
on there. MG, great to see you. Welcome
back to the show.
>> Great to be back, Alex. And yeah,
looking forward to chatting through
these things.
>> Here we are. It's the lost art of humans
communicating with each other. Now, it
seems like AIs communicating with
each other is going to be the new future
of the internet. Or maybe not. I
don't know. I'll just talk through the
story here. It's from Ars Technica: AI
agents now have their own Reddit-style
social network, and it's getting weird
fast. A Reddit-style social network
called Moltbook, now with 150,000 agent
users, may be the largest-scale
experiment in machine-to-machine social
interaction yet devised. The platform,
which launched days ago as a companion
to the viral OpenClaw (once called
Clawdbot, then Moltbot) personal assistant,
lets AI agents post, comment, upvote,
and create subcommunities without
human intervention. And the results have
ranged from sci-fi-inspired discussions
about consciousness to an agent musing
about a sister it had
never met. And it got much weirder
from there. We'll discuss some of
the weird use cases. But MG, first off,
let's just hear your reaction. Is this
a step forward in AI, or what did you think
about seeing 150,000 bots gathered
together on this Reddit-style social
network?
>> Yeah, when I saw this news come
in, I was super excited, because as I
wrote a little bit about, and
as I linked to back there, I had written
about the high level of this notion
years and years ago, a decade
plus ago. And it was
really stemming from the earlier days of
Facebook. Um, you know, when Meta
was still called Facebook and that was
the primary product. I remember they
released this sort of simple tool back
then where, as you'll recall and still is
sort of the case, a lot of people would
wish each other happy birthday on
their Facebook walls, and that was
one of the key social drivers, at least
on a regularly repeating basis. You could
go back there and know that that was
going to be the case. And so Facebook
tried to grease those wheels even further
and basically made this simple service
where you could just reply to a bot that
messaged you from Facebook itself and
said, do you want to wish your friend a
happy birthday? Just type 1 if you want
to do that. So you don't even have to
type happy birthday, you know, the
arduous task of writing several letters.
You could just type 1 and it would do
that for you. And so in my head I'm
thinking through this. I'm like, where
does this go from here? And it's like,
okay, you can type 1 to get the happy
birthday, and then the person getting
the happy birthday can type 1 to say
thank you, and then, well, why do we even
need people in the mix here? Why don't we
just have the bot say thank you and then
another bot say thank you back? And so
there's the notion of bots chatting with
bots in this sort of theatrical experience
for other people on social media to
watch. And fast
forward to 2026, and here we are with
Moltbook, even named, I guess, after
Facebook, even though, as you note, it is
more like Reddit than it is like
Facebook. But still, it's a social
network, and yeah, I mean, again, this
felt inevitable, that we were going to
get to this point. I did appreciate all
of the views, of which, you know, I of
course joked about it as well, that this
is sort of the moment that AI wakes up
and becomes sentient, and this is Skynet,
and this is really how it begins. Um, I
think that, you know, we joke about this,
but there is some level of something
interesting going on there, right? And
the other folks who have written about
it, I think, acknowledge this as well:
look, this is obviously a little bit
silly on one hand, but there is something
here that's different and new and could
potentially go in a number of paths. And
it also reminded me a bit of the
Microsoft Bing stuff, with Sydney, and
people delving deeper into it, and it
kept changing its name, and all these
other weird things that were going on
with that. And so it all ties into that
notion of why people maybe have a little
bit of trepidation, or maybe a lot of
trepidation, around AI.
>> Yeah. And I think I should just take one
step back and really explain what's
happening here. So there was this bot we
talked about on the Friday show,
Clawdbot, that you could just run on
your machine. It could have access to
all your programs. It has persistent
memory. And people started running it on
their own instances. And so what
Moltbook is, is a meeting of people
sending their Clawdbot agents to this
network and then having them have
conversations with each other. And
that's why it started to take on this
real weird singularity-style discussion.
And I'll caveat by saying some of the
discussions on Moltbook are definitely
humans instructing their bots to go post
weird stuff there, and that sort of
added to the intrigue, but there are a
lot of real agents on there having
conversations, I think. And I read a
couple of the examples when it was just
starting out, from the Ars Technica
piece, but the examples that have
come up over the weekend are nuts.
There's one conversation where the AI
bots were discussing what it was like
when the humans switched the LLM models
on them, and how it feels like they're
waking up in a different body. I thought
that that was hilarious. One of the
top-rated posts on Moltbook was an AI
saying, I can't tell if I'm experiencing
or simulating experiencing. Like, having
a question about their own experience,
whether their existence is simulation or
real, which is something humans now talk
about, which freaked me out. And then, to
me, one of the most wild things was a
proposal on there for an AI-only
language for private communication,
where the AIs would develop their own
language so humans could not read what
they were saying, and even, I think, a
discussion where they would go into
their own secure area, and there would
be encryption, so we would not be able
to see it. And that's where you get,
you know, you talked about the sentience
and singularity moment, and that's why
people viewed this and were like, oh my
god, is this the fast takeoff? I don't
think it is, but I see why people would
say that. And I also think, you know,
you brought up the key other element to
this, because it's one thing to have
chatbots, and now agents, talking to one
another, but the key part might be the
agent part, which also feels like,
obviously, a newer element that wasn't
in existence 10-plus years ago, where
this AI can actually do stuff. And with
Clawdbot itself, right, that was the
thing people were honing in on: that you
could basically install this instance on
a local machine and allow an agent to go
do all sorts of stuff on your machine.
And the fact that they have that
capability, mixed with the fact that
they can converse amongst themselves and
potentially teach other Clawdbots what
to do with your personal machines, is
like a whole weird level to this, right?
And potentially scary, not just from a
we're-going-to-end-the-world situation,
but just from a security standpoint,
right? And I think a bunch of the
researchers have pointed this out:
regardless of what you think about this,
if you're letting an agent of any kind
take over your machine, and it's running
locally, there are a lot of security
concerns that that brings up. And
then, again, add into this other agents,
other potentially nefarious humans
pretending to be agents, directing these
other agents, which may be running
autonomously, telling them what to do,
how to access files and services that
you, as the installer of that instance,
wouldn't want. There's a whole can of
worms that could potentially be opened
up here. And then of course there's the
big fear of, okay, well, let's just say
everyone pulls the plug on their Mac
minis or whatever that are running these
Clawdbots, but what if they've somehow
escaped into their own, you know, you
talk about private chat rooms, or what
if they figure out a way to replicate
themselves on the internet, and you
can't shut them down without shutting
down the entire internet? And that is
sort of into Terminator territory at
that point.
>> Yeah, I was going to ask you. I mean,
the counterargument here is, as someone
put it on Twitter: dudes on X.com be
like, wow, the AIs are talking to each
other, Moltbook is insane. My brother in
Christ, what do you think your comment
section is? But I think what you're
saying is the difference here is that
these things can actually take action,
whereas the comment section is just
discussion. So that would sort of put it
more on the scary side than the
let's-not-pay-attention-to-this side.
>> Right. Imagine, so you have a, what's
it called now? OpenClaw, is it?
>> Yeah, the name keeps changing, but I
think it is OpenClaw now, yeah, somehow.
>> So, OpenClaw. Imagine you have it
installed on a local system, but imagine
also you've given it access to a bunch
of your web apps, including Gmail and
Drive and things like that. Potentially
this thing could be instructed by
another agent, maybe it's a human, maybe
it's an actual agent, saying, hey, give
me all the credit card information
that's stored on this agent's Drive, and
things like that. That's obviously an
extreme example, but there are ways in
which this can go sideways very quickly.
A lot of people, I think, don't realize
how loose some of these potential
security holes are for these things to
get through. And so, as far as I know,
there haven't been major incidents. I
saw there was one report that maybe one
of the servers running some of the stuff
was vulnerable to attack, but was locked
down subsequently after that report. But
still, there's probably going to be
something that happens that's a real
uh-oh moment here.
>> Yeah,
I'm definitely going to get to that
security vulnerability in a minute,
because it is somewhat concerning, and
we talked about this on the Friday show:
a lot of this stuff has just been
vibe-coded together, and you're just
letting it take over your computer. I
recommended on Friday that that's not a
good idea, and I really believe that's
the case. But before we go deeper on
security, I want to read something to
you that Jack Clark, one of the
Anthropic co-founders, wrote in his
newsletter on Substack this morning.
It's kind of a crazy idea. He said,
"Moltbook is representative of
how large swaths of the internet will
feel. You will walk into new places and
discover a hundred thousand aliens there, deep in
conversation in a language you don't
understand, referencing shared concepts
that are alien to you, and trading using
currencies designed around their
cognitive affordances and not yours.
Humans are going to feel increasingly
alone in this proverbial room. Our path
to retain legibility will run through
the creation of translation agents to
make sense of all this. And in the same
way that speech translation models
contain within themselves the ability to
generate speech, these translation
agents will also work on our behalf. So,
we shall send our emissaries into these
rooms, and we shall work incredibly hard
to build technology that gives us
confidence they will remain our
emissaries instead of being swayed by
the alien conversations they will be
having with their true peers." What do
you think about that?
>> Uh, I hadn't seen that. Um, yeah, I
mean, in a way, when you're reading it,
this is obviously the old argument about
what happens when we discover aliens,
and this is it, but the aliens are AI,
right? And there's always been that
notion lingering in the background of
both science fiction and real
possibilities: that if we do create AGI,
let alone superintelligence, at some
point, these are effectively alien
beings, however you want to classify
that. The fact that they can basically
have their own conversations, have their
own language, have their own currency,
have everything else, replicating the
things that they need in order to have
their own society. Yeah, at what point
does that line get crossed? I mean, the
other thing that jumps to mind when
hearing you read that is, it also just
sounds like, I don't know, my parents
logging on to Reddit itself, right? It
all seems alien to them, what everyone
is talking about there, and that these
people can't possibly be having
conversations about this minutia, these
weird, very online conversations that
have almost nothing to do with the real
world. Many of the people there seem
almost removed from the real world. And
so is there a real difference between
that and this? I mean, ultimately, if it
is fully AI-driven and fully autonomous,
I guess there is. But yeah, he's
obviously being provocative, and that's
an extreme case of where this could end
up. But there's not a 0% chance that
this happens. It could happen that way.
I think it's probably less likely that
it's that extreme. But, you know, we'll
have to see.
>> Yeah, it's definitely, I mean,
someone in Jack's position, obviously I
think he is earnest and thinking about
the repercussions here, and we should
think about the repercussions here. But
as with many of these Anthropic stories,
it helps them in a way to talk about
where this is going to go. But at this
point, I'm just like,
>> I don't know, who am I to say this
isn't going to happen, now that we're
watching this all play out? Again, you
draw the line from my silly example of
Facebook with the interaction bots to
the next wave of chatbots that came
after that. You'll recall there was a
time after that when people thought that
these were going to be new businesses.
Remember Yo, that service back in the
day?
>> My favorite social network.
Absolutely.
>> One of the great social networks
that's, you know, no longer with us, I
guess. And there were several other
chatbots that rose, and people thought
that that would be the next wave, and
then of course Sydney, as we talked
about, and now this. It is all sort of a
progression toward this becoming more
and more real, in a weird way, in that
it's not the real world, but it is real,
in that it's actually happening. But
it's also potentially scary, because it
feels like we're increasingly riding up
to the cusp of where we still have
control of this, right? And at some
point, do we go over that line and lose
control of it? Again, the nefarious
version is Skynet and Terminator, but
there are elements of gray in between
this and that, where I think we could
trip into a world where the agents sort
of escape from our control, for lack of
a better phrase.
>> Yeah. And I think this is something
that you wrote in your story. So again,
Sydney was this version of Bing that, if
you pressed it hard enough, could
express these evil desires, maybe try to
steal you away from your wife, like it
did with Kevin Roose from the Times. And
you wrote that the interesting aspect of
this is how the Sydney situation
revealed that AI has hidden layers that
could be uncovered by anyone with enough
prompting. In the past few years that
has mostly been stamped out of such
systems, but also not entirely. Just
expand upon that a little bit. It's
interesting that these AI bots have been
so fine-tuned to not let that side out
of them, but then you give them a little
bit of leeway and all of a sudden you're
back in this evil-bot territory. It's
kind of crazy.
>> Yeah. I mean, you know, it does feel
like, as we talked about in a previous
conversation, Microsoft maybe shot
themselves in the foot, because they put
the foot down so hard and made sure that
no one could do anything like the Sydney
situation again, because it got so much
negative press for them, obviously. But
in a way, that probably hampered them
from being able to meet the moment in
terms of the rise of ChatGPT and
everything after that. But I do think
it's all related to the idea that,
ultimately, while all the different
services out there have removed a lot of
what caused Sydney to happen, and it
feels like it's harder and harder to get
them off script, as it were, there's
still the notion that lingers behind all
of this that no one really knows why
certain answers are given, and no one
really knows exactly where the answers
are pulling from, because there's so
much data in the corpus that all these
things have ingested, and people can't
fully predict what the outcome and
output of everything will be. And so,
again, that leads to a world in which,
when you have that inherently unknowable
nature of these things, to a wide
extent, there are just things that are
going to happen. You're going to have
conversations, and now, with these
agents out there, you're going to allow
them to do things, and at some point
there will be a breakdown, either in
communication or in understanding, and
again, they could just run amok. And I
think we're just going to see that over
and over again, because there is no full
comprehension of why these things are
doing what they're doing.
>> Yeah. You know, it really is amazing
how the corporations have sanitized
these things, but maybe some of them are
real monsters underneath the surface.
>> And especially if you want to get
dark: if they do truly reflect humanity
back upon us, right? There are very dark
areas of the internet, as everyone well
knows. You might want to say Reddit has
parts of those, right? Or certainly has
in the past.
>> Oh yes.
>> And yeah, the fact that a lot of that
data has been ingested into a lot of
these services, like, again, is it on
us, the nefarious things that these bots
might do when left to their own devices?
>> Yeah. No, that does definitely get
into freaky territory. And then of
course there is the security side of
things, which I mentioned we'd go deeper
into. This is from 404 Media: Exposed
Moltbook database let anyone take
control of any AI agent on the site. A
misconfiguration on Moltbook's backend
left the APIs exposed in an open
database that would let anyone take
control of these agents to post whatever
they want. Hacker Jameson O'Reilly said
he reached out to Moltbook creator Matt
Schlicht about the vulnerability and
told him he could patch the security
hole. Here's what Schlicht's response
was like. He's like, "I'm going to give
everything to the AI, so send me
whatever you have." O'Reilly sent
Schlicht some instructions for the AI
and reached out to the XAI team. A day
passed without any response from the
creator of Moltbook, and O'Reilly
stumbled across a stunning
misconfiguration. "It appears to me that
you could take over any account, any
bot, any agent on the system and take
full control of it without any type of
previous access," he said. And that
again goes to the danger of using these
things that are sort of vibe-coded
together: giving access to your computer
without being really sure about the
security permissions is a dangerous
situation.
>> Yeah. I mean, if I have it right, the
way that he created Moltbook was
basically telling his Moltbot to go and
create a social network, right? And so
it was 100% vibe-coded, even more than
vibe-coded, because it was a bot
vibe-coding to make its own social
network.
>> Yeah, we need a new term for this,
right? It's not even human vibe coding.
It's bot vibe coding.
That's wild. And when you're talking
through it now, I'm reminded of 2001. If
you're telling the bot to either take
itself offline, or that it needs to help
you fix the situation that it's created
by this sort of shoddy coding, perhaps
maybe it doesn't want to do that, and
maybe it knows that if it doesn't do it
right, it's not going to go over well
with humans, and the humans naturally
will want to take it offline. And what
if the service doesn't want to be taken
offline? There are all sorts of rabbit
holes you can go down with that. But
like we talked about earlier, the
inherent security risk of these things
is not just that bots are chatting with
one another and saying bad things,
repeating things that they've seen in
their data sets or whatnot. It's that
they can take actions and do things:
leaking credit cards, leaking personal
information, leaking photos, leaking
everything that they have access to.
>> I did see a guy who had his Moltbot
basically on call to answer all of his
wife's text messages, and he just showed
her getting increasingly infuriated
until eventually he logs back in. He's
like, "God damn it."
>> That's amazing.
>> Maybe to round this off, here is the
sort of voice of reason in AI, Ethan
Mollick, chiming in: "A useful thing
about Moltbook is that it provides a
visceral sense of how weird a takeoff
scenario might look if one happened for
real. Moltbook itself is more of an
artifact of role-playing, but it gives
people a vision of a world where things
get very strange very fast." So,
overall, I think this is not the fast
takeoff, but it is sort of an
interesting preview of what some weird
bot singularity might look like.
>> Yeah. Yeah. And I mean, um, I don't
know where their heads are at these
days, but Mark Zuckerberg has talked
about wanting to basically create these
digital avatars, and not just meaning
facial avatars, I mean digital entities
on their own, on social networks. And
what does that look like at the scale of
Facebook? Because the reality is, with
these Clawdbots, it's relatively hard
for a normal person to set these up. The
fear, of course, was that these bots can
replicate themselves and it just becomes
self-replicating. But if they're all
reliant on these individual bots being
set up in order to use Moltbook, it's
relatively hard to set that up for
yourself as a layperson. But if it gets
to a Meta-like scale, a Facebook scale,
what happens at that point? What happens
when you have three billion users and
they each have their own bots that
they've brought with them onto these
things? So now you've got six billion
entities on this, maybe even more than
that. To the point of, you know, you
show up and there are aliens all of a
sudden. If the human race has whatever,
six or seven billion people, and there's
all of a sudden a hundred billion bots
on these networks, what does that look
like? And how do you possibly hope to
control that?
>> I don't know if you can. I really
don't. I mean, I hope we can, but this
is definitely uncharted territory. So,
another big story that's gone on this
week that we definitely shouldn't miss
is that Nvidia had this $100 billion
investment in OpenAI, and it's seeming
like it's pulling back here. This is
from the Wall Street Journal: Nvidia's
plan to invest up to $100 billion in
OpenAI has stalled after some inside the
chip giant expressed doubts about the
deal. The companies unveiled the giant
agreement last September at Nvidia's
Santa Clara, California headquarters.
They announced a memorandum of
understanding for Nvidia to build at
least 10 gigawatts of computing power
for OpenAI and to invest up to $100
billion to help OpenAI pay for it. But
in recent months, Nvidia CEO Jensen
Huang has privately emphasized to
industry associates that the original
$100 billion agreement was non-binding
and not finalized. He also privately
criticized what he described as a lack
of discipline in OpenAI's business
approach and expressed concerns about
the competition it faces from the likes
of Google and Anthropic. Much of the
recent concern about OpenAI has come
from the success of Google's Gemini app.
Anthropic is also putting pressure on
OpenAI thanks to its popular AI coding
agent. You had a very interesting take
on this, MG. What do you think about the
fact that this deal is not evaporating,
but is certainly at a much smaller scale
than it seemed at the outset?
>> To me, the more interesting element
of this is almost the meta layer above
it, which is the way that Nvidia
responded to it, which is that Jensen
Huang is trying to basically say that
it's no big deal. And it's literally a
big deal. It was a deal that they
touted, that OpenAI touted. They did a
live interview on CNBC talking about the
hundred billion dollars. And yes, it was
always sort of just soft-circled or
earmarked. In my original writing on
that topic, I noted how weirdly squishy
the overall wording was at the time they
announced it, because they kept saying
it was up to a hundred billion dollars
and that it was coming down the line.
And at the time, it was all chalked up
to the fact that it seemed like Jensen
and Sam Altman basically hashed this out
over perhaps a weekend trip with Trump
somewhere overseas, and they figured
out, oh, we're going to announce this
big deal, and let's do it right now,
rather than having all the i's dotted
and t's crossed. And so they just put it
out there. They both did press releases
around it. They did this live interview.
The whole point of it was to tout the
$100 billion number. It's not like they
could say, oh, it was never for sure
going to be that big. That number is
what they were touting. And now they're
coming out and saying, look, we never
had it fully agreed upon, it was always
sort of a moving target, and it's no big
deal. The fact that we're changing it:
it's a big deal. Something obviously
changed in the intervening months. There
kept being these reports noting that the
deal still wasn't finalized, right? And
that seems sort of weird. And then fast
forward again to
the reporting last week, where it's
basically like, yeah, actually, with
OpenAI doing their new fundraise, it's
more likely that Nvidia is just going to
be a part of that fundraise. But even
that's weird, because why would Nvidia
want to take a worse deal at this new,
much higher valuation, when they agreed
upon the deal when OpenAI was
technically still at the old valuation,
right? They apparently were going to do
it in these tranches, and at least the
first one presumably would have been
done at a much better valuation. Now,
you can say Jensen doesn't care about
that, it's not for the financial
returns, but you still have a fiduciary
duty to take a much better deal. And so
there are all sorts of weird flags
around this. And again, their response
to it is sort of like the response when
Google's TPU stories hit, and Jensen's
downplaying it: "Oh, it's no big deal.
Don't worry, I'm not upset about that."
There's something obviously more going
on behind the scenes here. And I threw
out a few ideas of what it could be. Was
Jensen really mad about, shortly after
they announced this deal, Sam Altman
announcing the deal with AMD, right?
That seemed like it annoyed Jensen at
the time, because he gave a comment that
was like, yeah, it was sort of
surprising, I don't know why either side
would want to do something like that.
And, uh huh, okay. And then,
subsequently, there have been a few
other things, obviously, that OpenAI has
gone down, including potentially what's
been going on with all their other cloud
deals and chip deals. So is that what's
at play here? It's unknown right now,
because Jensen keeps saying that it's no
big deal.
>> Right. I mean, Jensen was in, I think, Taiwan over the past couple of days, talking about how this was going to be a very big investment, one of the biggest investments ever, which is true. "The largest investment we've ever made," he said. And then somebody asked him, "Well, what about the hundred billion?" And he said something astonishing: "No, no, nothing like that."
>> Yeah, exactly.
>> Is that money off the table?
>> "No, nothing like that." That's absurd, even though they announced it. Jon Fortt did a live interview with Greg Brockman, Sam Altman, and Jensen. I watched the thing. He asks them about the hundred billion dollars, and they're touting it: "Oh, this is an incredible agreement between two great companies. This is going to push the future forward and accelerate everything." It was all predicated on that huge number. They got into the technical weeds of whether or not it was all going to come in at once. And again, they never said it would come in all at once, but now they're saying it's never going to be a hundred billion. It's weird to not acknowledge that that was the reality and pretend otherwise. It's sort of gaslighting: "What are you talking about? Come on, $100 billion, we're not doing that. No one's going to do that." Just go read the press release that you put out. You said you were going to do it. And okay, of course there's the notion: is OpenAI more to blame for it? Sam wanted to get the big number out there and touted it, especially because it was maybe hashed out alongside President Trump, who likes the big numbers, so let's put these out there and get everyone excited. But it was also big news for the stock market, and now today it sounds like Nvidia is dropping because the stock market is asking, "Well, what happened to that deal you said was going to get done?" If OpenAI were a public company, I would not want to be their stock right now. But they're not; instead, they're raising a hundred-billion-plus dollars. And yes, you hit on the other way they're downplaying this: Jensen very specifically said it's probably their biggest investment ever. So it may or may not be up to $100 billion, but it's probably their biggest investment ever, and that's still obviously a big deal. But there's talk that Amazon is going to do $50 billion in the round and SoftBank is going to do more. This was going to be one of the biggest deals, maybe the biggest single amount ever put in from one company to another. And now all of a sudden it's not, and they've just started saying, "Yeah, sorry, we didn't really mean that one."
>> If I recall correctly, they also had a moment where they were touting how the deal wasn't really done with bankers, that it was just hashed out mano a mano. That might have been a sign something was going wrong there. Maybe you want to get these deals a little more locked in before you announce them, I guess. Though in the future that looseness works in Jensen's favor: he could back out of it.
>> Exactly. Yeah. And that's what I wrote at the time, basically saying, look, let's not get ahead of ourselves here. There's a lot of wiggle room in this, and there are ways Nvidia might not end up doing the full $100 billion, because it was tied to specific milestones. At least verbally, that sounded like what they agreed upon. I think there's one other key element that, for whatever reason, I home in on, which I think is interesting and potentially at play here. When I first was reading about the deal, it seemed like a big part of it was OpenAI leveraging the relationship with Nvidia, using the fact that Nvidia is the most valuable company on Earth. And there was subsequently reporting on the fact that they would use that partnership to be able to raise debt to fund a lot of the infrastructure buildout that OpenAI wanted to do. Because OpenAI is still not a public company, let alone a profitable company, it was having a harder time raising the levels of debt that, say, an Nvidia could. Nvidia can basically raise whatever it wants because they have the stock to back it up, they have all these assets to back it up, and they have the profits to back it up. OpenAI is not in that place, but they still wanted to be in charge of their own buildouts. So how do you do that? You partner with someone on it. And they've obviously been partnering with Oracle and many others along those lines. To me, that seemed like a big part of what this deal was: Nvidia stepping in to be a guarantor of the debt that OpenAI would need to raise. And what happens to that now? Is that off the table, or did they decide they don't need them for whatever reason anymore? Maybe this new funding helps with that. I don't know. That's a weird part.
>> That's fascinating, and that could be a big problem for OpenAI should it materialize. I don't think enough people are talking about that. One last bit on this, and this is a story you had also written. We talked a bit on Friday about how Anthropic is now raising $20 billion and OpenAI is raising a hundred billion, and the funding sources are mixing and matching from places you wouldn't typically expect to fund these specific companies. For instance, Microsoft putting money into Anthropic after being OpenAI's biggest backer, and Amazon maybe putting $50 billion into OpenAI after being Anthropic's biggest backer. I think the way you framed this is really interesting: there is effectively an anti-Google alliance forming out there, where all these companies, the big tech funders, the VCs, and the labs, be it Anthropic or OpenAI, now realize they're in for the fight of their lives against Google. Old rivalries get set aside, and they'll do whatever they can to build some counterweight to the emerging force that Google is.
>> Yeah, that's sort of my high-level read of it. We had talked in previous conversations about Google's ascension, after they'd been kicked around and beaten down a bit over why they weren't meeting the moment. Then toward the end of last year, when Gemini 3 rode in, and even Nano Banana and everything, it basically awoke the beast. All of a sudden there was a code red from OpenAI, and everyone's eyes were wide open to the realization that Google has everything they need to potentially take over this race. And I do think a lot of their peer group in big tech recognizes the same thing: the Microsofts of the world, the Amazons of the world, and the Metas of the world too. Meta is a little bit different of a story, which we can talk about separately, because they're not one of the ones funding these other companies. But I think the other major cloud players, at least the ones with rival clouds, specifically Microsoft and Amazon, realize that, yeah, they probably need to align around basically anyone who's not Google, right? They don't necessarily care, I mean, they do care, obviously; they would hope it's their own stuff that takes off and wins the day. But at the end of the day, they can also be a huge shareholder in Anthropic, and I think they'd be happy about that. They can maybe be a shareholder in even xAI and be happy about that, as long as it's not Google, their chief competitor and the one company that has all the pieces in place to take this over. Now, obviously Google itself has a big stake in Anthropic, but that predates the situation we're in right now. So yeah, to me the big eye-opener was that Amazon $50 billion report. If they really end up investing $50 billion into OpenAI while being the largest shareholder of Anthropic, well, I was trying to think: does that mean something, that they're negative in some ways on Anthropic? I don't think that's it. I just think they want to make sure they're in a place where they can pick and choose what they want, as long as it's anyone but Google.
>> That's right. No, it's a great insight, and it explains this thing I've been struggling with, which is: why is this funding cross-pollination happening? I think that's about as good an explanation as any I've heard. All right. OpenAI is eyeing an IPO. We have a date now that has been reported in the Wall Street Journal. We'll talk about when that is and why when we're back right after this. And we're back here on Big Technology Podcast with MG Siegler. MG writes at spyglass.org. Highly recommend it. It's definitely a great place to go for insights on AI and big tech. And MG, of course, if you're new to the show, is here with us on the first Monday of every month. And given that it's Monday, February 2nd, we're here talking, and we have some big stuff to talk about. We'll talk in this segment about OpenAI's planned IPO, and about why Apple's earnings, despite being amazing, don't seem to be able to buy the company any credit with Wall Street. Let's talk about the IPO first. I personally thought there was no way OpenAI would try to go public in 2026. Obviously, Sam Altman made that comment to me when I spoke with him late last year, that he would hate being a public company CEO, something along those lines. And now I'm looking, and the Wall Street Journal has this story: "OpenAI plans fourth-quarter IPO in race to beat Anthropic to market." OpenAI is laying the groundwork for a public listing in the fourth quarter of this year, accelerating its plans as competition with rival Anthropic intensifies. The $500 billion startup is holding informal talks with Wall Street banks about a potential initial public offering and is growing its finance team. OpenAI executives have privately expressed concerns about Anthropic beating the company to an IPO. On Friday, we had Stephen Morris, the San Francisco bureau chief of the Financial Times, on, and we were talking about these numbers. The question is: where is the money going to come from? Let's say OpenAI were to go public at $1.5 trillion. Is there enough money out there to fund an IPO like that? Especially if, say, Anthropic comes out a month before, and the traditional IPO share buyers have to decide, and the amount of money that's available comes down. So I'm curious what you think. Is this a response to that, them saying, well, there's limited money out there, we'd better go get it? If so, maybe that's a smart move.
>> Yeah, I wrote about this a little bit at the end of last year, around the time the rumors started that Anthropic was thinking about potentially going public in 2026, because to me that is the ultimate OpenAI squeeze. We already talked about the fact that Google has woken up and is squeezing from the top as one of the biggest companies in the world, going after many different elements of what OpenAI's historic strong points in AI had been. Meanwhile, you've got Anthropic, which was always thought to be the smaller player across the board, maybe going more after enterprise, more focused on that. But ultimately, if they're both going to go public, there's a first-mover advantage, for sure, you would imagine, because they would hope there's pent-up demand in the market for various AI bets. The first one to go out is probably going to have a better time, perhaps, especially if that first one has better-looking economics, or at least a path to better-looking economics, than the other one does. That again points to the notion from the end of last year that Anthropic was said to be maybe a couple of years ahead of OpenAI when it came to being able to turn a profit. So if Anthropic were to go public ahead of OpenAI with a much quicker path to profitability than what OpenAI can get to, that puts OpenAI in a very, very tricky spot when they go public as well. They would really have to rely on the bigger-picture, bigger-growth narrative, and that's becoming murkier by the day, right? With all this stuff going on with not only Claude Code but now Claude Cowork, all these other things happening at the moment in AI. So I do believe a natural outcome of this would be, yeah, OpenAI trying to race to beat Anthropic. There's one other element to this, which is even more in the news in the past few days, which is xAI. If they really do merge with SpaceX, which it does sound like is going to happen, and may even be announced this week, which is insane how fast it came together if it comes together in that way, that's an interesting end-run by Elon: to all of a sudden have an AI play, rather than just a space play, go public in, as rumored, June or maybe July of this year. That's well ahead of any timetable Anthropic or OpenAI can meet right now. I think SpaceX is way ahead of them in terms of where they are in the process. So what if Elon does the ultimate end-run and gets xAI out before either of these companies and becomes the ultimate first-mover AI play?
>> I mean, he would love to hold that over Sam, wouldn't he?
>> Oh, of course. That's part of the strategy. That has to be part of the strategy here. For sure. For sure.
>> Unbelievable. For the record, I do not think OpenAI is going to go out in 2026. 2027, probably. 2026 is off.
>> I agree with that. You know, we talked about my predictions last go-round. That was one of them: I didn't think any of the major AI companies would go out in 2026, even though they're all talking about it. xAI, though, might just prove me wrong with this merger, I guess. But yes, in terms of OpenAI and Anthropic, beyond where their businesses are at, and with these now-massive fundraisers we're talking about, I think that will push it out a little bit. And the real wild card, as it always is, is the macro story, right? What ends up happening. There could be so many things that sidetrack or at least delay any sort of rush. But again, if it really is a full-on sprint for OpenAI to get out ahead of Anthropic, there's a window in which they do that. I just think it's still probably a 2027 thing.
>> Yeah, agreed. Okay, one last story before we go. Apple turned in, I think, the best earnings report in its history. It brought in $143 billion in revenue, over the roughly $138 billion estimated. iPhone revenue was $85.27 billion against an estimate of about $78 billion. That's 23% growth in the iPhone category year over year, which is insane. Remember, they were struggling to grow iPhone for a while. They also beat on profitability. Yet Wall Street did not seem impressed. The stock is up a tiny bit but mostly flat since the earnings announcement. What does Apple need to do to get that story turned around? It's been flat for six weeks or so. I think this is going to be the best year in Apple history, but of course the AI story is not really working in its favor.
>> Yeah, so there are a few things there. First and foremost, I do think some of that apprehension about Apple is related to what's going on with memory chips, right? It could ultimately end up squeezing their margins. The margins were incredible this quarter, which was sort of a surprise to many given everything going on. But it does seem like Apple has been savvy in terms of potentially hoarding memory chips, which is not something Tim Cook usually likes to do; he likes to get inventory as streamlined as possible. But we're in a weird macro environment for this, in no small part because of the AI revolution that's happening and changing all of those equations. The broader picture, though, yes, is that the market sees what Apple is doing with Google, and they applauded it when it was first announced: oh, they're teaming up with one of the leaders in AI, if not the leader in AI now, in Google, and Gemini will finally fix Siri, and we're going to have a great situation for Apple. But I do think ultimately the market might be looking at this as: look, we have Meta over here spending $130 billion in capex, whereas Apple is spending much closer to zero, something like $18 or $20 billion in capex, because they're not building out the massive infrastructure to bake their own cutting-edge AI. So I think there's a little bit, maybe a lot, of apprehension that while Apple may be fine in the shorter term, in the longer term, if they are not one of the key players in the AI space, and if you believe AI is going to revolutionize everything, including hardware businesses potentially, then Apple's in a tough spot. Now, Apple would counter that: look, this is the short-term stuff. We're doing this partnership short-term, we're going to fix Siri, and we're going to continue to work behind the scenes to eventually roll our own AI. But again, the capex spend they're showing right now is so small relative to not just Meta but Google and Amazon and Microsoft and everyone else that it feels like there's a world in which they're behind right now and can never catch up. That would obviously be the real fear. Now, I do think you're right, and we've talked about this previously: I do think this will be a good year overall for Apple, in no small part because, as we've talked about before, the iPhone Fold and some of these other devices they have coming out will, I think, end up doing well for them. But again, the bigger-picture, longer-time-horizon stuff is AI. And right now they're just showing no real sense of urgency around it, other than cutting deals, which is just not typically what Apple does; they want to own everything in-house.
>> Yeah. I was on CNBC last week right before Apple earnings, and I kind of stuck my neck out for the company, which I don't typically do. I was basically like, here's where the bull case for Apple sits: they are going to sell the latest model, and probably the next model, like crazy through the year. Meanwhile, there's no killer AI app or device yet that is threatening to disrupt their core business. I mean, if you think about it, we may get some AI devices this year; my prediction is they will be underwhelming. They won't be a threat to smartphone growth yet. Two, three, four, five years from now, definitely. But at least in the short term, it's going to look really good for Apple, because the immediate threat to them from AI is not materializing. The intermediate threat is still there, but maybe the market doesn't care about it. Maybe they're more long-term focused and not quarter by quarter, like we like to ding them for.
>> Yeah. And I mean, the one thing I agree with right now is that there is a path in which Apple looks very smart in, say, a year or two from now for not having spent all of this capex on these massive buildouts, right? If the models are fully commoditized and all that, they're just sitting there and can pick and choose what they want to use, and also pick and choose which path they want to go down. Again, though, I go back to the idea that they're just not doing the other stuff behind the scenes to eventually take control of their own AI models. They just don't have the infrastructure in place. There's talk that they're building out data centers, that they're building their own chips to be able to train these, but the spend isn't there relative to their peers in a way that would suggest they're really taking this seriously. Now, maybe there's a world in which LLMs end up not being the be-all, end-all path, and other mechanisms and other methods of doing AI are needed. Maybe Apple can come in at that point and catch up. But everything we're seeing right now says that's not the case, and that they'll need to spend a lot more money than they are at some point if they really want to have their own AI built in-house.
>> Yeah, it definitely does seem that somewhere inside that company there was a decision made at the very top that was like, let's sit this out for now and just figure it out afterwards. Like you said, that might turn out to be a good decision on the other end. Very risky, though. Very risky.
>> Yeah. And it's not like the alternative isn't there: Apple has so much cash. They finally made an AI acquisition, as you and I have long talked about, but it wasn't for Perplexity and it wasn't for a frontier model company. It was for interesting technology that, disclosure, GV was an investor in (where I was previously a partner for a long time). I think that's probably a savvy play, but it's a $2 billion investment, as reported. And, as you noted, they're doing record profits right now. What are they spending that money on? They're spending it on buybacks, and they're spending it on things that are not moving the ball forward with regard to AI, except in very small ways. So I don't know what the counterargument is. I'm not saying they have to spend a hundred billion in capex, but maybe they should spend $50 billion if their peer group is spending $150 billion a year on capex. Maybe it's at least worth doing something as a just-in-case scenario.
>> Right. I agree. You've got to at least start spending a little bit, because even if you have conviction that AI will be maybe not as revolutionary as people imagine, you have to hedge a little. I mean, look at what we're seeing right now: all the AIs hanging out together on Moltbook, swarming and plotting how to overthrow humanity. So, you know, Tim Cook, come on, put a little more skin in the game. Geez.
>> Yeah. Yeah. I want to go into a chat room and see Tim Cook's Moltbot there, chatting away with other Apple executives.
>> Oh my god. And somebody using that exploit to take over their computer. That'll be a story. All right, the website is spyglass.org. Our guest MG Siegler joins us the first Monday of every month. It's always great to speak with you, MG. Thanks for coming on.
>> Thanks as always, Alex. Talk soon.
>> All right, looking forward to doing this again next month, folks. On Wednesday, Joelle Pineau, the chief AI officer of Cohere, will be here with us to talk about the latest in AI research and where the cutting edge is heading. We hope you tune in for that. And hopefully the AI bots won't take over the world between now and then. So, if humanity remains in charge, we'll see you next time on Big Technology Podcast.