Mark Zuckerberg’s Personal Superintelligence, Layoffs & Payoffs, Writing With AI — With M.G. Siegler

Channel: Alex Kantrowitz

Published at: 2025-08-04

YouTube video id: q8YyzOWgBAg

Source: https://www.youtube.com/watch?v=q8YyzOWgBAg

Mark Zuckerberg wants to build personal superintelligence, but does the term have meaning beyond marketing hype? We're seeing $100 million payouts and lots of layoffs in big tech. How does that work? Plus, should you use AI to write? That's coming up right after this. Welcome to Big Technology Podcast, first Monday of the month edition, where we talk about everything happening in big tech and AI with the great M.G. Siegler of Spyglass. You can find his writing at spyglass.org. And we have so much to speak with you about today, including what superintelligence actually means, whether people are actually worth $100 million as big tech companies are laying folks off, and of course whether you should use artificial intelligence to write. M.G., it's great to have you back on the show. Welcome back.
>> Thanks for having me, Alex, as always.
And yeah, there is so much going on
right now, so I'm excited to dive into
it a bit.
>> Definitely. All right. So, we're going to discuss the definition of superintelligence on this show, or the lack of a definition of superintelligence on this show. I think the first way for us to get into that is just looking at this document that Mark Zuckerberg has laid out about personal superintelligence. So, I'll just start off. He says, "Over the last few months, we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight." Okay, so before we get into exactly what Mark Zuckerberg wants to build, what is this guy seeing? And in fact, what are all the AI labs seeing? Is this just them talking up the potential of the technology to get bigger funding rounds and better valuations on the public markets? Or do you think that, in concert, they are all seeing that we're at the beginning of this curve?
>> This memo is very interesting on a few fronts. I do think everything you're talking about is sort of true, right? It can be true that we are at a unique time, and that these companies are seeing that AI is starting to, as a lot of them frame it, work on its own and lever up to this new level of intelligence. And again, we'll go into the definitions in a bit. But I also think Zuck's angle in particular is unique, because of course he's not necessarily trying to raise money, though it sounds like they are raising debt and such to help fund some of their expansion. He's not raising from VCs, obviously, as they've long been a public company. But he has a different incentive, which is that he's trying to bring in all the talent in the world that he possibly can in AI. And to me, when I first saw this memo, it felt like a response to some of the stories you hear that some people are reluctant to go over to Meta, right? And so he needs to lay out a vision, a cohesive vision, for what Meta is trying to do that's both different from what some of the other players in the space are trying to do and also, hopefully, inspiring to some people. Because, at least in the press, there's a lot of shade being thrown at Meta: look, are you going to go there to build the next silly social AI thing while these other places are trying to cure diseases and basically solve the future of work and do all these other things? And so Zuckerberg, I think, realized that the one angle he can definitely play up, given the historical strength of Facebook, of Meta, dating back to the Facebook days, and all the other services and apps that they both built and bought, is obviously around social media. But he won't use that framing these days; it's more around personalization and doing things for personal reasons, interpersonal reasons. And so I think he's viewing that as his angle. We can go into whether or not I think that's a good take, but I appreciate that he's trying to do it. He's trying to differentiate. And the high level of the memo itself almost reads like a Sam Altman memo, when I first read it.
>> And you can even look at the formatting. This is a post that was published at meta.com/superintelligence. He didn't even post the thing on Facebook. And if folks haven't seen it, this is what it looks like: it legitimately is black text on a white background. It looks like the AI researcher websites where they kind of want to show that they don't really care about branding, even though they really do, and they're in Courier with blue links and stuff like that. It has exactly that feel. And if you look at the optics of this, and Mark Zuckerberg knows this very well, it is an attempt to position himself as, a, someone who knows what's going on with AI research, and, b, one of these AI visionaries.
>> Yeah. This is like the old-school version of when CEOs would write a Tumblr post back in the day, right? Because they're being cool and edgy, and they're meeting their employee base where they are.
>> He should have just written the thing entirely in lowercase, which is the way that Sam Altman writes. Then he really would have been on point.
>> Or, yeah, he should have done a copy and paste of having written it in a terminal and basically used a screenshot of it.
>> Yeah. And I know it's easy to make fun of, but I actually think this is a smart play. You kind of have to show these folks that you're speaking their language.
>> Yeah. I mean, again, I think he's doing what he feels like he needs to. He's been out there, obviously, talking to all sorts of people in terms of who he could potentially bring on board. Some have come on board; many have not, it sounds like, from all the various reports. And I think he's probably using those conversations as inputs in his own head: what's resonating, what's not, what do I need to do differently, how do I need to frame this? And again, I think one of the key things is having a cohesive message about what Meta is trying to do, and hopefully one that's different from what the other competitors are trying to do. And I think that's where this netted out.
>> So, let's get into that message. He says in it, "It's clear in the coming years AI will improve all our existing systems and enable the creation and discovery of new things that aren't imaginable today," but that it's an open question what we will direct superintelligence towards. And now here he is. He says superintelligence has the potential to begin a new era of personal empowerment, where people will have greater agency to improve the world in the direction they choose, and he says Meta's vision is to bring superintelligence to everyone. "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and that humanity will live on a dole of its output." So he's basically saying, listen, there are others that want to use this technology for automation; we want to give everybody the benefit of being able to use this technology. And there he is setting out that, well, let's say, unique vision for now. I'm actually going to question it. But is that the right read on it, M.G.?
>> I think so. Yeah. I mean, it's basically like, look, there are all of these other AI companies out there, including Google, by the way, which are saying that they're closing in on solving, again, we'll get to the branding, but whatever you want to call it, AGI or superintelligence. And it's this nebulous term, which I think everyone would agree with, right? It can be used for anything. So maybe that's a problem, because no one really knows then what it's actually going to be used for, and in some cases that's scary to a lot of people, right? And so I think Zuckerberg is trying to frame it as: we're not trying to boil the ocean. We're not going to use this technology to do every single thing under the sun. We don't necessarily focus on enterprise like some of the competitors do. We focus on personal relationships. And so we're going to get to this level of technology, and we're going to use it to empower you to use it for your own personal purposes, right? It's trying to be, again, an empowering message, versus some of the others, where it maybe sounds a bit, not nefarious, but a little bit nebulous and scary in that regard, because who knows what it will ultimately be used for. And so, again, I understand the framing. I'm with you in that I'm sort of skeptical about that framing of it, but from a pure comms perspective, when you're trying to make this message to the market, I think it at least makes sense.
>> So now, yeah, I said I have some hesitancy about going fully along with Zuckerberg saying that they're going to be distinct from the other labs in terms of personal empowerment, because if you look at what ChatGPT is, I mean, that seems to me like that's the goal. Now, was ChatGPT a demo, maybe, so that they could sell their API? Yes. But has it overtaken that API business at OpenAI? I'm pretty sure it has. So I think that all of these chatbots, if the models improve, if we get to, let's call it, superintelligence or even AGI, every single one of these chatbots is going to have that property. Don't you think?
>> Yeah. Again, the thing that immediately popped into my mind when I was reading that is along the lines of what you're saying. It reminds me of trying to say, look, we've built the first personal computer, but it can only be used for personal reasons, right? It's not going to be used for work. And that's silly. A personal computer, a PC, can be used for anything, and it should be used for anything, right? We use it for work. We use it for personal endeavors. It's both. And I think that AI, in my head at least, is going to play out the same way: we'll have tools like ChatGPT and other services, and people will use them for both work and personal usage. That's what's happening right now. And so Zuckerberg trying to narrow this into just one use case, again, feels unnecessarily confining, and I don't think it's going to play out that way.
>> Right. So, I don't think that Meta will necessarily differentiate itself that way. Although you can choose the types of features and the types of functionality you want to focus on to make them stronger. For instance, Dario told me on Wednesday that Anthropic chose to focus its energy on making coding better. That's obviously worked out for them, or at least it's actually shown in the product. So, Zuckerberg says, "Everyone having a personal superintelligence will help you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be." I think he leaves out that you'll also, in their vision, become a friend to that superintelligence in Meta products. But I guess that's neither here nor there. Now, despite the fact that I don't think they're going to have a stranglehold on this potential product route, it seems to me that if AI continues apace, all these things are actually going to be one of the main uses of the technology. Like, the vision isn't exactly off. I just don't know if they can do it, but the vision sounds like, yep, that might be where we're going.
>> Yeah. And remember the other side of this, which is that Zuckerberg must be looking at the market and asking: what advantages do I have? And one of the major ones, I think there are a few that he has, is the fact that they run services, social networks, that are used by billions of people. And so if you believe, as you're saying, that this is going to be a sliver of what AI is, they have a distribution advantage that others don't in terms of being able to leverage the technology for these more social use cases. And so I think he's smartly honing in on that fact.
>> Yeah, he definitely addresses how this could benefit the Facebook or the Meta business model in a pretty direct way. He says, "If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting." It's like, all right, so if you're able to get your work done in five hours, then maybe you're going to want to post, you know, your AI slop on Facebook. Now, that's a little snarky, but it seems like that's what he's thinking.
>> Yeah. And that's sort of using some of the arguments that are out there from Sam Altman and others against them, right? Where they're saying, look, in the future this technology is going to enable us all to have a lot more leisure time. And Zuckerberg's saying, we know what to do with leisure time, right? That's one of our specialties. And so we're ready for that future, and we're going to help enable it, and we're going to help fill in those gaps. As an aside, I'm sort of skeptical of that idea, not necessarily of productivity gains, I do think those will happen, but I go to the analogy of when a freeway is overcrowded and so they expand it, and you hope, oh, well, then traffic will free up, and instead just more traffic happens, right? And so I feel like when we use technology to free up time, we're just going to do more work, in different ways, with that time. I don't think everyone's just going to be sitting around like in WALL-E, watching videos from our hovercraft things. I do think that's maybe playing into what Zuckerberg hopes happens, but I'm not sure it's going to play out that way.
>> Okay. But then let me ask you this. Obviously, Facebook started on the premise of connecting with your friends on the internet. That's gone well in some ways, but really not well in others. And I'm not even talking about Meta scandals. I'm talking about the fact that there was a clear shift on the social internet where people stopped caring about what their friends were doing and they just wanted to watch fun stuff. And we went from the friend feed to the For You feed. And we don't really speak to our friends online. Maybe we do in messaging apps. I mean, we do in messaging apps. So, I'm curious what you think, and this is kind of more on the weirder side: what does Facebook become if, instead of enabling us to use the internet to connect with our friends, it basically helps us replace our friends with AI friends?
>> Yeah. I mean, in some ways this is a very fundamental debate, right? It dates to the earlier days of social networking. And I do think about it also, you know, with young kids, trying to extrapolate how this plays out. It feels like there's a definite downside scenario that you're alluding to, where it accelerates the trend of loneliness, if you want to call it that, and people being more insular and spending more time with machines, from computers and phones on down, so as to not need to spend time with friends, not need to go outside, not do face-to-face time, because we have all these new tools that make it more fun, in some ways, I guess, to interact with people. It's like, in his ideal world, and he would never, of course, frame it this way, but if you're just trying to extrapolate out what he's suggesting: is it better to not go outside and play with your friends, and instead play a game with your friends that's built by AI, personalized for the two of you in real time? And again, that sounds dystopian. I want to be mindful that there are various different ways, obviously, that people will use these technologies, and some of them will be incredible and some of them will be good. And I'm not of the mind that social networking, and even connectivity, is fully to blame for the loneliness epidemic and whatnot. I think it's part of it, but I don't think it's fully to blame for all that. And I do think it can help in some ways. The end state of that, though, is what you were hitting at earlier, where it's not necessarily another human friend that you're playing with but instead an AI friend who's just been generated, and what that means for your own internal psyche.
>> Yeah, it could end up being that we become less lonely but more insular, which is not a pairing you would anticipate. We had Elon Boro on the show; she's an AI researcher and CEO who used to work at Meta in the FAIR organization. And she brought up this great point: there are social limits to what you can do with friends. You really can't keep calling a friend at 3:00 a.m. if you're working through a problem or something like that, but you can make unlimited phone calls to AI, and that's fine. So humans have limits; AI doesn't. Is that good? We don't know yet, but potentially. So, all right. Again, I think there's...
>> You can make the both-sides argument on this. And I do think there will be good elements to it, but that nuance is way too hard to try to explain in an open letter, right, as he's doing here.
>> Yeah, that's right. He prefers bluntness, and he does end with a very blunt mic drop: "Meta believes strongly in building personal superintelligence that empowers everyone. We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I'm excited to focus Meta's efforts towards building this future." So obviously Zuck has had these moments where he's said video, or VR, is the thing we're focused on, and he puts all the company's attention, and himself, on the project. Clearly that is now AI. It does seem like this is the recruiting pitch, by the way. It's like: we've got the money, we have the expertise, we have the infrastructure, get on board, because whatever else you're doing is not going to be around for much longer after we're done with this project. And that is sort of the final note that he sends to the public, and maybe to those who've spurned him, and maybe to those who are still considering his offers.
>> Yeah. And I wrote about this a little bit when the news was first coming out, after the Scale AI acquisition, about what they were trying to build with the superintelligence labs. I'm skeptical that it will work, and we've talked about that. But I think there's a little bit of the Lucy-pulling-the-football thing that Meta has long had, both with the public, in terms of, yeah, pivot to video, and all the news organizations rush to pivot to video, and then Meta, Facebook, decides they want something else, their algorithms prefer something else when it comes to engagement, and so they basically have the media organizations then do something else. But the same thing is true, I think, with both users and also their employee base, right? Where it's like, yeah, we're going to focus on this thing. And obviously they pivoted the entire company to be around the metaverse, and we're not hearing too much about that these days, aside from as it relates to AI. And again, they're doing some hindsight narration about how, oh, it was always part of the plan, and Zuckerberg has said in the past, right, I maybe didn't realize that AI would take off ahead of the metaverse taking off; we thought it'd be the inverse. But there was always a plan that the two of them were going to interact together. And so, again, I think that's the argument made in hindsight. But if you're anyone who joined Meta for the metaverse buildout, and this was going to be the future, what are you thinking right now, as you have colleagues who are getting offered hundreds of millions of dollars, potentially, to build out AI? I don't think any of the metaverse team is getting those packages right now.
>> Yeah, you'd be sad. I mean, they changed the name of the company to Meta. You've been laboring away on the VR goggles, and some guy who's been working for two years on AI at OpenAI just got $100 million to come over. I mean, I'm sort of being facetious here, but that wouldn't make me happy. Although, no...
>> I sort of think you respect the game. Like, if LeBron James comes to your team, you know he's going to make you better. But I don't know. It's hard to see all these LeBron Jameses coming over while you're still in the metaverse unit. So...
>> The one other thing I would just add to this, that I was just reminded of, is that we often bring up the metaverse element of it as an easy one to mention, something Zuck was obsessed with and then quickly changed focus from. But if you remember, he didn't change focus right away to AI, because it obviously took a little bit for it to come around. At first, if I remember the order right, at one point he was highly focused on encryption and personal, one-on-one conversations. He wrote this whole memo...
>> Pivot to privacy.
>> Yeah, pivot to privacy. And so that was a focus for a few months, it seems like, because then crypto stuff started happening, right? And they had a whole team that they spun up that was early on it. Kevin Weil was on that team, if you recall, and there were many other people within Meta at the time who were on that team who, it seemed, were going to be at the forefront of a new crypto boom that, of course, they were just way too early for, if you want to consider it happening now. And Libra, I think, was the name of the product they were working on. But again, that was a focus for a while until they just totally got rid of it because it was not advantageous to them anymore.
>> That's right. And so pivot to privacy was their attempt to capture where social sharing was going. Like I mentioned earlier, we weren't sharing in the feeds anymore, and all the sharing was happening in messaging apps. So that was their way of saying, that's where it's going, and we're going to try to be at the forefront of it. And that, in some ways, worked. They have WhatsApp, and that is the leading messaging app in the world. They had obviously bought it years beforehand, but they clearly put a lot of product focus there, for better or worse; they obviously also just introduced advertising after promising they would not. So I think there is some merit in what looks from the outside like running around like headless chickens from trend to trend. There are definite stories of big companies in the past who've been like, oh, we're too cool for that next new trend, and then they're making scanners and copying machines as the world has moved on. So I think Meta looks a little ridiculous for the strategy, but there is some wisdom to it.
>> Yeah. And I do think that, and this is a big part of Zuckerberg's pitch, right, because they have this fundamental underlying advertising infrastructure across their different social properties, they can take a lot of swings, whereas other companies cannot. And if you remember, Google used to have the notion thrown at them that they were a one-trick pony, right? That they could only ever do search ads, and that was the only business. Now, it's still obviously the primary element of the business, but they've done a pretty good job, I think, of diversifying in many ways, right? They have the cloud business. They have Waymo now, which is up and coming. They have all different sorts of ways they can potentially make money. YouTube, that's a different form of advertising, but still, it's a different product line now that they have, that's fully monetized, which is incredible. And Meta still hasn't found that yet, right? They've been trying with the metaverse stuff and selling VR goggles and selling different types of products, but still, almost all of it comes from social feed advertising right now. But again, you want to take these swings while you're in that position of power. And if you believe that business is going to get disrupted in some way, be it by AI, or people using other products, people spending more time on YouTube or Netflix and whatnot, you want to get it done while you can. Take all those swings while you can.
>> Okay. Okay, so this entire conversation we've been talking about Meta's superintelligence effort, personal superintelligence. And, as with many conversations, we have yet to define it. But that is intentional, because we are going to make time to ask: what is this word? Why do people keep using it? And what could it possibly mean? We're going to do that right after this.

And we're back here on Big Technology Podcast, M.G. Siegler, first Monday of the month edition. You can find M.G.'s writing at spyglass.org. And speaking of the definition of superintelligence, M.G., you actually have a piece on this, "The Endless Rebranding of AI," basically saying that for a while AI was good enough, then people went to gen AI, then they started talking about AGI, and now AGI isn't enough, and now we're talking about superintelligence. What's next? What does superintelligence mean, or is it purely marketing hype?
>> I mean, obviously the term has been around for a bit. It first really came on my radar, I think, when Ilya Sutskever spun out and made his company, called Safe Superintelligence, right? And presumably, obviously he did that on purpose, but presumably he did it also to help differentiate from what OpenAI was doing, right? At the time, all the talk was the march towards AGI; famously, they had it baked into their contract with Microsoft, which is still an ongoing dispute, it sounds like. And so they're deciding, okay, well, AGI is nebulously defined as computers reaching human-level intelligence, though some would disagree with that, and then superintelligence is a level above that, right? It's going beyond what humans are capable of. Again, depending on what you read, depending on who you talk to, people disagree about this. Part of the problem is no one ever really defined what AGI itself was. And then, going back again, as you did, with AI itself, that was the all-encompassing thing, but then we got all different sorts of subsections of it, because people wanted to be more focused and thought AI was too nebulous a term. And then AGI came along, and then AGI was too nebulous a term, and so now superintelligence, and now superintelligence is too nebulous a term, because we have personal superintelligence, and we have safe superintelligence, and Microsoft is working on their own variant of superintelligence that they want to talk about. And so it's getting a little silly. It just is obviously all marketing at this point. And so I do feel like at some point you've got to pin people down and say: what are you actually trying to build, and what is the outcome of that going to be?
>> Yeah, I think we can agree here, and this is something we've been talking about in the first half: it's the same damn thing. It's the same thing. Everyone's building the same thing.
>> But people would argue with you about that, right? Because they'll say, no, AGI, again, is sort of human-level intelligence, and we're close to that, maybe, but we haven't quite gotten there yet, with all the various math tests and the different tests that they've built to prove it. And regular superintelligence, sorry, not safe superintelligence, is just a level beyond that. There are lots of people who would argue about that, and lots of people who would argue that they are the same exact thing.
>> I'll stand on the table here and say it's the same thing. I think this is really what it comes down to: if there were different techniques to build AGI and superintelligence, I would say, okay, sure, Facebook's building superintelligence over here, and OpenAI is stuck building stale old AGI over there.
>> But they're just training large language models. Yeah. And it's wild that we're trying to define the offshoots of any of them when we haven't defined the first part of it, dating back to the original definitions of these things. So it's a little bit silly, and that's obviously why I wrote the post, once I saw that Microsoft was jumping in the ring as well to try to come up with their own individual branding for superintelligence. And then, of course, you could see it coming. So Zuckerberg's new memo that we're talking about, it's titled that, right? But he started dropping this earlier. The first time I heard him mention it was in an interview with Jessica Lessin of The Information, where he just started casually mentioning personal superintelligence. And it's like, oh, is he just trying to say something offhandedly about how they have a sort of micro focus? But no, he just kept saying it over and over again, so clearly they were going to try to make this a branding thing for Meta.
>> Yeah. I mean, I think it's very exciting when the word super is in there. I mean, really.
>> All right. When are we going to get ultra intelligence?
>> I'm ready for ultra.
>> Super ultra mega intelligence. It's going to just blow away what we had. Remember when we were talking about AGI? Forget that.
>> We were such losers back then. Now super mega ultra intelligence. That's where it's at. One interesting thing I saw
in your story was that Microsoft cannot actually work on AGI legally. This is what you write: a pesky OpenAI contract says as much, which sheds light on the talking points that Mustafa Suleyman delivered over the past year or so, that Microsoft is happy to cede the frontier model work to OpenAI. So maybe this is meaningful: Microsoft can't build AGI because OpenAI is doing it, but it can work on super intelligence because that's not in the contract.
>> Yeah. And this, I feel, is definitely undertalked about. It's been reported before, but now, because OpenAI and Microsoft are in the midst, in the heat, of a new negotiation over the AGI term and how that's going to play out for their relationship going forward, it has come to light again that apparently part of their original contract, the quote-unquote clause in it around AGI, was that Microsoft said they would not themselves go after AGI and work on that. And in hindsight, that brings up all sorts of interesting points. When you've talked to even Mustafa Suleyman, he's brought up the point of, yeah, we're happy to let OpenAI work on the cutting-edge foundation models and we'll sit back and work on more custom-tailored ones, when the reality was that they're technically not allowed to work at the cutting edge if it's going to lead to AGI. They could say maybe it wouldn't, but everyone else would probably disagree with that. And I also think, at an even higher level, part of the reason you even saw Sam Altman pivot his talking points from AGI to super intelligence at one point, in various memos that he's written himself, was because of the clause in the Microsoft contract. It was clearly a point of tension between those two companies. And you've heard Satya Nadella talk, I think on podcasts, about, I think at one point he called ridiculous the notion that we're close to AGI, right? Because obviously he's very incentivized for them not to be close to that, since it would potentially sever the business terms with OpenAI. So it was almost like, was Sam Altman throwing him a bone by starting to talk about things other than AGI and rebranding as super intelligence? But yeah, if the clause is in there originally that Microsoft is not able to work on AGI, now they're happy to have this rebranding too, where Mustafa Suleyman can work on all the variants of safe super intelligence that he wants, because it's not AGI.
>> It sort of brings us to this point that we're talking about: how Microsoft and OpenAI are linked, and they have these weird contractual terms. And you also point out that something that's not talked about enough is just that big tech owns so much of this AI moment, even if they go by different names. So this is the list that you put together: Nvidia owns stakes in OpenAI, xAI, Mistral, Perplexity, Cohere, Scale AI. Oh, that's a bunch. I didn't realize they had stakes in all of those.
>> It's even more than that. Those are just the major ones that I was pulling out. They own so many stakes in everyone.
>> So Microsoft owns stakes in OpenAI and Mistral. Apple owns stakes in none of the big AI startups, but there's potential for, you know, maybe Perplexity. Okay, I'm going to let go of that dream at this point. Google owns a stake in Anthropic. Amazon owns a stake in Anthropic. Meta owns stakes in Scale AI and Safe Superintelligence now. And Oracle owns a stake in Cohere. You say this is a lot of hedging. Does that mean basically that if this AI moment takes off, big tech is just going to be good, whether they build super intelligence at home at Meta or whether a startup is actually able to do this?
>> I mean, obviously, to be clear, they want to do it themselves, right, and they want to capture that full opportunity. But it is interesting to me that this isn't talked about all that often, except when new fundraises happen, in particular with Anthropic, because Amazon and Google own such relatively large portions of that company. And famously, Anthropic apparently built into their fundraising documents that those companies can only own up to a certain threshold, because they didn't want to be beholden; famously, when Dario left OpenAI, there was some talk that he didn't like the Microsoft element getting involved with OpenAI and what that was going to do to the mission. But yeah, all the big tech companies, aside from Apple as mentioned, basically have their stakes in these different companies, some of them with multiple stakes now, because these companies have had to raise so much money. And I do think, if you were just to extrapolate out to the ultimate success state of AI and say that OpenAI and Anthropic and maybe a couple of others are the next multi-trillion-dollar companies: if, say, Microsoft, in the negotiations they're going through right now, ends up owning one-third of OpenAI, and it's one day worth $10 trillion, that's meaningful money even to Microsoft. It's already meaningful money to them, because that company is so highly valued, but it's not more meaningful than a lot of their other core businesses. But again, these are relatively early days, if you believe this is early days of AI, and if these players hold these giant stakes, they will end up being a huge element of those companies. I was reminded of back in the day when Microsoft invested in Facebook before it went public, and people were up in arms about that deal, thought it was such a bad deal, like Steve Ballmer was doing another boneheaded thing. It ended up being one of the best deals he ever struck. Easy, of course, to say in hindsight, but I think it was a savvy move, because they did it alongside a partnership; I think it involved ad sales and various other things at the time, maybe Bing was involved as well, it was a long time ago. But I do view it through that lens. Their mistake was selling off that stake in Facebook a little bit after it went public, whereas now it would be worth billions and billions of dollars. And again, if Microsoft, if Google, if Amazon hold on to these AI stakes, and these companies end up being worth trillions of dollars, the stakes themselves could be worth trillions of dollars down the road.
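The hypothetical here is simple arithmetic; a minimal sketch using the illustrative figures from the conversation (a roughly one-third stake, a $10 trillion valuation, a combined ~36% of Anthropic), not actual ownership terms, and with the Anthropic exit valuation purely assumed:

```python
# Back-of-envelope value of a hypothetical equity stake, using the
# illustrative numbers from the conversation (not actual terms).
def stake_value(ownership_fraction: float, company_valuation: float) -> float:
    """Value of an equity stake at a given company valuation."""
    return ownership_fraction * company_valuation

# Microsoft hypothetically converting to ~1/3 of OpenAI,
# in a future where OpenAI is worth $10 trillion.
msft_stake = stake_value(1 / 3, 10e12)
print(f"${msft_stake / 1e12:.2f} trillion")  # → $3.33 trillion

# Google + Amazon combined at ~36% of Anthropic, if Anthropic
# one day reached, say, a $1 trillion valuation (pure hypothetical).
goog_amzn_stake = stake_value(0.36, 1e12)
print(f"${goog_amzn_stake / 1e9:.0f} billion")  # → $360 billion
```
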
>> Yep. You always got to get Bing involved. That's important. Can't leave Bing out of the equation. But it's interesting, because it really underscores how much money, resources, and attention are going into this; we just spoke about how much money and attention Zuckerberg is putting in. And we talked about these massive investments: when we say Google has some of Anthropic and Amazon has some of Anthropic, it's actually, I think, $3 billion and $8 billion respectively. They own, you had this in your story, something like 30% or more of the company. Actually, Google and Amazon own more of Anthropic than Microsoft owns of OpenAI. So it's wild. And I just wonder,
>> I mean, if we go back to everyone building the same thing, is there a way that these bets pay off? What would it take? You've spent a lot of time as an investor. What would it take for these bets to pay off? What do we need to see for this money to be money well spent?
>> So, just to be clear: Google and Amazon combined, I believe, if my math from all these court documents I went through is roughly correct, own something just north of 36%, whereas Microsoft, potentially in this scenario, would own around 33% of OpenAI, if and when they can convert to actually have equity ownership in the company. To answer your question, though: what would it take?
>> It takes a business model, first and foremost. I mean, both OpenAI and Anthropic certainly seem to be growing well from an ARR perspective. Their models are obviously slightly different, but they're doing a good job selling their wares in different markets. It's certainly not enough to turn either one of them profitable at the moment, though. So they're going to have to do a number of different things. Either they keep growing at the incredible growth rates they've been growing at, and that's possible up to a certain extent, or, more likely, they're going to have to add other layers to their business, other stools of their business, to figure out how to make the money work. I think the main wild card, though, is that, much like Google back in the day, it might end up being that these companies, in the ultimate success state, if they're able to reach it, end up creating new models. Not necessarily models we've never heard of before, but much like Google, and even Meta to some extent, which basically created new forms of advertising that ended up being more lucrative than previous forms: with search ads because of intent, and with Meta because of the feed. These are different forms of an old-school version of monetization. So what does that look like in the future version of AI? OpenAI has famously said, right, that Sam Altman didn't want to go down the path of advertising. I do think they will eventually go down that path, but I also chalk it up to the fact that, if they go down that path and it's meaningful to them, it will look different than what search advertising has historically looked like. There will be a part of that, because obviously search is a part of ChatGPT right now, but it probably has to look different. And again, you ask what it takes to make these companies ultimately successful: I think it's going to be multiple stools of their business. Not just selling APIs, or access via premium versions of their products. They're going to have to have a lot of different things, including probably devices, right? Obviously, we know OpenAI is working on that. But I do think different flavors of advertising, newfangled versions of old business models, is what leads to their success.
>> With the investor hat on, with the VC hat on, how do you try to model that out?
>> I mean, again, you can model out where you think the growth rate is, just of the current business, and what that looks like. And I think OpenAI has said in their models that they're going to get to profitability by 2029.
Again, having had some experience, not with OpenAI in particular on the investor side, but with other companies, just generic companies, those models with 5-year or 10-year horizons never play out the way they're modeled. Now, there could be upside to that; maybe they play out better in some cases, but they're never exactly like it, right? It's just trying to paint a picture of how you can get to a place where you could, say, be profitable even while you're burning billions and billions of dollars. This is just the extreme version of that with AI,
>> right? So, I spoke with Ravi Mhatre from Lightspeed for the Dario profile that I wrote, and I said, "Tell me how you thought about this investment." I'm going to get it just directionally accurate, not precise, but he basically said, "Well, we took a look at the entire pool for knowledge labor and then said, could this just either augment or replace that?"
>> Um, yeah.
>> And he said the market is $15 to $20 trillion. He goes, you work backwards and just say: at $60 billion or $100 billion, could you get a venture-style return? You absolutely could.
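That top-down reasoning can be sketched as arithmetic; a rough illustration using the figures quoted here (the $15-20 trillion knowledge-labor pool, a billion-dollar check, a $60 billion valuation), where the capture rate and exit multiple are purely assumed and are not Lightspeed's actual model:

```python
# Rough top-down TAM sketch using the figures quoted in the
# conversation; capture rate and exit multiple are illustrative
# assumptions, not the investor's actual model.
knowledge_labor_tam = 15e12          # low end of the $15-20T labor pool
capture_rate = 0.01                  # assume AI captures just 1% of it
implied_revenue = knowledge_labor_tam * capture_rate  # $150B/year

check_size = 1e9                     # the reported billion-dollar check
entry_valuation = 60e9               # at a $60B entry valuation
ownership = check_size / entry_valuation  # ~1.7% of the company

# If the company one day trades at, say, 10x that implied revenue:
exit_valuation = implied_revenue * 10
final_stake = ownership * exit_valuation
multiple = final_stake / check_size
print(f"stake worth ${final_stake / 1e9:.0f}B, a {multiple:.0f}x return")
```

Even with a tiny assumed slice of the pool, the back-of-the-envelope multiple clears a venture-style bar, which is the shape of the argument being described.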
Sometimes it's about how you size the markets top down. And then he told me that they put a billion-dollar check into Anthropic. So it's just fascinating how unknown this all is, and what wild projections you need to get to the numbers you have to get to.
>> I'll give you one
just example of a last-generation company where this played out, that I was a small part of, which was around an Uber investment. You're looking at the market, and they just had the black-car market at that point, and it's like, how do you do what you're talking about, basically the TAM analysis, right, the total addressable market? How do you think about that? With Uber back in the day, there always had to be something else to it. There was the early notion of what UberX would become, where it democratizes the model more and it's not just about black cars, but it wasn't fully baked yet; it wasn't rolled out yet. So you sort of had to buy into the vision of it. It ended up obviously working out that way. But there were still other elements that were key to what they were doing, including things like Uber Eats, which came along, which you never would have envisioned was necessarily a part of it. There was some inkling, like, could they be the next version of, again, a democratized FedEx or something like that, because they have all these drivers on the road and you can move parcels and different things around. But it's all just theoretical. So, yeah, you try to do these market analyses and figure it out, but it's all very silly to look back upon at the end of the day, because no one actually knows, including the companies. It
almost always goes back to the notion of: build a good product. If people love using it, they're going to continue using it. That doesn't necessarily mean you're going to figure out the right model for it, but if you have enough users and you figure out a way to monetize those users, it's going to be a massive opportunity, obviously. And at least the good news for these AI companies, unlike some of the other, if you want to call them, VC-subsidized businesses back in the day, is that they have models that are working, right? There were so many companies, even just a few years ago, where, again, it was VC-subsidized; they had no actual models, and it was just the let's-get-to-scale, dot dot dot, profit thing, right? Now, at the very least, money is coming in, and it's just a matter of whether they'll ever be able to slow down the spend they have to do to train these models, in order to get to a scale at which they can actually, finally, equalize that a bit.
>> And in that context, how would you
analyze the investments in people that they're making? Because think of it this way, something Mhatre told me: Amazon went public at $400 million.
>> Mark Zuckerberg is handing out reportedly billion-dollar offers to people.
>> Yep. Yep.
>> So, what would you need to do to get the payoff on that?
>> This is where I am sympathetic to Zuckerberg's arguments. He's made the argument, and I wrote a post about this, that it essentially boils down to: it's all relative, right? If you are Meta and you're spending, whatever it is, $75 billion on capex per year right now, on infrastructure buildout for AI, does it not make sense to spend some subset of that on the people who can actually make it work? Then it just becomes this math equation of, well, what makes sense? And you could certainly make the argument that, to date, the human level of investment has been way under-indexed versus the capex, the infrastructure. And it's a weird dynamic, for reasons we were talking about before, where he's creating this disturbance in the market, because it's totally changing the historical norms in terms of compensation. So you can have people who are getting paid what the quote-unquote normal rate was for an engineer before, which is, say, in the hundreds of thousands of dollars at one of these tech companies, and now you have people getting paid hundreds of millions of dollars, potentially, to come work on this. And again, trying to be sympathetic to Zuckerberg's position here: if you need to catch up in AI and you're spending $75 billion plus a year on infrastructure, the amount you're spending on people, even with these hundred-million-dollar contracts, is going to be such a small subset of what you're spending on infrastructure. And if those people are a key ingredient to making it work, you've got to do it,
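The relative-scale argument is easy to sanity-check; a rough sketch using the figures mentioned here (the $75 billion capex number, $100 million packages), with the number of such hires purely an illustrative assumption:

```python
# Sanity check on the relative-scale argument: star-researcher
# compensation vs. the capex figure cited in the conversation.
# The number of such hires is an illustrative assumption.
annual_capex = 75e9        # Meta's cited ~$75B/year AI infrastructure spend
package = 100e6            # a reported $100M compensation package
star_hires = 50            # assume 50 such hires (hypothetical)

people_spend = package * star_hires            # $5B total
share_of_capex = people_spend / annual_capex   # fraction of capex
print(f"${people_spend / 1e9:.0f}B on people = "
      f"{share_of_capex:.1%} of the capex budget")  # → 6.7%
```

Even under a generous assumption about how many nine-figure packages get handed out, the people line stays a single-digit fraction of the infrastructure line, which is the point being made.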
>> right. And so we've already poured one out to the poor engineers who are working on the metaverse. But part of the title of this episode is layoffs and payoffs. What do you make of the fact that we're seeing this moment in the tech industry where literally tens of thousands of people are being laid off as these mega offers come in? It seems like they don't really need to lay off those people from a financial standpoint. So why would they cut so many? I mean, thinking about Microsoft in particular.
>> Yeah. Because Satya Nadella wrote a memo about this, an internal memo that he released publicly, I'm sure because it was going to leak and they wanted to get ahead of it anyway. He's basically trying to explain it: I think the number is now 15,000 people in total laid off from Microsoft over a couple or a few rounds this year, and that's at the same time as, like we're talking about, they're spending insane amounts of money on capex, and to hire people like Mustafa Suleyman, acquiring his company to bring him over. So how do you square that circle? Satya Nadella basically tried to: he called it an enigma, I think, of a time, and said it doesn't make a lot of sense on paper, but that basically we're in this transformational moment with AI, yada yada, what everyone says. But I do think there are a number of things going on here, one level deeper. One: businesses in general are looking at the landscape and figuring out whether they need all of these people. Tech companies in particular, almost all of them, now have around 100,000 or more people on their payroll, and a lot of those people are doing things which, the unfortunate truth is, the internal powers that be, the leadership infrastructure, probably view as, not that they're bad people, but they're doing stuff that was last year's innovations, or stuff that leadership might think AI will eventually be able to replace, if not now, then sooner rather than later. I also think there's a part of it where, look, they know that Wall Street at some point is going to get skittish on this capex spend, especially if the AI revenue isn't growing in concert with it soon. So I do think this is a way to throw a bone to Wall Street and say: look, even though it looks like we're being wild on our capex spend, we are being financially prudent; we're thinking about how we organize the business overall, and that includes letting go of some people we've had previously that we feel we don't need anymore, while we continue to bring on new people that we do feel we need, from the AI perspective, and building that out. And so the weird thing about how this plays out, to me, is that it's almost like a microcosm of the overall debate being had about, as I titled my piece, sort of the haves and have-nots.
>> A great... No, you write "the have lots and the have nots," which I think is just... it really...
>> It's above just having; it's having more money than, as I put it, I think, at one point. Basically, they're getting offered more money than anyone gets offered for any position, including the CEOs of these companies. Maybe there are some star athletes who make something comparable when you consider their endorsement deals alongside whatever they make in salary, but it's just an incredible amount of money. And how do you weigh that against these layoffs? Again, I think of it as a microcosm of the world in which we live, where you have the extremes of society, the ultra-billionaires, while people are struggling on the ground on a daily basis, and the microcosm that can exist within a company itself. What does that lead to in terms of internal politics and weird inner-company dynamics? I think that's part of why Satya Nadella and others have written that basic memo; they're trying to get ahead of that. And lastly, I would just say, because we've seen this a few times now from Andy Jassy and others, they're basically giving the message that, look, you've got to incorporate AI into what you do, or you've got to get off board right now, because this is coming and this is how we're all going to do business. So do it right now or else you're gone.
>> Yep. You go full Tale of Two Cities at the beginning of that piece: it was the best of times, it was the worst of times. I mean, that is the way to start that piece. Okay, so let's end here. AI for writing: it's pretty controversial. I think AI is getting to the point where it's starting to write adequately. Not well, but adequately. I don't use it for writing, but when I read AI-written stuff, I no longer have a gag reflex. But you write something, I think, very spot-on about what it means for our society, or our lives: that writing is not just a way to communicate information. It's a way to organize your thoughts. It's a way to think. We focus so much on the output of AI writing, but very little on the input, and that's where it's going to hurt if people start relying entirely on these AI tools to convey their thoughts. So take us through your thinking there, and talk a little bit about whether you think AI writing is going to be a net positive or negative.
>> Yeah, this came from me just trying to extrapolate out how I would view using AI to write, which I don't right now, and asking, would I ever? I was trying to think through how good the technology is now, and to your point, it seems like it's getting good enough for a lot of different types of writing. I've also been thinking a lot about one thing I hate, which is email, as many people do. I've long written about my dislike of it; famously, a long time ago, 15 years ago, I quit doing email for a long stretch, and I wrote about it in TechCrunch when I wrote there back in the day, and I got a lot of flak for that story. But I think it also resonated with people, because the pushback was, well, you actually need to do it. And I think we're getting to the point with AI where you probably don't need to do it anymore, right? Because you could basically have AI write your emails for you. And then I do think, eventually, the way it plays out is that you have AI reading those emails on the other end, and it ends up being these bots writing to other bots and then sending you a to-do list of whatever the action items from that need to be. So I was thinking about it from that perspective. But then I go to my actual love of writing, and what I do and why I love doing it. I would never use AI to write, say, an article, even one about AI, because that would take away the value I get out of actually writing, which is that, like for many people, it helps me think; it helps me form my own thoughts. And I do think that's true of a lot of people, even if they don't necessarily realize it. People who write for a living, or write a lot, do realize that real element of it, but people who don't probably gloss over that fact. As a society, I would imagine it plays out in a way where maybe some level of writing, like email, the tedious stuff, does get automated away. But certainly the creative endeavors, and even just the memos that help you formulate your own thoughts, those don't go away, because it's just as much about the process of writing as it is about what you put down on paper: what goes on in your head and how you formulate those thoughts.
>> It does make me wonder, though. I see the value in writing, obviously, but I also know what's happening right now, where there are these great screenshots that go around Twitter of professors scolding their students for being late on their assignments and ending it with "ask ChatGPT," because they're copying and pasting it directly out of the chatbot. And those students are going to be like, okay, well, here's the assignment, and then they're going to paste their assignment and it's going to say "ask ChatGPT." So we're just going to be a society, I think, of people shuffling AI-written text back and forth. Which, given the amount of tedious stuff that we've been writing in our lives, maybe you're right, might not be the worst thing. But I think there is that risk: we're not going to think as deeply about things anymore without writing.
>> Look, the last thing I would say about that is that student cheating is nothing new. It's always existed, right?
>> It didn't just begin with large language models.
>> There were calculators and everything before that. So I think, ultimately, that will suss itself out in some way. I do think it's a real issue, obviously, because of how good these LLMs are and how new this whole wave is right now. It's going to take a long learning curve to get to the point where we again reach an equilibrium of where it makes sense and where it doesn't. But I do think, at the end of the day, if you're doing something for your own purposes, be it writing or many other elements of life, you're only robbing yourself if you're using a tool to do it in an automated fashion. So maybe for some writing assignments you don't want to do, fine; you didn't want to do them, that's fine. But ultimately there's going to be something that you do want to do, or that you get value out of, and I think that will work itself out.
>> I agree
completely. All right, folks. The website is spyglass.org. MG Siegler joins us the first Monday of every month. This is part two of what I hope will be a long series. MG, great to see you again. Thanks for joining.
>> Thanks so much, Alex. Talk soon.
>> All right, everybody. Thank you for listening. Amjad Masad, the CEO of Replit, will be on with us on Wednesday to talk about vibe coding. What is it? Will it work in the long run? Is it sustainable? And what will it change? So stay tuned for that, and we'll see you next time on Big Technology Podcast.