Sam Altman’s Gentle Singularity, Zuck’s AI Power Play, Burning Of The Waymos

Channel: Alex Kantrowitz

Published at: 2025-06-14

YouTube video id: _qk4iejFXqA

Source: https://www.youtube.com/watch?v=_qk4iejFXqA

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional, cool-headed and nuanced format. So much to talk with you about this week. If you thought we were going to spend the entire episode talking about WWDC, I'm sorry to say that's not going to happen today. Instead, we have so much going on, including a vision-setting document from Sam Altman at OpenAI, and some really interesting news out of Meta as Mark Zuckerberg tries to right the AI ship. Okay, we'll talk a couple minutes about WWDC, because the company seems to be digging itself into a deeper hole. And then, of course, the image of the week: Waymos lit on fire in Los Angeles amid the protests. So joining us, as always on Friday, is Ranjan Roy. Ranjan, great to see you.

We're going to have a lot to talk about this week. Waymos are ablaze, and listeners cannot see it, but Alex is holding a TikTok-style influencer microphone, I think in the corner of a hotel room, maybe, or at a friend's apartment.

I do want to say, for those listening and watching: I brought all the proper equipment to record a normal podcast this week from San Francisco, and this is the third one I'm doing on the influencer mic, but I forgot one cable. So, one cable. That is podcast life. I think we're doing pretty good; we have serviceable audio, but the quality you expect on that front will be back to full strength next week.

All right,
let's talk about this post from Sam Altman, "The Gentle Singularity." Kind of an interesting way to put it. I'll just read the beginning: "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be. We have recently built systems that are smarter than people in many ways, and are able to significantly amplify the outputs of the people using them. The least likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far. In some big sense, ChatGPT is already more powerful than any human who has ever lived."
Ranjan, I've got to ask you. Obviously you can make a case for many of these claims as the CEO of OpenAI. But why now? Why do you think Altman feels the need to come out with this post? Because this is, I would say, a major vision-setting document from him.

Normally, when I see a blog post from the founder of a company like OpenAI called "The Gentle Singularity," something that bombastic and future-looking, I'll usually discount it as marketing content. But I actually don't disagree with a lot of what he's saying. I think he provides a pretty realistic view: in 2025 we'll see, and we're already seeing, agents that can do cognitive work and write computer code. In 2026 we'll see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots. Then he gets into imagining what 2035 could look like.
I've been a longtime proponent of the idea that innovation has slowed, that a cell phone today looks basically like it did in 2011, that for our day-to-day lives there have not been dramatic changes since the late 2000s and early 2010s, when we did see a fundamental shift in the way we interact with technology. So, oddly enough, this was kind of exciting for me, and it actually had me thinking about what life in 2035 could look like.

In this post, Altman artfully writes a response to a lot of the core complaints we see about AI. Just to paraphrase, he says: you got it to write a paragraph, now you want a novel. You got it to help a scientist in research, now you want it to come up with discoveries of its own. So, first of all, okay, who's setting that bar exactly? It's hype posts like this. So you're almost arguing with yourself, Sam.

But the
other side of this, which is really interesting, is: look, we are happy to talk about how impressive some of this technology is, but we haven't really seen it take the next step. It's amazing in the chatbots right now, but trying to apply it outside of them is not as easy. In fact, there's a new paper that just came out where they looked at a company going from 0 to 30% of its code written by AI, and a key measure of productivity only went up by 2.4%. Now, that's billions of dollars in the real economy, but it's not exactly making a normal engineer a 10x engineer. So, talk a little bit about that. I understand this is the trajectory OpenAI wants to go on, and if you believe AI is going to get to the place a lot of folks are saying, then this is what we'd expect. However, how do you contrast that with the clear limitations we're seeing with the technology today?

Well, no, that's
where I think we're at that inflection
point. I think it goes back to the model versus the product and app layer. We have seen foundational advancement over the last few years accelerate to a dramatic degree, but now we're going to start seeing it applied. Everyone is working on actually getting this genuinely applicable at scale, because, as you said, a single engineer, or an engineering department, can now automate a lot more of the code-writing process, but what does that actually do to overall productivity? It's still minimal. So actually bringing AI into larger scaled systems, in our personal lives, in our professional lives, across enterprises: I think we're going to start to see that more; there's more of a focus on that right now. So again, I think over the next two to three years we see a much, much bigger jump in the way work changes and our lives change versus the last few years, when everyone was still living in the toy phase of things.
But even if it's an application issue, I'd say that I know some developers who will say that this code won't necessarily be written the way your company codes, that it will bring in legacy code you've phased out. We'll have junior developers who code things without understanding how they work, then ship them and break the app. And those issues show up in the most powerful application of this technology right now, which is coding. And clearly that goes for having it write things that work across systems. So talk a little bit about where you see the gap between what this technology is capable of and why we're seeing these issues in implementation. Part of this has to be organizational, or even at an individual level, just trying to find the right use cases. And it sounds like you believe there's a long way to go in terms of what we can do even with the current systems.

No, no, I don't
think there's a long way to go. I think we're finally working on the right pieces of it. The foundation model race has gotten boring. I mean, when's the last time any of us got truly excited by some new foundation model update? Now the things that are exciting, the things you hear about, are actual outputs and actual applications of AI. So I think we start to see that change a lot more. Again, and I've been asking this for a while: if we could all just take a breath and move away from the almost rat race of foundation model advancements and actually say, "Okay, how do we take the technology that exists as of June 13th, 2025, and actually implement it into our lives?" And we're going to get into the Craig Federighi and Joanna Stern Wall Street Journal interview in just a little bit, but I actually thought a lot of what came out there was that people, even companies like Apple, expected you could just plug in an LLM and it would solve everything. That's not how it works. Everyone's been learning the hard way that it takes a lot more organizational and systems nuance to make things work. But the reality has finally set in, with Apple understanding that better than anyone else. And now the real work can begin.
So now that Ranjan has made a principled stand against AI hype and building up the technology beyond its capabilities, I am going to continue reading Sam's post and give you all a dose of AI hype, building the technology up past its current capabilities, just for the exercise of getting Ranjan to respond to some of these claims. You'd already mentioned that in 2025 we'll see agents that are able to do real cognitive work. Altman says 2026 will likely see the arrival of systems that can figure out novel insights, and in 2027 we may see the arrival of robots that can do tasks in the real world. He says the 2030s are likely going to be wildly different from any time that has come before: "We do not know how far beyond human-level intelligence we can go, but we are about to find out." So what do you think about these predictions? Are you on board with them, having said the beginning of Sam's post is directionally on point?
I mean, 2035: given the last two years of technological advancement, it is kind of crazy to think about what life could look like by then, and it's genuinely kind of exciting, and also terrifying in certain ways. But it should be different, given what we have to work with right now. And with generative AI and large language models, I am a true believer. I don't agree with the Gary Marcuses of the world in saying the technology is not good; I think it has not been used to its potential, or in the right way, outside of chatbots. But I'm still sticking with it. Thinking about how different life could be in 2035 versus now, compare 2015 to 2025: how much has life really changed, driven by technology, in the last 10 years? I'm looking around my apartment right now, and it doesn't look that fundamentally different. The way I go to work, where I sit at work, all that stuff. I guess virtual conferencing is a big change, but other than that, it all kind of looks the same. People dress the same. By 2035, we're all wearing moon suits and have a robotic best friend.

Well, more than moon suits. Here's what he says:
Uh the rate of new wonders being
achieved will be immense. It's hard to
even imagine today what we will have
discovered by 2035. He then gives a
bunch of examples, but concludes this
paragraph by saying, "Many people will
choose to live their lives in much the
same way, but at least some people will
probably decide to plug in. I think that
means connecting their brains with the
AI." Ranjan, you talked about, you know,
wanting to live differently. Are you
plugging in?

Ah, man. Sam, you had me until there. I don't know. I have an Oura ring on my finger; I ended up getting one. I have an Apple Watch. The surface of me is now connected in many ways. I have AirPods in right now, and I wear Meta Ray-Bans when I'm walking around. So it's not injected into me yet, but... I don't know.
What do you think your outfits will look like in 2035? Will they be covered in technology? Will you have a brain-computer interface? Will you have a Jony Ive medallion on a big Mark Zuckerberg chain around your neck? What's it going to be?

I'm going full Wall-E. Get me in a go-kart, give me a big soda, and put me on autopilot. Full Wall-E.

No, I mean, I think it'll probably look a lot like it looks today. I do anticipate that we'll have humanoid robots around, but the question is how good the industry can get them, and how safe. I think humanoid robot safety is something that's not talked about enough. If one of those things goes rogue, you could have a Terminator problem. And you don't want a Terminator problem. Never again. Don't want that. That's one of the things you want to try to avoid. But look, if you do your best and it happens, no one can really blame you, right?

Yeah. I mean, you tried. You did fine. It's the fault of Congress.

All right. This is an idea
that Sam had in the piece that I thought was interesting. He writes: "If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain, digging and refining minerals, driving trucks, running factories to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different." So he's describing a humanoid robot explosion, similar to the intelligence explosion that some expect with AI. I thought that was an interesting idea.

See, I think, and I know I'm running counter to the greatest tech minds of our time here, but I don't get the whole humanoid robot thing. I think
we've debated this in the past as well.
Like to me, why apply the human form factor to robotics rather than having specialized robots that actually solve specific problems? Right now, you go to any automated warehouse: it's not humanoid robots moving around, it's robots that have been specifically designed to handle repetitive tasks, picking up boxes, moving them, placing them, pulling out items. I'm still team specialized robotic form factor versus team humanoid form factor.

I am on team humanoid. Maybe humanoid with six or seven arms, what you're describing.

Why not seven arms then?

I would go seven arms. Yeah, go seven.

No, why not make it 12? Do a full, what's it, the goddess with all the arms? Durga?

Yeah, Durga. I don't know if it's divine or not, but it was obviously a very good design decision to give those arms to Durga. I think, listen, this
idea that we have these functional robots makes a lot of sense, because those robots don't have a world model. They don't understand the world as we do, because they don't see it as we do; they don't really understand physics. They might be able to grasp things, with that hardcoded in them, but it's similar to going from hardcoded AI to a large language model, which can be conversant on a bunch of different topics. When you build AI with a world model that understands physics, objects, and how things work together, then you want to go humanoid robot, or maybe a souped-up robot that takes a humanoid form, because all of a sudden it can be functional. The idea that you can have humanoid robots, which is one form, do all these things that Sam is discussing: digging and refining materials; driving trucks, which already have steering wheels, and the robots have hands; running factories; building more robots; building chip fab facilities. That is an exceptional form. I don't think you want to go too specialized for each task, because ultimately this is a very complex world that requires complex maneuvering to be really useful.
In a weird way, I guess that's the most human-centric, or human-forward, view of it, because I would want to just rebuild and remap everything to actually be more efficient for the specialized robots. But maybe you're right: the Durga model, souped up, 8 to 10 arms. Maybe some wheels on the feet, right?

Yeah. So is anyone working on that? Boston Dynamics, I'm sure, probably. I mean, we're talking about eons of evolution; something happened, in a good way, to get us to where we are right now. It really does work.
All right. So let's conclude this by bringing it back down to earth with the final passage from Sam's article, which I think is really good. He says: "For a long time, technical people in the startup industry have made fun of 'the idea guys,' people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun." Let's pause here for a second before we finish, because this is, I think, pretty interesting. It's kind of an homage to vibe coding. There has always been this idea that so many people say, "I've got an idea for a startup," and then just never build it, because they don't have the technical talent, or let's say the charisma, to get a bunch of people around them to build it, while the technical people can just go out and build it themselves. But with vibe coding, or with AI coding, maybe it does become the age of the idea guy. What do you think?

Yeah, Sam ends with me in agreement here. I
100% agree with this. I mean, I was having a conversation with an early-stage startup founder recently who had not built a prototype and still just had a pitch deck, and I was like, to me there's no excuse for that right now. Anyone can build at least basic things right now; you do not have to have a full technical team to build a functional product. And that means anyone with an idea should be able to actually realize that idea in some form, at least a prototype, but even get to some level of functionality. I think that's the best, most exciting part of generative AI for me. So, idea guys: it's your time.
All right, final thing: let's talk about superintelligence. This is the new word. Sam says: "OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do." Okay, two questions for you. One: why is everybody talking about superintelligence now? We're going to get to it in a moment with Meta. I thought AGI was the buzzword; is that now too low of an ambition? I guess when you raise $40 billion, that is what it is. And second: you don't take any issue with this? It does seem to be, and again, you're someone who doesn't like hype, this is hype. I mean, you've got to call it out for what it is. Sorry. I mean,
again, this has been quite the emotional roller coaster for me going through this, because I've been supportive, and then we end with, to me, how is this not a bigger story: the AGI-to-ASI rebrand, artificial general intelligence to superintelligence. It's crazy. It's weird. It's ridiculous. It just happened. Everyone has comfortably moved on from AGI and started using superintelligence.

I think that's the name of Ilya's new startup.

Yes, Safe Superintelligence. Safe Superintelligence: from a pure branding perspective, that was the first inkling. Clearly the messaging worked; everyone started saying it. It absolved people from having to achieve AGI, at a time when everyone is saying AGI is already here, yet life doesn't feel significantly different. So, from a branding perspective, the fact that they've shifted the conversation and now we're all just accepting it and moving on is crazy to me, but it's happened across the industry, I feel. So kudos to Ilya from a branding perspective, and to the comms folks, whoever came up with superintelligence first as a term. You done good, or you bought a couple more years of runway.
Yeah. Well, Ilya obviously has raised billions without releasing a product. By the way, on the subject of Ilya: next week on the show, Dwarkesh Patel is going to come on, and he has some very interesting thoughts about what Ilya is up to, the type of AI he may or may not be building, and how that might help advance the state of the art. So stay tuned for that; it will come next Wednesday, June 18th, with Dwarkesh. Really fun conversation.

Okay. As this happens
though, we are seeing model improvement. And Ranjan, you asked when was the last time we were excited for a model release. It's funny, because I've sort of been the one pouring cold water over this Sam Altman statement while you've been enthusiastic about it through our conversation today. But I will say I definitely was excited for the o3 model. That model, to me, is the first model that really works and is useful in various ways in my daily life. And now OpenAI is releasing o3 Pro, a better version of the model, available initially to those paying $200 a month to OpenAI, which unfortunately no longer includes me. But there's a Substack called Latent Space that talks a little bit about why this model is an improvement and why I think it's going to help lead to better products. First of all, the post says current models are like a really high-IQ 12-year-old going to college: they might be smart, but they're not a useful employee if they can't integrate. Then, talking about o3 Pro, the authors say this integration primarily comes down to tool calls: how well the model collaborates with humans, external data, and other AIs. It's a great thinker, but it's got to grow into being a great doer. o3 Pro makes real jumps here: it's noticeably better at discerning what its environment is, accurately communicating what tools it has access to, knowing when to ask questions about the outside world rather than pretending it has the information, and choosing the right tool for the job.

When you think about
improvement in models and what that leads to, I mean, we're going to see, right? These are just the very early reflections on what this can do. I think a model that understands its environment, like I talked about, is super important; it can ask questions of people and then understand which tools to use when it has a task to do. To me, that's pretty important, and I'm excited to get my hands on this at some point.

No, I will fully agree: the next great battle in AI is tool calling. That's where we're going to see the maximum amount of actually bringing these models in. With agentic AI, all that matters is the ability of an agent to understand its context and then take the next correct action, and to do that, it has to know what tools it has access to and which tool is correct to interact with next. So I think this is huge, actually. And I'll grant that it's on the model level, so fine, models matter. But I think this is very astute: tool calling is going to be the key to agentic AI, which is going to be the key to integrating into the existing world's systems, companies, processes, organizations, everything.

What is tool calling? Just explain what that is.

It's the ability of the model to
actually call out to another tool, via API or script or whatever resource it uses to access that tool. Currently, you might be doing that manually, by actually coding out API calls. There's a world where a large language model should be able to generate that on the fly: understand which tool it should call out to, generate the connection in real time, make the call, transfer whatever data needs to be transferred, and take whatever action needs to be taken. So right now, if you use deep research, you kind of start to see it in action. What is it doing? It's calling out to a bunch of websites via the internet. Maybe it's downloading documents, and then it's going to parse them. Each one of those is an action that often requires a specific tool. Then imagine that in large systems that already exist, where you don't have to manually map out every single block of an agentic workflow. That is a huge area of opportunity right now, and I really think that's the next great AI battle, and it's at the model layer.
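The loop Ranjan describes, where a model names a tool, we execute it with the model's arguments, and the result comes back, can be sketched in a few lines. Everything below is hypothetical stand-in code: `fake_model`, `fetch_page`, and `summarize` are invented for illustration, not any vendor's API. A real deployment would parse the structured tool call an actual LLM emits, but the dispatch pattern is the same.

```python
# A minimal sketch of a tool-calling loop (hypothetical stand-ins throughout).

def fetch_page(url: str) -> str:
    """Hypothetical tool: pretend to download a web page."""
    return f"contents of {url}"

def summarize(text: str) -> str:
    """Hypothetical tool: pretend to summarize by truncating."""
    return text[:40]

# The registry of tools the "model" is told it has access to: name -> callable.
TOOLS = {"fetch_page": fetch_page, "summarize": summarize}

def fake_model(prompt: str) -> dict:
    """Stand-in for an LLM deciding which tool to call, with what arguments."""
    if "download" in prompt:
        return {"tool": "fetch_page", "args": {"url": "https://example.com"}}
    return {"tool": "summarize", "args": {"text": prompt}}

def run_step(prompt: str) -> str:
    """One step of an agentic loop: the model picks a tool, we execute it."""
    call = fake_model(prompt)
    tool = TOOLS[call["tool"]]    # look up the tool the model named
    return tool(**call["args"])   # run it with the model's arguments

print(run_step("download the report"))  # contents of https://example.com
```

The key design point is the registry plus dispatch: the model only ever emits a tool name and arguments, and the surrounding code decides whether and how to execute, which is why tool access can be constrained and audited.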
So I'll give you that.

Okay, that's super interesting. We definitely should do more on that. So folks, expect more conversation about tool calling on the show. We have so much more to talk about: we've got to talk about this Meta thing, a very quick reaction to WWDC, and the fact that Waymos are on fire. We'll have a very fast-moving second half right after this.

And we're back here on Big Technology Podcast Friday edition, talking about all the week's AI news. That was a lot of more theoretical stuff; let's get more practical on business here in the second half. Meta is making a very big investment in Scale AI. I call it an acquihire-zition; it's weird that they're not buying the full company. I actually think I said that on air on CNBC.

Hold on, can you coin that term? Trademark "acquihire-zition."

That is what this is, and it describes it better than anything else. And it's amazing: the word came out of my mouth on air, and I was like, what did I just say? I'm going to roll with it.

Stick with it. Go acquihire-zition. So, this is from The Information: Meta to pay nearly $15 billion for Scale AI stake and the startup's 28-year-old CEO. I love how
like companies are investing in other companies and getting the CEO and the top talent because of it. That's something that's happened multiple times, including with Inflection, when Mustafa Suleyman went over to Microsoft, and Meta, which has had some regulatory issues, is taking note. So this is from the story: Meta has agreed to take a 49% stake in the data-labeling firm Scale AI for $14.8 billion. Meta will send the cash to Scale's existing shareholders and place the startup's CEO, Alexandr Wang, a former Big Technology Podcast guest, in a top position inside Meta. Meta would put him in charge of a new superintelligence lab (there's that word, hit the bell) along with other top Scale technical employees.
That will put him in competition with some of his customers and friends, including OpenAI CEO Sam Altman. Another interesting point from the story: Meta CEO Mark Zuckerberg has been actively recruiting top AI researchers in an effort to boost his company's AI efforts. He was frustrated with the reaction to its latest AI offering, Llama 4, and aims to catch up to competitors such as Google and OpenAI. Ranjan, your reaction: good or bad move from Zuckerberg?

This one is tough for me. I really go back and forth in terms of good or bad. They are taking some action; they have been falling behind, and clearly they want to catch up. So it's good that they're willing to take some bold action. But again, an acquihire-zition: $15 billion for a 49% stake, just to hire the guy? I was even confused about whether it was truly an acquihire, but they announced that the chief strategy officer of Scale AI, Jason Droege, will now be CEO, and Alexandr Wang is full-on Meta. He's not a little Scale, a little Meta, in some kind of weird Elon Muskian dual role; he's all in. Is he really worth that much? Has he just been some consultation to Mark Zuckerberg over the last few months, giving him good advice? Is he worth that much?
Let me do my best to make the case for this deal, because I think it is worth it, and I don't think it's going to be the last one, because if you read between the lines, it's not just Alexandr Wang. This is from Bloomberg; Zuckerberg has been on the warpath recruiting. Let me just make sure I pull up the right story here. So, from Bloomberg: Zuckerberg has spun up a private WhatsApp group chat with senior leaders to discuss potential candidates. The chat, which they've called "Recruiting Party," is active at virtually all hours of the day. And Zuckerberg has also been hosting folks at his homes in Palo Alto, California and Lake Tahoe, and personally reaching out to potential recruits.
Okay, let me set the stage here. If the things we talked about in the first half, if Sam Altman's predictions, come true or even half true, that this is a rapidly advancing technology that's going to determine the future of technology, then you really can't afford to be mediocre for a couple of years and hope to catch up. I think that's been the alarm bell around Apple, but certainly it's an alarm bell for Meta, because after they took the lead in open source, they were surpassed by DeepSeek, and then Llama 4 was not up to expectations. So I think Zuckerberg sees this, and what he's doing is looking out at the landscape and saying there are basically three vectors you can compete on with AI. The first is GPUs, just scaling up GPUs. Meta has that; they had a ton before this moment, they used them to build some very impressive models off the bat, and they've got the GPUs. The other two things you need are data and talent. Meta has a lot of data, but Scale has proprietary data that is basically being used to help companies scale up their models beyond just using GPUs. And then the
talent thing is super important. You'll remember that Sergey Brin came on the show a couple weeks ago and said that he believes algorithmic scaling, not necessarily compute scaling, will lead to the most improvements. The way you get algorithmic scaling is by building new algorithms, and the way you build new algorithms is with talent. So to me, this is Zuckerberg clearly seeing an issue with his company and making, I would say, exactly the right strategic move to fix it, unlike another company with a flagging AI product that, it seems, is still in denial about what is wrong. So that, to me, is the case for Zuckerberg not only going after Scale and Alexandr Wang, but starting this recruiting party and going hardcore after top talent. There are reports that he's offered eight- and nine-figure cash amounts to top engineers to come over to Meta; that's in the tens of millions, maybe even up to a hundred million dollars, to a person, not a company, a person. So I think he realizes the stakes, he's making it happen, and he's shown an ability to do this in the past. That, I think, is the bull case. What do you think about that, Ranjan?

All right. I think
that's fair. I mean, again, when you look at these numbers relative to market cap, or even cash on hand, they are not existential for Meta. So in that sense, I think it's not unreasonable. I think it's also fair, from a shareholder perspective, that the further behind they fall, the more risk there is to Meta stock in terms of people questioning strategy, versus spending on a very aggressive move like this. And Scale AI kind of helped OpenAI build their models, along with other major companies, so he has been at the center of all of this; maybe that kind of proprietary knowledge also has a significant amount of value. Still, this whole acquihire model is more a sign of the times, I guess, than anything else I've seen in a while. But I buy what you're saying a little bit. And it's clear that they do need new talent, because, as you pointed out to me in our text messages off air, the product isn't exactly working, even beyond the models.
This was my favorite story of the week. So this is the most Meta, and let's just call it a Facebook thing, because this is like old-school Facebook. So the new Meta AI app, which many people may have downloaded, is a separate app that's essentially the chat-interface chatbot experience you would expect from a ChatGPT or Perplexity. But one of the small nuances is they had also positioned it as somewhat of an AI social network. Now, it was a bit unclear what that meant, but people started noticing the discover experience in Meta AI, and I actually had not even noticed it. I don't really use the app; I use it for generating images that are fun with my son. Like, he wants half animals, half dolphin, half squid or something like that, and I'm like, all right, I'm going to use Meta's image generator on this one. But if you go to the discover tab, it posts entire chats from people who probably don't realize they're being posted, like a social network. Even crazier, it posts people's voices prompting Meta AI. So it has audio clips, audio messages, voice recordings. And there are all types of crazy situations. A lot of people get very personal: asking about a legal brief for a custody battle, asking about relationships and depression. My favorite one, a screenshot I found on Reddit, is someone saying, "You're supposed to be my wingman. Where my big booty future wife at?" So, all types of requests, but people almost certainly unknowingly posting their AI chats to a public social network feed. Thank you, Meta, Facebook, for bringing some of that old unanticipated sharing activity back to social networking.
Yeah, what's old is new again. And you know, there are some funny parts of this, like the guy you mentioned who asked the AI to be his wingman and find his big booty future wife. Toward the end of the screenshot that someone shared, he says big booty and a nice rack, and the AI is like, you've got specific tastes, I like it. It's like, what kind of conversations are these? But it's also quite sad, and it goes into this conversation about people needing AIs for companionship, given that our society has done such a poor job of building and sustaining and fostering community, that people feel like they need AIs to be their friends. And you just listen to these conversations between people and these AIs, where the AI has become their companion in many cases, and it's just such a glaring magnifying glass, or I don't even know if that's the right phrase. I'm joking, I'm making light of it, but it's actually terrifying and it's sad. I mean, a lot of the queries that have been posted around are from people who are just really looking for help and answers and companionship. But do you want to hear my conspiracy theory on this?

I always love a good conspiracy theory. Okay, we have a podcast, after all. What would we be without
conspiracies?

Yeah. So I was thinking about it. On one hand, to do something this clumsy, I can actually see it being just one product manager making this decision. And I saw that there are some people who looked like they were purposefully posting, to kind of show expertise around a subject. Or, if you go through it, a lot of people do prayer affirmations and stuff, but their handle is a church or something religious. So then you're like, okay, I can actually see this person knew what they were doing. And this idea that you push your prompt into a feed and then it's getting liked and shared makes some sense. But then I was like, what's Meta's biggest threat? It's the ChatGPTs of the world owning the true human relationship, and the data and questions and queries that really get into the soul of a person. I think this is going to continue to become a much bigger story, and suddenly this idea that people are going to share everything with a chatbot gets a little scarier. The more people start thinking, oh wait, you know what happened with Meta, I'm going to stop asking ChatGPT and Claude these really personal questions, suddenly Meta is actually in a better position relative to OpenAI on that kind of personal connection to a chatbot.
What do you think?

That's a great conspiracy. That is a great conspiracy. I won't rule it out entirely. All right, let me ask you one more question about this Scale thing before we move on from Meta. This came up in our Discord. There's been plenty of reporting on why Meta wanted to buy Scale AI, but why didn't Scale AI want to sell? Are the main LLM providers getting good enough at obtaining training data themselves? Did DeepSeek signal a top for services like this? What do you think?
Yeah, I definitely think so. I also think synthetic data in training foundation models is going to become more and more of a standard practice. We've exhausted the race for real-world data. Foundation models have also gotten very, very good, and regular listeners will know that I'm definitely of the school that we don't need bigger and bigger and bigger models. So in that sense, the game Scale AI played, the service they provided, was brilliantly timed. They became a critical part of overall LLM infrastructure. But what they did, their job, actually having large networks of people manually tagging data to make it more ingestible for a large language model, or for training, is not going to be as relevant anymore. You can now have large language models do the tagging themselves. So the service they provided was not going to last. So good on Alexandr Wang and his timing in making this move.
Okay, so we talked again about Meta seeing an issue and addressing it. Now let's just go quickly to Apple, because I thought we were done with WWDC coverage, but then there's been a bunch of executive interviews that have come out, mainly with Craig Federighi and Jaws, who's their head of marketing, and it just seems like this company is deluded. They've said that they are not looking to build a chatbot, but also that their mission is to make Siri the best assistant. They said that, you know, Apple Intelligence is basically out there already, but that they're not giving a shipping date because they don't want to overpromise. MG Siegler said this in Spyglass: Apple clearly wants to frame this as people perhaps being upset because they simply don't understand the intentions here. He says they don't want a chatbot; they want to do more than that, baking AI into every product. I think that's actually a fine strategy, but only if your AI works really well. And the state of Siri, the actual shipped stuff over the past 15 or so years, suggests that it doesn't. They have to get their AI house in order and catch up. Or, sorry, to hear Apple tell it, there's nothing wrong, just a minor delay, and internally at Apple it's better than it's ever been, and you're crazy to think otherwise. That's the message Apple is giving. So talk a little bit about what you saw from the post-WWDC interviews. To me they were even worse than the underwhelming event itself. And where does Apple go from here?

Okay.
Yeah. Separate from the event: the post-event Joanna Stern interview from the Wall Street Journal, with Craig Federighi and, who was the other one, the head of marketing, Jaws. It was one of the most fascinating pieces of Apple media I've seen in a long time, because she did an incredible job of, in a very calm way, just repeating the right questions. You could kind of see Craig Federighi getting a bit frustrated but still having that perfect smile.

There was a moment where he let the smile down, and he looked like he had to remember. You see him remember to smile, and then all of a sudden, bam, the cheeks go up.

Okay, so you caught that as well. Watch it, it's a seven-minute clip. Listeners, maybe you'll catch it too, and let us know. There's one moment where I'm like, oh, he's about to lose it right now, and then total recovery and smile. But overall,
I don't understand the way they approached it. The whole narrative they're trying to push is: we'll release it when it's ready, it's not ready yet, it's a very complex problem, everyone else is just doing chatbots and we want to do more than a chatbot. No, everyone is not just doing chatbots. There are incredible AI experiences and solutions and products that span far outside of a chatbot. And they kept repeating that. Querying your own data is doable; you can upload a bunch of documents to an AI service and actually query them. Yes, it's a complex problem to do it across all of your data, across all of your apps on your iPhone, at the operating-system level. I know it's complicated. But that leads me to the one thing she did not press them on: why did you do that marketing push? Apple in the past, the beauty of the company was: here is this incredible story around a product, and here is the product, and it just works. And remember those commercials, like the girl from The Last of Us looking up someone she didn't want to talk to and finding their information quickly? They were terrible. They launched the largest Apple-style marketing campaign. Why did you do that if you weren't ready? That's the one question I felt was not pushed on.

For sure. But I think the entire conversation was just an exposing of Apple for, again, doing that, and of Apple's attitude being: I don't understand why anybody's upset, we're doing exactly what we said we would. To me there was a lack of self-awareness and humility there.

Yeah.
I think it's concerning. They could just say, I don't know. Do you think it would be better to say: you know what, we have been behind, we've screwed up, and we are going to deliver, that's all we're doing, it's a hair-on-fire situation at the company, we get it, and we're going to deliver? Or do you think it would be better if they took the line of: you know what, we are the best for privacy, we only deliver products when they're at 100%, which they kind of alluded to, so anyone, not just tech-forward people, can use them? They kind of alluded to that, but didn't really commit. Which do you think is the better direction?
It's hard for me to say. I think this "everything is fine" posture is probably the worst direction, but ultimately any other direction doesn't matter until you ship. Basically, they could have just come in and said: listen, this is something we wanted to do, we understand it doesn't matter until we ship it, so we are working hard to ship it. That's all.

Do you think they should have cancelled WWDC, given there was no real announcement?

That would have been worse, I think, because that shows you don't even care to show up.

No, no. I mean, you say: you know what, all our people are working around the clock, we get it, we are going to deliver the world's greatest AI assistant that anyone can use. So there's no reason to have a whole event to talk about operating system names and changing backgrounds on chats and stuff like that. That could have been update notes in an iOS system update. I don't know.

Don't forget about the phone app. You've got to get everybody together to talk about the phone app and the messages app.

Wait, can you explain Liquid Glass to me? Why is it exciting?

No. Okay. I want someone to get it. I saw something saying they're getting back to what they're great at, design, and Liquid Glass, and I still didn't get it, but I want to at least try.

You will try. That's the thing. You'll be forced to at some point. Okay. All right.
Before we go, let's just very quickly hit this story. We're not going to spend a lot of time on it, but it's important for us to stay on top of, which is how generative AI is changing the web. This is a story: news sites are getting crushed by Google's new AI tools. The AI Armageddon is here for online news publishers. Chatbots are replacing Google searches, eliminating the need to click on blue links and tanking referrals to news sites. As a result, traffic that publishers relied on for years is plummeting. Here are some stats. Traffic from organic search to Huffington Post desktop and mobile websites fell by over half in the past three years, and by nearly that much at the Washington Post. Business Insider cut 21% of its staff last month, as CEO Barbara Peng said the cuts were aimed at helping the publication endure extreme traffic drops outside of its control. Organic search traffic to websites declined by 55% between April 2022 and April 2025, according to data from the analytics company Similarweb. 55%, that's crazy. And Google is going to be sending even fewer visitors with its new AI Mode. Not to mention, Google is now offering employee buyouts in the search organization and other organizations, while not offering them in places like DeepMind, which does AI. We've done some reporting on this here, with my story about World History Encyclopedia, but it's very clear now that that was the rule and not the exception, and the web is in even deeper trouble. Remember we said it was kind of on life support? This is like hospice now.
Instead of "the web is dead," we tempered that with "the web is in secular decline." Now, "the web is in hospice" is definitely another direction to take it. But yeah, this is what we've been talking about forever, and it's definitely going to dramatically affect anyone who optimized for a pre-LLM world. Business Insider is the greatest case for longtime media folks. The invention of the slideshow on a website, to get an additional display-ad click for every slide you cycled through, was one of the most ridiculous but actually brilliant innovations in monetizing web publishing. That's how Business Insider operated forever, and that is not working anymore. And maybe there are going to be ChatGPT-first publishers, but trying to game Google to get traffic to show display ads to make money, that is beyond hospice. That is dead.

Done for, right?

Yeah. Overall, people having websites and interacting with them in different ways, I think the web has some room to breathe there, and it's not over yet. But monetizing display ads based on page views, that is long, long gone, especially if you built a powerhouse optimization engine circa the mid-2010s on it.

Talk about this Midjourney story.

Yeah, so we saw Disney and Universal Studios sue Midjourney.
I mean, we've talked about the New York Times' suit against OpenAI. One of my predictions has been that we're going to start to get some guidance or resolution, I think by the end of this year, on how copyright will play out, and we need it. I feel it's one of the things holding the overall industry back: not having a clear direction on what's indemnified and what isn't. But my favorite part of this: in the New York Times–OpenAI case, for people who looked into it, they were able to recreate by prompt essentially the entire text of articles. But that's still not as visually jarring as literally asking, "show Iron Man flying, action photo," and getting a photo of Iron Man. There are ones of the Simpsons, the Minions. I mean, Midjourney clearly trained on copyrighted material and returns that material. That's a problem. And there has to be some kind of resolution to all of this before people will start actually using these technologies in a proper way at a professional level.
It's sort of a perfect lead-in to our final story of the week, which is why Waymo self-driving cars became a target of protesters in Los Angeles. Time has a couple of theories here. They sourced the Wall Street Journal, which said that part of the reason the cars were vandalized was to obstruct traffic. They said some social media users suggested self-driving vehicles in particular have become a new target because protesters see them as part of the police surveillance state: they have cameras with 360-degree views of their surroundings, and that footage has been tapped by law enforcement. Other people are just saying you shouldn't feel bad for them. This is from one organizer: "There are people on here saying it's violent and domestic terrorism to set a Waymo car on fire. A robot car? Are you kidding me? Are you going to demand justice for robot dogs next, but not the human beings being repeatedly shot with rubber bullets in the street? What kind of politics is this?" Honestly, it seems to me that this is just kind of talking around the issue. I think people are afraid, or broadly uncomfortable, with AI. Despite all the progress we talk about on the show, the public broadly is not comfortable with artificial intelligence, especially as they see it run over previously protected rights like copyright, and all these companies are clearly trying to automate work in their own way. The public is just starting to really feel uneasy about it, or has for a long time, and it's manifesting itself in the physical form of burning these Waymos. What do you think?

I'm not going to attribute that level of importance to it. I don't know. You want to burn something; if you burn a Waymo, it'll get a little more traction on social media. It's also a little more visually jarring than other cars would be if you burned them. So I'm not sure about connecting it to a deep-rooted distrust of AI.
I don't know. I think it's just that people wanted to burn something, and you get a little more engagement by burning a Waymo than a Corolla.

First of all, I just want to say I do not condone the burning of Waymos. No condoning of the burning of cars. But why do you think they get more engagement on social media? It's because of this unease. It's because there's this feeling that it's Skynet.

All right, you're right. The reason behind it is more of a story, a more emotionally resonant thing that will put you on one side or another, than burning a Corolla. Again, we do not condone the burning of cars here on Big Technology Podcast. Good on our disclaimer at this point. But
I don't know, but if that's the case, how are we going to have humanoid robots? Jony Ive's pin? I mean, if people are going to burn Waymos because they're afraid of cameras...

I guess a humanoid robot would actually just fight back and not let you burn it.

Maybe not. I mean, they're not going to be programmed to fight back. All this alignment work is going to be done so they don't fight back. I think even if it's getting burnt, a Tesla Optimus is not going to fight back; it's going to let itself be burnt.

Maybe Elon's won't, but the Google ones will definitely be like, fine, whatever you need to do. But I think you're really hitting on the point here. Like we talked about at the beginning, let's just close with it. We're going to hear a lot of rhetoric about AI in the physical world, humanoid robots, all things of that nature. But there's an assumption that people are just going to allow this to happen. Even if Dario is wrong and it doesn't cause 50% of entry-level jobs to go away, it's going to change people's lives, and this is something that's happening, effectively, top-down rather than bottom-up in most cases. There's just going to be discomfort there, and people are going to keep attacking these things. I'll just say this last thing. When I was at
BuzzFeed, I did a series where I would fight with robots. I tried to steal, yes, I did effectively steal. It was so funny: I stole lunch out of a DoorDash robot. I just ripped it open and took the lunch out of it, with DoorDash PR right there. I fought a tackling robot on a football field. This was a series, and I think underneath it all was just this feeling of, I am not going to be the first. I have an urge inside me to beat the crap out of these things, and so will a good chunk of society. And I think we're starting to see the beginnings of that.

Well, it's also good that you are preparing yourself for all modes of robot combat, which could be required by 2035, according to the gentle singularity. So maybe you've got to practice.

Maybe I need to start scrapping with robots just to prepare myself a little bit. I'm not going to burn them.

No burning cars. No burning. Fight a robot. Sparring.

A little sparring, yeah. And you'd be surprised, because they can fight back in some situations. The lunch delivery robot, I beat that one easily: just ripped the top off and ran. By the way, that video played on Jimmy Kimmel two weeks in a row. Jimmy took our video of the robot crossing the street, put special effects on it, had a bus run into it, and the thing blew up. God bless mid-2010s media. It was a good time. But the football robot definitely got the best of me. Very humbling.

Watch out for that one, listeners.

Exactly. All right. So, we'll end it there. We look forward to a future where humanoid robots are among us, a gentle singularity, unless you ask the people, and then you might get a different answer. Ranjan, so great to see you again. Thanks for coming on the show.

See you next week.

All right, everybody. Thank you for listening again. I'll be back on Wednesday with Dwarkesh Patel, and we will see you then on Big Technology Podcast.