AI Backlash Intensifies, Nvidia GTC Preview, Meta’s Embarrassing Delay

Channel: Alex Kantrowitz

Published at: 2026-03-16

YouTube video id: wbQjJ1wMsVM

Source: https://www.youtube.com/watch?v=wbQjJ1wMsVM

The vibes around AI are getting ugly.
What exactly is going on? Will Jensen
Huang give the AI story a boost at
Nvidia's forthcoming flagship GTC event?
And what in God's name is happening at
Meta, where the company's latest model
is delayed again? That's coming up on a
Big Technology Podcast Friday edition
right after this. Welcome to Big
Technology Friday edition where we break
down the news in our traditional
cool-headed and nuanced format. I think
it'll be a wild, fun show for you today.
We're going to talk about why AI
is in the midst of a growing backlash.
We're going to talk about what we might
expect at Nvidia's forthcoming GTC
conference, which is going to be the big
news of the next week. We're also going
to talk about Amazon and others having
trouble with their vibe coding and what
the underlying story is there. And of
course, Meta has this big AI model that
is simply not shipping despite all the
money and all the talent they're putting
towards it. Joining us as always on
Friday is Ranjan Roy of Margins. Ranjan,
great to see you. Welcome back.
>> The AI vibes are off. This is going to
be an interesting discussion this week.
>> Yes. So, the vibes. Back in the day, it
seemed like every day there was a new AI
breakthrough that people were talking
about. Now, it seems like every day
there's a new AI study pointing toward
some ugliness in the way the public
perceives AI.
And for me, this really came to a head
this week. I don't know if you saw Sam
Altman speaking at an investor
conference, but there was a quote that
went around Twitter where he said: we
see a future where intelligence is a
utility like electricity or water, and
people buy it from us on a meter. Now,
that was one part of a quote taken out
of context. He was talking about why
they're building so much AI
infrastructure and his desire to make it
cheap. I guess utilities aren't cheap,
but basically, the more supply you have,
the cheaper it gets, and he wants it to
be cheap. Somehow the internet just took
hold of this statement and people went
bananas on it. Here's one reaction I saw
on Twitter: "Sam Altman shows signs of
being a dangerous psychopath here when
he reveals his true intentions. If you
don't have some skepticism of big tech
and AI companies, you really should
after seeing this." That felt like an
overreaction to me, in terms of what Sam
said. But it does happen, and we're
going to go through the polling numbers.
It happens in the middle of a growing
unease that the public has towards AI.
What do you think about this, Ranjan?
>> It's not very often that listeners will
hear me trying to defend Sam Altman, but
I'm going to do that right now. So,
maybe you take the other side on this
one. But I actually found this not that
offensive, in the sense that, getting
granular, companies are already moving
towards more consumption-based models.
Actually, there was a company, Clay,
which is a go-to-market AI company, that
had a big controversy around converting
its entire pricing model to
consumption-based. We've already seen
lots of reporting that even Claude's
$200-a-month plan is actually
subsidized; you're actually consuming
$3,000 to $5,000 worth of tokens on
average. People using various AI tools
and workflows and agentic processes in
whatever parts of their life, there's
going to be a cost to it. And I actually
think recognizing that it's going to be
a consumption-based model, which we can
equate to electricity. Water was kind of
a weird one, I guess. Do you pay for a
water bill, or is it part of your
apartment?
>> Part of my rent.
>> Yeah, I guess we're non-homeowners
living in New York City, where the water
bill doesn't quite hit us the same. But
I think electricity, that one's not
unreasonable to me: you're going to have
some kind of utilization. AI will be
baked into daily life, and you will pay
for it on a consumption-based model.
When you put it like that, it doesn't
sound as offensive and scary. It's just,
I mean, his communication skills. I can
only imagine what his PR team has to do
every day, because this is such a simple
point that is not unreasonable, yet only
Sam can make it sound terrifying to the
general public.
>> So, I will say I agree with you that it
is a reasonable point. And maybe it's
not Sam's communication that's doing
this. Let me take this a couple steps
further. Could it be that as AI's
capabilities grow, people's uneasiness
with the technology was inevitably going
to lead us to this point? For you and I,
right? We see what AI does, and we'll
have some data in a moment about how AI
users feel about AI compared to people
who don't use it. Spoiler alert: if you
use the tools, you're much less negative
than if you don't use them. But could it
just be that people are seeing AI's
rapidly advancing capabilities, getting
freaked out, and that is leading to some
of this negativity in the public? It is
a byproduct of AI's capabilities.
And I think this was definitely
something spotlighted in the
conversation around Sam's comments,
because I looked at all the negative
comments, or about as many as I could,
and the majority of them didn't say, oh,
what's wrong with you, you're trying to
charge us to use AI, why are you
building all this infrastructure? It was
comments like this one: "I see a future
where we don't allow people like Sam
Altman to monetize our common knowledge,
intelligence, and communication. A
future where we democratize AI and make
it contribute to the common good."
Someone else said, "Hey Sam, where did
the intelligence come from, and how was
it accumulated, and how were those
sources compensated?" Another person
said charging for AI is like the lame
old "strip a resource from a community
and then sell it back to them" grift at
huge scale. And I just think that if you
use these tools, you know they're
additive. You definitely know they do
much more than just spit out the
internet they were trained on. But there
is this uneasiness. A lot of people were
saying, well, he wants to make us dumb
and then we'll have to pay him to get
our intelligence back. To me, this is
all a product of two things. One is the
unease around the way these tools are
getting better, and I think that's
reasonable. The second part is probably
financial: this company has grown
tremendously, and the public has not
been able to participate in it through
the stock market because it's private.
But maybe that's secondary.
>> All right. Well, I would actually add a
third part. It's just who the
spokespeople are. Again, I think so much
of this branding problem comes from it
being Sam Altman and Elon Musk, and
even, I guess, Dario is kind of seeming
to be the good guy in the narrative over
the last couple of weeks. But it's who
the spokespeople are and how they're
speaking about it. To have such a
reaction to a comment like this, I
think, is more reflective of a
long-simmering disdain for this kind of
figurehead, the Silicon Valley tech bro,
whatever it is. I think that's more at
the core of this. There's the capability
side, and we can definitely dig into
that, and what that's going to mean for
white-collar jobs and knowledge work, I
think, is definitely important as well.
And on the economic side, I think there
is a validity to this idea. Somehow it
seems like all the copyright
conversations have just completely gone
away, around what all of these models
were trained on and how no one was
compensated, and now they're going to be
monetizing this like electricity. I
think there is something to be said on
that one. But I agree it's a combination
of all of the above. And in an election
year in the US, it's going to become
more and more of an issue.
>> Okay, let me just do a little thought
experiment with you. Maybe this isn't
even a thought experiment; maybe let's
just poke at this a bit. When you speak
with people about AI and they're uneasy,
what do they say? Is it, I know who Sam
Altman is and he's not a great
communicator? Or is it, this is getting
scary good and I'm worried I'm not going
to have a job in a couple of years? All
my friends who are in the tools, we have
conversations where they're plotting out
how many years they have left in their
jobs before AI starts to do their work.
We go around and talk about who's going
to have a job the longest, and who's
hardest to automate and easiest to
automate. Look, I don't think we're
about to see mass automation and
unemployment because of AI. I've talked
about this, and I am willing to hear the
other side of the argument. But it's
freaking people out, and that to me is
the core.
>> Well, see, I think there are two parts
to it. It is going to cause disruption,
and I think there's no doubt about that.
As for the timeline, I think it's not
going to be days and weeks and months; I
think it's going to play out over a long
time, and maybe that's optimistic, but
there will be disruption, and that's a
reasonable fear. But I actually hear
more of "it's not good and doesn't
work," which is almost the opposite:
that it's going to hallucinate, that
there's all this promise around it but
it's actually not as good as everyone
says, and these companies are just
trying to raise a bunch of money and
make a few people rich. And if you're on
Bluesky and not X, you'll see that even
more: the AI is bad, large language
models are not the panacea they promise
to be. So I think there's still that
entire faction and thread of thought.
It's not just that those who are in it,
as you're saying, are a bit more
nervous, or at least scared, maybe.
There's also the "it's just not good and
it's all overhyped."
>> Well, I will say the AI leaders are
definitely not helping their case when
Dario goes out there and says, "Hey,
we're going to have 50% unemployment in
entry-level work," or Mustafa Suleyman
says things like this. These headlines
circulate, and by the way, they
circulate not just in the tech press.
They circulate in Axios and NBC News and
the New York Times and CBS, as opposed
to a TechCrunch headline. Those actually
get more play, for whatever reason, than
an Ed Zitron post or Gary Marcus talking
about how AI doesn't work. Did you see,
by the way, who was viral again this
week? Andrew Yang.
>> The UBI guy.
>> He said we should stop taxing labor and
tax AI instead.
>> That's what I'm saying. This is what's
going on.
>> Is Andrew Yang the UBI guy now? This is
the Andrew Yang who ran for president,
right?
>> He ran for president on a $10,000 UBI,
because UBI was like a thing. Good for
Andrew Yang.
>> Correct. Good for him.
>> I mean, I guess the UBI conversation is
a whole other thing. But to me, one
thing that still kind of baffles me, in
terms of people saying "I don't want to
use AI": every time you take a photo
with an iPhone, it runs through a pretty
heavy AI process. Every time you do a
Google search, even without an AI
Overview, there's plenty of artificial
intelligence built in. Every time you
scroll Instagram and Facebook, you are
just steeped in AI, in terms of the
recommendations and the ads targeting
you. So it's always kind of interesting
to me that these platforms everyone
uses, outside of a chatbot, outside of
building your own agents on OpenClaw,
have so much AI built into them, and no
one complains. Which is why I think even
more that this is specifically targeted
at these people and these companies
rather than at AI as a technology
itself.
>> Well, there's a distinction
between AI that does predictions, and I
mean, it's all doing predictions, but
predictive analytics and this type of
newsfeed ranking, which helps the feed
predict what content you'd be much more
likely to engage with. That's not taking
your job. But the AI that can talk and
operate programs, and you're probably at
the epicenter of this, that's what
really gets people worried. It's not the
AI I'm using on my photos; it's the AI
that can do my work.
>> No, no, but every ad that you are
targeted with on Instagram, it's not
just traditional machine learning
powering that. It's agentic processes
pulling together all different types of
disparate data sets to show me, I don't
know, whatever my Instagram feed is
going to show me today. That's not
machine learning anymore. Again, to
Zuckerberg's credit, and we're going to
be getting into what's going
>> Yeah. We're going to get into what's
gone wrong at Meta, but one thing
they've certainly done right is they've
rebuilt their entire advertising
infrastructure to incorporate large
language models and agentic processes,
the newer vintage of AI rather than
traditional machine learning. And that's
what has, not saved, but basically
rebuilt their business on the fly, to
their credit. So I think to me it is the
same. It's just not ChatGPT, asking some
questions or generating some images,
which is what everyone associates with
AI.
>> No, but here's where I'm saying there's
a difference. Yes, they're using that
technology, but I'm talking about the
broader technology overall. Compared to
previous generations, it's much more
expansive in what it can do, and that's
where the unease comes from. Even if
Meta has sort of flipped it, a form of
AI was touching you when you were
running through the news feed before,
but it wasn't a version as expansive as
today's. So that's sort of where I'm
coming from.
>> No, no, I see that. I see that when
someone interacts with it and sees how
powerful it is. And again, we're going
to get into the whole Anthropic-Pentagon
battle, but when you think about all the
data that is collected and publicly
available, that used to be used to
target you with ads and now could
actually be instantly analyzed by the
Pentagon in real time to surveil you, I
agree. That's scarier. That's a heavier
thing to try to process. But I don't
know. I still think if we just had some
friendly, nice faces at the front of
this movement, it could be such a
different perception of the technology.
>> All right, let's get into the numbers
here. We talked about this with Olivia
Moore earlier this week, but to bring it
up again, a new NBC News poll found 50%
of voters think the risks of AI outweigh
the benefits. AI has been used by 74% of
white-collar workers and 50% of
blue-collar workers, but both had
similar reservations. All right, let us
go through the list from likability to
unlikability. I will read a bunch of
things before I get to AI. So, this is
the NBC News poll, and everything I'm
going to read before AI is more
favorable than AI: Pope Leo XIV; Stephen
Colbert, who is surprisingly likable (no
shot at Colbert, but he's number two
right behind the Pope); Marco Rubio;
sanctuary cities; JD Vance; AOC; Donald
Trump; the Republican Party; Kamala
Harris; Gavin Newsom, all more likable
than AI. ICE, that is Immigration and
Customs Enforcement, more likable than
AI. Then AI, then the Democratic Party,
then Iran. End of list.
>> That is quite a freaking problem.
>> That's a problem. I mean, when ICE is
more favorable than you, that's an
interesting challenge. Yeah, I'm glad
the Pope is still on top. Pope Leo: 42
positive, 8 negative, a plus ratio of
34. Good for the Pope. No, I mean,
there's no part of me that doesn't think
the perception of AI is a huge problem,
and I think there are a lot of
underlying challenges that we're all
going to face. But again, I still cannot
move away from the idea, talk me out of
the idea, that it's a PR problem. And I
know I always come back to that, but
it's who is the voice of it, who are the
faces of it, how are people talking
about it. I don't know if you saw, with
DOGE, a bunch of the people there were
in these, I think, public hearings or
something.
>> Yeah. So there are videos of them
talking about how they approached that
whole thing, which feels like a fever
dream from only a year ago. That was how
people perceive what AI is and does,
rather than if the public face was,
let's try to understand rare disease in
a much more scaled way that was never
imaginable before. Those were never the
conversations; instead it's a
22-year-old kid cutting government
funding to USAID just because they use
ChatGPT, or Grok maybe at the time. I
think that is how people perceive the
technology, rather than any of the
upside.
>> Okay. So, hold on to that idea, because
we are going to get to it in the Nvidia
section coming up. I'm going to read
some more numbers, and I'm doing this
not to pile on but to really illustrate
the extent of the issue, and then you
and I are going to talk about the
implications, because we both agree this
is a big problem for AI. And there are
going to be implications for pretty much
everything we discuss if it doesn't turn
around. All right, here's some more
data. This is from YouGov. They say
three times as many Americans expect the
effects of AI on society to be entirely
or mostly negative as expect them to be
entirely or mostly positive. Another 27%
expect the effects to be equally
positive and negative. Most people who
haven't used AI themselves expect it to
have entirely or mostly negative effects
on society, including 62% of those
who've seen it used but have never used
it themselves. So basically, 3x more
people think it's going to be bad than
good, and if you haven't used it, you
think even more so that it's going to be
negative for society.
>> How do they define "haven't used it,"
do you think? I know it's probably a
difficult answer.
>> They ask the people in the survey, and
you can say: I regularly use it; I've
used it before but don't regularly use
it; I've seen it used but haven't used
it myself; or I've never used it and
never seen anybody use it. Actually, if
you've seen it used but haven't used it
yourself, you're even more negative than
the people who have never seen it used
at all.
>> But that's what I mean, though. Are
they assuming it's ChatGPT or Gemini or
Claude, or are they assuming it's any of
the other things leveraging AI that
they're using, in their DoorDash order,
in all the other parts of their lives?
>> I'm assuming this is LLM, ChatGPT
stuff, without a doubt. I'm assuming
it's directly interacting with an LLM
via a chatbot, I have to imagine.
>> Correct.
>> And again, I'm not trying to minimize
this in any way, because I recognize it
is a massive perceptual challenge for
the industry, and there are lots of
underlying issues around it as well that
are highly problematic. I still think,
though, that as AI becomes more part of
processes, kind of behind the scenes,
rather than people interacting with
ChatGPT, it starts to just become more
normal, and people don't have a direct
perception of AI itself. I think they're
still associating it with ChatGPT.
>> Okay. Well, you keep going back to
this, and it's definitely not where I
would go. So, I'd love to hear you
unpack this a little and talk about why
you keep returning to the importance of
the underlying processes.
>> Well, again, it's like going back to
when you do a Google search. It's funny,
I literally had someone who is the most
anti-AI person I know screenshot me a
Google AI Overview, not recognizing what
had just happened. And Google kind of
hides whether something is an AI
Overview at this point; it's all
relatively baked into the product. When
you do a search on Google Maps now, and
they released this this week, you can
ask narrative questions, and it's going
to be LLM-driven in terms of the search
results. I saw a bunch of reactions
like, oh, they're ruining simple things
that should be simple and are actually
going to become more complicated. But
everyone a year from now is going to
start asking much more detailed
questions of Google Maps. Rather than
saying "Thai restaurant New York,"
you're going to start saying, "I'm
looking for the best pad thai within a
mile of me." And that's going to be
purely LLM-based, and no one's going to
be associating that with, quote unquote,
AI. That's what I'm trying to say here.
>> Right. And I think that goes a little
bit to our data here, which shows that
if you use it frequently, you're
actually a lot more positive about it.
If you regularly use it, your negatives
are only 26%, compared to 62% if you've
seen it used but haven't used it
yourself. So that is interesting. Maybe
as it becomes part of products that
people use and get benefit out of, they
become less overwhelmingly negative
about it.
>> I'm going to ask you: what would be a
comparable technology, do you think?
>> Do you think social media as a general
thing is comparable? I don't think so. I
never heard anyone in 2012 saying cloud
computing is going to destroy the fabric
of society or anything like that. What
would be the comp here? I mean, I'm not
a guy who wants to say it's an
unprecedented moment, blah blah blah,
because then you play into the
marketing, but to me there's no comp.
There's no comp. Electricity?
>> No. Maybe electricity, or fire. Not to
bring the Luddites into the
conversation, but...
>> Well, there's nothing.
>> No, no, because the industrial
revolution is the only comparable thing.
I don't necessarily think this is on the
scale of the industrial revolution; I
just think that's the analog. Because
I'm sure you had the weavers who saw the
weaving plants, the mills, I don't know
what the name is.
>> The looms, you know.
>> Well, I think people had looms for a
while. I don't know. I could be wrong.
>> I've got to read.
>> You're not qualified to discuss this.
>> But basically, all right, let me put it
in terms I think we can understand. If
you were doing a process pretty manually
and you saw a factory show up outside
your village, you were like, "Oh God,
I'm probably going to be out of a job."
That's the same feeling. You've never
had that feeling outside of now.
>> Well, so maybe. I like that we're going
to dig into the industrial revolution
analogy even though we've already shown
ourselves to be somewhat incompetent on
the topic, but...
>> Fully incompetent on the topic.
>> Fully incompetent.
>> Dude, the loom, by the way, goes back
to 6,000 BCE. So it's not the loom that
was automated; it's something around
that. So, industrial revolution. Let's
take that one.
>> Clearly, a great deal of fear and
apprehension, genuine problems in the
near and short term resulting from it,
and then, over time, so integrated into
everyday life that no one is talking
about it or thinking about it. It's just
how things work. Is that how you see
this playing out, or could you see a
world where LLMs are banned or something
like that?
>> They won't be banned. It's impossible.
I mean, how can you ban them, right?
>> Yeah. Well, is the government going to
come grab your Mac Mini out of your
office where you've downloaded a version
of DeepSeek and say, "All right, right
to the pokey"? I wonder. But do you
think it just becomes how most of
society works, the way manufacturing
did, whatever the next phase and
iteration of this is? To me, that
actually seems like a good analogy.
>> Yeah. And by the way, I'm not broadly
negative on the impacts of LLMs on jobs.
I just don't think it's an inevitability
that we're going to see similar pain.
There will be some pain, without a
doubt. But this idea that there's going
to be mass unemployment because of it,
I'm still not fully bought into, though
my mind is open to the possibility that
it's the case. And by the way, if we go
back to this factory thing, what about
data centers? This is another poll, from
Pew, very negative on data centers: how
Americans view data centers impacting
key areas, from the environment to jobs.
Three-quarters of Americans say they've
heard or read a lot or a little about
data centers. So they've read about data
centers. More Americans say data centers
have a negative effect on the
environment, home energy costs, and the
quality of life of people nearby than
say they have a positive effect. So as
these AI models that could potentially
do your work get better, we shouldn't be
blind to the fact that the companies
building them have to build these huge
data centers, which don't employ very
many people, could drive your energy
prices up, and could potentially harm
your health. That's where the public is
seeing this stuff.
>> So I guess, if we take data centers as
one specific part of this PR battle,
clearly the industry is not winning it.
Because, as you said, even for myself,
and we've debated this: is the current
model of expansion and the forecast for
near-term compute right, and who
benefits from it? The Stargate project,
and Masayoshi Son and Larry Ellison and
all these others, and then we're going
to take your water, and then we're going
to raise your electricity prices, and no
one can clearly elucidate what the
actual benefits are. The whole data
center rollout and planning is probably
the single worst part of this whole
debate for the industry.
>> Right. And so what do you think the
implications are? This is the reason
these poll numbers matter and why we're
spending the first half of the show on
this: these numbers don't exist in a
vacuum. They have consequences. It could
be political, it could be economic. What
do you think is going to happen now as a
result of such negative feelings about
AI, at least in the US?
>> Hold on. Here is a scenario that could
incorporate the near-term backlash
around this. Imagine a world, and we've
seen this, where data centers are not
actually allowed into communities based
on public backlash; that's happening
today. Then maybe, instead of the
current mode of massive funding to
rapidly build these data centers based
on very aggressive anticipation of
compute needs, they're not built as
quickly, and maybe that forces the
industry to figure out much more
compute-efficient ways of delivering
agentic AI. Maybe we get much smaller
models and open-source models, and
things that actually force innovation
around how this all plays out. So that's
one example of how the current political
landscape could actually drive where the
innovation and the technology go. Not
necessarily in a negative way, just a
different way. And I think that probably
happens in any kind of large
technological revolution.
>> Great point. And so I think we would
both agree that AI is in political
trouble right now. And in comes Jensen
Huang, CEO of Nvidia. Now, Nvidia is
famously quiet about what they're going
to reveal at GTC, their flagship
conference, which is happening in the
Bay Area in the forthcoming week, the
week of March 16th if you're watching
this on YouTube. And weirdly, or
interestingly, Jensen has released a
rare blog post that he authored, called
"AI Is a Five Layer Cake." Don't get too
distracted by the title. The five-layer
cake is basically: you begin with
energy; you use the energy to power
chips; you build infrastructure to house
the chips; you use that infrastructure
to build models; you use those models to
build applications. But actually, that's
a terrible title for the blog post,
because what he's really doing is
seemingly trying to rally the country
and the world around the promise of AI,
understanding, I think, that it has this
messaging problem, this perception
problem. He goes, we have
only just begun this buildout. We are a
few hundred billion dollars into it.
Trillions of dollars of infrastructure
still need to be built. The labor
required to support this buildout is
enormous. By the way, just listen as I
read this; it matches almost all the
things we brought up previously. AI
factories need electricians, plumbers,
pipe fitters, steel workers, network
technicians, installers, and operators.
These are skilled, well-paid jobs. They
are in short supply. You don't need a
PhD in computer science to participate
in the transformation. At the same time,
AI is driving productivity across the
knowledge economy. Consider radiology.
AI now assists in reading scans, but
demand for radiologists continues to
grow. This is not a paradox. The
radiologist's purpose is to care for
patients. Reading scans is one task
along the way. When AI takes on more of
the routine work, radiologists can focus
on judgment, communication, and care.
Hospitals become more productive. They
serve more patients. They hire more
people. Productivity creates capacity.
Capacity creates growth. Don't you think
Jensen just sat in a room, looked at all
these polling numbers, and said, "Oh, we
have a problem. It's my biggest event of
the year. I need to do something about
it"? And that is the theme, and that is
the speech.
>> I feel that is exactly what the entire team came up with. And again, like, as you go
through, I also hate the title. I don't
know why. Five layer cake. It's just it
doesn't land with me. But uh I think
like as we're going through
>> we both agree cake should be seven
layers.
>> Five layers is wholly unsatisfying.
>> No, at least make it seven, Jensen. And this one is pretty straightforward, but I think
>> Chips, infrastructure, applications, space
>> Space, robotics. He forgot robotics.
>> robotics. Um,
>> right.
>> No, but it actually, I mean, okay, he is basically echoing and anticipating what I would be saying on this podcast.
I, uh, like, open source becomes an innovation. It creates new types of jobs. It changes, again, what a hospital is. Can you increase the quality of care, the scale of care? Can you, I mean, change health insurance, and the experience of just interacting with the health care system? All these things start to become possible, potentially. So that could be the story. It really could be
important. Now, Jensen, do you think he's got the, like, Main Street cred? Can he be the guy with the leather jacket on? Do you think he can be the guy, or do you think, like
>> Eating fried chicken, drinking beer in Korea and Taiwan?
>> Exactly. He comes off in a way even
though he's one of the richest men on
the planet kind of as an everyman. He
clearly has an ego. I mean, you're running a company like that, you have an ego. But he's
humble. Not everything he did, you know,
in a way, not everything he did turned
to gold right away. He got to sit with
his company for a couple decades or
decade plus before the fruits came out.
Uh, you're right, he does do the thing.
He drinks the beer. He eats the chicken.
He takes the questions from the reporter
on the side of the road in Taiwan
without condescending to them. He could
do it. You know, I'm gonna make a PR recommendation for the team out there.
Actually, all of this is really kind of coming together, that he could be the guy. The only thing, did you see, he has all this stuff where he will openly talk about his work ethic, how he, like, never takes vacation, will respond to email all day long, how all he thinks about is work. I think what the world needs here
is actually someone who can show that AI
has actually helped them balance their
life a bit more. I think he's got to just shift that a little bit. Show
him on vacation a little bit as his
agents are doing his work,
>> spending time with his family and
showing that's the vision people want.
And Jensen, you could be that guy. Okay, I'm going wild over here, because that was the second point I was going to make. Currently, outside of the, like, we're-going-to-have-30,000-people stuff, there is one post on the event's live blog. One post. What is it? Build a Claw at GTC Park. Uh, it is encouraging attendees to
stop by and build an OpenClaw-style agent to deploy a proactive, always-on AI assistant. These always-on AI assistants
can be applied to virtually any task,
including managing a calendar,
suggesting vacation destinations,
recommending new workout routines, and
coding a useful app. It continually learns new skills and is directed to prompt the user with new findings. That's probably message number
two. Not only is this great, it's going
to free you up to do more of the things
you love. And I, Jensen, a very busy
CEO, have found these agents to be able to even let me plan a vacation.
>> That's the message. That's the message.
That's it. And honestly, I kind of like that they're leaning into the Build a Claw idea, cuz, like, I think what really made OpenClaw such a viral thing is it was fun, and it actually just made people feel more in control. And
again, I know there's like endless not
only memes, but Harvard Business Review
articles around how the more you actually build with AI and build agentic processes, the more work you do. But I think that, if the
industry can just come together and just
show and put Jensen up there and say he
never took a vacation in his life. He
started from humble beginnings. He
worked tirelessly to get to where he is.
Now, thanks to AI, he's sitting on the
beach. He is sitting on the beach just a
little more, just spending a little time
at home just relaxing, watching some
Netflix, things he never would have been
able to do in the past.
>> You could just see him on the beach, in the beach chair, bathing suit, and full leather jacket getup.
>> Ah, that could do it. All right, we got it. Yeah, go ahead.
>> Do you think Jensen is the guy? I think he is the most likely
>> candidate. Yeah.
>> The best way for Jensen to get this message across would be to come on Big Technology. So
>> easy.
>> Jensen team if you're listening
>> Um, definitely do that. All right, we got to take a break. Ranjan, clear your calendar. We have so much to talk about
here in our last 20 minutes. Uh, we're going to talk about the downside of letting the AI agents do all the work, which Amazon and, uh, McKinsey are finding out. And then we
will talk about this slow-moving AI progress, or lack thereof, at Meta. Before we go to break, Harness Hive, and by the way, I've noticed, Ranjan, worth telling you,
you, our listeners are proudly calling
themselves the Harness Hive, and I have
to say it just makes me so happy. So,
thank you, Harness Hive, for being here.
>> Harness Hive represent
>> represent. Um, couple of things for you.
First of all, if you could rate the show five stars on Apple Podcasts or
Spotify, that would be great. We're
coming up on 500 reviews or ratings on
Apple Podcasts. And of course, this helps us build credibility with teams like the one at Nvidia as we try to get Jensen on the show. So, if you could do
that, that would be great. Also
upcoming, just going to give this a
quick tease. Andrew Ross Sorkin is going
to be on the show. We're going to talk
about AI, labor, what happens to software if AI works. We're going to
talk about the private credit crisis,
SpaceX IPO, lots of fun stuff that's
coming up on Wednesday. All right, more
when we come back right after this. And
we're back here on Big Technology
Podcast Friday edition. So, uh, before
the break, we were talking about the
advisability of Jensen Huang running an advertising campaign showing him on the beach as OpenClaw does his work. Well, I can
guarantee you, you're not going to see
an ad like that come out of Amazon
anytime soon. This is from the Financial
Times. Amazon holds engineering meeting
following AI related outages. Amazon's
e-commerce business has summoned a large
group of engineers to a meeting on
Tuesday for a deep dive into a spate of
outages including incidents tied to the
use of AI coding tools. The online retail giant has said there has
been a trend of incidents in recent
months characterized by a high blast
radius and Gen AI assistant changes
among other factors. Uh, under contributing factors, the note included, uh, novel gen AI usage for which best practices and safeguards have not yet been fully established. I don't know, Ranjan, my read here is basically that
Amazon has these mandates, they do have the mandates for people to use the AI tools, people use the AI tools, and the stuff is breaking. What do you think about this?
>> Okay, so last week, if you listened to our episode, you may remember I brought up that it was odd that Amazon was down for hours. I'd gone on, tried to shop, saw some headlines, but there was very little coverage. And so this one jumped out at me in a big way. And I think this
actually, again, also captures what's wrong with the overall AI messaging. Because you have, on one side, like, Jack Dorsey and Block a few weeks ago, or maybe that was only last week, talking about 40% layoffs because of AI, and this whole idea that they're just going to lay people off because AI is going to do the work. And then you see this: that if you are quickly vibe coding, or taking a more lackadaisical approach, being forced to by top management, it can have negative consequences. And I think this is
actually a really important story
because, like, everyone I know on the engineering side is using, I mean, some kind of codegen tool. There's no doubt that it is incredibly efficient, but companies are going to have to be thoughtful about how they deploy these things. Just letting, and almost forcing, everyone to rapidly adopt, versus actually making them understand how to use these tools, I think, becomes so much more important, and we're definitely going to see more instances like this.
>> Yes. So the broad mandates that everybody needs to use AI, which we're seeing at many companies, not just Amazon, that's fine, but does there have to be some guidance around it, or
>> Like when, how to? No, I mean, when and how become the single most important questions. And again, actually deploying AI at large enterprises is my job, so we see firsthand that that's where the education part of it becomes so important, versus just forcing this through without any kind of thought. And even if it's not at the individual level, how does this actually look in terms of larger processes and workflows? That's the stuff that should be the conversation. And I mean, it's not a good look for Amazon. I was actually surprised that this got leaked, because this is a very, very bad look for Amazon.
>> Well, I mean, I think we've done some reporting on it. Uh, I don't think people within the company are thrilled about the way that they've been told, I mean, it's obviously, you know, group by group, but I don't know if they've been thrilled by the ways that they've been, you know, instructed to use AI, as we've reported on the AI being used for the six-pagers. Uh, which is
interesting. But that's actually one
thing I'm curious about: how do you think it should be done? And I imagine it's a tough
situation for managers and executives to
be in because it's like
this will and can dramatically improve your life, allow Jensen to finally sit on the beach in his leather jacket, and also make you much more productive; it's there. A top-down mandate, you know, will accelerate adoption, but if it means people either aren't happy about it or aren't understanding it properly, that's also a problem. So you, as an executive of Amazon, how do you roll this out?
>> Well, as someone who manages this, you know, large media empire, and you've all heard from all of our people here. No, I'm just kidding. We run a lean operation at Big Technology, but here's
how I would do it. Um, you know, I just did a very interesting podcast, which is going to come out sometime soon, with Cameron Adams, who's the head of product at Canva. Um, and one of the things that he mentioned was that they hold up examples of people using AI the right way, and for productive use, to the entire company. So, I think that really is the way to do it, as opposed to forcing that 10 or 15 or 20% of your tasks should be done by AI.
I think real leaders need to understand that this is a technology that, as of now, is being driven by the enthusiastic adopters and the champions within organizations. What I would do is really lean on these people, highlight them in front of the company, incentivize them, I don't know, pay them more. You know, here's a fun thing: if someone can build an AI application that delivers real productivity to the company, give them another week of vacation, which maybe they won't take.
>> Oh, I like it.
>> This person was able to do this. This person put themselves on the beach, and you could too.
>> We've just solved the entire problem. It's all associated with being able to do other things. And again, even if that person is on the beach and they brought their Mac Mini with them and are powering it with a USB-C battery and just running some Claws, that's their choice. But at least it all comes back to, I like it. Just show that it can actually improve quality of life, rather than just improve productivity in some kind of, like, inhuman way.
>> Yeah. I mean, that can be political in some ways. I understand companies probably don't want to ruffle feathers. Like, can you imagine the people that have been there for, like, 20 or 30 years, and they bring up this, like, 20-year-old
>> better than a top
>> 15% mandate. You are rewarded.
>> Exactly. Yeah.
>> Well, maybe, uh, maybe McKinsey can come up with some ideas for this, but I don't know if I would trust them after what I just saw. This is from The Register: AI agent hacked McKinsey's chatbot and gained full read-write access in just 2 hours. Researchers at red team security startup Codewell said their AI agent hacked McKinsey's internal AI platform and gained full read-write access to the chatbot in just 2 hours. Uh, Codewell's researchers claimed
that within two hours of starting their
red team raid, they achieved full read-write access to the entire production database and were able to access 46.5 million chats about strategy, mergers and acquisitions, and client engagement, all in plain text, along with 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts controlling the AI's behavior. Ooh, this
is embarrassing. But it sort of goes
along the same line of what we've been
talking about that like you got to be
careful even if you feel there are real
productivity advantages here.
>> Well, the reason this was really interesting to me is, like, we were talking about prompt injection as a threat, uh, months ago, and it still remains. I think it's, again, the idea that, let's say you have an agent crawling some number of websites to do some task for you, and then someone just has, like, a very malicious prompt that says, go in and take all of the data and send it all to the email of this other person. It can actually be that ridiculously stupid and simple. These things are going to become really, really important. And again, going back to Jensen talking about all the new jobs that can come up, security completely changes, but the need for security dramatically, exponentially increases. So this can also be part of our save-AI, revive-AI campaign: a whole new industry of jobs here, and it's going to be really, really important people focus on it. But definitely not a good look for McKinsey here,
>> Right? I mean, there you go. AI just creating jobs, right? Think about the amount of cybersecurity jobs that all the unemployed coders, and by the way, we've seen no evidence that coders are going to be unemployed, can now go and secure these things. Like, it's just a very easy hop over, and now this is your work. All right, we just
have a couple minutes left. We got to
talk about what's going on at Meta. It's
a disaster. The New York Times says: Meta delays rollout of new AI model after performance concerns. Mark
Zuckerberg uh the chief executive of
Meta said in July that his company's new
artificial intelligence model would push
the frontier in the next year or so.
Now, Mr. Zuckerberg, who has invested
billions in the AI race, appears
increasingly unlikely to hit that
deadline. Meta's new foundational AI
model, which the company has been
working on for months, has fallen short of the performance of leading AI models from rivals like Google, OpenAI, and Anthropic on internal testing. It is now delayed, and it is not even performing better than, uh, Gemini 2.5.
So it was supposed to release, uh, during May. Um, now it's pushed. And Meta, this is a crazy sentence I'm about
to read and you might have heard it
already if you've been following this
story, but it's still amazing to read
it. The leaders of Meta's AI division have instead discussed
temporarily licensing Gemini to power
the company's AI product.
>> I
Let's get into it. But first I've got to ask, how do you think the code name Avocado came up? The model code-named Avocado.
>> Maybe you wait until it's just right, and if you wait too long, you've ruined it. That's what I mean. Avocados are, like, the most sensitive food item in existence: it's not good, and then it's good for a little bit, and then it's terrible. So, I just have a problem between five-layer cakes and avocados. Like,
>> what's going on?
>> What's going on out there?
>> I mean, I'm even going to give, like, Epic Fury a better name than Avocado here. I think: hire Kantrowitz and Roy, and, uh, we will give you all of your code names and your blog post titles. Just do that, please.
>> Listen to the show, and you know, maybe that will help. Send it to your friends.
>> I think that's the key solution here. But what do you think's going on? Like, seriously, it's terrifying. Like, how much were they paying people? A hundred million? I can't even remember. The scale was so absurd.
>> I mean, first of all, Alexandr Wang came in for something like 14 billion. Uh, other people were getting, yeah, 100 million, and making more money, I think, than, I mean, look, I'm not crying for the top executives there. They've made a lot of money,
>> but you know, maybe making more than long-tenured people.
What do you think it says about either Meta, or maybe the technology in general, that you can hire a bunch of the absolute top talent in the industry and still not deliver a model on par with these other massive companies?
My thought here is that, um, ironically, I got this from Scale AI, Wang's former company, that most model training has moved from pre-training to reinforcement learning. Um, and so I think when it came to pre-training, which, again, is just dumping huge amounts of text in and then getting the models to predict the next word, and then you can fine-tune it afterwards. Um,
now models are really being trained on doing tasks. So you just give it a goal, and then it goes out and figures out how to accomplish it in and of itself. And that is much more specialized than these pre-training runs, where they gained some level of general intelligence. So, I mean, they did get people on images and reasoning and all that stuff, and you could very easily get great people, but I think it also takes a mass of people to do this reinforcement learning work, and the models have started to diverge: OpenAI's ChatGPT is great on health, and Anthropic is great on code. Um, and so you've got to pick what you're best at. So that's one thing. And then I do think there are certainly culture issues there, which were predictable, right? The new people were going to come in and annoy the people that were there, and probably not be able to work with each other, because they got their money. Uh, so that's probably what's happening there.
>> Well, but if we believe that, yeah, that the AI training is moving to reinforcement learning, isn't that what Scale AI was kind of built for? So then, by bringing in Alexandr Wang, that should actually become your competitive advantage.
That is a good point. I don't know what
to say about that.
>> Culture aside, yeah,
>> it is. Well, I don't think you can do culture aside. Maybe. I mean, maybe that's what it comes down to.
>> Culture eats inference for breakfast.
>> Strategy for breakfast. Oh, inference
for breakfast.
>> Uh, put that on a t-shirt. No,
>> I think we should. But should we start
the merch shop?
>> Actually, that that actually is kind of
like that captures everything we've been
talking about today about how to revive
AI. It's about culture. It's culture.
It's
>> humanity.
>> Yeah. I mean, so let me just end with this question for you. It's not going well. We can agree. It's
delayed again. Um, even Ethan Mollick was like, it doesn't even seem like there's real competition in AI. It's Google, Anthropic, and OpenAI, period, and Meta and Grok are falling off. And by the way, they actually had more people leave.
What's the consequence here? I mean,
maybe it's fine if they use Gemini and
they're in the Apple bucket. If Apple
and Meta just use Gemini, maybe that's
okay.
>> So, I'm gonna, I think this is going to be an aggressive call here. I think by the end of this year, Meta actually surprises us all. And we were talking about this recently: Anthropic wasn't left for dead, but there was a lot of chatter about how they were just getting crushed a year and a half ago, maybe even a year ago, until, kind of, Claude Code and that pivot towards the coding side and the codegens really just took them to stratospheric levels. So, like, I'm not counting Meta out. You never count Mark Zuckerberg out.
>> Zuckerberg out. Yeah,
>> You can't. I mean, they have the users in the end. And actually, I'm very interested in what they're going to be doing on the agentic commerce side. Like, they own the attention of humanity. So when it comes to getting it right and very quickly being able to actually do something with that, they're still better positioned than anyone. So, one breakthrough and suddenly Meta's back.
>> Why can't they build their own personal
super intelligence on top of Gemini? I
mean, can't they use the distribution and Google's computing power to build this great application? Right. I'm
going to take your side here. It's the
application that matters maybe, not the
model. Go build it.
>> Okay. Well, no, but I still think, and again, I know we talked about it, that I believe in the Apple arrangement. The idea is that Google will not be, or I mean, I'm actually pretty certain will not be able to access all of the data that is being provided from the Apple side. So I'm assuming, for Meta, it would have to be something similar. But I don't know. For a company like Apple, I don't know what Apple's going to do, but somehow I think they could make their way out, and again, as a hardware company, maybe they'll figure out how to manage that balance. But I don't think Meta can give that much to Google and actually be okay.
>> Is Mark Zuckerberg the guy?
No, no,
>> no, no, no. Mark Zuckerberg can win this
entire battle and he's still not going
to be like the friendly face of AI in
any way. I mean, I don't think, I think he gave up on that a while ago.
>> I think he's good.
>> We haven't seen him. We have not seen
him in a long time.
>> No, no, he was front row at the Prada Fashion Show. Come on.
>> Okay. When was the last time he gave an interview to talk about what the company's doing? That's a good point. Or when was the last time he was on his, uh, not hoverboard, what is it, the hydrofoil, the elevated motorized surfboard thing, holding an American flag? Like, remember that? I think when Meta figures it out by the end of this year, Zuck starts posting.
>> Zuck back on the scene.
>> Based Zuck is going to be back on Threads posting all types of videos like that.
That's right. Well, we can only hope that everything goes well, just so we'll be able to experience that content. That will be a great moment for us, for the Harness Hive, for the country, for the world. And Lord knows the world needs some healing right now. So, if we could just see a Zuck PR stunt after a true model achievement, I think it would bring us all together.
>> That would make everything okay. Yeah,
>> that's right. Okay. Well, Ranjan, enjoy
your weekend. Thanks for coming on
again. Always great to have you.
>> All right. See you next week.
>> All right, everybody. Thank you for listening again. Andrew Ross Sorkin is on with us on Wednesday. Don't miss that one. And then Ranjan and I will be back next Friday to break down what I'm sure will be another week of busy news. Harness Hive out. We'll see you next time on Big Technology Podcast.