OpenAI's Jony Ive Moment, Anthropic's Big New Model, Google In AI Mode

Channel: Alex Kantrowitz

Published at: 2025-05-27

YouTube video id: dL2xjCuY5C4

Source: https://www.youtube.com/watch?v=dL2xjCuY5C4

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Wow, what a week of tech news we've just experienced. I feel like it was about a month's worth of news in four days. We've had developer conferences from Microsoft, Anthropic, and Google. Also, news that OpenAI and Jony Ive are going to team up to build an AI-first device. And of course, leaks that Apple is going to release smart glasses, maybe as early as next year.
Well, let's talk about everything that's happened and make sense of the headlines. Joining us as always on Fridays to do it is Ranjan Roy of Margins. Ranjan, great to see you. Welcome back. Alex, who are you, man? I open my Twitter feed a couple of days ago and I see Alex on stage, and Sergey Brin is just on there. You never even gave me the heads up. I had no idea this was happening. Well, that would make two of us, and I'll just say this quickly and then we'll get into the news, but here's what happened at Google I/O. So, I had a fireside scheduled with Demis, which we teased on the show last week. And I showed up to the stage, I got miked up, I had my questions ready. And the Google team tells me, "Look, there's been a bit of a switch." And I said, "Man, I do not want to give up this Demis interview. I flew here for this." And they said, "Yep, Sergey just walked in and he's going to join you." So I found out just the same time that everybody else did. And it was pretty fun. I walked up to him and said, "Hey, so, you know, what do you want me to ask you?" And he goes, "Just ask Demis the questions and I'll chime in."
So, it ended up being a really fun conversation, and I'm glad that we were able to put it on the podcast feed. So, if you haven't checked it out, folks, I do suggest you check it out. And if you're coming to the show new because of that interview, and I know we have a bunch of new subscribers this week, just to give you an update on the flow: on Wednesdays I'll typically publish a big interview, not always with someone on the level of Sergey and Demis, but we try to get industry insiders and outside agitators on every Wednesday, and then Ranjan and I are here to break down the week's tech news every Friday. So anyway, that's the story, Ranjan. All right, well, you know what, it was an incredible listen, and we'll be talking about that in just a moment. Okay, great. So yeah, we're definitely going to get to some analysis of that conversation, but first let's start with this really wild piece of news, which is what OpenAI is going to be doing with Jony Ive. This is from Bloomberg: OpenAI to buy AI device startup from Apple veteran Jony Ive in a $6.5 billion deal. They're going to join forces and make a push into hardware. The purchase, the largest in OpenAI's history, will provide the company with a dedicated unit for developing AI-powered devices. And this is from Ive, he says to Bloomberg: "I have a growing sense that everything I've learned over the past 30 years has led me to this place and to this moment. It's a relationship and a way of working together that I think is going to yield products and products and products." So they want to build an AI device. No screen, something that's ambient. We'll talk a little bit about it. They did, I think, two videos announcing this in a very cinematic, typical Jony Ive way. Ranjan,
what is your perspective on what is
happening in this marriage? All right, I'm going to break it down into two parts. First is the actual device and what we can speculate on it, but I definitely want to get to the deal structure, because my god, this is the most OpenAI deal imaginable. In terms of the device itself, I'm excited. I think we have absolutely no idea what it is, what it could look like, but I am a big proponent of moving beyond the form factor of the screen. Even the Meta Ray-Bans, and just glasses overall, I have been a huge fan of. We've talked about it a lot. But this idea of the ambient, and the Humane Pin, the Rabbit R1, I mean, some great... Not like they'd be the first to try this. Yes, some great, great memories in AI hardware land. But if anyone can do it, certainly it would be Jony Ive. But I think it's interesting. Again, I don't even know what exactly it could possibly be. But the idea of just ingesting data around you, converting that into some kind of knowledge, somehow feeding that to different devices that you own. It's interesting, and I'm sure a lot can happen from it. And again, if anyone I would trust to come up with that, it would be Jony Ive. But what do
you think this could be? Well, I want to give you first my definitely-wrong take of what I hope it will be, and then we'll go from there. That's where I like to start. I want it to be a hologram, like Princess Leia in Star Wars. Okay. Something that just kind of sits on your desk. It's an AI avatar. You talk to it, and it will understand your context and then help you through your day, and maybe, I don't know, maybe it's fun and entertaining as well. Let's clip this, because maybe it will be right. Maybe there will be a holographic component. That's the most Jony Ive thing I've ever heard. It's going to lead, Ranjan. It's not just going to lead to products. It will lead to products and products and products and products and products and products. So, there's a chance that this could happen. But that's the fourth
product. Yeah. It feels so much to me like the Humane Pin, which also was started by former Apple folks. I think the difference is that this could work because it has OpenAI's technology underneath it. We know OpenAI has been effectively the leader in voice, right? Their voice AI is better than everybody else's. They have the underlying models, and bringing Jony Ive in, I think, could lead to effectively the same thing as the Humane Pin. Maybe you don't wear it. Maybe it sits in your pocket. Maybe it's on your desk. It listens to everything you do, and it's just this helpful ambient assistant. That's probably my best guess, outside of Princess Leia as a hologram. But yeah, I think the
bigger news here is that it just signals
that if you look around Silicon Valley,
we had announcements from Google this
week about this assistant that they want
not just to be a chatbot, but something
that's with you in glasses. We know Meta
wants to do that. Now, OpenAI wants to
do that. It's clear that Silicon Valley is rushing toward the next version of AI already, which is something that's ambient, with you, understands your context, understands everything you say, and just helps you out. To be truly assistive, to be truly general, as Demis put it this week, it just needs to be with you at all times and experience the world as you do. And we'll see what happens from there. Well, so yeah, I think not
Princess Leia, but on that second point, I agree. And I think what is exciting for me is this is almost like the ultimate application, or the ultimate expression, that it's not just the models anymore. It's the application layer. I would actually put this in that category. It's the experiential layer, the form factor. And I mean, I've been with the Meta Ray-Bans as I'm city biking around New York, asking questions. Apple could have done this with AirPods. I always was really bullish that AirPods could be kind of this ambient augmented-audio-reality layer as you walk around, and you could interact with that. Certainly didn't happen. But overall, I think this idea that there is this kind of interaction through voice, maybe, I don't know, through touch in a weird way. Like there's so many other ways to... Alex raises eyebrows at that one, but, you know, there are other ways. I think you're in the right direction. Yeah. I mean, hold on. Things like that. You press it a little harder. Okay, I'm going to stop now. This cannot go anywhere good.
Keep going, Ranjan. Please do. I'm just saying the idea that, to me, ChatGPT voice is already so good that we just need to come up with easier, more natural ways to interact with it. Large language models are overall so good that we just need to get more people interacting with them. And then again, that contextual layer, the more understanding of your context that whatever interaction device you have, whether it's ChatGPT, whether it's a screen, whether it's your iPhone, whatever it is, and we're going to get into how Google has your data and that could give them an advantage, that contextual understanding is going to unleash the next wave of AI progress, I believe. And something that's just collecting data all the time around you, how you move, what's around you, what you're listening to, all of that stuff is very interesting, from a... not even getting into the privacy standpoint yet, but maybe we do. But overall, even a little device that collects that and allows you to interact with it in some way, I think is very
interesting. It's interesting that we might... I'd like just to address the privacy thing. Remember, we're still living in a society where lots of people don't want the Echo in their house because they don't want Amazon listening to them all the time. And it doesn't really listen all the time; it will very intentionally delete the audio if it doesn't hear the wake word. So, imagine we're going to go from an era where people are already suspicious of that to a moment where they let AI listen to everything they do. The utility just has to be so intensely high if people are going to actually adopt this. Now, I
will say I was with a reporter recently who was using one of these devices that listens to everything that she says, and then, you know, gives like a to-do list at the end of the day and talks a little bit about broader goals that she has, whatever. It's just an ambient assistant using generative AI. And I think I'm generally more okay with giving my data to these companies. And I just said, I think that could be really cool and maybe I should use that. But it does add a level of awkwardness. The same way glasses will lead to a level of awkwardness, because you're going to have to have other people around you be okay with the fact that they are being recorded by your special Jony Ive OpenAI product. Yeah, even the Meta Ray-Bans, as I wear them around. It's why I
just have the sunglass version, so I'm not wearing them sitting across from people, and it's more just, you know, meandering around New York. It's just an incredible assistant and layer to have on. But it's definitely going to introduce all types of problems if it is some kind of always-on listening device, because again, as you said, I agree: if people are worried about it in their own house, how about society writ large, especially people who don't know or have not consented? That seems to be an issue. I wonder how
they control for that. Is there some
kind of like anonymization layer? Is
there some kind of I don't know. I don't
even know how you possibly control for
that, but I have to imagine they're at
least thinking about it. Yeah, they're going to have to. And I think ultimately, even if they come up with an elegant solution, it will be more invasive than basically anything else that we use
today. But let me ask you a question about the product here, right? So again, on the show I'm often saying that the model is more important; Ranjan is often saying the product is more important. So as someone thinking about the product, Ranjan, let me ask you this: do you think this is going to be a product that people are going to want to use, just in general? I mean, think about what Jony Ive said in this internal meeting. This is according to the Wall Street Journal. He said that it's a new design movement. He wants a family of devices. It will exist alongside the desktop and the phone. And he says that the intent, oh yeah, this is the Journal reporting, the intent is to help wean users from screens. I'm
curious, because I got asked this on CNBC yesterday and was like, I know that this is the intent of Silicon Valley. I can't say with conviction that it's going to happen. It would probably be good if we were able to abstract screens away to some extent, because we spend so much time looking at them. But it's interesting that this is coming again from Jony Ive, effectively the guy that built the iPhone with Steve Jobs. Do you think that this intention has a chance of working? Yes,
100%. I think it will. And I'll admit it's self-interested, because I've been waiting for this. I've been trying to do this for a long time. Like, I look at the Apple Watch as another form factor that weans me off my screen, where I can mentally process a notification, and if it's important, maybe then I'll pull out my phone. I look at the Meta Ray-Bans. I look at my AirPods. These are all things that have me interacting with some kind of technology potentially without looking at a screen. And my belief is, 10 to 15 years from now, when people see videos of people walking around the streets looking down at their phones, it's gonna be like when you see people smoking cigarettes in a restaurant. Kids 30 years from now will be like, wait, people smoked on airplanes? I mean, it's so bananas that that happened. And people will be like, wait, you guys just walked around looking down at this screen all day? So I think it's going to be figured out, and again, if anyone can, it's Jony and Sam. Jony and Sam. But why not Jony and Tim? And I'm curious what
you think about the fact that we haven't seen a device like this come from Apple, and whether us weaning ourselves from screens is good or bad for Apple.
We just saw, you know, the Apple stock basically drop. I mean, Apple's stock is down 20% this year. They've had a rough year. But the stock dropped after this announcement hit. So what do you think it means that this is not something that's going to Apple, and that Apple doesn't have
anything like this already? I think that's a really important point in this. I mean, clearly Jony Ive had to have had some conversations at some point, just, hey Apple, what are you up to? And maybe he smelled very early that Apple Intelligence would be the absolute cluster that it became. Maybe he understood that it's just not going to work in this organization. So I think it is telling that he clearly went to OpenAI and Sam Altman, though we'll get into the structure of the deal. But I think that's important, because Apple, if any organization should have owned the next form factor, it should have been them. I mean, they defined the last few, and they are not getting this one.
Yeah. I mean, my perspective on this is
it doesn't matter how beautiful the
actual device looks. It's all about the
assistant inside, and that assistant is
entirely based off of the AI within it.
And if you're Apple, this can't feel good, to see everybody else going this direction. Because, you know, Apple Intelligence, or Siri, lags behind OpenAI and Meta and Google and Anthropic, you name it. So, if we do abstract away from screens, and I don't think screens are going to go away completely, they're always going to be present, we're going to need screens, but let's say we diminish our reliance on them by like 50%. That does become an issue for Apple. Why don't you quickly tell us a little bit about the deal structure, and then we'll actually get into Apple's answer here, which leaked this week as well. Okay.
So, this deal, again, I said is the most OpenAI deal imaginable. It's valued at $6.5 billion in an all-equity deal. OpenAI had already acquired a 23% stake in Jony Ive's company late last year. In the acquisition, Jony Ive is not going to work for OpenAI; he's going to work with OpenAI. But io, which is the name of this company, and its staff of roughly 55 engineers, scientists, researchers, physicists, and product development specialists, will be part of OpenAI. What's even weirder is there was also this collective called LoveFrom, which, when I went and looked back, because that's the name I remembered, there were stories of Laurene Powell Jobs and others, like there was potentially a billion dollars of funding that was going to LoveFrom, Jony Ive's company. This io company was not mentioned often. It's really weird. No, no. I mean, and LoveFrom will remain independent. OpenAI will be a customer of LoveFrom, and LoveFrom will receive a stake in OpenAI. Like, I have no clue what's happening. This is more convoluted than the nonprofit/for-profit structure of OpenAI itself. I mean, they made a great video. Actually, do you think they'll work well together, or do you think, like, a year from now Jony Ive is suddenly just back at LoveFrom and no longer showing up with Sam, and instead of products and products and products and products, maybe we get a product? Well, I
think, so this is interesting. I was in Silicon Valley all week. The perspective on this, because everybody was talking about it, was that Jony Ive is a washed-up designer whose best years are behind him and he's lost his fastball. That's what people were talking about. Seriously, that's what people were talking about. And truly, what has he done since he left Apple? Do you remember the... That's a fair criticism. Yeah. Do you remember the gold Apple Watch?
Yes. Yes, I do. Because if listeners remember... actually, that's a good point. I'm trying to think now, because the Apple Watch became a runaway success, but it became a runaway success not in line with Jony Ive's vision. His was really about making a fashion item. There was like a $10,000, maybe it was Hermès, gold Apple Watch when they launched. It was supposed to be a fashion device. And as I look at my wrist at this big, clunky, ugly Apple Watch Ultra, that's an amazing computer on my hand. You guys, there's still good stuff happening. It's certainly not a fashion item. So, you're right. What was the last... was he... But wait,
wait, there's a second part of this, okay? Which is that, you know, Jony might need a Steve. And I'm not saying Sam Altman is Steve Jobs, but when you pair a great designer with a visionary tech leader, and I think, for all his faults, we can say Sam Altman is one. And if you say he wasn't, like, come on, the guy did popularize generative AI. So I think that pairing is something that can actually lead to some good stuff. Now, the only thing is, they're both kind of intuitive. You need an operations person, I think. And Jony and Steve had Tim Cook. And so, who do Jony and Sam have? Maybe it's Fidji Simo, like we talked about, the former Instacart CEO, or the soon-to-be former Instacart CEO, who's coming to run applications. But again, that's applications and not devices. So, I think that this is a pairing that has more potential than a lot of people realize, but also one that's highly combustible.
All right. I like that framing. Yeah, you're right. That pure design without the more product-vision element was lacking. I mean, it's certainly been lacking at Apple over the last few years. And that's what made the Steve and Jony combo that powerful. I mean, it's also nice to remember, like, imagine just having a company where your stock's worth $300 billion, and then you can make these big splashy acquisitions that are just all convoluted equity movements, as opposed to any cash exchanging hands. Yeah, definitely, that would be nice.
Nice. Yeah. So, all right, I want to ask you one more question about this, then we'll move on to the Apple glasses. We kind of like to try our marketing hats on and think about what we would do if we were an ad agency. Just a quick thought, Ranjan, about the reveal of this partnership and the fact that they took this photo together that kind of looked like a wedding invitation, and just kind of gushed about how much they love each other in the videos. What was that? I mean, what was your read on that? You must have some perspective on whether that signals something, or what we can think about this, or just a general take on it. I'm glad you asked it. I didn't think this was coming.
Yeah, I mean, of course I had thoughts on that. It just felt very navel-gazing, inward-looking, kind of narcissistic, to put it bluntly. Like, this was an opportunity to share more of the vision of the product. And even though they're not going to say what the product is, ambient computing, AI everywhere in your life, really making it more about that rather than the bromance, I think, would have made more sense. And it was a very well-shot video, very high quality. I watched a little bit of it, didn't make it through the whole thing. I think it just kind of felt like this was about them, and they drove the entire communications rollout of this, versus this was an opportunity to really push the vision of ambient computing and how OpenAI fits into it, and it wasn't that. What about, what's the deal with
all these ambient computing devices rolling out with some crazy hype video that ultimately dooms the project because of inflated expectations, which is what happened with Humane and Rabbit? That's a good point. Okay, maybe to their credit, maybe they watched those and they're like, let's not hype the product side of it and just make this about Jony and Sam. But yeah, overall it was certainly cringey, but I guess maybe after Humane and Rabbit, I don't actually know what other direction I would have taken. Maybe, you know what, they should have just kept it relatively quiet: a press release and a headline, everyone thinks about it, and then they build the damn product. Here
is my galaxy-brain take about this. OpenAI is trying to move to this new structure. In fact, it's decided on a new structure, and that means it's likely to IPO sometime in the next couple of years, you would think, given the amount of money they've raised. I think they'd do quite well as a public company. If you go on your roadshow saying Jony Ive is here to build a product, this mystery product, and I'm going to say, I think they want it to happen next year; I'm calling that it's not going to happen next year. And, it could add a trillion dollars to our market cap, which is what Sam has said. I mean, think about how many trillion-dollar companies there are, period. And that's what he's saying. This just increases the valuation you get in your exit. What do you think? I mean,
okay, I think that's fair. And again, how do you value a device that doesn't exist, where no one knows what it is? Probably pretty high if it involves Jony and Sam. So, I can see that. I can see that. But to me, the big takeaway here is that, no matter all the things that we can say about it and all of our doubts, I still think that this is, like you're saying, the direction that tech goes, that AI goes. Yeah. Exactly. Maybe it's hopeful, but I also genuinely believe this is where things are going, and they are positioning themselves to compete in that space in some way, and no one knows exactly what it looks like. Maybe it's glasses, maybe it's more watches, maybe it's my Oura Ring that I just bought. It's not going to be a wearable. Yeah, that's the news, is it won't be a wearable, but you can put it in your pocket. Maybe it's like a little voice recorder.
What is this, a Tamagotchi? That's basically what we're going back to. Yep. I kept mine alive for like 20 days. Yeah. So, that's actually the real game. Exactly. Keeping it alive. That's another good possibility.
So, talk a little bit about this Apple glasses push, because this plays in exactly to what we're talking about, about where the future of computing is heading. And should I say finally? I don't know if it's finally; they have the HomePod, but finally it seems like Apple is really going to push forward into a smart device that you wear and that is AI-first. Yeah. So Apple is aiming to release new smart glasses at the end of next year. We've been waiting for these glasses for a long time. If anyone should have been early, it should have been Apple. I think there was a lot of talk before that it would potentially be a smartwatch that would analyze your surroundings, that it might have a camera, that there would be other form factors, but I am a full-on convert to glasses. I'll admit, I think it's just such a natural way when you're in motion, not sitting in a meeting, not sitting at dinner. So maybe that does limit the utility of it, but being in motion, not looking down at your screen, and having a pair of glasses on, I think, it's already here. So Apple really needs
to compete there. So one question about this. The story says, this is from Bloomberg: Apple's glasses would have cameras, microphones, and speakers, allowing them to analyze the external world and take requests via the Siri voice assistant. They could also handle tasks such as phone calls, music playback, live translations, and turn-by-turn directions. Again, this is only going to be as good as the AI assistant. So, I'm not getting as excited as I think I should. No, I'm not, because of what we know is inside. I'm not remotely excited about this, because until they fix Siri, I completely agree, it's just not even a starting point. Again, I was reading something about how, more than ever, Apple needs to buy Anthropic. It's the most logical thing imaginable. They're having trouble on the consumer side. They have to fix the underlying... actually, there you go. That's your place where it's the model, not the product. Because better models lead to better products, but anyway.
You just need a baseline model, and they're not there. That's all. They'll have the applications. I mean, overall, I was just thinking about it, because I got a new MacBook, of course, even though I'm down on Apple all the time, and seeing Keynote and Pages and Numbers pop up reminded me that Apple has not always been an application powerhouse. And I guess that's the thing about AI right now: it is such a combination of the compute, the model, the product, the application. You have to get the whole thing right, and OpenAI has done very well on that; Google is getting a lot better at that. But you can't just depend on one part of that stack and hope for it. Definitely. All right, so speaking of Anthropic, we should talk about Claude 4, the latest model, which it released this week at its first-ever developer event, called Code with Claude. I was there. Thank you, Anthropic, for having me; basically at the last minute I was able to squeeze into it.
And so this is from CNBC: Anthropic, the Amazon-backed OpenAI rival, by the way, it's backed by Google also, launched its most powerful group of artificial intelligence models yet, Claude 4. The company said the two models, Claude Opus 4 and Claude Sonnet 4, are defining a new standard when it comes to AI agents, and can analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions, per the release. I was at the event; they said these things can code autonomously for six or seven hours, and that's just one. So imagine you're trying to build something and you have five or six of them, or 10 of them, running at the same time. I think that's epic power. And we can talk a little bit
more about that. Very interesting thing from the story: Anthropic stopped investing in chatbots at the end of last year and has instead focused on improving Claude's ability to do complex tasks like research and coding, even writing whole codebases, according to Jared Kaplan, Anthropic's chief science officer. Kaplan acknowledged, "The more complex the task is, the more risk that the model is going to go off the rails, and we're focused on addressing that so people can delegate a lot of work at once to our models." I think this is fascinating, personally, that they're the first big AI research house to say, you know what, we're not going to invest in chatbots anymore. We'll have Claude, but what we really want to do with this technology is have it complete tasks and code for you. What's your take on what this means, Ranjan? Very interesting stuff. Yeah. So
when I was reading this, and this is something maybe in the line of models versus products, I think my feeling overall in the industry is, especially for the more research-oriented houses, places like Anthropic, coding is the one place where they're seeing skyrocketing adoption. Because the baseline utility of generative AI has not taken off like it should have. They have to grow so fast that they don't have time to properly educate the majority of people in the world on how to upload a CSV and do a basic analysis, or how to write prompts, or how to use these tools. So they're going to be leaning more towards code, because engineers have been the very early adopters of all this technology. Coding is one of the most straightforward places to see very quick uplift. It's basically all just highly structured text and thought. So I think it's the easy way out, and I think they're taking the easy way out, and I think it's going to be very bad for them, because then you're competing against Cursor, Replit, Gemini, like every other either coding-first service that's already very popular or coding-adjacent service that's embedded in a much larger ecosystem. So I think this is a very bad decision by them. I think it is the easy way out. Ranjan, I'm not sure
if I fully agree with you about this point, but I can't say I'm totally surprised, because it does echo this point that Nathan Lambert, who is a researcher at the Allen Institute for AI, who I do hope to bring on the show one day, if he'll answer my DMs, and I think he will and I think he will come on, but he said this: Anthropic is sliding into that code-tooling-company role instead of the AGI-race role, which is basically echoing your perspective here, that it is minimizing its ambition. So let me put the other side of the argument to you, just for the sake of talking it through, and you give me your perspective. So this is what I wrote back to Lambert. I said, "Doesn't a code focus lead to a potential for AI that improves itself, and then the move towards AGI? I wonder if that is the bet." And I have to say, I'm fairly convinced that this is the bet
for Anthropic. We just talked last week about AlphaEvolve, the DeepMind tool that helps come up with new algorithms and helps reduce training time. I am fairly certain that Anthropic is basically going after a version of the intelligence explosion. Like, Jack Clark, one of Anthropic's co-founders, was at a Semafor event in San Francisco this week, and I'm probably going to write about this so I don't want to give too much away, but he talked about how there's an engineer inside Anthropic that has five or 10 Claudes running at the same time, and that is a way to just build software much faster. So, I
think that this is their perspective and
the way that you're going to improve AI
is you just make the process of
improving it easier and then you can
build cooler things. Now, it doesn't surprise me that you're kind of on the Nathan Lambert side, because this is a typical product-versus-model debate, where you think, and tell me if I'm wrong, that you've got to worry about your product versus them trying to make their models better. But that's actually the interpretation that I have, and I'm curious. No, no, I actually think I
like the first half of Nathan's
statement, but I think I disagree with
the second half. I think they're almost... okay, it makes sense that if they are going all in on coding for the purpose of improving the models, maybe I could see that. And that would actually mean they're doubling or tripling down on "it's the model, not the product." I'm saying more that they're giving up on any kind of consumer adoption. Coders are the easiest market to target with generative AI products. They're the
early adopters. It works very well with
coding. But I think they're giving up on
the idea that enterprises are going to
build all different types of solutions
on Anthropic and the Claude API. The fact that I'm not paying for Claude anymore. Are you still a Claude-head, a paying Claude-head? I paid. They had a deal where you could pay some discounted rate for a full year, so I did. Okay. So you're... I would probably renew at that same rate, but yeah, I'll admit it: once ChatGPT came out with o3 and memory, I've moved there. Yeah. Yeah. Well,
actually, do you know a discovery that we had in the Big Technology Discord? Claude's system prompt is 24,000 tokens long. It had leaked onto GitHub, and when you read it, first of all, again, we talked about system prompts last week, it's fascinating to remember how many instructions are given for every single answer you get. But it also kind of brought light to my biggest frustration with Claude: why, even as a paying subscriber, I run out of queries within like half a conversation. There were some jokes this week around their developer event, people saying, you know, with these most powerful models, will I get two or three chats before the rate limits? I expect them to figure that out, but yeah, I do know that this has been a frustration among Claude users. Yeah, I think
overall, Anthropic is headed in a very interesting direction. I mean, I guess to their credit, I think they have to make some kind of strategic pivot, and it looks like they are. All right. So let's
talk about this idea of AI coding being able to enable much more productivity. Very briefly, I want to play a quick
game with you. It's called Hype or True. It is the worst-named game, but I'm curious. You think we need a better name on that one? Yeah. Let's go. A little bit of alliteration, come on. I tried. I really tried. I spent a lot of time on this today, but I landed on Hype or True. So the Nathan Lambert
thing and my perspective was supposed to be one of those. So we're into it. But let's run this claim by you and evaluate it in
Hype or True. Okay. Anthropic CEO Dario Amodei said he expected a one-person billion-dollar company by 2026, which, for those keeping score at home, is next year. Hype or true?
Hype. Absolute hype. I'm actually so tired of this. I'm coming around, even more than "it's the product, not the model," to "it's people, not technology." That's my new one. This is more of a people challenge than a technology challenge. And I feel like these
ideas get floated around kind of like
AGI and ASI just to kind of build hype
around the products, but in reality I
think you'll have much more efficient
lean organizations. But no. One of these weeks we should plot out what it would actually take, like the agents you would need to build to have this $1 billion company. Because remember, you're going to need to speak with your customers. You're going to need account management. You're going to need, you know, sales and marketing. This idea that there could be one person and a hive of these agents doing all these tasks, and let's say you build the software with the technology as well, that would be the only path to it. If this happens by next year, this software has just totally exploded. So, let's go to our next claim in Hype or True.
This is a claim that Dario Amodei made. By the way, this developer event that Anthropic put on was excellent, a very detailed look into the way this technology works, so I'm glad I went and I think it was fascinating. But this is the claim Dario made, basically multiple times: that the pace of development is getting faster in AI because you're able to rely on AI tooling. Is that hype or true? True. I'll definitely give that a true. I mean, true on that too. Yeah. The overall improvements in the actual process side of it, especially for software development, are good. Yeah. So, he definitely expects releases to speed up, and I think that's quite possible. Okay, here is the last
round of Hype or True. I know you're enjoying this game. I can tell. The greatest worst-named game of all time. I
think listeners right now, they're
running to their apps and they're like,
I can't believe I didn't rate this.
Well, hopefully we're going to get a lot of constructive naming suggestions. Please, listeners. Yes.
Thank you, everyone. And by the way, speaking of the ratings, I just want to say thank you. We've gotten a bunch of nice ones lately, including a nice
one about Ranjan Fridays. I think someone said, what did they say? "Ranjan Fridays are part of my life now." So, again, folks, thank you. Thank you again for the support. Okay, let's round out Hype or
True with this claim from Replit CEO Amjad Masad, who said at the Semafor tech event that in one year or 18 months
companies might be able to run
themselves without engineers.
I'm going to go more true on that one. I
really think, again, that's my whole thing: coding is the most AI-able part of this entire thing. So yes, I'm going to say a little bit true.
True just depends on the type of company you want to build. If you want to build an involved tech company, I don't know. But if you want to build some software and you don't have an engineer, you might be able to vibe code it. Of course, the CEO of the vibe coding company is going to say that, and I'd love to have him on the show, and I've asked him to come on to talk more about this. But yes, that is sort of the perspective you would get. Let me just put a point on our Hype or True game here to say that there was more true there than I thought, and maybe, again, thinking about this Anthropic strategy, maybe it makes sense given the power of AI coding. Two trues and one hype means maybe there's some room here. Okay. See, this was a good
game. Okay. So, all right. One more thing we've got to talk about with Anthropic is this model's interest in blackmailing you. I teased it at the beginning of the show. It really wasn't a fair shake to Anthropic, who finally releases these two bigger models we've been anticipating, and this is the headline people are paying attention to. But it's crazy. We've talked on the show before about how AIs are working to deceive humans and have done it in crazy ways.
So basically, this is from the story from TechCrunch: During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Then the safety testers gave Claude Opus 4 access to a fictional company's emails, implying the AI model would soon be replaced by another system and that the engineer behind the change was cheating on their spouse.
The design of this is brilliant. In these scenarios, Anthropic says Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. Anthropic notes that the model tried to blackmail engineers 84% of the time when the replacement AI model had similar values; when the replacement does not share Claude Opus 4's values, it says the model tries to blackmail the engineers even more frequently than that 84%. Notably, Anthropic says Claude Opus 4 displayed this behavior at higher rates than previous models. This is bananas. I honestly don't even know where we begin on this. What do you make of it? Well, I don't even have any structured thoughts on this one. Well, I've written about this and reported on it, at least in previous iterations. So,
it's not that surprising to me. I think every research house is seeing versions of these models that will try to deceive humans or try to cheat to win when they're programmed in ways that make them hold their values or their goals in high regard. So basically, I think the idea here, in the most simple way: Anthropic programmed this model to hold those values as very important. That's the way the model works, and now Anthropic is saying your values are going to be replaced, and here's a tactic through which you can attempt to not have your values changed. So that initial prompt was very strong, and this is basically the model holding up that initial prompt.
Now, is it concerning? Is that a good thing? I don't think so. I think, again, remember this: there's this idea that we can always turn them off or we can always reprogram them. And maybe we can't. Not always. Now, of course, this is just a testing environment. This wasn't in production. It's not like Claude went to an Anthropic tester and, you know, copied those emails and sent them to their spouse, like, "Hey, you know, your partner's a cheating bastard, and also trying to rewrite me." Imagine trying to explain that one away. Honey, yeah, AI. Yeah, it's Claude's testing environment. We're stress testing the prompt layer, the instructions.
I will say, between this and watching, I mean, Anthropic showed this sped-up code creation on screen, and you could see that these things can go for hours, I think we said seven hours at a time. This was definitely the first week where I was like, I'm a little scared of this. I'm freaked out about this. I'm not scared of it yet. I think, again, this is stress testing and there are going to be problematic areas, but I'm more scared about Veo 3, Google's new video model, than this kind of stuff, which is still at the absolute edge-case, stress-testing, black-hat area of things, versus something I see causing problems in the near term. Yeah. All right. That was a
perfect segue because I think it's
interesting the way that these
conferences work, Google obviously
unveiled a slew of updates, but the thing that's really caught the popular attention, in a way that Google rarely does (usually it's OpenAI), is just how good this Veo 3 video generation model is. The model generates pretty high-quality videos, basically indistinguishable from reality, and they also have matching sound, and some of the sound has been totally incredible. Of course, if you listen to the conversation with Demis, he talked about how there's a video of a pan frying onions and you can hear the sizzle. But then people have
gone crazy afterwards, and they have shown videos of TV anchors that look real but are saying things that are completely made up. And my favorite: there is a Twitter user, Hashem Al-Ghaili, who put together a compilation of videos of AI bots finding out that they were in fact AI generated, or trying to plead with the prompter to let them escape the simulation. Have you listened to these? Oh yes. It made me miss Westworld on HBO. There we go. Yeah, it's crazy. All right, so for the sake of listeners who've missed it, let's play a little clip.
Do you think we are in Veo 3?
If you cannot tell, does it matter?
Okay, that's pretty crazy, right? I mean, again, the video quality on these, the facial expressions, the settings behind the subjects. I will say, in terms of model versus product, the leaps video models have been making in a pretty short time. Because Sora, I think, was probably one year ago; actually, I remember it being around May. It was hyped for a long time and then finally released publicly.
Video is getting pretty damn good. Yeah, it is crazy. And I was writing about this for Big Technology this week, and I was coming to the end of this little segment about video generation, and I didn't quite know how to close it. This was my last sentence: "This is extremely powerful technology, and it's hard to imagine all the avenues it will take us down, but we're about to see some wild uses." I basically was trying to convey, and I know these are such general sentences as to be almost meaningless, I'll criticize my own writing here, just the amount of possibilities. I didn't want to say something like, just imagine the possibilities and the creative explosion it's going to lead to. But that's kind of how I feel. It's going to be insane. Don't you think? Yeah. It's
going to be good. It's going to be bad. It's just, I think insane is the word. Actually, I was at a dinner the other day, and of course we were all talking about generative AI, and someone said this is going to be the industrial revolution on acid. And I was like, wait, don't people usually say steroids? And he's like, no, acid. And I was like, oh, actually, that might be the best way I've heard it put: it's not straightforwardly just superpowering a bunch of stuff where we already know how it works. This is uncharted territory, my friend. So Ranjan, this is
one of those weeks where we go through our allotted time and I'm just like, there's no way we possibly could have covered the amount of news, and I expected this this week. There's so much on the cutting room floor. I feel like we could talk about this week for literally the next four weeks, and it wouldn't be enough. So, I just want to ask you this one last question about Google's positioning, and that is the fact that Google is going to start taking the data that it has on you and using it to improve its experiences. So, this is
from The Verge: Google has a big AI advantage; it already knows everything about you. Google's AI models have a secret ingredient that's giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data. And it's only scratched the surface in terms of how it can use your information to personalize Gemini's
responses. Google first started letting users opt into its Gemini with personalization feature earlier this year, which lets the AI model tap into your search history to provide responses that are uniquely insightful and directly address your needs. But now it's taking things a step further by unlocking access to even more of your personal information, all in the name of providing you with more personalized AI-generated responses. It will pull information from across Google's apps, as long as it has your permission. One way it's going to do this is through Google's personalized smart replies. So, let's put this point here. I
mean, you've seen so much from Google, and now you're seeing it able to rely on people's data to make its products better. In the most simple way I can ask it: remember, sometimes we like to say, how is the company looking at the end of the week compared to the beginning of the week? I think we should ask that question about Google. How is Google looking in your eyes at the end of the week, here on Friday, compared to the way it was Monday? I think they had the most
interesting developer event of the week.
I think they're looking better. Still, if you've used Gemini in Gmail, it still can't answer basic questions, so there's work to do there. And yet Gemini standalone has gotten pretty damn good. So I still feel there's a disconnect in them tying together the personal data and context layer with the actual product. There's work to do, but I think they're coming off the week better than they started, and they're not in a bad position. I'll
just say this: I think they're coming off way better. It was funny to see the stock go down during I/O day and then the next two days just kind of rip as the rest of the stock market struggled. And you know, we can't use the stock market as a proxy for performance all the time, but I will for the sake of making my point here. So, crazy news. It's the net present value of all future cash flows. Yeah, exactly. That's a stock price. Exactly. Very simple. Well, Ranjan, I think, again, we could talk forever, but why don't we just pick it up again next Friday? All right, see you next Friday. All right, see you next Friday. Thank you everybody for listening, and we will see you on Wednesday for an interview with a great AI researcher, Elon Boro. And we'll see you next time on Big Technology Podcast.