Boom Times For ChatGPT, OpenAI’s Deep Research, AI Super Bowl

Channel: Alex Kantrowitz

Published at: 2025-02-07

YouTube video id: OblBUk5MWKk

Source: https://www.youtube.com/watch?v=OblBUk5MWKk

New ChatGPT growth numbers come in, OpenAI built a pretty good research assistant, and the Super Bowl fills with AI ads. We'll cover it all in a special edition of Big Technology Podcast, taped for our YouTube audience. Welcome to Big Technology Podcast Friday edition, where we break down the news in our cool-headed and nuanced format. We have a major week of news to cover, and some of our own to break, and we're joined as always on Friday, this time in studio at Spotify headquarters, by Ranjan Roy of Margins. Ranjan, nice to see you in person finally. Welcome back to the show.
Big Technology Podcast Friday edition is all cleaned up today. Alex and I are here at Spotify Studios, and we're sounding good. Usually we're both sitting in New York, or in some strange locale, having a conversation through the computer screen, but today we talk in person.

And we love to cover the news every week. This week we're going to break some news, or at least share some new data on ChatGPT that I've gotten from SimilarWeb, which shows ChatGPT's really interesting growth story. So we're going to start there, and then of course we're going to cover Deep Research, which both you and I have spent $200 to try, and then of course it's the Super Bowl this weekend. So we're going to talk about why these companies are spending money on Super Bowl ads and not on improving foundational models, and I have a feeling I know where both of our perspectives are going to be on this one, although we might be more aligned than usual. All right.
Here's the data from SimilarWeb. For quite some time, and I even wrote a story about this last year, ChatGPT had been flatlining. The growth had just completely stopped. You see a very quick run-up to 100 million monthly users, or, on the chart we're looking at now from SimilarWeb, which measures web traffic, to about 2 billion visits per month, and then it flatlines. It stays basically either down or just barely touching where it was in early 2023, four or five months after ChatGPT's release. And then there's an inflection point. I'm pretty sure the inflection point is when Sam Altman tweeted "her," because the moment OpenAI announces, not even releases, just announces that they have these superior voice chat capabilities, where you could talk, you could interrupt, it feels live, all of a sudden interest in ChatGPT skyrockets. We can see it in the chart, and for those at home, it's an inflection-point moment: Sam Altman tweets "her" and it goes from 2 billion visits per month to basically 4 billion. And that's when we start to see OpenAI announce that they've gone from 100 million users to 300 million users. So, Ranjan, I'm curious what you think about the boom times for ChatGPT. Mostly, how important is it for OpenAI that they've actually found something that's made their chatbot take off?
I think this reflects that OpenAI and ChatGPT are the Kleenex or Xerox, the household name, of any kind of generative AI. And the numbers: again, we heard 100 million to 300 million, but seeing this from a third party is actually pretty impressive. To see traffic almost triple from February or April 2024 to late 2024 is incredible, but it makes sense. They're the household name. Every non-core-tech person I talk to never talks about Claude, no longer talks about Bing, there was a brief moment when they might have, and only talks about ChatGPT. So I think it's good for OpenAI, but it's also good for generative AI in general. It shows it's becoming more of a regular thing.

So, my theory is that this whole brouhaha with Scarlett Johansson, when Sam tweeted "her" and people were talking with OpenAI's voice mode, or thought they could (by the way, they did release it, but only months later), generated way more interest in using ChatGPT. Now, there have been so many other releases since. They've done better models, they incorporated DALL-E, which is image generation, so that might have done part of it. They've also largely stopped hallucinating; the responses are definitely better. But I'm curious. It's really fascinating that ChatGPT just stagnated for almost a year and then picked up. So I'm saying it's the Scarlett Johansson thing. What's your perspective?
That's an interesting theory. I'll give you that, but I'm still going to disagree. I don't think it's Scarlett Johansson here. Again, this is reflective of the fact that throughout 2023, no one outside of tech talked about generative AI. In 2024 it became a thing. We've talked about this: that was when the hype cycle kicked into high gear. That's when everyone started thinking about it, when everyone started talking about it. It's in every single headline, and ChatGPT is the first place people go. It's literally shorthand for AI for everyone I know right now. So that makes sense. It reflects the industry, not just OpenAI.

One of the things that's interesting looking at these numbers is just how unevenly distributed the gains in AI have been. If we're looking at our SimilarWeb numbers again, these are web visits: Bing had 1.5 billion per month in February 2024. It had all of 1.85 billion per month in October 2024. You look at ChatGPT: it starts at 1.6 billion, and now it has 3.7 billion per month. So it's left Bing in the dust, and by God, the rest that you mentioned, Claude, it doesn't even factor. There is basically no consumer adoption for Claude.

Question here: is bing.com in the data the search engine as well?

That's the search engine.

Okay. So ChatGPT has surpassed the search engine, and the search engine really hasn't gotten much of a bump, even though it's delivering so much of the same services. So you're right, it really is the brand that makes the biggest difference here.
Actually, let's take a moment here to pour one out for Bing, because remember in 2023, when we would talk, we were Bing boys. Remember, Bing was on par with ChatGPT as kind of the face of whatever was going to happen in generative AI. Remember people having just the weirdest, wildest conversations with Bing. No one is doing that today. No one is stress testing Bing. Microsoft, I guess, went all in on Copilot and enterprise, but for consumer Bing, it was a good run. It was a good run, but we tried.

We do have cameras with us today, so allow me to just quickly address the audience. Yes, we were Bing boys, and we apologize for that. And if you're just joining us today or recently, let's wipe that out of our memory, and we're going to pick up as if that never happened.

I'm a proud former Bing boy. I'm okay with it.

Honestly, everyone goes through their Bing phase at some point, right?

Well, look, that's true. Bing was at its best when it was trying to steal reporters' wives. Once they neutered that capability, it was toast.

I mean, look at what happened. Really disappointing, a disaster. Sorry, Bing. But you're right. To me, oh man, it almost makes me question my normalcy, because I'm on Perplexity all day. I'm looking here at Claude. These are the places where I'm spending a lot of my day, and no one else is.

Maybe we're just ahead of the curve.

Hopefully. I like to think that sometimes.
But here, look, this is another thing to think about coming out of last week, when we talked about how DeepSeek came out. It's about as performant as OpenAI's reasoning model, it's much cheaper, and it shows you the full chain of thought. Well, actually, we'll get into that in a second. But it's about as performant and it's much cheaper, and we talked about how models don't matter. And if you're looking for the optimism about OpenAI, it's that they have a runaway success of a product in ChatGPT, and the numbers just really
push it forward. Yeah, no, no, I think
that's correct. OpenAI's greatest trick, and we've talked about this before, is that in the UI, they let the text stream out to you when it didn't need to. If you ever call via the API, it just gives you a block response. The streaming made people feel like this was something magical, like it was thinking. OpenAI has always been like that. We're going to get into Deep Research, but Operator is not a good product; it's a mesmerizing product.
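That streaming trick is easy to picture in code. A toy sketch, not OpenAI's actual implementation: the same text can arrive as one block (what a bare API call returns) or as a stream of small chunks (what the ChatGPT UI renders), and only the delivery differs.

```python
def block_response(text: str) -> str:
    # A bare API-style call: the whole answer lands at once.
    return text

def streamed_response(text: str, chunk_size: int = 4):
    # The ChatGPT-UI style: yield the answer a few characters at a time,
    # which reads as if the model were "thinking" out loud.
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

answer = "Models do not think; they compute."
chunks = list(streamed_response(answer))
assert "".join(chunks) == block_response(answer)  # identical content, different feel
```

In the real OpenAI API, the equivalent switch is the `stream=True` flag on a completion call; either way, the content that arrives is the same.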
It's a beautiful product. It's just not very good. So they still have a strong team, and now, with Kevin Weil as head of product, from Instagram and, briefly, Artifact, they're playing the right game in terms of product, I think. Financially, we can discuss that separately.

But last week we also looked at DeepSeek's performance, and we said, "Oh, this is bad, because they've commoditized OpenAI's model." Further data that I got from SimilarWeb tells another story, which is maybe even more concerning for OpenAI. We all saw DeepSeek go to the top of the App Store charts, and for me it was like, well, the App Store charts take into account hotness. How hot is your app? If your app is super hot, then you're going to go to the top of the charts.
But then you look at the traffic, and it's not only that people were downloading it; people were using DeepSeek a lot. A lot, a lot. This is again from SimilarWeb. Last week, on January 28th, ChatGPT had 139.3 million web and mobile visits. DeepSeek had 49 million. So it cut further into OpenAI's lead than any other company has been able to, and it built, overnight, about a third of the traffic that ChatGPT took years to build. And I think part of this is just the product. If you go to deepseek.com, and I can't recommend it, because you never know what's going to happen to your data there, you'll see the chatbot write out its full chain of thought, and it's mesmerizing. You see the reasoning work in a way that you only get bullet points for with OpenAI. And of course there was a lot of media interest driving this. But to see these numbers, and to see that it basically built a third of what ChatGPT has, which, again, took years, that to me might have been the most concerning thing for OpenAI: all of a sudden there's a challenger that might make ChatGPT not that verb, or noun, or whatever you want to call it.
Yeah, but the more interesting part of the numbers to me is, again, January 28th: 49 million visits versus 139 million for ChatGPT. That reflects just how quickly this can rise and fall, because that had to be driven by the media hype, by curiosity. It also makes me wonder how niche all this behavior still is, because I don't think it was normal people going to DeepSeek. It was all of us going and spending time and testing it against OpenAI. To get those kinds of numbers that quickly, that bounce, I think shows that this stuff is ephemeral. People can go anywhere. People can have a bunch of bookmarks up, and they'll switch to the next thing. If DeepSeek came out of nowhere and got to those numbers that quickly, and we'll see where it is in a month or two, I think it shows that no one has a competitive stronghold or any kind of lock-in on this stuff, other than us now paying $200 for ChatGPT Pro, which we'll get into.

Well, 49 million visits in a day to a website, that's not just the nerds. That is some part of the general population. And okay, if it is just the nerds, then what? The entire usage of ChatGPT is nerds times three. That's embarrassing.

That's what worries me. No, no. When I look at this number, I cannot believe any non-nerd, we'll go with nerd, any non-tech-forward person, was going to DeepSeek. So then, of the 139 million visitors to ChatGPT, what percentage is non-early adopters? That does make the addressable market of this a little more questionable.
Back to OpenAI. If we were worried about their models being commoditized last week, and if their chatbot can be commoditized, like you said, you can just go to a different website, and next thing you know ChatGPT is unseated, shouldn't there be alarm bells going off in OpenAI headquarters right now because of what we're seeing?

Of course. Definitely. To me, Gemini is the most interesting competitor in this, or even Microsoft. I guess it's Copilot now? What's the generative chatbot called?

It'll always just be Bing to me.

It'll just be Bing to me as well.

Once a flame, always a flame.

Because being where people already are and just injecting the chatbot layer is always going to be easier. And on the distribution side, actually, sorry, we haven't even mentioned Meta AI in all of this. When you have 3 billion users, you can always get some dramatic headline number. But having the chatbot integrated into where people already are is always going to be a natural advantage. And this is another case where we have not started to see that level of utilization for Gemini. But for OpenAI, yes, alarm bells ringing very loudly should be the case.

And so, as the siren went off, the OpenAI team, a merry band of characters, made their way to Reddit to answer questions from the town. It really feels like that's what happened.
They all did a Reddit AMA, and they gave some very interesting answers. It is really clear that DeepSeek put them on their heels, and they said as much in this AMA, which included Sam Altman, CEO of OpenAI, and Kevin Weil, head of product. I think we should just read the Reddit usernames, because they're fun to say.

I always love that. I gave a wedding speech once where I found marital advice on Reddit, and the best part of it was reading the entire Reddit usernames out loud to the whole audience.

Are you still friends with these people?

Yeah, he's one of my closest friends.

Okay. See, folks, what we're about to do is just going to bring us closer. So let's go to our good friend lol's inventor. Lol's inventor asks the OpenAI team, "Would you consider releasing some model weights and publishing some research?" And in response comes a remarkable statement from Sam Altman: "Yes, we are discussing. I personally think we have been on the wrong side of history here and need to figure out a different open-source strategy. Not everyone at OpenAI shares this view, and it's also not our current highest priority." We are on the wrong side of history on open source, coming from the CEO of OpenAI.
To me, it really pushes the point home that whatever happened last week, and everybody's been out there trying to bring it down and say this isn't such a big deal, it put the whole proprietary model industry, the OpenAIs and the Anthropics of the world, on their back feet, understanding that they are about to be passed by open source and they have to embrace it. Curious how you read the statement.

We're on video today, so viewers can see me just shaking my head. For a company with this valuation, it still kind of amuses me that whatever corporate communications people would normally be around are not, because this is just Sam Altman writing out loud, whatever thought went through his head at that exact moment. Maybe open source is the topic du jour, so let's say something big and controversial. But then he's even qualifying himself, saying it's not a top priority. That one felt to me like he was just stirring the pot a little bit, and I don't read too far into it, because it can't be their strategy. It literally cannot. Financially, if they open source their model and try to win only on the product and the UI alone, they will never be
What is it? Wait, what was the massive valuation that they're aiming for?

Yeah, $300 billion.

You're not going to be a $300 billion company when you're open source like that.

Yes, you will. Maybe it's the product that matters, and you open source your model, incorporate the best of open source, and grow that way.

Which I do argue regularly. But in this case, the way they have built themselves out, I don't think they will win on that.

I happen to like this about Altman. Get on Reddit, leave the comms people in the conference room, and just say what you feel. Go. Even if it's not 100% true, people like us can then break it down and explain to the people at home what we think is real. Thank you, Sam. Thank you, Sam, and your merry band of OpenAI gentlemen, signing on to Reddit.

Right, the nobles of the court bounced their way down to Reddit.
Okay, back on the rails. Here is TheorySudden5996. They say, "Let's address this week's elephant, DeepSeek. What do you think about it?" Sam Altman says it's a very good model: "We will produce better models, but we will maintain less of a lead than we did in previous years."

That, to me, is the biggest confirmation that what DeepSeek did really evened the playing field, when Sam is saying it himself.

Yeah. On one hand, the rational point of view would be: they see DeepSeek, they see R1, and they're just recognizing that this is the state of the industry. Potentially we need to go open source; potentially we will not have as dramatic a lead over our competitors. But then there are the GPT-5 comments down the road, and they obviously are still trying to sell the idea that GPT-5, whatever it is, and it's not going to be GPT-5.0, it's just going to be GPT-5, is going to be this earth-shattering AGI, whatever that is. They still have to sell that idea, and they're trying to.

I see you've been deep in the Reddit AMA, which always makes my heart warm. It really does make me happy. Where else can you get Sam Altman unfiltered? Actually, pretty much everywhere.
Okay, let's talk about chain of thought. One of the most interesting things DeepSeek will do when you go to deepseek.com is show you exactly the way it's thinking through a reasoning problem when you use its R1 model, where OpenAI just gives you some bullet points. I've had so much fun reading through this chain of thought, really seeing how the model thinks. And whether the model is actually thinking or just computing is a pretty fun debate we can have about what thinking is.
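For a sense of how a front end can show that trace: the open-weights R1 checkpoints emit their chain of thought wrapped in `<think>` tags before the final answer, so splitting the trace from the answer for display is a simple parse. A minimal sketch, with an invented sample completion:

```python
import re

def split_r1_output(raw: str) -> tuple[str, str]:
    """Separate an R1-style <think> trace from the final answer."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", raw, re.DOTALL)
    if match is None:
        # No trace emitted; treat everything as the answer.
        return "", raw.strip()
    return match.group(1).strip(), match.group(2).strip()

# Invented example of what an R1-style completion looks like.
raw = "<think>The user wants 12 * 12. 12 * 12 = 144.</think>The answer is 144."
thought, answer = split_r1_output(raw)
assert answer == "The answer is 144."
```

A chat UI can then render `thought` in a collapsible "reasoning" pane and `answer` as the reply, which is roughly the effect deepseek.com achieves.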
Maybe we'll come to that at another point. But the Redditors are asking: can we please see all the thinking tokens? Here's Sam Altman: "Yeah, we're going to show a much more helpful and detailed version of this soon. Credit to R1 for updating us." So here they are, literally admitting out loud that they've been pushed by DeepSeek. And Kevin Weil, the product head, says they're working on showing a bunch more than they show today, but "the problem is the more we show, the more we can get distilled." They're obviously still smarting; in their minds, DeepSeek has distilled some of their models into its own. I think this is great for the industry. I think this is really good.

Okay, so I'd said one of the greatest UI tricks of all time was OpenAI and the text streaming, making you feel like the computer is thinking. I think DeepSeek has pulled off the next greatest UI trick by showing the chain-of-thought processing. And again, as you said, we can maybe save the what-is-thinking debate for an ayahuasca retreat or something down the road.

Live on air. Live on air, of course.

But without getting too philosophical, it's again kind of a party trick, in the sense that these models always go through some logical iterations to get to that output. As you said, in the end it's actually just math, so the text representation you're seeing is still some kind of party trick, let's say. It's still a computation that's happening. But DeepSeek doing it, and I've seen Writer.com, an enterprise generative AI tool that we use, do something similar: it had sub-questions, and it showed you the different questions it was asking to get to the final answer. So other tools and models have done this. DeepSeek brought it to the general population, and it's brilliant, because it makes people even more attached to these types of tools. It makes them really think that they're thinking, which makes them more
usable. But in reality, I don't know if you tried this: when you saw, within that chain of thought, a step that didn't quite work the way you wanted, you can't just tweak that step. You're starting from scratch again. I'm sure there's some Twitter thread about how to prompt-engineer your way out of chain-of-thought reasoning, but in reality, it doesn't really give you that much help.

Yeah, but the chain of thought is really very cogent and so fun to read through. You see the model be like, "Nah, maybe that doesn't work." One of the cool things about DeepSeek is that the language is very casual, not so formal. Whatever they did to make that work is pretty impressive.

All right, let's talk about Stargate. TheorySudden asked how important the success of Stargate is to OpenAI's future. For listeners, that's the attempted $500 billion infrastructure build by OpenAI.

An announced $500 billion. More likely tens of billions, which is still impressive.

Kevin Weil says: "Yes, everything we've seen says that the more compute we have, the better the model we can build, and the more valuable the products we can make. We're now scaling models on two dimensions at once," basically the traditional LLMs and the reasoning models, "and both take compute. So does serving products for hundreds of millions of users. And as we move to more agentic products that are doing work for you continuously, that takes compute. So think of Stargate as our factory for turning power and GPUs into awesome stuff for you." Such a product-guy response.

Such a product guy. Also such a strange communications style.

Well, he spent years at Meta, and he's been on this show. That's just how he operates.
Yeah. Even on your episode last Wednesday with the VP of Omniverse and simulation from Nvidia, it's interesting to me how dogmatic people are about more compute meaning more intelligence, better outputs, better products. Kevin here is going down that same path: more compute is better. And I like that people are starting to recognize, and it's kind of a nice way of putting it, the two dimensions: there's the raw compute, in terms of getting better output, but also coming up with new techniques and ways to actually drive that output. But in reality, I think DeepSeek has shown us, and no, the number is not $5 or $6 million to build and train the whole model, but the actual training run was about $6 million, and we can probably take that at some face value, that the future does not only mean more compute equals better products. And I think a lot of people in the industry, but especially the people with the most vested interests, are still living by that rule right now.
Yeah, but I'm a believer. I think it's right. Let me tell you a quick story: once I tried to get Kevin Weil to leak me some information from Facebook. I've never been shut down by somebody so quickly in my entire life. So that gives you some context for his response. All right, let's talk about GPT-5. Ranjan, I feel like this brings you great joy, so do you want to take this one? Concheria had asked...

What does that mean, Concheria? Conch, like the shell, turned into a name? They have a bunny ninja avatar, whatever. We'll find out after the fact that it's something dirty. If you're listening, Concheria, please let us know the etymology of your Reddit username. Now I'm starting to feel weird reading these names, because I'm sure we'll find out it's some dirty term after this. Anyway, let's just read it.
So, they ask, "Will there be an update to advanced voice mode? Is this a focus for a potential GPT-5.0? What's the rough timeline for GPT-5.0?" I did like that they just called it 5.0 by default, showing how confusing the naming is. There have been a lot of almost hilarious aggregations of the series of model names from OpenAI, and I think this shows how ridiculous it is. So, thank you. Sam responds: "Updates to advanced voice mode coming. I think we'll just call it GPT-5, not GPT-5.0. Don't have a timeline yet." So at least they're streamlining the way they're marketing this to GPT-5, which I think is a good thing. And at least he didn't say AGI. I'll give him credit, because actually they need to announce AGI before we get to GPT-5, so I think that's why that did not make its way in there.
But I don't know. There was a long period of time where, for OpenAI to succeed, they had to get to GPT-5, whatever that would be. And to their credit, I think they've gotten to a point where that's not necessary anymore. The battle of the next year or two could just be over Operator and Deep Research and whatever other products, which makes me happier than anyone, that people are actually competing on product now. But I think it shows something that he's a bit cavalier about this, after, what was that tweet, about the night sky or something?

Oh, yeah. Sam wrote this really weird, cryptic poem that made us think something big was coming. But I think he was just writing a poem. Or maybe he had ChatGPT write him a poem and he was sharing it with the rest of us.

But whatever Sam was trying to communicate, or at least allude to, in the past, now I kind of like it that it's just: no timeline, we'll call it GPT-5, let's talk about other things.
Yeah. Again, I doubt we're seeing GPT-5 this year, or maybe ever. It'll just be versions of...

I think we see GPT-5 this year.

Oh yeah? End of the year, I think. If they don't have any killer runaway products, they kind of have to. They have to release something. And again, like whatever 4o became, 4o mini, whatever, they could have just called one of those GPT-5 and tried to build some hype around it, and we'd all go along with it. So at a certain point, if none of the products they're releasing catch on, and we are both paying $200 a month right now for these new products, so maybe they'll be okay and they won't need to. But if the pressure comes, I think they have to release something.
Okay, so you've mentioned multiple times that we are paying $200 a month. I mean, when I spend $200 a month on SaaS, I'm going to talk about it.

We did it so we could tell everybody, for you, our listeners, what ChatGPT Pro is all about. And so we will skip our planned segment on ChatGPT search and tell you about our experiences giving OpenAI so much money to use ChatGPT. OpenAI now allows you to spend $200 for a few things: unlimited use of ChatGPT; their AI agent Operator, which will go use your browser and do tasks for you; and then something that just came out this week, which we teased in the open, a new ChatGPT agent called Deep Research. By the way, amazingly, they decided to take the exact name of the similar product that Google has.
We'll cover that in a bit, but I found that fairly shameless and wrong. Let me read the story. "OpenAI is announcing a new AI agent designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform." This is from TechCrunch. I love how reporters still have to write that ChatGPT is "the company's AI-powered chatbot platform." Just in case you didn't know. TechCrunch, you're writing for a tech audience. What are you doing?

Yeah. Come on.

"Appropriately enough, the bot is called Deep Research. OpenAI said in a blog post published Sunday that the new capability was designed for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research." It could also be useful, the company added, for anyone making purchases that typically require careful research, like cars, appliances, and furniture. We have both tried it. It is, to my mind, the best research agent that you could use right now.
And Ranjan, I know you've been deep in the weeds, so I'm curious what you've been using it for. Last week you said Operator was interesting but not worth the money. Is Deep Research worth the money?

Yes. It's fantastic. It is incredible. I said last week that Operator is mesmerizing and useless. Deep Research is fantastic. Basically, I've been asking more market-research-oriented questions: what are e-commerce trends within a specific category? Look through Reddit. Look through different research reports. Look based on geographies. Even on an initial question, it will break the problem down and ask you good questions back, as though you're talking to an actual research analyst, and then it will provide an incredibly well-sourced set of bullet points and paragraphs, with hyperlinks embedded. It does an incredible job with this, and it makes smart arguments the way you would expect from, I don't want to say a PhD-level person, because I don't even know exactly what that would mean in terms of intelligence, but overall this to me was huge, and it did a good job. My litmus test on all of this is that there's a lot of generative AI where products come out and people, rather than looking at what is available today, talk about what it could be in the future. Operator definitely fell into that category. This, on day one, on day zero, actually delivered what it promised. And honestly, you have to figure, if you're in any strategy-oriented role, any business-oriented role, any research-oriented role, this becomes
incredibly valuable. Yeah, I I've used
it for a number of interesting things. I
use I was on CNBC Tuesday to talk about
Google earnings. So, I asked it to give
me an entire like prep document about
the state of Google. It was Yeah, so,
this is the cool thing about it. It
searches the internet, and it will, if
you ask it, it will give you current
information. Um and so, it like pulled
out like the projected ad spend, and
right now it just has text, but over
time, OpenAI anticipates it'll be able
to put charts in there, which I think
will be fascinating. And I thought, wow,
like
I won't prep for CNBC without this
again. It is really really good.
going to end with I won't prep for CNBC,
but Yeah, no, I know I do my I do my
prep. I work very hard on that and on
this, and I also had it give me a prep
for the podcast today. And so, I I
actually took last week's prep document.
So, for folks, we spend the week just
kind of dropping stuff in a Google Doc
that we find interesting, and now on our
Discord also, which has been quite fun.
And I just downloaded the prep doc last
week, and I
put it into the query, and I said, use
this as a reference. You can now go to
the internet and search our show, and
see what my episodes with Ron John look
like, and give me some topics to talk
about. And
at first, like, it went super broad and
gave me like what I would do if I was
doing like an AI overview podcast, and I
was like, no, I need only information
that came after February 2nd. And so, of
course, like, the top AI story of the
week is Deep Research, so it talked
about itself as the top story.
Oh, yes. Oh, no, you're good, Deep
Research. It's like a...
Look, it has selfish tendencies and
motivations, so it does really feel
human.
That's AGI. That's human. That's AGI.
And then it really broke down next
thing, Alphabet earnings, which again, I
was on CNBC to talk about. Um and, you
know, it says, "AI spend soars amid
DeepSeek challenge," and it talks a
little bit about what we're going to
talk about in a bit, just the capex
that Alphabet is going to go through to
try to build AGI. Well, AI, right?
So, I found that to be very good. And
then I also asked it to give me a report
on how to enroll in healthcare in New
York State.
Good luck with that.
I don't think AGI, I don't think
superintelligence, will help us with
that one. Well, I have a
question. Yes. Is there a moat for this
for OpenAI? It's a really well-done
product, and again, going back to my
general thesis that OpenAI's strength
lies in the product, and the models
shouldn't matter, and hopefully they
recognize that too, and if they'd only
invest in the product more. But it's a
good product. Can DeepSeek or Google or
whoever else do it? I mean, Google has a
product named Deep Research. We just
don't have access to it. I don't even
Oh, yeah, we do. Is it public?
It is public. Oh, have you tried it?
I tried it today. I had it put together
a similar episode plan. Who won? OpenAI
won. Okay. Although Google was good, but
but OpenAI won. So, the question of is
it a moat? No, I don't think it's a
moat. Yeah.
Okay. I switched my laptop over here
because um
I'm about to read a lot, and I don't
want to face away from the camera for
the entire thing.
I think that's right, and we can
proudly see our Apple devices here.
That's right. Even though Apple
Intelligence sucks.
Oh, so I did buy a new
Mac computer this week, and I I went to
the
to the Apple Store, and they're like,
"and have you heard about Apple
Intelligence?" And I'm like, oh god,
yes, I have.
By the way, Vision Pros, nobody's
anywhere close to them. Are they still
up in the Apple Store?
They used to have a special section, and
now they're off in a corner, and
legitimately no one cares about them. I
don't know what I would do if I walked
into the Apple Store and the sales rep,
with a smile on their face, came up to
me: "Have you heard about Apple
Intelligence?" I might be arrested.
Calm, cool, and collected.
Siri, calm. Siri, calm.
So, the post that I want to read is
from Ethan Mollick. It's called The End
of Search and the Beginning of Research.
He's a Wharton professor that's actually
quite good on AI, and he's been on the
show, and which I have to mention every
time we cite his work. It's just part of
the contract.
Part of the contract.
And
he makes this point that what we're
seeing right now is this combination of
a new mode of AI interaction called
reasoning, which we talked about, and
agents. So, let me read some of this,
cuz I do think it's so good. He says,
For the past couple years, whenever you
use a chatbot, it worked in a simple
way. You type something in, and it
immediately started responding word by
word,
or more technically, token by token. The
AI could only think while producing
these tokens, so researchers developed
tricks to improve its reasoning, like
telling it to think step by step before
answering. That
approach, called chain-of-thought
prompting, markedly improved AI's
performance. So, that's like the move
from traditional LLMs to reasoning.
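As a side note for readers, that "think step by step" trick can be made concrete with a tiny sketch. Everything here, the wording of the instruction and the example question, is an illustrative assumption, not something from the show or from Mollick's post:

```python
# A minimal sketch of chain-of-thought prompting: the same question is
# framed two ways. The plain prompt lets the model start answering
# immediately; the chain-of-thought variant asks it to emit reasoning
# tokens first, which is the trick Mollick describes. The instruction
# wording and sample question are illustrative, not from any source.

def direct_prompt(question: str) -> str:
    """Plain prompt: the model starts producing the answer right away."""
    return question

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought variant: request step-by-step reasoning first,
    giving the model intermediate tokens to 'think' with."""
    return (
        f"{question}\n\n"
        "Let's think step by step, and only then state the final answer."
    )

if __name__ == "__main__":
    q = "A store sells pens at 3 for $2. How much do 12 pens cost?"
    print(direct_prompt(q))
    print(chain_of_thought_prompt(q))
```

Either string would then be sent to whatever chat model you use; the only difference is the appended instruction.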
He says, "Reasoners are capable of
solving much harder problems, especially
in areas like math or logic, where older
chatbots failed.
The longer reasoners, and this might be
repetitive for people who are deep in,
but I feel like it's worth reading: the
longer reasoners think, the better their
answers get, though the rate of
improvement slows as they think longer.
This is a big deal because previously,
the only way to make AIs perform better
was to train bigger and bigger models.
reasoners are so new, their capabilities
are expanding rapidly. In months, we've
seen dramatic improvements from OpenAI's
O1 family to their new O3 models, and
that's where DeepSeek factors in.
DeepSeek's R1 model, the one everyone
went crazy about last week, was a
reasoner.
So, basically, what's going on with this
Deep Research he calls it he says, "Deep
Research is a narrow research agent
built on OpenAI's still unreleased O3
reasoner with access to special tools
and capabilities. You can see that the AI
is actually working as a researcher,
exploring findings, digging deeper into
things that interest it, and solving
problems uh like finding alternative
ways of getting access to paywalled
articles." And it goes on for 5 minutes.
Sometimes it can think for 5, 10
minutes. He ended up getting a 13-page,
3,778-word
draft with six citations and
additional references to one of his
queries.
This is the point I'm trying to make by
reading this.
I think what we're experiencing with
Deep Research, and the reason why it's
even a question that it's worth paying
$200 a month for, uh is because it is an
implementation of these new AI methods
that we're starting to see with with
Deep Deep Research, we're starting to
see with R1, and it might be that we're
just at the cusp of something very
interesting happening in AI with this
reasoning moment. What do you think
about this? I am reasonably excited
about it, the same way that Ethan
Mollick is.
No, I completely agree. I don't want to
be cynical about it, but to me, I'm
incredibly excited
about, again, watching what Deep
Research was able to do, and what that
means for certainly any kind of like
just general research-type stuff. And
OpenAI, kind of from a marketing
perspective, shoved in that you can
research couches, cuz they want to try
to have some more commercial aspect to
this, or more consumer-focused aspect.
But this
is going to happen. Like, this is going
to There's no question to me that
these type of models, these type of
actions will kind of reshape what the
web is, the way we interact with it, the
way we interact with most apps. And I
think that's good, and that's going to
completely rebuild so many areas and so
many things. I think the area to kind of
maintain some caution is: what does the
word agent mean? What does the word
agentic mean? Is this agentic? Is this
something else? I think that term is
still being thrown around a little too
cavalierly, cuz now they've kind of
gotten it to where a simple chatbot
query is an agent, which I don't think
is necessarily the case.
Just seeing chain-of-thought processing
from DeepSeek isn't agentic, but Deep
Research showing you that it's going
into, you know, a bunch of different
websites, and showing you which websites
it's going to, and showing you what it's
extracting from those websites, and
how it's compiling it, I think that's
huge. I think that's incredible in terms
of
showing people this is possible.
To me, the biggest change that I think
needs to happen
is letting people interact within that
process. Cuz right now, you kind of like
put in the prompt,
let it think for 20 minutes sometimes,
and then get something and then have to
revise it. But imagine you can actually
in the middle of all of that action say,
"Actually wait, I don't like that. I
like this." I think that will be a huge
change in terms of how useful this stuff
is. Not only that, it's going to learn
your tendencies. And the more you
interact with these things, like right
now the memory is just something that
they don't have. And that memory's
coming. So, they'll learn your
tendencies and next thing you know, uh
you're going to have like a research
assistant that really knows everything
that you want. So, just to think about
how much room there is to improve,
there's already so much going on now.
Mollick is pretty level-headed. Again, a
Wharton professor who's deep into AI. He
says, "These systems are already capable
of performing work that once required
teams of highly paid experts or
specialized consultancies. These experts
and consultancies aren't going away. If
anything, their judgment becomes more
crucial as they evolve from doing work
to orchestrating and validating the work
of AI systems." The labs, the research
labs he means, uh "believe this is just
the beginning. They're betting that
better models will crack the code of
general-purpose agents expanding beyond
narrow tasks to become autonomous
digital workers that can navigate the
web, process information across all
modalities, and take meaningful action
in the world."
It's pretty high praise. It is. I think
to me, I was thinking especially on that
shopping side of things and like
thinking, okay, management consultants
potentially replaced, or that industry
certainly changes. We have to do lots of
research in general, but we have very
specific parts of our jobs. For the
larger population, like, where
does this start to apply? And the
shopping thing, it's still weird to me
because how much of that does someone
really want to be
automated? Like it is the process that
the agent is going through, is that
actually the joy that a person
experiences? Is going around and
clicking on different websites and
reading through the reviews, is that
annoying and a pain or is that the part
of it that people actually enjoy?
I don't know. Do you like online
shopping?
I do. I do.
Yeah, I think most people get some
enjoyment out of it. And also, like,
you'll never feel that emotional
attachment to something you get if the
bot just got it for you.
Yeah, exactly. Like, it's the act of
doing the shopping or doing the research
sometimes.
It's the journey, not the destination,
right?
Oh, it is the destination. There's
definitely joy in sort of finding cool
stuff to go visit and then going and
doing it. Like if a bot's just doing
that for you, then it's just like, "All
right, well, I could have just Googled
it and went to the first result." Yeah,
so I think right now, and don't get me
wrong, the research, consulting,
strategy, journalistic side, this is a
pretty big opportunity and market. I'm
not downplaying that at all. But still,
who is using this and how, especially to
expand outside of that? It's still not a
trillion-dollar market. I mean, to get
to that, what are the use cases for
agents? Because again,
Apple Intelligence cannot find our
flight information in our email when you
ask Siri, which they pitched to us as
agentic. And that's my momentary Apple
Intelligence bashing. But, like, what
are actual agents being used for in
everyday life and for normal people?
People have not been able to articulate
that case well and I I'm still waiting
for that to happen. It might have to be
humanoid robots, going back to the
Nvidia conversation.
All right, all right. Humanoid robots
are always an easy sell, I think, for
anything.
Everyone's building them. But let me ask
you another question about what Ethan is
saying, which is basically that
consultancies aren't going away and
that the orchestration of AI is going to
be more important than what the actual
reports are.
I don't know. I've always been on the
side that AI will be creative in the
workforce and not destructive.
But uh I think you have to look at this
with clear eyes and that is that there
are going to be jobs that just
completely go away even if more jobs are
created over time. And seems to me like
this stuff is going to not maybe not get
people fired, but certainly make a
company think twice before hiring.
No, 1,000%. Actually, I think it was
Goldman Sachs, like a couple weeks ago,
they were talking about how an S-1
financial filing, which is an enormous
document, but was always kind of a
non-human, kind of like really
plug-and-play type of document, used to
take 2 weeks and 16 bankers and now can
be done in like 5 minutes. And again,
that all makes complete sense to me. You
have a bunch of data feeds and AI can
aggregate it and you just review the
entire thing. Like that's going away.
Management consulting, all the research
and grunt work goes away. And that's
good. And I mean, you can imagine, out
of all the job displacement, the least
sympathetic group when we say who's
under threat: the bankers and
consultants.
Yeah, the bankers. By
way, one of the interesting things I
don't know I'm sure you noticed this,
too. It's way more accurate than it's
been. Way fewer hallucinations. It gives
you sources, you can click through to
the sources, and the numbers are good.
Yeah, actually, that's a really good
point. Everything I clicked through was
100% correct, which was almost shocking
to me in terms of the output.
That's huge.
This meaningfully changes especially any
kind of job that involved opening a lot
of browser tabs and copying and pasting
text and synthesizing that text. That is
completely changed, and there's no way
to argue it. I get that saying
orchestrating and validating will keep
certain populations at least a little
less scared, but
this is big. This is huge. My internship
from 2009 just disappeared.
Half my life has disappeared right now.
Uh so, it's not just OpenAI, Google also
has a release this week. They released a
set of Gemini thinking models. Uh from
TechCrunch, Google is bringing its
experimental reasoning artificial
intelligence model capable of explaining
how it answers complex questions
to the Gemini app. The Gemini 2.0 flash
thinking update is part of a slew of AI
rollouts announced by Google this week.
Also, talking about the CapEx: the
company is planning to spend 75 billion
on capital expenditures, like growing
its family of AI models, this year. It's
a considerable jump from the 32.3
billion on CapEx it spent in 2023.
That's a lot of money. 75 billion?
It's like when Satya said, "I'm good for
my 80 billion," right? Sundar is saying,
"I'm good for my 75."
That, of course, had me go check where
Nvidia is right now. It's down 12% since
the DeepSeek announcement, and it's back
up a little bit.
The story of compute, the story of
Nvidia, the story of chip demand,
I think the one thing that was
interesting about the last week or so,
I mean, Mark Zuckerberg and Meta did not
show that they're, you know, moving away
from this CapEx spend. Google's coming
out and saying it. So, it's clear that
the, you know, the big the tech giants
are still taking this path. And OpenAI
still wants this path and saying
Stargate is very important. So, I think
it's interesting because the entire like
big technology industry has a vested
interest because if compute and CapEx
are critical, then only they can win.
So, this is going to be really
interesting to watch play out that as
long as compute and CapEx are critical,
they're the winners. So, they're going
to say that, they're going to keep
spending. And if someone, that's why I
still think and we talked about this,
DeepSeek was such a big story and
remains a big story cuz it showed that
that entire narrative can just collapse
on its own if smaller players come out
and do interesting things. That's right.
And by the way, I like went out on CNBC
and I said I think Google is on its way
to being the best-positioned company in
the AI race. And of course, they
promptly missed their cloud numbers and
went down double digits. So, um but but
I think they they do have so much
potential. The one thing they really
need to fix is the way that they name
their models.
So,
if you go to Gemini right now, there's
Gemini 2.0 Flash, there's 2.0 Flash
Thinking Experimental, there's 2.0 Flash
Thinking Experimental with apps, there's
2.0 Pro Experimental, there's 1.5 with
Deep Research, at least we know what
that means, 1.5 Pro, and 1.5 Flash. Why
do they do this? Is it just a joke?
Google, don't change. I love it. I love
it. I want Google to never stop
naming models. You know, OpenAI, I'm a
little disappointed in them. I think
their model naming convention is not
good. I want that from
Google. I don't If they ever had a
perfectly streamlined suite of products
with a beautiful name, I would question
everything. Remember Bard?
None of this stuff works. They've had a
thousand chat apps. Nobody uses Google
Chat.
G Chat was the greatest product of all
time, and I don't even... it's called
like Hangout Chats with Meet or
something like that right now. I want
to put my head right through the table.
They're spending 75 billion this
year. Could you spend 500 million and
buy an ad agency and just name this
stuff like normal human beings?
Get a subscription to Gemini 2.0 flash
thinking experimental with apps and ask
it to name your models for you. But I
don't know. With all the turbulence
and volatility in the world, Google
giving its models and products really
inconsistent names just makes me feel
just a little more at peace. This
makes you happy.
Don't change, Google. Don't change.
So, speaking of AI and job loss, there
was a great New York Times story about
Klarna over the weekend. Klarna is, of
course, a payments startup. It says,
"Why is the CEO bragging about replacing
humans with AI? Ask typical corporate
executives about their goals and
adopting artificial intelligence, and
they will most likely make vague
pronouncements about how the technology
will help employees enjoy more
satisfying careers or create as many
opportunities as it eliminates. And then
there's Sebastian Siemiatkowski, the
chief executive of Klarna.
He has repeatedly talked up the amount
of work they have automated using
generative AI.
Okay, yeah, that sounds familiar because
he was on the podcast and he was talking
about how much work they automated with
generative AI. Oh, that's a story that
catches my mind or catches my eye. Let
me scroll down. Okay, so this Times
story, as usual, cites one podcast and
another podcast, and then
it says this.
"When the host of the Big Technology
podcast asked why he was so intent on
touting Klarna's AI prowess,
Siemiatkowski said that partly it was
good for humanity. "We have a moral
responsibility to share what we are
actually seeing, that we're actually
seeing real results that are actually
having implications on society today."
Then he acknowledged that another part
of the motivation was self-promotion,
for sure. "We are regarded as a thought
leader." I was pretty stunned to see the
Times name our show in the story,
especially cuz they had so many podcasts
be nameless. So, thank you, New York
Times, for citing us. And I want to
point out this was Ranjan's question,
cuz we spoke right before and he goes,
"Ask him why he's talking about it."
Very interesting question, Ranjan.
Thank you for that.
Well, I'm glad the Times is finally on
it, that what is the motivation behind
bragging about replacing humans with AI.
I think
again, I'm glad. I genuinely am glad
that they're asking this question
because
the marketing impetus behind all these
pronouncements has to be questioned, and
that certainly applies to Sam Altman,
and that certainly applies to our entire
AMA discussion from earlier.
And I I did love that moment that there
was this big kind of pronouncement,
literally, the good of humanity, and
it's self-promotion, for sure. We're
regarded, and he even called himself a
thought leader, which, normally, I feel,
maybe
some people out there
use that seriously, but most people I
know do not use that as like a serious
term. And he kind of just was like,
"Yeah, we're a thought leader. We're
self-promoting." They are still, I
believe, looking at IPO.
This is a question that people should
start pushing on more: asking not just
about the content, but why someone is
saying it, the motivation. I think the
whole AI industry needs to ask that
question for every announcement.
Absolutely. I was stoked to see that
story. I was stoked to see the headline
be the exact question that you asked,
and I was surprised and grateful that we
were mentioned.
So.
Not by name, but by podcast. Okay, I
will take it. They put the podcast name
rather than the host name.
How do they not just say Alex Kantrowitz
from the Big Technology Podcast?
That would hurt. That would really hurt.
They would have to cry. Oh, come on,
Danny.
Yes.
Uh so,
we are on the cusp of the Super Bowl. If
you're listening to this, either the
Super Bowl has happened or it's about to
happen or maybe the Super Bowl is
happening and you're one of the few
humans on Earth that's listening to the
podcast as the game's going on, in which
case we appreciate you. Thank you for
choosing good content. Um guess who's
going to be in the Super Bowl? Of
course, the Chiefs and the Eagles, but
also OpenAI and Google. This is from the
Wall Street Journal. OpenAI set to make
its Super Bowl ad debut. OpenAI, the
artificial intelligence company behind
ChatGPT, again, I just like love the
soul coming out of the reporter having
to write that explanation, is expected
to air its first TV commercial during
Sunday's Super Bowl.
OpenAI's brand took off in late 2022
when it launched its wildly popular
chatbot, ChatGPT. The big game ad is by
far OpenAI's biggest foray into
advertising as the race to build the
world's most powerful AI technology and
win over users intensifies. And the
Hollywood Reporter also says about
Google, "Google bets that the Super Bowl
can turbocharge Gemini's ad business.
Google is planning a major Super Bowl ad
for its Gemini AI product line,
including a 60-second ad in the second
quarter of the game and purchasing 50
different 30-second ads in every state,
each one spotlighting a local business
that uses its AI software. That's smart.
Um
I was reading this news
almost instinctively saying, "Spend that
money on buying GPUs and scaling your
models." However, I think this is
brilliant on behalf of OpenAI and smart
on behalf of Google.
You got to get your products in the
hands of people, like we talked about at
the beginning of the show. People have
to use ChatGPT. People have to know
ChatGPT, and millions of people are
going to use and know and talk about
ChatGPT, especially if the ad is half
decent after it's in the Super Bowl. I
think this is the game. It's, whatever,
5, 10 million dollars really
well spent by OpenAI.
For OpenAI specifically, as
well, I mean, they clearly have been
moving towards a more formalized,
professional marketing function. In
early December of last year,
they hired the former Coinbase CMO,
who had been at Meta for 11 years and
was global head of brand and
product marketing for Instagram.
Actually, the whole suite of products.
So, serious marketer. So, I'm very
curious to see what they're going to do.
The The challenge for me
is twofold. One,
the AI ads to date we have joked about
have been terrible. Apple intelligence,
not to go back there, but if we
remember, they had all these ads of
basically people kind of like not
wanting to pay attention to people not
as important to them, so summarizing
their content in real time. But even
Google had a disastrous ad in the
Olympics, if you remember, where it's
like a little girl wants to write a
letter to her favorite athlete and the
dad uses Gemini to do it. Like, how
tone-deaf. Like, it's still one of the
greatest things I saw. I was like,
"These companies need to just hire
one person who's just normal and sits in
the corner and does nothing, but just
gets shown the ad and says, 'This is
terrible.'"
And I don't think it's a bad idea.
No, literally, just in the corner. They
get paid a lot of money, and they just
sit there: "Okay, that is just
terrible," or "Okay, normal people will
think that's good."
Oh, man, I go back and forth though
because the other side of this is
generative AI has a branding
problem. Like, when I talk to
most non-tech friends and family, they
still associate generative AI content
with bad. And like, that's the whole joke.
It's like, "Oh, that's so ChatGPT."
Did ChatGPT write this?
Which is a fair insult.
Which it is, but it's like calling kids
Wikipedia when they were saying generic
stuff.
Generic stuff. That's what I mean.
And to me, like, the actual products,
if you know how to use them, are so far
beyond that. That stereotype of
AI-generated content being overly
formulaic, that's like 2-years-ago
ChatGPT. So there's a
clear branding problem. How do you solve
this when people
have a negative connotation of the
technology, have a negative connotation
of you, the company? Like, it's got to
be a damn good ad, and I think the
crypto bowl of 2022, I believe it was
January, with the Larry David ads, the
Coinbase ads, remember the bouncing
Yes. They're brilliant ads, and they're
really well done, but like, I don't
think it helped. In fact, it
certainly was not a good moment
afterwards for the industry. It
certainly got a lot more people to put
their money in crypto, and then they got
the rug pulled out from under them. But
this is different.
Fair point.
Like, I don't think we're going to have
the same scam.
No, no, no. To me, that part isn't the
scam, but how do you solve this branding
problem?
I think, like, if you're Kate Rouch, the
CMO of OpenAI, you're sitting in a room,
you're like, "We have this branding
challenge." I hope they recognize it.
How do we overcome it?
I am very excited to see what this
commercial looks like.
What's your best guess of what it's
going to be? I'll give you mine.
All right, go.
Maybe you're going to have Shaq and
Charles Barkley sitting at the Inside
the NBA desk, and they're like saying
nasty insults to each other that
ChatGPT is giving to
them. Or maybe something with voice. Maybe
it's just somebody like driving in the
car and like having a conversation with
ChatGPT. And it'll be like a Snickers
commercial. It's like, "Bored?
You know, bite into a ChatGPT."
Oh, all right. Hold on. Do you know the
most successful AI thing that converted
AI skeptics I saw? I don't know if you
saw, like, "ChatGPT, roast my Instagram
profile," where you literally just
screenshotted your grid and put it in.
And that was the moment that I saw a lot
of people being like, "Wait, this is
genuinely creative. It's not formulaic.
It's actually funny and interesting and
and creative." So, I think that that
would be mine, roasting Sam Altman
or other famous people letting their
Instagram profiles get roasted by
ChatGPT. That's my ad. Yeah, okay. So,
basically, we both agree that it's some
form of AI roasting humans, and it's got
to be funny. It's got to be good. I
don't
think trying to tug at heartstrings in
any way works.
Google's going to do that.
Google's going to do that, I'm sure.
It's going to be like a
kid and a grandmother, you know, just
trying to communicate with each other,
and then Gemini will solve it.
But the weirdest part: like, I
introduced my dad to Gemini voice. He
has Parkinson's and has trouble typing
into his phone. And it was this
emotional moment. Like, it genuinely
could have been the commercial right
there.
Yeah, it could have been. And that is
sitting there, and somehow it's still
going to get screwed up.
They'll mess it up. Somehow.
They have made some really beautiful
search ads in the past.
It's my favorite ad of all time, and I
realized how old I was when I brought
that up to some younger people: Parisian
Love. It was
from 2009. It's a Google search ad, and
it really made Google search emotional,
where someone goes through the process
of studying abroad, falling in love,
getting married. It was
amazing. If they can pull off the 2025
Parisian Love, I'm betting it all on
Google.
If they can pull that off... I wouldn't
bet against their ad agencies. Okay.
Okay.
Uh
we need to get out of here, but uh who
do you think is going to win in the
game?
I'm a New England Patriots fan.
Yeah.
I don't want Mahomes to three-peat. So,
I want the Eagles to win, but my God,
the Chiefs somehow they always do it.
So,
I
will grudgingly
bet that the Chiefs will win.
And I am a Jets fan and I want Tom
Brady's legacy, especially his and Bill
Belichick's legacy to fall apart. So,
I'm taking the Chiefs.
And uh All right. I mean, I'll take the
Eagles just to take the other side and
that's where my heart lies, but What's
your prediction on uh what happens at
halftime? We got Kendrick Lamar coming
out. I have this feeling that Drake is
going to come out. They're going to hug.
Oh.
And then they're going to both
take out fake guns and shoot them and
it's going to say "Bing."
Wait. As in, as in...?
The search engine. As in Bing.
That could be the most aggressive call
of all time. And if you are correct
about that, I mean, it's time to retire.
They should hug it out on stage. I
could see it. If they can do it, world
peace will happen.
Literally, We Are the World comes on,
just like Stevie Wonder at the Grammys,
and Kendrick and Drake sing it together.
Canada and the US friends again.
Let's bring it. Let's bring peace at the
Super Bowl. Peace to all of us. Yes, as
the Eagles and the Chiefs go at it.
That's right. All right. Well, Ranjan,
great to see you in person. This has
been so fun.
This has been fun.
Let's wave to the people at home.
All right, everybody. Thank you for
watching us or listening to us. We do
this every single Friday breaking down
the week's news. Sometimes we break some
news and we hope you join us. If this is
your first time watching the show, you
can subscribe to us here either on
Spotify or
whatever app you use to get podcasts and
on Wednesdays I'll do one-on-one
interviews with people in the tech
industry, and then Ranjan and I will be
back every Friday. So,
that'll do it. Thank you for listening
and we'll see you next time on Big
Technology Podcast.