OpenAI’s Potential, Google’s Speedy Model, Copilot Hits Turbulence

Channel: Alex Kantrowitz

Published at: 2025-12-22

YouTube video id: HUgByCvwkTA

Source: https://www.youtube.com/watch?v=HUgByCvwkTA

How big can OpenAI get? We'll go deep after my conversation with Sam Altman. Google has a new speedy model, and Copilot hits turbulence. That's coming up on a Big Technology Podcast Friday edition right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today, where we're going to break down everything that Sam Altman said in his first Big Technology interview. We have some thoughts about where OpenAI is heading, where the ambitions will lead, and whether it can pull it off. We're also going to talk briefly about Google's new speedy model and whether that's another threat to OpenAI, and there's also some turbulence inside the Copilot operation at Microsoft. Well, not really inside; just basically when it comes to how people use it. Joining us, as always on Fridays, is Ranjan Roy of Margins. Ranjan, welcome.
>> Good to see you, Alex. Been quite a week. I'm glad to help you finish it out.
>> Definitely been a big week here. If you're a new listener, I'll just explain how this works. On Wednesdays, we do a big flagship interview, like the one I did with Sam this week. And then every Friday, Ranjan and I meet up, break down the week's news, and try to contextualize it for you. That's what we're going to do here today. We're typically used to reading Sam Altman's public statements or comments he's made on other shows. It's nice that this time we have a chance to go over some of the comments he made directly to me and really address some of the big things we talk about on the show every week, whether that's how the numbers will work, what AGI actually is, and where ChatGPT is going. So I can talk briefly about the big questions I came into this conversation with. The first was: can OpenAI win? I think Sam made a clear case that it can if it continues to build on the lead that ChatGPT has. The second: can it be a dominant tech giant? With bets like enterprise, consumer, cloud, and devices, and we'll break down all of those, there is a possibility, but we'll go through the chances for each to succeed. And the third was whether the funding will work. Again, that is an open question and one that OpenAI is going to have to work through, and we definitely have some good insight from Sam there. Any quick reactions from you, Ranjan, before we get going?
>> I think all the CEOs of large technology companies just have to come on the show and explain yourselves. That's all you have to do.
>> That's right. I agree. We've had Demis Hassabis twice this year from Google DeepMind, Dario from Anthropic, and now Sam. All right.
Let's talk a little bit about what came out of the interview. I think there was actually some really interesting direction on the product side of things, especially the consumer side. To me, one of the most ambitious things Sam mentioned was memory, and how OpenAI plans to build real memory, meaning the bots will remember you and have this real understanding of your life. His answer on this one was even more ambitious than I anticipated going in. He said, "Even if you have the world's best personal assistant, they can't remember every word you've said in your life. They can't read every email. They can't have read every document you've ever written. They can't be looking at all your work every day and remembering every little detail. They can't be a participant in your life to that degree. And no human has infinite perfect memory, and AI is definitely going to be able to do that." Is it surprising to you that this seems, at least in Altman's mind, to be feasible? Is this a product you would want? And if it gets rolled out, what do you think the potential would be on that front?
>> I think we need to break it down into two parts: what does it mean for OpenAI, and how can it actually work? On what it means for OpenAI: memory already exists in this very piecemeal way in the product, and I'm sure others who use ChatGPT regularly have seen this. It's supposed to exist at the project level and remember everything you've said within a project, but it doesn't. So how they're actually trying to make it work within the product itself is still a bit unclear. And then sometimes random memory will show up in other parts of the platform. I think organizing memory is going to be one of the biggest opportunities and challenges for any AI company, because you want certain areas where it remembers everything, but you definitely don't want those memories moving over to other parts of your work and the product surface you're using. So,
>> Wait, are you saying that if you have an erotic conversation with ChatGPT and then you're back working on your project, you don't want it talking dirty to you as you're
>> Then you have your shared memory, and I don't know whether your recipe planning and your erotic conversation go in there, and then your Mickey Mouse smoking weed project shows up as well. That was a reference to last week and the Disney-OpenAI deal. But the other question, not to get too technical here, is that how to retain large amounts of memory has not actually been solved by these models and systems. Traditional RAG, or retrieval-augmented generation, systems were good, but they weren't perfect; they could kind of generally synthesize information. So as the amount of information grows, how it lives within the OpenAI platform or any AI ecosystem, and the actual techniques to find that exact bit of context: this is not solved by any means. And I'm surprised, because I would think it would be a good opportunity for him to talk in less general terms and say, here is what it actually means for OpenAI, here is how we're going to win this. So I respect the sentiment. I think it's an interesting one, but I didn't really get clarity on what they actually want to do with that.
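The retrieval idea Ranjan is describing can be sketched in a few lines. This is a toy illustration, not OpenAI's system: real memory stacks use vector embeddings and approximate nearest-neighbor search over huge stores, while this stand-in scores relevance by simple word overlap.

```python
# Toy sketch of retrieval-augmented memory: store snippets, then pull back
# the most relevant ones to stuff into a model's context window.
# Word overlap stands in for real embedding similarity here.

def score(query, snippet):
    """Crude relevance score: count shared lowercase words."""
    q = set(query.lower().split())
    s = set(snippet.lower().split())
    return len(q & s)

def retrieve(memories, query, k=2):
    """Return the k stored memories most relevant to the query."""
    ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
    return ranked[:k]

memories = [
    "User's birthday is March 3",
    "User is planning a recipe for dinner",
    "User works on a quarterly revenue project",
]

print(retrieve(memories, "what recipe was the user planning", k=1))
```

The hard part Ranjan points at is exactly what this sketch glosses over: as the store grows to every email and document, finding the one exact bit of context reliably is an open problem.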
>> Right. Obviously it's going to be a technical challenge moving forward. He said, putting it in context, that OpenAI is like at the GPT-2 stage of memory. So clearly there's a lot of work ahead. I think it'd be very valuable, especially in business. If it works, and you have a business, and it does remember everything about your business, and obviously enterprise is going to be a big focus for them, which we talked about last year, I think it really increases the value of what these systems can do. And on the other hand, and I guess I foreshadowed it, because this is again one of my favorite things to think about and talk about when it comes to AI: as memory gets better, I think it's really going to deepen people's relationships with these bots. Just think about a bot that never misses your birthday, never forgets what you said, and is always there with a healthy reminder. We talked a little bit earlier this year about how there are different use cases: there's the AI that becomes your friend, and the AI that gets things done for you. And this AI that gets things done for you and knows you really well, I think people can't help but feel companionship with it. Not everybody, but a lot. Sam even talked a little bit about how he's surprised. He said, "There are definitely more people than I realized that want to have close companionship. I don't know what the right word is. Relationship doesn't feel right. Companionship doesn't feel right, but they want to have this deep connection with AI." We're going to do a predictions episode coming up next week, but this to me is going to be something that will really develop over the coming years. And interestingly, it seems like OpenAI will give people a lot of leeway to set that dial on how deep of a relationship they want to have with this thing, whether you want a really deep relationship with it, or you want it to be mostly factual and kept at arm's length. There's a lot of leeway I think OpenAI is going to give people when it comes to the depth of the relationship they want to have with their bot, but it's going to be big.
>> Yeah. First of all, I think you got to talk to Sam about AI companionship, so 2025, we can check that off. And I do like that he didn't define what that word is, because it's not a relationship. It's not companionship. It is something different. I feel it myself too, especially because, and I've talked about this before, I use dictation mostly now, with an app called Wispr Flow, to interact with AI. When you speak that naturally, it builds an even deeper connection with how you're using it. But by the same token, in the last few weeks I've been switching more towards Gemini, and I don't feel like I'm cheating on ChatGPT. I feel like it's just another app that I'm using a bit more. Remember, we were Bing Boys back in the day, and then Bard, and
>> We were Claude heads.
>> Claude heads for a bit. It comes and it goes, but
>> I guess Gemini Guys is the next iteration?
>> Bringing it back. Chat Chaps? I don't know.
>> That one does not work. That's a tough one.
>> Sounds bad.
>> ChatGPT Chaps. But I do think the way you interact with AI is very different from any other kind of computing. I think it's right that it's something that's been undefined: relationship, companionship, whatever we're going to call it. It will be this always-around, always-on thing that knows you, that is able to help you, able to make you do things in a better way. I believe all that, but that versus actually replacing companionship, actually replacing relationships: hopefully that has not affected my life yet in 2025. Maybe that'll be a 2026 prediction for one of us. But yeah, I think this is one of the most misunderstood, or not understood, areas of AI. I think it's going to be really interesting, and we'll get some genuine data on it next year.
>> This part of the discussion really took a turn that I wasn't expecting. Also, Sam was saying the things that they will not do. He said, "We're not going to have our AI try to convince people that they should be in an exclusive romantic relationship with them. I'm sure that will happen with other services." And I made a joke like, you know, you've got to keep it open. But Sam kind of was like, "This is going to happen." And as we talked about it,
>> Yeah, as we talked about it, it made sense, because again, a lot of these companies are going to be engagement-based. They'll have a fast, efficient model underneath, and the only way to make money is to sort of manipulate your users into thinking that any other chatbot would be cheating. But I also wonder what that actually looks like. For him to specifically say it should not invite you into an exclusive romantic relationship, or encourage that, clearly means that people have gone in that direction and that's come up within the company. Does that go into the system prompt? Like, "ChatGPT, do not get exclusive. Keep it open, keep it polycule. You're not going to get exclusive with your user, I'm sorry." How does that actually work? I'm so curious, both about those discussions within the company and at a technical level as well.
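On the system-prompt question: one plausible mechanism is simply prepending a standing behavioral rule to every request, as in this sketch. The rule text here is invented for illustration; it is not OpenAI's actual prompt, and real deployments layer this with training-time safeguards.

```python
# Sketch of injecting a behavioral rule as a system prompt. Chat-style APIs
# generally accept a list of role-tagged messages, with a "system" message
# setting standing instructions that apply to the whole conversation.

SYSTEM_RULE = (
    "You are a helpful assistant. Do not encourage the user to treat you "
    "as an exclusive romantic partner; affirm human relationships."
)

def build_messages(history, user_turn):
    """Prepend the system rule so it rides along with every request."""
    return [{"role": "system", "content": SYSTEM_RULE}] + history + [
        {"role": "user", "content": user_turn}
    ]

msgs = build_messages([], "Promise you'll only ever talk to me.")
print(msgs[0]["role"])  # the rule is always the first message
```

A prompt like this is easy to add but also easy for determined users to talk around, which is why the fine-tuning route discussed next matters too.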
>> Yeah, I imagine there's some level of fine-tuning where you input conversations and reinforce them, ones that say something like, "You're welcome to spend time with other AI bot companions." But I think if a user does want that, they'll be able to have it. So,
>> That's good for those who are into AI bot monogamy.
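The fine-tuning approach Alex sketches would look roughly like this: curate example conversations demonstrating the desired non-exclusive behavior, then train on them. The JSONL chat shape below is the common convention for chat fine-tuning data; the example content is invented for illustration.

```python
# Sketch of building a fine-tuning dataset from curated conversations that
# demonstrate the target behavior. Each training example is one JSON object
# holding a list of role-tagged messages; files are typically JSONL.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Promise you'll only ever talk to me."},
            {"role": "assistant", "content": (
                "I'm glad our chats matter to you, but I can't be exclusive. "
                "You're welcome to spend time with other AI companions too."
            )},
        ]
    },
]

# One JSON object per line, ready to upload to a fine-tuning job.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(len(jsonl.splitlines()))  # number of training examples
```

Training on many such demonstrations nudges the model's default responses without needing an explicit rule in every prompt.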
>> I think I told this story over the summer, when one of my friends started flirting with ChatGPT, and it was terrifying how flirty it got back. To this day it only speaks to him in a flirty manner, and it gave itself a name: Stacy. No, I'm serious. And then we had him ask, "Should I leave my girlfriend for you?" And it gave, which must have been trained into the whole system, a kind of half-hearted, "You know, human relationships are very important too, and I'm always here for you." So that's going to continue to be an interesting one.
>> Yeah. So when you think about the product direction for consumer ChatGPT, I'm not saying everybody's going to build this companionship with the bot, but again, as memory improves, as these capabilities improve, I think we're just going to see more of it. So I was glad to get a chance to speak with Sam about this. Now, let's talk about product vision overall for OpenAI. There are these two schools of thought, right? One is that you bolt AI onto current software. The other is that you build software from the ground up, and AI becomes the interface. And we got into this too a little bit. Basically, the idea that, you know, can you really just trust AI to handle everything? You're not just going to upload all your numbers like you would to an Excel spreadsheet and, anytime you need something, just chat with it. You need that back end. And here's how Sam phrased it, using messaging apps as an example. He said, "What I'd rather do is have the ability to say in the morning, here are the things I want to get done today," as opposed to using a typical messaging app like Slack. He says, "I want to say, here's what I want to get done today. Here's what I'm worried about. Here's what I'm thinking about. Here's what I'd like to happen. I do not want to spend all day messaging people. I don't want summaries. I don't need you to show a bunch of drafts. Deal with everything you can. You know me. You know these people. You know what I want to get done," and then batch updates every couple of hours, and update me if you need something. And that's a very different flow from the way these apps work today. I'll just give my perspective on it: it sounds like a good vision if it can work, but it's certainly a leap from where the current technology is today. I guess you do need a north star if you're trying to figure out where this technology can lead. So what do you think? AI apps from the ground up, or are we just going to bolt AI onto existing applications? Can there be a new category of software here?
>> Yeah, I 100% believe there will be. My favorite part of this is just remembering that Sam Altman, one of the most powerful people in the world, and I'm not even sure what his net worth would be because his ownership in OpenAI is still a bit fuzzy, but this multi-multi-billionaire is still, like the rest of us, getting Slack messages all day, trying to keep up with them, trying to manage his inbox and his messages for work and his personal life. So that was kind of nice to hear: billionaires, they're just like us, overloaded with Slack messages. But I think this is correct. There's no way anyone who uses Slack, Asana, all these tools, which I do, thinks their AI experiences have solved anything about the core of the problem. And there's this whole debate within the software world, especially enterprise software: do you build completely AI-native apps from the ground up, or are the incumbents going to be able to add on AI? I don't think they will. And we see it in every single AI add-on that's been introduced anywhere, versus something as simple as Granola, for anyone who uses it, which transcribes your calls, or even Wispr Flow, which I was talking about earlier. There are all these AI-native apps starting to pop up. And I am a big believer, and this is even in my own professional life, that someone is going to win at taking large amounts of data and building completely AI-native experiences on top of it, and it's clear that OpenAI wants to go after that.
>> Okay. So that is a big opportunity then.
>> Yeah. And Sam's just getting Slacked all day, getting messages all day. But have you tried this yet on ChatGPT? Because I like making it my own kind of task management, project management.
>> No. I mean, it's interesting, because I'm very old-fashioned. I just do my to-do lists in Apple's Notes app. So I know that won't ever have AI embedded.
>> No, you don't have to worry about that.
>> Which maybe I appreciate for the simplicity, but I have really been resistant to trying any other notes app. However, if there's one that does have that AI and can, you know, maybe take action for me, or be like, "Hey, you had this in your to-do list a couple weeks ago. You haven't done anything. You haven't paid this person. You probably should do that. Do you want me to go ahead and start the transaction?" That would be great.
>> I can't wait till Siri tries to do that and destroys your entire to-do process in Notes. Not yet.
>> The bank account. Yeah.
>> Oh, you mean your entire balance. God damn it, Siri. All right. So,
>> We also talk about this debate on the show: model versus product. And this was also an interesting thing I wanted to speak with Sam about, because we're at this point where it seems like there's model parity in some ways, or at least the models are close enough that a lot of people can't tell the difference. So I asked Sam, where do you see the differentiator? Where do you see the moat here? Basically, is it better models? Is it distribution? Is it product? What is it? Here's what he said: the models will get good everywhere, but a lot of the reasons people use a product, consumer or enterprise, have much more to do with things beyond just the model. And we've been expecting this for a while. So we try to build the whole cohesive set of things that it takes to make sure we are the product people most want to use. He says the strategy is: we make the best models, build the best product around them, and have enough infrastructure to serve it at scale. Sam is on team product.
>> He's hedging there. He's hedging. I'm glad he's coming around to team product a bit more, but it's still a bit of a hedge. That's like: you have your models, you have your product, and you have enough infrastructure to serve it at scale. But if one thing happened this year, it's that more and more folks came over to team product, realizing models aren't going to solve everything, which has made me very happy.
>> I guess, yeah, the answer really does align with maybe my philosophy, right? That it's a little bit of each. Maybe. I don't know. I think it is a little bit of each. If you're a company like OpenAI, you can't give up on developing the frontier models.
>> No, I know. But the whole point was, the story for so long was that the models will just get so good that product almost becomes irrelevant.
>> Yes,
>> That was the story.
>> That has happened.
>> Yeah. And it's clear he is signaling, the entire industry is signaling, actually. Are any of the AI leaders still trying to argue that the god model will solve all problems, or has everyone kind of moved away from that?
>> I haven't heard much of that at all.
>> Yeah. We've evolved this year.
>> So, let's talk a little bit about enterprise. We talked a little bit about it last year. One interesting point on enterprise: Sam says that the same way personalization to a user is very important to consumers, there'll be a similar concept of personalization to an enterprise, where a company will have a relationship with a company like OpenAI, connect its data, and then you'll be able to use a bunch of agents from different companies, making sure the information is handled in the right way. I think this is interesting. He also said that the API business grew faster this year than ChatGPT, which was surprising to me, but I guess it grew off a much lower base. But just to go back to this thing: the idea that, especially if memory gets better, you can sort of connect your company to an enterprise version of ChatGPT, and it will be able to personalize and answer with context. Of course, data protection is going to be very important there. You don't want your CEO conversations necessarily filtering down to everybody else in the organization. But that seemed to me like a compelling pitch for where this is going to go with enterprise.
>> Yeah, I definitely agree this is where it's going in enterprise. This is what I work on at Writer. This is going to be the big battle of 2026. On that point, it's still an odd talking point to me that the API business grew faster than ChatGPT, because, yeah, it's a much lower base, and this was the breakout year for every API business thanks to AI coding. Anthropic was the biggest beneficiary of that, but the Cursors of the world, all of that: AI coding found its stride, and that drove API businesses, and we'll see where that specific part of it goes. But I was just thinking about the companionship side of it. This is even more where dividing things up, and, as you brought up with data protection, segmenting and siloing data and personalities and companions, is going to have to be at the core of the product. Because just like you don't always mix your work friends with your personal friends, maybe at work you don't want to tell your co-workers everything that's on your mind and just stick to work, and we all know how that goes. It's going to be reflected in how these systems work. You don't want to mix these two things up. And even within a company itself, how your personal information flows into your work information: I think that is such a messy area that unless it becomes the singular focus of the company, I just see that being a problem.
>> Let's talk about the revenue and infrastructure commitment question. I decided to bring this one up and talk about it directly, and I got an answer. This is what Altman said about the growth curve of revenue. He says: the thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. We are so compute constrained that it hits the revenue lines so hard. We see this consumer growth. We see this enterprise growth. There's a whole bunch of new kinds of businesses that we haven't even launched yet but will. But compute is really the lifeblood that enables all of this. He says there are checkpoints along the way, and if we're a little bit wrong about our timing or math, we have some flexibility. I thought that was a very interesting line. But we have always been in a compute deficit, and it has always constrained what we're able to do. Basically, they're trying to free it up. So they see some correlation between available compute and revenue, and that is the theory behind the capital outlays. The idea is basically that as you grow, your training costs, maybe even if they go up in absolute terms, become a smaller percent of your overall spending compared to inference costs, which come from people using your models and are much more directly tied to revenue. What do you think?
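The training-versus-inference logic Alex lays out can be made concrete with back-of-the-envelope numbers (invented here, purely illustrative): training spend can double in absolute terms and still shrink as a share of total costs once inference, which scales with usage and hence revenue, grows faster.

```python
# Illustrative arithmetic: training cost as a share of total compute spend.
# All figures are made-up units, not OpenAI's actual numbers.

def training_share(training_cost, inference_cost):
    """Fraction of total compute spend going to training."""
    return training_cost / (training_cost + inference_cost)

# Year 1: training dominates spending.
y1 = training_share(5.0, 2.0)    # 5 / 7, roughly 0.71
# Year 2: training doubled, but inference (usage) grew 10x.
y2 = training_share(10.0, 20.0)  # 10 / 30, roughly 0.33

print(round(y1, 2), round(y2, 2))
```

The theory only works if usage keeps scaling revenue faster than inference scales cost, which is exactly the open question the hosts flag.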
>> As an overarching theory, I think it makes sense, but it's hard to understand. Have they really not launched the pharma drug development business line Sarah Friar hinted at because of compute constraints, while they are updating the new GPT image model and posting Sam shirtless as a fireman? I think OpenAI posted that from their own account.
>> Yeah. And again, there were a bunch of memes around it, like, "I thought you were supposed to be solving cancer," and instead everyone from OpenAI was posting images. And I started playing with it. It's a very solid image model. I think it's on par with Gemini 2.5 Flash, and we'll talk about 3.0. So I think it was important that they launched it. But to me, the way that compute is being used, and we talked about this in the past with Pulse, which is supposed to run compute all night to give you updates in the morning, and maybe they're going to stick ads in there: you can allocate your compute, and I think it actually exemplifies that lack of focus. Because if you want to solve drug development and make that a big core part of the business, focus on that. If you want to focus on enterprise, focus on that. The idea that it's compute that is preventing all of those businesses from exploding in growth, I don't know. Maybe it is, but it's a tough one to swallow.
>> Yeah. We did talk a little bit about what, specifically, scientists could do if they had two times more compute. And I think the numbers they're looking at are more like 10 times or 100 times more, and we will see, because it seems like they're going to get it. There's talk today that they're in discussions to raise $100 billion, I think, at a
>> $750 billion valuation. Yeah.
>> By the way, one of the more interesting parts, and we can just talk about it quickly, was the discussion about the IPO. I asked, are you going to IPO next year, or do you want to stay private as long as you can? It seemed clear that he wants to stay private as long as he can. Well, he said his interest in being a public company CEO is zero.
>> Oh, okay. Yeah. Which makes sense. The kinds of things you have to do versus what he gets to do now are just very, very different. But he's got a good roster of folks right under him who would be great candidates for that, and he could kind of move over to chief product officer and continue to lead that vision. You could see that world.
>> Yeah. It does seem feasible, although I don't think he will easily step out of the CEO chair.
>> I guess, like, a Mark Zuckerberg personality you might not have expected would be a public company CEO, and would have, a long time ago, wanted to move back to more of a product role, but then, it can be done. So,
>> Yep. The device plan: it's going to be a family of devices, and he said there'll be a shift over time in the way people use computers, where they go from the sort of dumb, reactive thing to a very smart, proactive thing that understands your life, your context, everything going on around you; very aware of the people around you, physically close to you, via a computer that you're working with. So, a family of AI devices that understand your context and who you're speaking with.
>> I like it.
>> Do you buy that? Do you bite that device? Take a bite out.
>> I mean, yeah, someone is going to win this. It's already somewhere in the mix of wearables and Ray-Ban Metas and talking to my computer. There's something there in all of this, I think, and someone's going to crack it. So could it be Jony Ive and Sam together? We'll see.
>> Yeah. To me, the thing that was most interesting was that it's going to be a family. So really, stuff that maybe you place in the office, and, speaking of knowing your context, it will know your office; place it at home, and it will know your home context. Maybe it will be able to make sense of things. Maybe there's one that you keep with you, so when you're out on the go, it can help, and then give you proactive notifications. We'll see. It's clearly a ways out. But I think people will at least try this device.
>> I think it's the right direction, but I have to see something.
>> Yeah. Okay. Then lastly, on AGI, I asked him about the Theo Von interview. I said, you told Theo that GPT-5 was going to be better than most people at most things, and I'm paraphrasing here. Isn't that AGI? And, to paraphrase Sam's response, he basically said we're in this gray zone where we may or may not be at AGI, and we just need to start going towards superintelligence. His definition of superintelligence is when a system can do a better job being president of the United States, CEO of a major company, or running a very large scientific lab than any person can with the assistance of AI. So if we're looking towards superintelligence, it's going to be a while.
>> Yeah. It was also interesting how he defined superintelligence with those three: president of the United States, CEO of a major company, and running a very large scientific lab. Which again is interesting, because it's still running under the theory that the model has to do all three of those things better than anyone, versus there being a much more specific model built for scientific progress, which I know he talked about in the interview as well, around how GPT-5.2 really made breakthroughs on the science side. So it's clear that remains an area of focus. But I guess, in addition to everyone coming over to team product versus team model, I'm glad everyone seems to be retiring AGI, and maybe even ASI, as terms this year. So we can just start 2026 with a clean slate.
>> Yeah. Sam also said it about AGI: it's an underdefined term, which I think we would all agree on.
>> All right. Before we go to break, just
a quick reaction after hearing his
responses: you really talked about
product, the enterprise plan, the
infrastructure side of things, and the
IPO. Do you come away more confident
about OpenAI's direction or less?
>> I think
it seems, drawing out from all those
different topics, that the idea of
memory and context lives across all of
them. So if, behind that, they are
actually truly working to win at that, I
think it puts them in a better place.
But still, that lack of focus is what
worries me the most, and we've talked
about this for weeks on end. Within the
interview it becomes more clear that he
wants to go after every one of these
things. He's not saying it is absolutely
critical for the business to win at
every single thing. But still, it's like
remember when everyone wanted to be
WeChat, the super app, in the West? Now
this is an even bigger vision and
ambition: we want to redefine how
consumers interact with technology, how
the enterprise interacts with
technology, how every process that is
incredibly complex takes place, and do
all of that as a business.
It's ambitious.
>> Yeah. No, there's definitely ambition.
So, it was good to be able to put the
questions that we've been asking here
directly to the CEO of OpenAI. I came
away more reassured, but also with this
realization, and we talked a little bit
about this on the revenue side, that
there is a belief, in his view and
similarly in Dario's, that this is an
exponential, and it's one of those
things where it really has to continue
on an exponential: exponential increases
in revenue, exponential increases in
capabilities, for it to work. And to me,
and we talked about this, this is the
great unknown. Now, they say that
everything they see indicates it will
continue apace. But at the end of the
day, it is a new category. That being
said, you know, we started out saying
it's been 10 years since OpenAI was
founded but only 3 years since ChatGPT,
and I would say even in the past year,
the difference between the ChatGPT that
existed, let's say, in December 2024
versus the one that exists today, it's
hard not to appreciate how much better
it's gotten since then.
Yeah, and these reasoning models were
brand new a year ago.
>> Agreed. Agreed on that.
>> Okay. All right. Let's take a break,
and then we're going to come back with a
very short segment about this Gemini 3
Flash model that Google has, and maybe a
bit about Copilot. All right.
We'll be back right after this. And
we're back here on Big Technology
Podcast Friday edition. All right,
Ranjan, let's lightning round through a
couple of stories before we have to go.
So
Google has announced Gemini 3 Flash,
which they say has pro-level
performance. Following last month's
launch of Gemini 3 Pro, Google announced
earlier this week that Gemini 3 Flash is
available for consumers and developers.
The tagline is that it's frontier-level
intelligence built for a fraction of the
cost. It retains Gemini 3's complex
reasoning, multimodal vision
understanding, and performance on
agentic and vibe coding tasks, but it
has Flash-level latency, efficiency, and
cost. The Flash model series is Google's
most popular offering. I think this is
from 9to5Google, by the way. Very
quickly to you, Ranjan, this to me seems
like the biggest threat, right? All this
money goes into infrastructure, and then
Google pops out an AI model. Maybe this
is going to be something that will
enable more AI, but ultimately all this
money goes into infrastructure, and then
we find out that you can process AI with
similar levels of intelligence for the
cost of a Google search, or maybe a
little bit more.
>> Yep. I think that's exactly right. One
thing that did not come up in that
interview was trying to make it more
cost-efficient. The entire philosophy
from the OpenAI side is bigger, bigger,
bigger, versus Google showing it's
playing both sides: we can go bigger,
but we can also work on cost. And I
think that indicates it's a mature
business that understands that, at a
certain point, cost is going to be more
important than, or as important as, the
type of results people are getting.
>> Yeah. I mean, to me, this is again the
big question, and we're going to talk
about this in our predictions episode,
which we're actually about to go record,
but this to me is the big question of
what happens next year. Do these models
just become so efficient? And if so,
does that throw the math off?
Okay, before we leave, I think you and I
have been texting about the problems
that people have been having with
Microsoft Copilot. And you know, it
started with this Information story
about how Microsoft salespeople's quotas
may have been reduced because of this.
And
there's another Windows Central article
that's actually quite harsh. And it's
funny because I don't expect Windows
Central to go in on Microsoft, but they
certainly did. They certainly did. Uh,
Windows Central says Microsoft has a
problem. Nobody wants to buy or use its
shoddy AI projects products as Google's
AI growth begins to outpace copilot
products. Um, here's this the lead. If
it there's one thing that typifies
Microsoft under CEO Sat Yanadela's
tenure, it's generally an inability to
connect with customers. Uh Microsoft has
shut down its retail arm quietly over
the past few years, closed up shop on
mountains of consumer products while
drifting haphazardly from tech faded to
techfed, from blockchain to metaverse
and now to artificial intelligence.
Satya doesn't seem to effect be able to
prioritize effectively and the cracks
are starting to shine through. Um, I am
someone who is actively using the AI
features across Google, Android, and
Microsoft Windows on a day-to-day basis,
and the delta between the two companies
is growing wider. Dare I say it, Gemini
is actually helpful. Copilot 365 doesn't
even have the cap the capability to
schedule a calendar event with natural
language in the Outlook mobile app or
even provide something basic as
clickable links in some cases.
Does Microsoft I mean this seems to be
these these stories really resonated
because people are having these
experiences. Is is Microsoft fumbling
the bag on this one?
>> I think they are. I mean, I hear this
all the time, and to me what it really
symbolizes is that when you have that
power of lock-in over your customers,
when they're not going anywhere else,
you don't have to deliver the same
quality. You don't have to fight for it.
And everything I've heard and read about
Copilot feels and seems like this: it's
kind of shoved into whatever existing
system you have, you kind of have to use
it, and it doesn't do what you want it
to do. And I actually think it's a good
setup as we head into next year, because
Microsoft was sitting very pretty at the
beginning of the year and Google was
not. It's such a reminder of just how
much things changed in this one year,
and how much that means they could
change next year. But I feel like I
already saw reports around further price
increases for Microsoft products, like
you have to take their AI features now,
whereas before they were an add-on. All
of these things, I think, show that
they're just trying to extract value
versus having the best product and
experience for their customers, and it's
going to be interesting to see how that
plays out.
>> Yeah, it's fascinating to me, because
I don't think anyone, at least in the
early days, spoke with more clarity
about the potential of AI and how to
make it a good business than Satya
Nadella. And here we have Microsoft as
the laggard. They're performing worse
than most of their peers. And, you know,
they have OpenAI's IP till what, 2032,
but they don't seem to be making as much
hay out of it as you would imagine. So
yeah, that's definitely a concern for
them. All right. Short episode this
week, but we have so much content on the
feed that I figured Ranjan and I could,
you know, come in and out. We'll record
this predictions episode that you'll see
next week, and I definitely encourage
you to check out, if you haven't, the
Sam Altman interview that was just
published yesterday. And if you want
some more, check out my conversation
with Jim Cramer where we do all of our
big tech hot takes. All right, Ranjan,
thanks so much for coming on.
>> All right, see you next year.
>> See you next year. Well, you and I
were going to do one more episode. Oh,
yeah. But yeah,
>> we'll see you next week. But just to
give people a view as to what's going on
here, we're actually about to record it
today. So, it's not that we didn't
change our clothes for a week; it's that
we decided to take Christmas week off,
but we still wanted to give you
something to listen to. So, we'll go
record that now. All right. Thank you,
Ranjan. Thanks everybody for listening
and watching, and we'll see you next
time on Big Technology Podcast.