Google Cloud CEO Thomas Kurian on AI Competition, Agents, And Tariffs

Channel: Alex Kantrowitz

Published at: 2025-04-09

YouTube video id: 93piVCwqXz8

Source: https://www.youtube.com/watch?v=93piVCwqXz8

Can Google Cloud Platform ride AI into the field's top echelon? And how much is AI shaking up the trillion-dollar industry? We'll find out with the CEO of Google Cloud Platform right after this.

Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. Today we're joined by Thomas Kurian. He's the CEO of Google Cloud Platform, and he's here for a realistic look at how companies are building with AI and how Google is positioning itself to win in the moment. We'll also talk tariffs, of course, for a bit towards the end of the show. Thomas, great to see you. Welcome to the show.

Thank you for having me.
Thanks for being here. Let's talk about the surge that Google Cloud Platform has had in the past couple of months, or years really, and a lot of that has been tied to artificial intelligence. I think it's fair to say that GCP, Google Cloud Platform, was running maybe a distant third behind Microsoft and Amazon when it came to cloud hosting. And now every time I look at the earnings numbers, I see these massive growth rates, 30% per year, per quarter. How much is AI a part of that?

You know, AI has definitely driven adoption of different parts of our platform. People typically, when they come in for AI, depending on the type of company, come in at different parts of our portfolio. Some of them say, I really want to do super-scaled training or inference of my own model. And there's a whole range of people doing that, all the way from foundation model companies, whether that's Anthropic or Midjourney or others, and also traditional companies. Ford Motor Company, for example, wanted to use our chips and our system called TPU, tensor processing unit, to model airflow and wind tunnel simulation using computers rather than physical wind tunnels.
So they're doing that as an example. So one set comes and says, I'll use your AI infrastructure. A second set comes in and says, I want to use your AI models, and that could be somebody building an advertising campaign using our image processing model, somebody wanting to write code using Gemini, somebody wanting to build an application using Gemini or one of our newer models like Veo, which is our video processing model. So in that case they come in and use the platform, but along with that they may say, I want to put my data where the model can access it quickly, and they start with one of our database offerings, for example. So it certainly draws in more pieces of our portfolio. And then the third is people coming in and saying, I want to use a packaged agent that you have. For example, we offer something for customer service. We offer something for food ordering. We offer something to help you in your vehicle, like in-car. We offer stuff for cybersecurity. So there's a whole portfolio of these, and depending on which customer is coming in, they come in at different layers of our stack.

And it's so great to hear you talk about actual products that people are building with AI, because a lot of the conversation has been around capabilities, how AI's latest models perform on the Math Olympiad tests, and very little of the discussion, I think, has been about what they actually do. So we're going to cover in the second half some concrete products that you're seeing being built.
But let's go back to this bigger cloud battle, because this is a multi-billion or even multi-trillion-dollar fight right now to get companies to host and run applications in the cloud as opposed to on their own premises. When people are making decisions to buy, how much of their decisions are predicated on AI capabilities? Because what you just told me are a number of specific cases: I want to build an AI program, I'm coming to Google for that. Now, I imagine that's important, but when you think about the broader landscape of people making decisions to buy cloud services, how much does AI factor right now?

It's a good question. It depends on the country. It depends on the industry. It depends on the segment. Let me explain what I mean. If you're an AI unicorn, meaning you're funded to build a foundation model or you're building an application based on AI, that's really the central part of your decision. If you are in an industry, for example retail, where we have a product called retail search and conversational shopping, where you can take Google-like search using text, images, video and put it on your catalog, and you can also add conversational shopping, where I can ask a question, I'd like to return this dress, and have the system handle that transaction for you, it's a super important thing for people in commerce, whether that's retail or telecommunication. On the other hand, if you look at a utility or an industrial manufacturer, it applies to part of their organization, but it may not be the central thing. And so it really depends by industry and by customer segment. But part of our value proposition is that we offer all of these different capabilities. And so AI is helping us; it's not the sole reason for our growth.
Okay. And then just broadly: so definitely different segments have different approaches to it, but you're the CEO of Google Cloud Platform. So when it comes to the broad Google Cloud Platform's ability to compete, how important is AI across everything? Of course it varies for individual use cases, but broadly, is it going to be important going forward?

We've been very measured in how we brought our AI message to the market, to avoid people feeling like we're overhyping things. And we've always said we're going to build the best technology in the market. Right now we're super proud. We have over 2 million developers building every day, every morning, every night, using our AI platform. And you can see the strength of our models. You know, Gemini 2.5 Pro is the world's leading model. Gemini Flash is the most price-performant model. Imagen and Veo are considered state-of-the-art for media processing. And we've got tons of new stuff that we're introducing at our event next week, from audio, video, you know, speech, etc. So we've been very, very thoughtful about how we've introduced stuff, and I'm not a marketer. So I will tell you it's an important factor, it will be an increasingly important factor, and our strength in it helps bring other products along with it.

Yeah. And we're not asking for a hype man or marketing. I think on this podcast we're just trying to get to the truth, and I appreciate you being reasoned about the role of it and not saying something that's out of line with reality. So thank you for that. Now, you talked about some models. You talked about a lot of models coming out of DeepMind.
Here's what, let's say, Amazon might say if they're talking to an AI customer. Here's what Amazon might say: Google has its own models and it wants you to use them. At Amazon, we have some proprietary models, but our job is really to let you pick whichever model you want, from Anthropic on down, and you can trust us not to push our own stuff, and therefore choose us over Google. What would you say to that?

I would say we offer 200 models in our platform. In fact, we look every quarter at what's driving popularity in the developer community, and we offer them. We offer a variety of third-party models and partners, not just Anthropic: AI21 Labs, Allen Institute, there's a variety of models there. We offer all the popular open-source models, Llama, Mistral, DeepSeek, a variety of them, and we base it on what customers want. So we track what's on the leaderboards and what's getting developer adoption and put them in the platform, and people have been super pleased that we have an open platform. We always feel companies want to choose the best model for their needs, and there's a range of them; we're offering a platform where you can choose the model you want. The only model we don't offer today is OpenAI, and that's not because we don't want to offer their model, it's because...

Would you welcome them on the platform? Of course, we would.

Okay. Any talks about that? I don't want to tell you that we won't do it. We have always said we're open to doing it. I think it's their decision.

Okay. So, but the argument, I think, just to pinpoint the argument from Anthropic, sorry, from Amazon, I'd be curious to get your perspective on this. They might say, and I'm just going to channel them, I haven't spoken with them about this, they might say something like, "Well, Google, even though they can offer everything, they might still push you to use DeepMind models." What do you think about that?

Well, our field is not compensated any differently. Our partner ecosystem is able to use all the models in the platform, and most importantly, we have very large Anthropic customers running on GCP.
So if you don't have your own model, or you have a model of your own but it's terrible, naturally you're going to say something like that.

Are you saying that their model is terrible? Okay. Why don't we move to Microsoft then? Microsoft would tell you basically that they have this partnership with OpenAI, which is going to build the best of breed. What do you think about that? I mean, OpenAI basically ushered in this generative AI revolution and has been the best at productizing it.

They've done a good job. No question. I would say OpenAI has done a good job with that. How much of that credit goes to Microsoft, outside of providing them a bunch of GPUs, time will tell.
Okay. Now, it's interesting, because they do have that partnership, and that has been largely responsible for the surge that they've seen in the generative AI moment. But there is a pretty interesting difference between Google and Microsoft, and that is that Google has DeepMind in-house, whereas Microsoft has this, I don't know if it's even arms-length or hand-in-hand, relationship with OpenAI. So I'm actually curious: we talked again about all these businesses that are building AI applications. When it comes to that, what does DeepMind give you that might be an advantage, because it is in-house?

We work extraordinarily closely with Demis and his team. When I say extraordinarily closely, our people sit in the same buildings. My team builds the infrastructure on which the models train and inference. We get models from Demis and team every day. In fact, we're staging models out to the developer ecosystem within a matter of a few hours after they are finally built. And then we also take feedback from users and move it upstream into pre-training to optimize the models. And one benefit we have at Google is all our services, whether that's Search or us or YouTube, are inferencing off the same stack and same model series. So the model learns very quickly from all that reinforcement learning feedback and gets better and better. So there's a lot of close collaboration.
Many times, if I can be frank, when we enter a new domain. I'll give you an example: we built a solution for cyber intelligence using Gemini. So there are a lot of threats happening in the world. You want to collect all that threat feed; we do that using a team we have called Mandiant, and also from other intelligence signals we're getting on what threats are emerging. You then want to compare it to your environment to see if you're at risk. And most importantly, you want to compare it to what parts of my configuration will somebody use to try and get in. And so we used our Gemini system to help prioritize and also help people hunt faster, what we call threat hunting. Now, in that environment, the model has to learn how to find patterns in a large number of log files that people are ingesting, and that required specific tuning of the model. And so there are things there where having a close working relationship with the DeepMind team has helped enormously.

Similar things when you look at, for example, customer engagement, customer service. We've got a project on at Wendy's to automate food ordering in the drive-thru. You know, if you actually think of a drive-thru, it's an extraordinarily complicated scenario, because there's a lot of background noise, kids screaming in a car. People change their mind when they're ordering something: I didn't mean that one, I wanted that one, change it to this one. And which one did you mean by that one?

Thomas, it feels like you're describing the way that I handle these interactions, and I'm very embarrassed about it, but that is me. Bizarre.

So there are a lot of things that we needed the model to do to have ultra-low latency in being able to have that conversational interaction with the user. So in all those elements, the partnership we have with Demis has been super, super, you know, productive. And also, most importantly, it's people working together. We all have close personal relationships; that helps us get through a lot of design changes and other things, and we're all rowing towards the same goal.
Right. But, okay, I was speaking with Mustafa Suleyman, the CEO of Microsoft AI, just a few days ago, so this is kind of a fortuitous back-to-back episode scheduling. And what he said was, look, without spending the billions and billions of dollars it takes to train the new models, you can basically replicate what they're doing with a lot less money and put it into action just a little bit more slowly. And so, therefore, what he's saying is basically Microsoft gets the benefit without the cost. What do you think about that argument?

You know, I don't want to comment on what he said. I can just tell you there's a lot of debate on cost of training and inference. First and foremost, in the long run, if AI really scales, the cost you really want to care about is inference cost, because that's what's integrated into serving. And any company that wants to recover the cost of training has to have a large-scale inference footprint. There are lots of things we've done with our Gemini Flash and Gemini Pro models that you can see, and also other people are using TPU for inferencing; for example, large companies are using it to optimize the cost of inference. Cost of inference can be on the efficiency with which you handle your serving fleet, how you do disaggregated serving, what you do with caching and key-value stores; there's a hundred different variants of that. The proof, I think, is in our numbers. You know, if you look at our price performance, meaning quality performance of models and the unit price of tokens, we're extraordinarily competitive. So that's number one.
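A note for readers: "caching and key-value stores" here means reusing work across requests instead of recomputing it. Below is a minimal sketch of the idea in Python, with a hypothetical run_model function standing in for any inference backend; real serving stacks cache at finer granularity (for example, attention key-value state for shared prompt prefixes) and use a shared store rather than a process-local dict:

```python
import hashlib

def run_model(prompt: str) -> str:
    # Hypothetical stand-in: wire this to a real model server.
    raise NotImplementedError

# Exact-match response cache: identical prompts skip inference entirely.
_cache: dict[str, str] = {}

def cached_generate(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = run_model(prompt)  # only a cache miss pays for inference
    return _cache[key]
```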
Number two, on the training, I think there's a bit of confusion that may exist in the market. There is frontier research exploration. Frontier research exploration, for example, could be: how do I think about teaching a model a skill like mathematics? How do I teach a model, for example, a new skill like planning? How do I teach a model a new skill in a brand-new area? Those are what we call frontier research, and many, many experiments like that are done, and then, after you find the recipe, you actually train a model, and training a model is actually doing the model run where you're running the actual training. I think people are mixing up the total amount of money spent on research and breakthroughs as opposed to actual training. And we are very confident; we wouldn't be investing in the way we are as a company without knowing the ratios between all of these. And so we're very confident that we know how to run very efficient model training, what we're investing in frontier research, and then, most importantly, how we're handling model inferencing, and being world-class at all three.
Do you think there are still gains to be had by scaling up the pre-training of models?

There are gains to be had. I don't think they will be at the same ratio as earlier, because, you know, there are always diminishing returns at some point. I don't think we are at the point where there are no more gains, but I think we won't see the same ratio of gains we used to see.

So inference will be the new cost, basically taking the models and putting them into production and using them.
I'm curious how much of the use of your services is going to be toward reasoning, and what have these new reasoning capabilities allowed your customers to do that they couldn't do previously?

It's a really good question. I mean, reasoning is something we are starting to see customers using in different parts of our enterprise customer base. For example, in financial services, we've had people say, hey, I want to understand what's happening in financial markets. Summarize the information coming off, whether that's video feeds like CNBC, financial market indexes, and other financial information, and tell me what's happening. And the model can not only build a plan for how it collects the information, but summarize it and then reason on the summary to say, are there conclusions to be derived. So we are starting to see people starting to do that. How much of that there will be versus other scenarios, time will tell, but we are starting to see people doing much more sophisticated, complicated reasoning. We have a travel company, for example, that's working on: give me a very high-level description of what you want to travel for. I want to fly to New York, I'm taking my son, we'd like to see Coney Island and the following three things; build me a plan and have multiple choices in it. But it may say, you know, if you're traveling in June, it may be hot in the afternoon; therefore, I think we should have you see Coney Island in the morning and go to the museum in the afternoon. And models are starting to be able to reason on those things. And we are starting to see early-adopter companies test in all these
different dimensions.

Wow, that's wild. Wait, so are people, I just need to ask you this follow-up, are people scraping the audio feed from CNBC and then using the summarized information to trade?

There are feeds. When I mentioned CNBC, I was using an example. They have personal feeds from their broker and dealer networks, which are private, of their own, that they're feeding into this, because when they have a broker or an equity analyst make a broadcast to their internal teams, they want to feed that in. I was using CNBC just as an example, given your audience, to explain what a video feed would look like.

Right. And now, what about reasoning allows these companies to build this stuff that they couldn't previously? For instance, this travel planning thing: in the non-reasoning versions of large language models, I could say, "Build me a plan," and it could do that. So what does reasoning do that either ups the performance or allows customers to be able to do stuff they could not previously?
Historically, when LLMs were used, people were worried about hallucination, and so they gave a large language model a single-step task, meaning: do this and come back to me, so I can determine if your answer is hallucinatory or not. And so I didn't delegate a complex task to you. Secondly, when I asked you a question, you gave me a single answer. You didn't generate a variety of different options and then reason on them, or critique them to say this might be the best answer. So that is the nature of some of the differences we see in why people are using reasoning now as opposed to prior. And the more you can trust that the model can actually reason across a set: whenever you have a multi-step chain of thought, if you have drift, meaning early in that chain of thought you had an incorrect answer, and then it stepped down that incorrect path and reasoned a lot more, downstream you can get way off relative to what the right path ought to be. And so, as models have become more sophisticated, people have trusted them. Part of it is the accuracy can be higher. Part of it is that it can evaluate a set of different choices and give you an answer based on a set of choices, not just say, here's a single answer. And the third is we also allow people to understand what the steps were in how it reasoned, so they can look at it and say, yeah, maybe I agree with it, maybe I don't.
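To make that "generate options, then critique" pattern concrete, here is a minimal sketch in Python. The generate function is a hypothetical stand-in for any LLM call, and the prompts and candidate count are illustrative; the final step surfaces the model's comparison as text a human can audit, the third property mentioned above:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to any LLM API.
    raise NotImplementedError

def answer_with_critique(question: str, n_candidates: int = 3) -> str:
    # Instead of one single-shot answer, sample several candidates.
    candidates = [
        generate(f"Answer step by step: {question}")
        for _ in range(n_candidates)
    ]
    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    # Have the model critique the candidates against each other and
    # explain its choice, so the reasoning steps can be inspected.
    return generate(
        "Critique the candidate answers below, explain which is best "
        f"and why, then state the final answer.\n\n{numbered}"
    )
```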
Okay. So Jensen at NVIDIA says reasoning costs 100 times more to do. You also have your own compute; you're also facilitating that. Is that in the ballpark, or are you seeing different numbers?

You know, it depends on how long, right? Like, for instance, you could give it a very complicated problem, and a model could take hours to reason on an extraordinarily large data set; that would be more expensive. At the same time, in the example I gave you on travel, given the number of trips that are made, etc., that company is not going to spend millions of dollars to calculate the answer to what's the best choice of trip for me. Or in the financial markets area, given how much information is coming in all the time and how quickly you need to reason on it to present your equity traders or your private wealth managers an answer, you're also going to time-bound the reasoning computation. And so there are controls in the platform to allow you to say what the breadth of the reasoning is, meaning how large a cluster you want to reason across, how much data, and how long you want it to reason. All those factors are in the user's control and therefore drive how much they want to spend.
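One concrete form such a control takes is the thinking budget in the Gemini API, which caps how many tokens the model may spend reasoning before it answers. Here is a minimal sketch using the google-genai Python SDK; the model name and budget value are illustrative, and field names may differ across SDK versions:

```python
from google import genai
from google.genai import types

# Reads the API key from the GOOGLE_API_KEY environment variable.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model choice
    contents="Plan a one-day New York itinerary including Coney Island.",
    config=types.GenerateContentConfig(
        # Cap the tokens spent on internal reasoning; larger budgets
        # allow longer deliberation, at higher cost and latency.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```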
So if you were selling the hardware and the systems and maybe the software to train this stuff, you might be incentivized to say it costs 100x, but that might be the most optimistic scenario, and there are plenty of other reasoning use cases that are much less expensive than that 100x in compute. Does that sound like a reasonable takeaway here?
Here's what we've seen, just if you look at models themselves. People were talking about how you would need a billion times more energy if you straight-line extrapolated the cost of a model from an inference point of view in '23. If you look at just 2024, we've reduced the cost of inferencing, and you can see it in our prices of the models, by a factor of 20 times. And it's because there are a lot of optimizations you can do in that. Same thing on reasoning: there will be a lot of optimizations that we will continue to make to lower the cost of reasoning. People will want to do more reasoning as you make it more affordable; people will use it more widely. There will be a range of things, all the way from relatively quick, short, time-bound reasoning to much longer things. As an example, there's a financial institution working with us to do fraud analysis on transactions that are happening on their payment network. By definition, they need to do that in real time, so their reasoning is time-bound, because they have to flag a transaction within a certain period of time. Now, they also do anti-money-laundering and other calculations; that reasoning is done in batch and can take a lot longer if they want it to. And so that's why I think there will be a range of these things, and saying it's all one or all the other is not correct.
Okay. I appreciate your viewpoint again in this area: reasonable, realistic versus hype. I can sense a pattern. This is good. This is what we like to do on this show. You mentioned DeepSeek; I just want to ask you about open source. There might be a view that, well, let me just say it this way: if open source exceeds the proprietary models, and it seems like what we saw with DeepSeek wasn't that moment, but it certainly opened a lot of people's eyes to the fact that it might be possible, the notion might be that with cloud services it won't matter which one you use. Because Microsoft might say you need us for OpenAI, and you guys might be saying, you know, we have Gemini. The idea is, if open source overtakes the proprietary models, then it really won't matter which cloud platform you use, and it sort of levels the playing field. What do you think about that?
It's a good question. I think it's very early to tell, first of all, whether open-source or proprietary models are going to win or lose. You know, as an example from our own models, we put out an open-source model called Gemma, which is getting a lot of adoption in the developer community among people wanting to build certain classes of applications. And we want to continue to see how open-source and proprietary models evolve. One example: historically, open-source models were used because people wanted to fine-tune a model to have their own weights. And when I say fine-tune a model, they would take an open-source model and really tune it on their data set to have their own weights. Now, as more and more sophisticated techniques for optimizing models have come in, where you don't need to depend on fine-tuning with adjustment of all the model weights, that case has become less important. But there's always going to be a need for a combination of these, and it's very early to tell.

Now, separate from that, let's assume, to your question, Alex, that open source became the dominant one. How would we do? You know, we have a history with that, just a couple of examples. First of all, Kubernetes: it's an open standard for people spinning up cloud workloads in computation. Many people would say Kubernetes is a standard, and it's become the dominant programming paradigm through which people stand up containerized workloads, which are the dominant way forward. We've got a great solution, something called Google Kubernetes Engine, and people still take vanilla Kubernetes but choose us because of performance, scale, reliability, and all the other things. And so, even if you said open-source models become popular, you still have to serve the model. You still have to optimize the performance of the model, and we're confident we can do that better than others.
Now, okay, lastly, many people are coming in at different parts of the stack, where they're using a model as part of a service. So, for instance, I gave you the example in cyber: inside the cyber tool, they don't really care if it's Gemini or something else. What they're looking for is a great cyber hunting capability. Or if you look at data science, where people are saying, I just want to ask a question of my data warehouse using English, and can you understand what I'm asking and show me the calculations, that's actually a very complex technical problem. So for those cases, do they really care that it's Gemini? It works particularly well because it's Gemini, but they're just accessing our product. We have a new product called Agentspace. Agentspace is search, conversational chat, and agentic technology for your enterprise. They really don't see the model. They're using an application or a platform, and underneath we're providing the capability. So there are other ways to differentiate even if open source became extraordinarily popular.

And Agentspace, if I'm right, is your fastest-growing product ever.
Yes. Yes.

Yeah. Give some examples. So basically it's a way for people to query different things within the workplace and get things done in the workplace using natural language.

That's right.

It's growing. How fast is it growing?

I mean, we'll publish all the stats next week, but as an example, KPMG is one example of a customer; they are using it to help their professional workforce. We have insurance companies using it as a research assistant to help their insurance brokers: when you call to understand what health care benefits you are eligible for, how do I find whether you're eligible for this, and then to speed up things like pre-authorization for healthcare benefits. We have banks using it to help their front line understand the customer who is calling in: I'm the private wealth manager, can I research their portfolio to see what's changed in their portfolio? So there are a lot of different use cases, and it's basically Google-quality search, conversational chat, and workflow or process automation using agents, all in one system.
Right. Okay. Last question here, and then we're going to move on to some product examples. You've made Gemini a free add-on for the $30 per seat option. Can you talk through that decision? Because it seems like that's kind of counter to what your competitors are doing, and also, I wouldn't say it's very easy to make that something that you throw in.

This is for Google Workspace, which is our collaboration tool. We made Gemini part of Google Workspace rather than requiring somebody to buy a separate subscription. Why did we do that? If you're using Google Workspace, and for example you're using Gmail, people love the fact that when I receive a lot of email, it summarizes things for me. Or I want to write an email, and I want to write it to recommend somebody for a position; you can ask it to help write the email. If you're doing slides in Google Slides, you want to have a great visual presentation of a set of information. I'm not very good at creating amazing slides, but now you can use our Imagen tool to create amazing images and put them into slides. It requires people to change the way they work, and we want to drive daily usage of AI. Because it needs to change the way they work, you want them to get used to using it. If, hey, this group of users in a company gets it, that group of users is not allowed to use it, this group is maybe going to be allowed but they have to buy a subscription, you don't let them get used to using AI as part of their daily life. And we learned, doing it back in 2014, 2015, when we added autocomplete and auto-suggest to Gmail, which a lot of people love, that it was part of the product, and that's what got people used to using it. It helps us improve our AI, because with all the usage you notice patterns, and the models get better and better. But it also helps condition the users to start using AI to assist them every day. That's why we put it into the base product.
Okay. And that is a great segue into what our next segment is going to be, which is: there are all these AI capabilities; are people going to use them? So why don't we cover that when we come back, right after this?

And we're back here on Big Technology Podcast with Thomas Kurian, the CEO of Google Cloud Platform. Thomas, it's great having you here. Let's just talk about how people are actually using this technology. There have been a couple of op-eds that we've talked about on the show recently, one from the New York Times calling AI mid, another one saying the problem with Apple Intelligence isn't Apple, it's the artificial intelligence, and basically saying that the AI technology has been okay but not overwhelming to this point. And it's interesting that you brought up the Wendy's example, trying to automate takeout, because one of the examples in that piece is that, yes, you can now do self-checkout at the supermarket, but it hasn't really changed your life. It's still, you know, flawed, shall we say. I mean, I can't tell you the number of times I've been on the checkout line at Stop and Shop, in the checkout automation, and I do one thing wrong, I forget to put it exactly in the right space, and then a cashier has to come over, 10 minutes later, and let me out of the store. So what do you think about this argument that generative AI is mid, or not living up to all the boasts? And what type of applications have you seen in the technology, if you were going to argue the other way, which I think you are, that make you believe that there's something here?
I always say, you know, any major technology shift takes a while for adoption to happen and for people to understand it. If you look at the internet, it went through a similar thing. If you look back at '97, '98, '99, there was a lot of hype that it was going to change things. In 2001, you know, some of the hype fell apart, but over the long term it has definitely shown that it's transformed the way that people find information, the way they buy things, even the way they run their businesses. So I think AI is going through a bit of that. Early on, people had maybe too rosy a view, and in the long term, we always say, the technology is going to be really a fundamental transformation; how quickly it changes the day-to-day, time will tell. But I'll give you examples of things. We always say, let the customers tell the story; let's not tell the customer's story on their behalf. And we're super proud of the work we've done at Seattle Children's
Hospital. They wanted their pediatricians, when they see a child, to be able to understand the guidelines for treatment. Guidelines are complicated. You need to be accurate in the information put in front of the person. We've helped them do that. At the Mayo Clinic, they wanted us to provide a system through which a doctor could find information from the electronic health record, from their clinical trial system, from the radiology imaging system, and synthesize it, so a nurse, before she sees a patient, can see the information. If you look at what we did with Verizon: Verizon has the largest consumer customer base in telecommunications in the United States. They have over a million calls a day coming into the call center. We've helped them build something called a personal research assistant, so that if I am a call center person and you call me saying, here is my set of issues, how long does it take to research that information and put it back in front of you, so that you can handle customer service faster and better? And they are very pleased: 96% accuracy in the information placed, and the reason that's an important number is it's better than a human. We've had people do it in the consumer world. In retail, we've had people improve the way they shop for things, helping people improve the accuracy of search results on their search page, and improve the back office. A company called AES, it's an energy company, it builds and delivers energy in different parts of the world; it used to take them 14 days to run their end-of-quarter audit, and they do it in one hour now. So these are examples of people doing it right at the core of their business. Honeywell, in industrial manufacturing, has put our technology into their manufacturing control systems. Deutsche Bank is using it for their private wealth managers to summarize information for them. Are they transformative to the people doing the work and to those customers? It is transformative. They've seen the business results. Time will tell how transformative consumers experience it to be.
So it is interesting that this is happening in enterprise first. I mean, there's one, I would say, mainstream AI application, and that's ChatGPT, and you're at Google, so maybe you can argue with me on that one, but the numbers show 500 million people are using it each week. Why do you think enterprise has been so much quicker to adopt this than consumer? And is it going to be like the BlackBerry? Are we going to start to see some enterprise adoption, and then all of a sudden it will just shift over to consumer when the time is right?

I think, you know, the enterprises find real value at the core of their business. You know, it's helping people like Wayfair write code faster and write better code. It's helping people like Mattel, the toy company, find answers so that they can be much more quick and efficient in managing their supply chain and operations infrastructure. It's helping people in the entertainment business build much better recommendations of titles for people to see; there are lots of companies using our recommendation system for it. And I think it helps them decide: one, do I want to improve my top line? Top line is get people to buy more product, get people to use more of my services, for example, recommendations on movie titles. It helps them be much more efficient in their back office. And in some places it also helps, like Home Depot: we helped them build an employee help desk that answers employee questions, you know, about the benefits, about medical insurance, about lots of things, and it also helps them improve the way their own employees experience the organization. So enterprises are choosing it for a variety of reasons. Time will tell whether there will be many killer consumer apps based on generative AI, but we're focused on making sure people have the best technology to build a great experience. I mean, Bending Spoons, for example, is a company out of Italy: 60 million photos a day, and they're using our tools to edit them and do magical stuff with them. The Samsung S24: every one of those smartphones has our AI, Gemini, on it, and people are using it to create great images and do amazing stuff with it. So there are lots and lots of examples of even enterprises now bringing these technologies to their consumer experience. Even the work that we did with Mercedes: help me drive, give me guidance, by just talking through maps. Is it transformative? You know, it's up to the consumer to decide,
right?

But I feel like you probably have a perspective on it. But hey, look, I appreciate that you came prepared with lots of case studies. So let me just ask you quickly about agents. You talked a little bit about customer service. Agents, I would say, is one of the biggest buzzwords I've ever heard covering tech. It does seem like some companies are using this technology to have generative AI bots take action on their behalf, which to me, I would say, is the definition of an agent. So how far do you think we are in the rollout, and then, what is a multi-agent framework?
That's a great question. It's early on, I would say, but let me just start with what we mean by an agent. An agent is an intelligent software system that has a set of skills. One of the skills is, for example, that it can reason. Another is that it can use tools. And third, it can communicate with enterprise applications and systems, and do that in order to, for example, automate, answer questions, or do something on your behalf.
So here's a very simple example of the way to think about a single-agent and a multi-agent scenario. I'm just going to use a communications example. I have a phone. I want to decide whether I want to upgrade that phone or not. So I call my telephone company. A digital agent, not a human agent, comes on and says: Thomas, I notice you're calling from this number. Let me find out what you are calling about. And I say, I'd like to figure out a trade-in. I notice you're on your mobile; can I text you a link? Please take a photograph of your phone and upload it. I notice you have X phone, Y model. You know, you have a cracked screen, so you're authorized for this much of a trade-in. So it's handling that interaction with the customer. It's looking at my plan and my profile and saying, he's a premium customer, he's eligible for a trade-in. So it's using a set of tools to calculate, do I have the right profile and am I authorized for a trade-in, and then it's looking up a system to understand how much that trade-in amount is worth. So it's automating that flow, rather than saying, the customer is calling in for a trade-in, let me transcribe that for a human, and then the human says, tell me what phone they have, and then, they have X phone, tell me, is the screen cracked? Do you see what I mean? So that's the example.

Now, where is agent-to-agent interaction? Agent-to-agent interaction is, when this agent is functioning, it may need to, for example, say: hey, I'm going to send you the new phone, but you have to activate it. In order to activate it, I'm going to schedule you to go to our nearest retail store. So it may need to call a scheduling system to schedule an appointment for you. That scheduling system may be in some CRM, Salesforce or otherwise, where it needs to create a ticket for you, so that when you go into the store, it says: Friday, Thomas is showing up with his new phone; let's have people ready to activate it. So there's one agent talking to another agent, and that needs an open protocol.

So what we've done at Google is build an Agent Development Kit, which has an API through which you can, one, create agents; we provide you a tool set to do it. We provide you a set of tools that these agents can use. But we also have an open agent-to-agent protocol, supported by a lot of companies. It's just an open, open-source project that we're doing, where you can connect our agent to any other agent.
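For developers, here is a minimal sketch of what a single agent with one tool can look like in the Agent Development Kit's Python API. It is a sketch under assumptions rather than a verified example: the import follows the google-adk package, but the trade-in tool, its figures, and the model name are hypothetical. In a multi-agent setup, steps like store scheduling would be handed off to a separate agent over the open agent-to-agent protocol rather than handled here:

```python
from google.adk.agents import Agent

def trade_in_value(phone_model: str, screen_cracked: bool) -> dict:
    """Hypothetical tool: estimate a phone's trade-in value."""
    base = 400 if phone_model == "X Phone" else 150  # made-up figures
    return {"trade_in_value": base - (100 if screen_cracked else 0)}

# The ADK wraps plain Python functions as tools the model can call,
# covering the "it can use tools" skill described above.
trade_in_agent = Agent(
    name="trade_in_agent",
    model="gemini-2.0-flash",  # illustrative model choice
    instruction=(
        "Help customers value a phone trade-in. Ask for the phone "
        "model and screen condition, then call the trade_in_value tool."
    ),
    tools=[trade_in_value],
)
```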
Okay. All right. That's definitely something I'm going to keep in mind and keep watching as you guys keep rolling out these new products. All right, a couple more questions to get to. Now we get to the fun stuff, which is tariffs. We're talking today on Friday, April 4th. The interview is going to come out the following Wednesday, so the world might have changed by then. But I just need to ask you a question on tariffs.
This is a tweet from Gavin Baker, who's an investor. He said, "Geopolitically, nothing matters more than winning AI. These tariffs, as constructed, essentially guarantee that America will lose AI by making America the most expensive place on Earth to build AI data centers." Do you agree with that? And how do you think these tariffs will impact your business?
You know, I'm not going to comment on policy. We do have a global footprint, so we do have data centers, machines, networks, subsea cables in many, many different parts of the world. That's part of Google's infrastructure, and I am responsible for that along with the team. So we have lots of places we manufacture things, lots of places we deliver things, and we are working through the implications of the tariffs for our part of the business. We're confident we can work through it, and we have lots of smart people, way smarter than me, working on solutions for how we manage through this environment, which is uncertain.
Right. But what about all the raw materials that come in? This is continuing on from Baker. He says the semiconductor exemption was irrelevant for AI data centers; semiconductors come into America in finished goods from Taiwan and other Asian countries, which include servers, storage systems, and networking switches. By the time we have developed the capacity to domestically produce these systems, we will have lost the AI race. I mean, you're buying this stuff. What do you think about that?

Some parts of our manufacturing, some significant parts, are here, and we have solutions to some of this. I'll leave it at that, because the rest of it is confidential, how we're managing through this environment.
Okay. Let me just ask you one more quick follow-up, broadly. For the parts that come from outside of the US, do you rely on suppliers outside of the US? Does that mean your costs will have to increase if the tariffs go into effect?

We have mitigations in lots of other ways to protect our infrastructure and our cost. I don't want to give more details than that, because it can lead to speculation on financial results, and I'm not going to get into that. But we've run a global infrastructure for Alphabet for many, many years, and part of our success at Google has been having good, low-cost, highly scalable training and serving infrastructure for all our services: YouTube, Search, advertising, Waymo, etc. And you know, I always tell people, trust that we know how to run a large global supply chain, and we've been working on contingency plans for quite a while.
Okay. All right. You know, as we round out this interview and go to wrap up, I want to tell you something that I've been observing as an outsider for quite some time. There was conventional wisdom a number of years ago that Google had all the technology in the world to compete in cloud but none of the sales muscle; that Google basically got used to selling in an automated fashion through AdWords and didn't know how to sell to people. I think you came into Google Cloud when revenue was a billion dollars a year. Now it's in the 40s. It's expected to be in the 50s in 2025. How did you guys learn how to sell to people?
We learned how to sell by listening to customers and building a great, great, great sales team. You know, in order to do cloud well, I think you have to do three really basic things. You have to anticipate customer problems and solve them in different ways than other people did. So that's number one, and I'm very proud of our ability to identify where the next customer pain point is going to be and solve it. Number two, we built a global sales team, and credit to our go-to-market organization, we've done it. You know, it's a grind to build such a thing; that's why very few companies have done it successfully. And to grow from the scale we were at in 2019 to where we are now, no other enterprise software company has grown that fast, and that's a credit to our sales organization. We had to bring discipline. We had to start with a certain set of countries, get critical mass there, then expand. We had to find the right mixture of sales reps, technical customer engineers, people who do customer service, customer support. We had to ensure that, for example, our contracting, legal framework, all of the other things that sit behind the sales organization, were world-class. Super proud of that. And third, we have always believed that cloud is a platform business, and the way that you grow is you provide a platform that lets other people grow on top of you. Whether that's independent software vendors like Salesforce, ServiceNow, Workday, SAP, all of whom have great relationships with us; whether that's working with partners, for example the relationship we have with Oracle and many other independent software vendors, Palo Alto Networks, etc., bringing them to our customer base jointly. And then, lastly, for every customer who has in-house staff, there are many who don't, and they want partners to help them deliver the solutions. We made a decision early on that we're not going to have a big, big professional services organization, specifically so that we can attract the partner community. One stat we are super proud of: in 2019 we had about a thousand partners; today we have 100,000. And it's that, allowing people to grow with you and building that great sales organization, that's what's transformed our business. When we talk to customers, and when you see them at the show next week, you'll see how proud they are of the difference in the way that Google works with them: that we listen to them, that we help them innovate their business, and that it's not an IT vendor relationship with the vast majority of them.
Okay, last question for you. Right now, cloud makes up like 15 to 20% of total overall tech workloads, so most hosting is still done on-prem. So, at 15 to 20%, where do you think it can get to in the future? Can it go to 100, or what do you think the cap is here?

We definitely see it getting north of 50%. I mean, there was the historical reluctance: I can do it cheaper, I can do it better, you know, my cybersecurity controls on premises are better. There were lots of those arguments. Increasingly, people are seeing they don't make sense. And as the breadth of technology that you get in the cloud continues to mature, you know, the cyber tools, the AI platforms, the analytical tools, how fast you can do something, it's helping people move.
I mean, just as an example, last year we had Walmart speaking at a conference. You know, every transaction that happens at a Walmart gets into our cloud to allow them to do analysis of how much inventory they need to replace, which customers are buying, what products are selling. You know, if you look at the volume of transactions and the accuracy, and how quickly they can get analysis into the hands of their store managers, their retail store people, it's an order of magnitude faster. And so our job is not to criticize customers who run stuff on their premises; there are always some reasons for it. But increasingly we've also built technology to take our cloud into their data centers if they want it. So, for example, for people who have classified and highly sensitive workloads, we've taken our cloud into their data centers, and that's also a new way to deliver cloud. If you look at the work we're doing with McDonald's, we're putting our cloud into the restaurants. And so, when people think about cloud, they used to think it's one definition: these big cloud regions that we have. Increasingly, cloud also means the same technology can come into your premises. And that's also changing the definition of what percentage of workloads you can reach.
All right, Thomas, good luck with the event this week, and thank you so much for coming on. It was great to meet you. I hope we can do this annually, and we can keep talking about the adoption of AI and where Google's role will be in that. So thanks for coming on the show.

Such a pleasure to speak with you, Alex. Thanks again for having me.

Likewise. All right, everybody, thank you so much for watching. We'll be back on Friday to break down the week's news with Ranjan Roy. Until then, we'll see you next time on Big Technology Podcast.