Zendesk's Adrian McDermott: AI's Customer Service Potential, Adoption Cycle, Scale vs. Orchestration

Channel: Alex Kantrowitz

Published at: 2025-12-05

YouTube video id: Oa5_Q2o4OGs

Source: https://www.youtube.com/watch?v=Oa5_Q2o4OGs

Let's talk about how AI is actually
changing customer service, whether it
will lead to job loss, and whether the
models are good enough to make an impact
today.
We're joined today by Adrian McDermott,
the Chief Technology Officer of Zendesk,
in a conversation brought to you by
Zendesk. Adrian, it's great to see you.
Welcome to the show. Great to see you,
Alex. Thanks for having me. Pleasure to
be here.
All right, pleasure to have you, and I'm going to jump right into question one, because you have great insight into one of the most applicable areas of artificial intelligence technology, which is customer service. There's been all this conversation, and maybe I've contributed to it as well, about how AI might come for customer service jobs before it comes for everything else. And you're using artificial intelligence to answer customer service queries at Zendesk. It's still early, but what can you tell us about the impact of AI on customer service jobs so far?
Well, I think it's first interesting to compare and contrast the two big areas where AI is impacting jobs. One is software engineering and development, and the other is customer service. They're different in some ways, obviously. Customer service involves a lot of repetitive question answering, stuff that AI does pretty well. Development is a lot about understanding syntax and context and being able to generate code.
The core difference I see is that with software engineering, it's common knowledge that no one's saying, "You know what? I have enough product. I don't need any more." So if you get 5x or 10x developer productivity, the immediate response isn't to go fire a bunch of developers. But in customer service, where you can 5x or 10x the productivity of a team, I think customer service professionals and chief customer officers are more likely to say, "You know what? I don't actually have enough customer service. I have a customer service debt to pay down with my users." At the same time, no one's going to miss some of these jobs, the ones where you're just answering the change-password question. Those roles can be upskilled and treated differently. The other thing you can do is increase hours, increase language coverage, add new channels, and basically meet your customers where they want to be met. That's a huge opportunity for brands to differentiate. So we're seeing a tension: on one side, budget pressure and the idea of customer service as a cost of doing business; on the other, the fact that your customers are really important to you, they help you grow your business, and service is one way to differentiate.
In my early years of reporting on marketing technology, which I did way back in the day, one of the things people talked about in that world was that companies were going to differentiate themselves on the basis of customer experience. It gets to a certain point in a service economy where your products matter, but how customers feel about the way they interact with you matters even more. That will be the difference between companies. So, and I'm curious if you've seen this in the data, I always found it funny that people say, "Okay, we'll fire customer service reps if AI can do part of their job," because these are the people having the interactions with the end customer. And you're right, if you can get the change-password questions out of the way, you take this division, which has been dealing with problems, and make them the owners of the relationship with the end customer. Does that sound right?
Yeah, if you think about it,
you have a long relationship. We talk about lifetime value in marketing, right? How much is this dollar of email spend getting you, how much is this dollar of advertising getting you? After that initial conversion, the only people who really talk to the customer are the customer service agents. If we look at our own company data, 54% of our customers contacted us for support in 2024, and they represent 95% of revenue. So the most important customers are speaking to you, and they're speaking to you in what we would call the moments that matter, when they need some help.
And AI gives you this incredible potential to raise every agent up to the level of the best and longest-serving customer service agent you have, with co-pilot technology and assistance. It also means you can have an immediate response and deal with things. To a certain extent, customer service is a human-powered factory: the metrics are ones of throughput and effort, like tickets per day, time to first response, average handle time, all of these things where you're measuring productivity.
In an AI world, especially where you're automating tickets, tickets per day is effectively infinite; there will just be more inference. Average handle time becomes "talk to me for as long as you want": the longer you engage, the better. Time to first response is usually about 300 milliseconds, depending on how many inference chips are available to your provider. So even the metrics are outdated in an AI era, and what you're really trying to do is ask, "What are my customers looking for, and how can I help them?"
Right. So take us through the continuum of someone adopting this technology. What does it look like from when they first get a taste of it to when they're fully deployed?
I think, as with a lot of use cases your listeners have probably seen, you begin by optimizing human behavior and human potential. In customer support, that's really looking at human-in-the-loop capabilities and seeing what you can do.
The other thing is you look at LLMs: they have incredible world knowledge, they can generate content, and they can reason, which makes them super useful for search. Generative search is taking over; Gemini 3 was recently released, and we're seeing the effect of that kind of technology at Google. So the first thing our customers do is build up their knowledge base, getting some of it generated with AI, and deploy generative search, and they're seeing upwards of 30 to 40% of inquiries handled by generative search. Users who spent probably two human generations learning to type into a box and process ten blue links have suddenly pivoted; they just want the answer. I think that's table stakes for customer support at this point.
Then we look at co-pilot experiences. Think about customer service: there's high turnover, and you don't necessarily get time to train people as new things happen in your company, new breaks, new fixes. It's tough for teams to come up to speed. What we see with co-pilot is that we can lower the training burden and increase consistency, so adoption of these things is rapid.
>> With AI agents, there's a little more reticence. We're still building trust, there are a lot of guardrails that have to be put in place, and the ability to write really great procedures that a generative AI can understand is a nascent skill. We spent a couple of years ourselves learning prompt development, but getting a model, or an AI agent, to follow a prescriptive script of "first you get the order number, then you find out the item, then you process the return, you make sure it's within 30 days," and so on, that's something we're helping our customers with right now, and we're building tools for it. Those who get there get great results, because the models work.
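A prescriptive script like the one Adrian describes, get the order number, find the item, check the 30-day window, then process the return, can be modeled as a slot-filling procedure the agent must follow in order. A minimal sketch; the step wording and the `slots` dict are illustrative, not Zendesk's actual tooling:

```python
# Toy version of a prescriptive return-processing procedure for an AI agent:
# collect required facts in order, enforce the 30-day window, then act.
from datetime import date, timedelta

RETURN_WINDOW_DAYS = 30

def next_step(slots):
    """Return the next instruction for the agent given what it has collected."""
    if "order_number" not in slots:
        return "ask for the order number"
    if "item" not in slots:
        return "ask which item is being returned"
    if "purchase_date" not in slots:
        return "look up the purchase date"
    age_days = (date.today() - slots["purchase_date"]).days
    if age_days > RETURN_WINDOW_DAYS:
        return "decline: outside the 30-day return window"
    return "process the return and generate a packing label"

slots = {}
print(next_step(slots))                       # agent starts by asking for the order number
slots["order_number"] = "A-1001"
slots["item"] = "shoes"
slots["purchase_date"] = date.today() - timedelta(days=7)
print(next_step(slots))                       # within the window, so proceed
```

The point of encoding the procedure outside the model is that the non-deterministic part (the conversation) can't skip the deterministic checks.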
So, a light bulb went off for me when you talked about generative search. My thought, and you can correct me if I'm wrong, is that people go to a website with some customer-service-style inquiry, they type it into the search bar, or maybe there's a chatbot in the bottom right-hand window, and they get a chance to have a conversation with that bot. I wonder, and because you're the Chief Technology Officer of Zendesk you'll have insight into this: do you think customer service is going to happen on, let's say, client websites in the future, or might it migrate into the big broader bots like ChatGPT?
I think that if the bots represent our agents, there's clearly going to be some kind of migration. Already, for a given brand, if you want to know about a company, you begin with a search in Google, and it turns out Google's results are usually pretty good. I think the same is happening with ChatGPT. But when we get into real service flows, will I be saying to ChatGPT, "I bought these shoes last week; go figure out that order, return them, and generate me a packing label"? Yeah, I think that's probably going to happen as brands short-circuit. Now, is that the customer service flow so much as the action flow, where we're actually moving towards systems of action that do that? I think it's all becoming integrated, and the nature is changing. At the back of it, though, everyone is still probably going to need a place to go to actually make contact and talk to a brand.
>> Recently we took a sample of 15 million or more customer service conversations across Zendesk customers, and we actually used ChatGPT to classify the contact reasons, or intents, and group them into cohorts. You get something along the lines of 47% where there was some kind of failure in the business: the product didn't work, the product didn't show up, the service is wrong, they just want to cancel, whatever it is. Another quarter are people who know the delivery date is October 10th but are going to call and ask when the delivery date is anyway. They're going to say, "Can I get it on the 9th, or is it really the 10th?"
So much of that is what we call the cost of doing business of customer support, and in many ways those people just need a human connection. Now, is an automated response going to be enough? Maybe for about half of them, but I think many will still be looking for that human connection. The final quarter of all inquiries is actually upsell, cross-sell, and advice. Again, it'll be a personal preference, but many of those people will be looking for a human connection too.
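The cohort analysis above, an LLM labeling each conversation's intent and the intents rolled up into roughly "business failure," "cost of doing business," and "upsell/advice" buckets, can be sketched like this. `label_intent` is a keyword stand-in for the real ChatGPT call, and the cohort table is invented for illustration:

```python
# Group conversation intents into the three cohorts described in the interview.
from collections import Counter

COHORTS = {
    "business_failure": {"broken_product", "missing_delivery", "cancel"},
    "cost_of_doing_business": {"delivery_date_check", "status_reassurance"},
    "upsell_cross_sell_advice": {"advice", "upgrade"},
}

def label_intent(text):
    """Stand-in for an LLM intent classifier; real systems prompt a model."""
    text = text.lower()
    if "broken" in text or "didn't work" in text:
        return "broken_product"
    if "still coming" in text or "when will" in text:
        return "delivery_date_check"
    if "recommend" in text:
        return "advice"
    return "cancel"

def cohort_of(intent):
    for cohort, intents in COHORTS.items():
        if intent in intents:
            return cohort
    return "other"

conversations = [
    "my blender is broken",
    "is my package still coming on the 10th?",
    "can you recommend a case for this phone?",
    "please cancel my subscription",
]
counts = Counter(cohort_of(label_intent(c)) for c in conversations)
print(counts)
```

The real value is in the aggregation: once every contact has a cohort, the 47% / quarter / quarter split falls out of a simple count.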
So yes, I think if you have a chat channel, a voice channel, a messaging channel, and an email channel, you're going to have an LLM channel, where the LLM will be the initiator and have the conversation. But one of the things we say is that automation drives escalation. As I automate something, more and more people, the Alexes of the world, are pressing zero and asking to talk to the operator, and that isn't going away. Yeah.
I just hit zero until I get a representative. And it's kind of interesting to me that we just want to talk. We need that reassurance as people sometimes, to call and ask, "Is that thing still coming?" Or to just have a conversation about what I've purchased or what I'm hoping for in a service.
>> Yeah, see, I'm a true CTO. I consider that to be representative roulette. It's a thing I will generally try to avoid.
That makes sense. All right, let's go back to that CTO hat of yours and talk a little about where the technology is today for what you're trying to build. We've talked through a bunch of different use cases: generative search; co-pilot, which, if I have it correctly, is an AI assistant that sits with customer service reps to give them information about similar cases, how to resolve things, and the client; and, of course, some automation of customer service itself, the easy things, I imagine, like the password reset you spoke about. How are you finding the models today? Are they enough for what you're looking to do, and what would you wish for in the models of the future?
>> I would say we've spent probably the last two years, and I think this applies to all of SaaS and a lot of industry development where people are building apps on top of LLMs, as the lumpenproletariat of the developer class: building guardrails and checks and balances and deployability, basically forcing non-deterministic libraries, which we're not used to programming against, to behave deterministically. And for many of us, we have products in market now; Zendesk has 20,000 customers using some kind of AI. I think we've gotten to a point where we're innovating on top of it and moving pretty quickly.
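The "guardrails and checks and balances" idea, forcing a non-deterministic model to behave deterministically, is often implemented as output validation with retries and a safe fallback. A minimal sketch, where `flaky_llm` is a stand-in for a real model client and the JSON schema is invented for illustration:

```python
# Validate LLM output against a schema; retry on malformed output; fall back safely.
import json
import random

random.seed(7)

def flaky_llm(prompt):
    """Stand-in model: sometimes returns malformed output, as real LLMs can."""
    if random.random() < 0.5:
        return "Sure! Here is your answer: refund approved"        # not JSON
    return json.dumps({"decision": "refund", "confidence": 0.92})

def call_with_guardrails(prompt, retries=3):
    for _ in range(retries):
        raw = flaky_llm(prompt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if out.get("decision") in {"refund", "escalate"} and "confidence" in out:
            return out  # passes the schema check
    # Deterministic fallback: never surface raw model output to the customer.
    return {"decision": "escalate", "confidence": 0.0}

result = call_with_guardrails("Should we refund order A-1001?")
print(result["decision"])
```

Whatever the model emits, the caller always receives a well-formed decision, which is the deterministic behavior Adrian is describing.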
So the next frontier model release comes along, and it's sort of like iPhone 16 to 17: "Oh, here's a vapor chamber. Great." But we'd actually spent a couple of years dealing with hallucinations and unpredictability, so it's great that the models have less of that; that does make a difference. The evolutionary, incremental improvements move the needle mostly on almost-frontier use cases, where only the latest model will do. For example, we have an agent that listens to every conversation that is automated, because we run a resolution platform and we charge for actual resolutions, not just conversations, so we need to know that the customer's problem was resolved. I think we use one of the latest Claude models in Bedrock to do that, and it's listening in. That's a task with judgment, reasoning, and skill, and latency matters less there, so it's something you can use a frontier model for. Every time that gets better, it makes a difference. Every time the AI agent that first greets a customer, the task-identification agent that starts talking to you and figures out what it is that Alex wants, can move to a new model generation, that really makes a difference. But for 90% of the work, those RAG searches, model improvements aren't making a huge difference. We're only just getting to the point at which we're really utilizing the capabilities they have.
So you have a Claude agent that effectively looks at the conversations reps, or AI reps, are having with customers and determines whether the issue has been resolved or not? That's exactly right, yeah. And we do that across every automated conversation.
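The resolution-checking agent described here is an instance of the LLM-as-judge pattern. A toy sketch, where `judge_llm` stands in for the Claude-on-Bedrock call, and the transcript format and verdict fields are assumptions:

```python
# LLM-as-judge: a second model reads an automated conversation and decides
# whether the customer's problem was actually resolved, so that only real
# resolutions are billed.
def judge_llm(transcript):
    """Stand-in judge: a real deployment would send the transcript to a
    frontier model with a rubric and parse its verdict."""
    closing = transcript[-1].lower()
    resolved = any(p in closing for p in ("that fixed it", "thanks, all set", "problem solved"))
    return {"resolved": resolved}

def billable(transcript):
    """Charge only for conversations the judge marks as resolved."""
    return judge_llm(transcript)["resolved"]

convo = [
    "customer: my tracking link is broken",
    "ai agent: here is a fresh tracking link",
    "customer: thanks, all set",
]
print(billable(convo))  # → True
```

Because the judge runs after the fact rather than inside the live conversation, latency matters much less, which is why a slower frontier model is a reasonable choice for it.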
That's fascinating, and it gets into the orchestration of the models, right? You could have one bot doing one thing, another bot checking its work, and another bot taking on a task.
>> Right. How do you solve a problem in AI? With more AI. So we have models watching models watching models.
So can you talk a little bit about that? We had Mustafa Suleyman on the show a little while ago, and he was talking about how he believes the real lifts are going to come from orchestration, because the models will end in a tie, and those who understand how to orchestrate them will end up doing the best with the technology. I'd love to hear your perspective on how effective the orchestration of models is, and whether you agree with Mustafa on that front. Yeah, I really do, and I'll put it in context from a customer service point of view.
We've seen this over and over again. If we're deploying automation for a customer and they have a reasonable knowledge base and FAQ, you can imagine getting to 20 or 30% automation, absent the customers who are going to jump out of the loop, the Alexes pressing zero. You can get to 20, 30, 40% just by answering the question, just by doing a really great job of generative search and putting an answer in front of them. If you want to go beyond that, you actually have to go out into the real world. You need that agent talking to the back-office system. If it's financial services, you need to go into the customer management system and the finance system; if it's retail, you need to go to the e-commerce and shipping systems and beyond. I need to figure out exactly who you are, Alex, and what you recently did with the company, because you really love it, or you prefer it, or you get annoyed when I don't have that context. Then I need to figure out what it is you want, and move forward. So it's integration and orchestration, integration and orchestration, integration and orchestration. In LLM terms, it's all about tool use: you have to be able to select the right tool, go get the right information, and follow a procedure.
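The tool-use loop, select the right tool, fetch the right information, follow a procedure, can be sketched as a routing table over back-office integrations. All tool names and return values here are illustrative; when no integration exists, the only correct move is escalation:

```python
# Route a classified intent to the matching back-office tool, or escalate.
def lookup_order(customer_id):
    return {"order": "A-1001", "status": "shipped"}   # stand-in e-commerce API

def lookup_balance(customer_id):
    return {"balance": 42.50}                          # stand-in finance API

TOOLS = {
    "where_is_my_order": lookup_order,
    "account_balance": lookup_balance,
}

def handle(intent, customer_id):
    tool = TOOLS.get(intent)
    if tool is None:
        # No integration for this intent: the "press zero" path.
        return {"action": "escalate_to_human"}
    return tool(customer_id)

print(handle("where_is_my_order", "alex"))
print(handle("change_flight", "alex"))  # no API available, so escalate
```

In a real LLM deployment the routing is done by the model's tool-selection step rather than a dict lookup, but the shape is the same: no tool, no automation.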
And I absolutely agree with Mustafa: the next phase of benefits, the next phase of automation, taking people to 80%, is not just being able to perform the task, but performing it in such a way that Alex says to himself, "I would just rather go through it with this bot. I get the same result every time. It's consistent, it's really enjoyable, it's really fast, it never gets tired, it's on 24 hours a day, and it speaks every human language available, including American English. And I can totally get this done." I think that is the goal, and that's how we're using frontier models at the moment.
Okay, but can I ask you, when are these models, or these orchestrated models, going to be able to take action? Maybe they already are. One thing I find with automated customer service is that it's generally good informationally, going back to that generative-search style. But let's say I need to move a flight, or I'm asking for a refund and I have a pretty good case; then I'm always passed off. I wonder whether we'll ever see a moment where AI customer service gets good enough to say, "I've checked with my counterpart refund AI agent, and you actually qualify for one."
>> So, Alex, the future's here; it's just unevenly distributed. There are plenty of companies out there doing that. Already? Already, yeah. I think so.
But what is the key challenge? Honestly, out in the real world, the key challenge, to hearken back to perhaps two or three years ago, is how far people get on their digital transformation journey. You go visit the chief customer officer of a company and say, okay, you want to get to 75 or 80% automation on your bots, so you need to orchestrate across these three back-end systems and do these things, and you have a homegrown stack and a couple of other things. Your order management system, I see it's homegrown; do you have an API for it? Because we generally don't put bots in front of screens. There is computer use, right? Claude and ChatGPT both have computer-use versions that can pretend to be a human and type into the thing. And what human agents do for you is swivel-chair integration: they go from this screen to that screen, copy the information, and make it work for you. But I think if you don't have an API on that system, we can't swivel-chair with an AI agent yet, effectively or reliably.
We can vibe-code and build the integration for them, but the API has to be there. So mostly the blocker, the reason you couldn't change that flight, is that they don't have that ability in their application or their back-end at the moment. And that's really the only thing blocking it; it's the same thing that was blocking great customer service a few years ago. Why doesn't Amazon, for example, have a phone number or an email address on the website for customer support? Because they've made every single thing you could do available for self-service. You can cancel your order, you can change your order, you can find out where it is, you can find out that it's on the street five stops away, et cetera. Everything is automated to reduce the need for customer service. That is beyond the budget and ability of most companies, who are focused on their core business and don't have that scale.
>> But you need something approaching that, or you need to build something approaching it, to be able to do this. Because what we do in customer service today is have humans do it for us: we write a standard operating procedure, the human follows it, they go from one system to another, and they give you the answer, which is why you get transferred.
It's a great lesson, I think, for everybody watching this or working in the field: there are studies saying AI is not delivering ROI, and therefore the technology must be flawed. But in reality, if I'm hearing this right, you need some underlying structure within an organization to be able to hand work over to AI.
Yeah, AI does an incredible job with one-shot questions and one-shot answers. But if it's complex and requires orchestration, it does need access. You need to be able to have it take on the identity of the user, log into something, and act as them, or act as a customer service agent. For some people that's the next step, but not everyone, and the tools to get that working are getting better every day, because, guess what, they're powered by AI.
Right, so does that mean better models will be able to do this? Is that what's really needed? Of course there's some work on the company end, but will better models, I don't know, maybe solve the CAPTCHA so they can log into the system and then handle these requests? Where do you think that leap is going to be taken?
I think that
better models will certainly be able to do a lot of that. But I actually think what's going to make a difference, more likely than computer-use models, is the improvement in coding models. That's a little bit basing the future on the immediate past. Coding models have gotten so good recently, and they can make developers so productive, that I think it's going to unlock for a lot of internal builders and company builders the potential to create so many connections, along with tools that have agentic AI built in that can interrogate the API space of systems and create experiences. So I think what could be semi-daunting projects, taken on just to automate another 3% of customer service tickets, will suddenly become things where the ROI is a lot clearer, and they'll say, "Yeah, I could do that. I could knock that up in Cursor, or get OpenAI's Codex to do it with me, and it wouldn't take too long."
Right. Okay, so just to put a bow on it: your perspective is that the current models are actually pretty good, and once you build some of the processes around them to minimize hallucinations, you can actually get pretty far.
Yeah. Think about the stack of customer service work. At the bottom you have tier zero and tier one agents: they figure out who you are and what you want, give you a one-shot answer if they can read it from the FAQ or the manual, maybe do some simple stuff, and then create a case that goes somewhere else. All of tier zero and tier one is automatable.
The next phase up is a co-pilot sitting next to the agent that says, "I see Alex is trying to do a return. I'm going to pull up all these different orders; you just ask him which one it is, tell me what's being returned, and I'll generate the shipping label and do all that work for you," so it's boom, boom, boom. That's available today. Similarly, generative search is available today. And that's a lot of the work of customer service. The rest is, as you said, the orchestration and the integration; if it's an easy landscape and we can cover the API estate with what we have, you can really get to 80 or 100%, depending on where you want to go.
So as of today, it's all there. I think voice isn't quite there, and voice is a different conversation, because human communication over voice is a difficult thing, because of the way people interact.
Okay, I do want to get to voice in a minute, but one follow-up on this. We've talked about this a lot from the company perspective: they have a chatbot, and some of that is automated. What happens when the customer starts sending their own automated chatbot? For me, I'll tell you, my dream is to have my own chatbot, or my own AI agent, whatever you want to call it, and I just want it to be called simple agent.
>> [laughter]
>> And what it does is it knows my number, my credit card information, my address, and all the account numbers with all the companies I do business with, and it gets me past that tier one and tier zero, whether it's voice on the phone with a company or filling things out in the bot. Then I'll get to have my conversation that I so desperately want to have, because it's what I love to do.
I think I share your dream, although mine actually goes all the way, does all the work, and I don't have the conversation. But on agents: there are a couple of standards out there, MCP, which was developed by Anthropic, agent-to-agent, and a few other things, and they're becoming fairly standard. We've been thinking about whether there's some button we can give our customers to click, so 100,000 of them could turn on: "If you are an agent, this is where we advertise what you can do for this customer." Here, this is a retailer: you can do returns. Here, this is a financial services company: you can't do a lot, depending on what it is. So I think right now we're thinking that at some point it's going to make sense to develop this standard and have things use standardized interfaces. Already those agents, I think, know enough to interrogate a knowledge base and FAQ, get a generative answer, and guide you on what to do next. Many of them can fill in that first web form, "I'd like to create an issue," and have a conversation. So we're a long way along, but not all the way there.
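The "advertise what a visiting agent can do" idea could look like a small per-brand capability manifest. The format below is invented for illustration; real interoperability would run through a standard such as MCP or agent-to-agent:

```python
# Toy capability manifests: each brand declares what a visiting personal
# agent is allowed to do, as in the retailer-vs-bank example above.
MANIFESTS = {
    "shoe-retailer": {"capabilities": ["returns", "order_status", "faq_search"]},
    "big-bank": {"capabilities": ["faq_search"]},  # financial services: not a lot
}

def allowed(brand, action):
    """Can a visiting agent perform this action for this brand?"""
    return action in MANIFESTS.get(brand, {}).get("capabilities", [])

print(allowed("shoe-retailer", "returns"))   # → True
print(allowed("big-bank", "returns"))        # → False
```

A visiting agent would read the manifest before acting, instead of guessing what the brand's bot can do.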
Yeah. All right, well, I can dream. Also, those personal agents don't work yet because they don't have memory. But that will come too.
Right. So this is good; I want to get to memory also, but we'll go in order, on voice. There's a great podcast called Shell Game; I've had the host on. His name is Evan Ratliff. He cloned his voice and had it speak to his friends and try to fool them. He sent it to business meetings, had it speak to his wife and kids, and put 8,000 words of his deepest, darkest secrets in its context window and sent it to therapy, where they worked through some real issues. He also sent it to an AI therapist, which was hilarious, because they started doing breath work together: the AI therapist tells the bot, "Take a deep breath," and the bot says, "I'm taking a deep breath." To me, I saw him pushing the limits of voice technology, and I thought, well, this is further along than I thought, and we are going to see this in action. So it's interesting to hear your perspective that there's still a long way to go.
So how far is there to go on voice?
Well, I think on the speech-to-text and text-to-speech front, we're already there. You can now rent actually famous voices to say anything you want, and there are platforms where you can recreate yourself, should you be so inclined. I've cloned my voice, for sure. There's enough of it out there that I can do it, yeah. And the danger of that, of course, is that someone could be trading crypto on your behalf or giving instructions to your financial advisor, but there are ways around that too.
But I think the challenge of interactive human conversation is just one of latency. You talking to me, me asking you a question, you asking me a question, you interrupting me, you using slang: all of those uses need predictable response times, basically, is the way I would put it. There's a famous dearth of chips out there, an industry in a race to get enough inference chips and inference computing power to power all of this. And the best frontier models sometimes think for a while and take a while to answer; that just doesn't work in speech. So all of the technology is there, almost every single piece of it; some of it is just a little slow. Because if you say a thing to me, I convert it to text, I put the text into an LLM, I get back the answer, and it's a great answer, I totally understood you, and then I take the answer, convert it back into speech, and play it to you: that's awesome, but four seconds went by, in the worst case, right?
And so, what
we're working on now is not just for the
happy path of I speak, you speak, and we
both listen to each other. Cuz it turns
out in customer service, that's not sort
of the
that's not always the predominant case.
You might be agitated, stressed, and
interrupting me. I need to react to the
emotion in your voice cuz I'm definitely
not going to try and upsell you if
you're annoyed. Uh and I'm going to try
and read signals about you know, how
you're feeling in that moment. All of
that technology is available and the
reaction to it. But it requires a lot of
reasoning. So, um next time you're on
the phone with someone like you hit zero
and you get through to them,
um marvel at the at the low latency
cognitive ability of the human brain and
how it can handle all of that in a
moment when it does it well. And the
fact that um
you know, it takes a lot of compute to
get there right now with the current
technology to do it in an artificial
way.
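The sequential pipeline described here can be sketched with hypothetical stage timings. The numbers below are illustrative placeholders, not measurements, and the streaming variant is a common production technique for cutting time-to-first-audio, not necessarily what any particular vendor does:

```python
# Hypothetical per-stage latencies (seconds); illustrative only.
STT = 0.8          # transcribe the caller's full utterance
LLM_FULL = 2.5     # frontier model writes the complete answer
LLM_FIRST = 0.4    # ...but its first tokens arrive much sooner
TTS = 0.7          # synthesize the reply audio

def blocking_latency() -> float:
    """Naive pipeline: each stage waits for the previous one to finish."""
    return STT + LLM_FULL + TTS

def streaming_first_audio() -> float:
    """Stream tokens into TTS as they arrive: the caller hears speech
    as soon as the first synthesized chunk is ready."""
    return STT + LLM_FIRST + TTS

print(f"blocking: {blocking_latency():.1f}s")        # 4.0s worst case
print(f"streaming: {streaming_first_audio():.1f}s")  # 1.9s to first audio
```

The point of the contrast: the total work is the same, but overlapping the stages changes what the caller experiences, which is why predictable low latency is an orchestration problem as much as a model problem.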
How many years away do you think we are from seeing this actually be ready for production? I know it's an unfair question.
Well, I think it's basically ready now, so we're only about a year away. To get it to mass market, at mass scale, and always get the latency and reliability that we want: probably still a year. That's coming online in the next six months to a year.
Okay. So, fast.
Yeah, I mean, things move fast in AI.
Definitely. I know, it feels like we're living in dog years.
On the memory front, all the frontier AI research labs are talking about how memory is so important to them. I imagine a big part of that is, okay, if you're in ChatGPT, then it remembers you; it doesn't have this goldfish brain, and it's actually become a lot better at that. I think it's something that Sam Altman has talked about as one of his favorite releases. But on the enterprise side, I think it's also quite valuable. You're obviously working with the cutting-edge technology and seeing what these AI labs are shipping. So could you give us a state of where memory is today and how useful it is?
I think we're experiencing memory through putting really good summaries into the context window, right? Or into the prompt. The way that looks is: Alex comes and asks me a question. I might retrieve a bunch of information about Alex's recent orders, recent interactions, and recent customer service ratings. I might put that into an LLM and generate a summary about it, and then I'll use that in the prompt, which is sort of a "hey, just so you know: Alex has given four negative CSAT ratings in a row, he's a minus 25 on net promoter score, and he's had this recent bad experience." That is something that maybe should be known in the conversation.
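The summarize-into-the-prompt pattern described here might look like the following sketch. The record fields, the hardcoded summary logic (in practice an LLM would write the summary over raw history), and the prompt wording are all illustrative assumptions, not an actual product implementation:

```python
# Hypothetical customer record; a real system would pull this from a
# CRM and have an LLM summarize the raw history.
customer = {
    "name": "Alex",
    "recent_orders": ["headphones (returned)", "laptop stand"],
    "csat_ratings": [1, 2, 1, 1],   # four negative ratings in a row
    "nps": -25,
}

def summarize(record: dict) -> str:
    """Stand-in for an LLM-written summary of the customer's history."""
    negatives = sum(1 for r in record["csat_ratings"] if r <= 2)
    return (f"{record['name']} has given {negatives} negative CSAT ratings "
            f"in a row and is a {record['nps']} on net promoter score.")

def build_prompt(record: dict, question: str) -> str:
    """Inject the summary into the prompt so it acts as 'memory'."""
    return (f"Context, just so you know: {summarize(record)}\n"
            f"Customer question: {question}\n"
            "Answer empathetically; do not attempt an upsell.")

print(build_prompt(customer, "Where is my refund?"))
```

The model itself stays stateless; the "memory" lives entirely in what gets retrieved and packed into the context window on each turn.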
But, and who knows how far away it is, I think we'd agree that the ultimate customer service agent would have total recall of every single conversation, every single interaction, and every single transaction that you've done, and would really understand all the nuances of Alex as soon as you say hi into the chat window, or respond to the voice bot and start speaking to it. So I think for me that's going to be a huge leap. Not just for customer service, actually: it also enables Alex's personal agent to be effective, and it enables all these other use cases in the world, and apps that are hard to imagine now but could be incredible, in the personal companion, group companion, or just automaton space. They're really exciting. It's also kind of the ultimate CRM: you can talk to someone who, as far as you're concerned, is basically them, and find out everything that you need to know.
That's wild. Do you think it's a solvable problem? Because when you think about the way that large language models are structured, there's no easy way to bolt memory on.
Yeah, I think there have been a couple of recent frontier-lab interviews where people have said there are, what is it, five or ten great innovations required to get to AGI. Memory feels like it's one of them. And so we might not get there with the exact technology that we're using for large language models at the moment, but it feels like somewhere that we want to get to with frontier-level AI.
Definitely. All right, last question for you. Continual learning, right? Models that get better as they go is another area on the frontier. How useful would it be if, let's say, you could set a large language model loose on customer service, and through every interaction it learned to get better at what it was doing? I think Satya Nadella was asked about it and he said that's game, set, match. Do you feel the same way about customer service, if that comes?
I think what we would like is the agent getting smarter every day. But consider what we have today, which I think is almost just as good. Think about all of someone's customer support: all the human agents, the search, the articles, the workflows, the procedures, and the AI agents. If you think about it all as one unit, this is my service estate. Today, with things like the Zendesk resolution loop, and this is something that we do, we can look and say: you need to write this knowledge base article; you've gotten ten conversations about this, and it's probably time that you do something. Or: the way the agents respond to this type of problem, this type of return of this type of item over 30 days, you need a new macro, a new repeatable response that deals with it.
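A minimal sketch of that kind of knowledge-gap detection. The topic tags, the ten-conversation threshold, and the function names are illustrative assumptions, not the actual resolution loop; a real system would cluster tickets by intent with a model rather than rely on exact tags:

```python
from collections import Counter

# Hypothetical conversation log tagged by topic.
conversations = (
    ["password reset"] * 12 +
    ["return after 30 days"] * 10 +
    ["invoice copy"] * 3
)

ARTICLE_THRESHOLD = 10  # illustrative cutoff, not a product setting

def suggest_articles(topics: list[str], threshold: int) -> list[str]:
    """Flag topics with enough repeat conversations to deserve a
    knowledge base article or a reusable macro."""
    counts = Counter(topics)
    return [t for t, n in counts.items() if n >= threshold]

print(suggest_articles(conversations, ARTICLE_THRESHOLD))
# ['password reset', 'return after 30 days']
```

The loop closes when a human reviews the flagged topics and ships the article or macro, which is what keeps the "machine" inspectable.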
And so, if you think about service as a machine, AI is so good at giving insights, if you set it up and do it in the right way. And you can already do that. Would it be incredible for an individual customer service agent, or for the one AI agent that talks directly to Alex and tries to automate, to learn from every single conversation? That would also be very, very cool. But if you owned that bot, you'd probably be saying to yourself: I wonder what it's learned, and how can I see it?
When we're changing the machine, writing new articles, and building new macros, it's sort of easy for the humans who own that machine to understand what's going on. The head of support technology can be like: yep, that is the correct workflow, we should do that, those are the rules of my business. When it's continual learning, I think there's a little bit of black-box fright, where you don't know what's going on inside, which is generally a large language model problem, right? With that kind of continual learning, I suppose we could have the model express what's happening, chain-of-thought style. But that's the downside to it: do you just put your trust in the machine and believe it's going to be better?
That's right. How much do you trust this technology?
I feel like I'm cheating, because we run evals on the technology before we deploy it. We test every model, and we get to see what it's good at and what it's bad at. In my personal life, I'm very risk-preferring.
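A toy version of that pre-deployment eval idea. The questions, the stub model, the exact-match scoring, and the support email are all made up for illustration; real evals would call the actual model and use graded rubrics rather than string equality:

```python
# Fixed eval set of (question, expected answer) pairs.
EVAL_SET = [
    ("What is your refund window?", "30 days"),
    ("Do you ship internationally?", "yes"),
    ("What is the support email?", "help@example.com"),  # hypothetical
]

def stub_model(question: str) -> str:
    """Stand-in for a real model call; answers two of the three."""
    canned = {
        "What is your refund window?": "30 days",
        "Do you ship internationally?": "yes",
    }
    return canned.get(question, "I don't know")

def run_eval(model, eval_set) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(model(q) == expected for q, expected in eval_set)
    return correct / len(eval_set)

score = run_eval(stub_model, EVAL_SET)
print(f"accuracy: {score:.2f}")  # 0.67 on this toy set
```

Running every candidate model through the same fixed set before deployment is what turns "do you trust it?" into a measured answer.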
You know, laziness drives you there, right? Or a desire to optimize, let's call it that. And so, talking to the new Gemini model on your phone, or talking to ChatGPT on your phone with the pro version, is an extraordinary thing to do, right? It does feel like you're living in the future. And the thing is so convincing, you just trust it immediately.
Or I do, yeah.
Yeah. Given my profession, I try to be as skeptical as possible, a trust-but-verify guy. But there are times when ChatGPT tells me to do something and I'm like, I don't really have time to check; I will end up going with this solution that you suggested. And then, anyway, I end up, you know, almost burning the house down.
>> [laughter]
>> Not always. But every now and then. It's never failed me on home maintenance, I will say that.
Yeah, no, I think it's definitely trained on enough good stuff that it's pretty good on that front.
Adrian, you know, I said it at the beginning and I'll say it again: someone like yourself, you're on the front line, deploying this stuff in real-world applications, and you really have some great insight into the state of the technology and where it's most useful. It's great speaking with you; it's always great speaking with you and the team at Zendesk. So, thanks for coming on.
Thanks so much, Alex. Pleasure. Appreciate it.
If people want to learn more about Zendesk, where could they go?
zendesk.com is a great place to start, where you can learn all about the technology, and someone, machine or human, will be there to help you.
Okay, sounds great. A fitting way to end
this conversation. Thank you so much,
Adrian, and thank you, everybody, for
watching. We'll be back on the feed with
another video later this week. Thank you
for watching.