Capital One's Prem Natarajan: Why We're Building Our AI From The Ground Up

Channel: Alex Kantrowitz

Published at: 2025-12-15

YouTube video id: agLCmeqXrrM

Source: https://www.youtube.com/watch?v=agLCmeqXrrM

Should you build AI from the ground up or buy off the shelf? This brand has an emphatic answer. Today we're joined by Prem Natarajan, Capital One's executive vice president, head of enterprise data and AI, and chief scientist, to talk with us about how the company is putting together its AI strategy, in an interview brought to you by Capital One. Prem, great to see you. Welcome to the show.
>> Alex. Delighted to be here.
>> So, you have a very interesting history. You were really there on the ground when Amazon built Alexa, and we'll come back to that in a bit, but you've really seen how companies build AI strategies effectively, and you are running the AI effort at Capital One. I teased at the beginning that many brands are going through this debate about whether to build AI or buy it off the shelf, and you have an emphatic answer: you've chosen to build, and that has really surprised me. So talk a little bit about why you've decided to build your own AI stack rather than rely on off-the-shelf models. Why build, and how did you decide how much to invest?
>> Yeah, great kickoff question there, Alex. Let me say that our decisions, as for most organizations, are rooted in our history, in the DNA of the organization, and in what has worked for us historically. Capital One is, I think it's well known, a very tech-forward bank; our CEO will often talk of us as being a tech company that is in the banking industry. What has informed that? If you go back to the founding of Capital One, our legacy is one of a company that has very effectively used decisions driven by data and analytics to deliver the best products and services to our customers. That's really at the core of what we do. So when we look at this AI revolution, we ask ourselves what it will take to bring the full benefits of AI to our customers. Finance is so central to our lives, and financial services are therefore so important. So our question always is: how do we deliver the best?

Look at it from that perspective. We're building on a journey that started, in a sense, at Capital One's founding, but really over the last dozen years or so we've been on this tech journey. We were the first bank that went all in on the cloud; we're entirely on the public cloud. That brings us state-of-the-art software engineering, state-of-the-art practices, and so on. Within that cloud journey, we've been on a data transformation journey, where not only is all of our data in the cloud, but we have invested a lot in the curation, the governance, the quality, and the completeness of that data, all of which turn out to be prerequisites for succeeding in the AI world. But there is one more thing. In order to truly bring all of that to life for your customers, you have to bring your own data to the models, in a way that lets you do deep customization of those models, so that you truly unlock the value in that data for the products and services you're offering. That is what has led us here, and a lot of experience over the past few years has validated the pathway we're on. For now I'll leave it there: it's really bringing your data to the models and harnessing the full power of AI in that way. That was our hypothesis, and what we've done in the past few years has validated that hypothesis.
>> So are you building from the ground up, or are you using this data with off-the-shelf models? Because my impression was that it was a ground-up build.
>> Yeah, we're definitely building from the ground up. Like I said, first we went all in on the cloud, so we basically build on top of the cloud. When it comes to building platforms, we use the core services on the cloud, the elastic compute, the elastic storage, and many other baseline services, and then we build our platforms on top of those core cloud services. Before generative AI, in the case of classical machine learning models, we built our own enterprise platforms for all of our modelers and data scientists across the company to build their models, and those platforms work in the context of our data platforms and our software platforms to unlock modeling at scale. We've brought that same thinking here. We've built our own platforms, and we've built our own reusable services on top of those platforms. These reusable services are patterns that developers across the company can leverage to build AI-powered applications. But there are also other important things. We are a bank; we lead with risk management as a central thing. In fact, let me show you. Look at this bottle: I drink water out of the risk tech bottle, right? So we lead with that. All kinds of observability and monitoring capabilities are built into the platform, so for a lot of the concerns people would have around how to manage these things, once we put them in a platform, we can scale our way through all of those services. So we're building from the ground up. We build on top of elastic compute, elastic storage, and GPUs. GPUs, as you know, are still not elastic; in fact, availability itself can be a challenge. But beyond that, we build platform layers, reusable services, reusable capabilities, developer-facing tools, and then also observability and monitoring tools and dashboards, so that our risk management professionals can make sure things are being done well.
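The platform pattern he describes, reusable services with observability and monitoring built in, can be sketched roughly like this. This is a hypothetical illustration; the names and structure are invented and are not Capital One's actual platform:

```python
import functools
import time

def monitored(service_name):
    """Hypothetical platform decorator: wraps any model-serving call
    with the latency and error telemetry a risk team might review."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                latency_ms = (time.perf_counter() - start) * 1000
                # A real platform would emit this to a metrics backend,
                # not stdout.
                print(f"{service_name} status={status} latency_ms={latency_ms:.1f}")
        return wrapper
    return decorator

@monitored("loan-faq-model")
def answer(question: str) -> str:
    # Stand-in for an actual model call.
    return f"echo: {question}"
```

The point of the pattern is that every application built on the platform inherits monitoring for free, rather than each team wiring it up separately.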
>> Okay. So I definitely want to interrogate this a bit, because I'd like to know why it makes sense to develop your own models versus just taking them off the shelf. But I think in order to do that, we probably need to ground it a little and ask what you've built. So what have you built at Capital One so far? Can you talk through the whole stack: GPUs and infrastructure, models, agents, applications? What exactly does Capital One have out there in the market that is built on top of this generative AI technology?
>> Let me ground it in a concrete example, Alex. We are in the wave of agentic AI right now, at least in the public perception.
>> And what is agentic AI?
>> Yeah. Agentic AI, at least the way we look at it, is the bringing together of two of the most powerful forces in generative AI today. One is the power of reasoning and the other is the power of specialization. Agentic AI uses reasoning to break complex workflows into simpler tasks, and then uses the power of specialization to execute those simpler tasks in an accurate, robust, reliable, and efficient way. The bringing together of those two is really agentic AI. Of course, there are other aspects to agentic AI, like going from simply answering questions to being able to take certain actions, but all of those depend on these two fundamental attributes, reasoning and specialization. And the specialization piece is where it's super important to bring your data to the models, because they have to be specialists in how you do your things.

Now, in terms of what we've built: we started building out our agent infrastructure in the spring of 2024, because we were already seeing that this was going to be the trend in how we mobilize AI across the enterprise at scale. When we started building it, we asked what applications we could bring to life with it, especially because we were at such an early stage, and as I was saying, we lead with risk management as a central requirement in any early exploration. So we identified an application in partnership with our colleagues in the financial services line of business at Capital One. Capital One has one of the largest, if not the largest, auto lending businesses in the country, and in addition to providing financial products, they also provide software products to auto dealers across the country. One of those is a chatbot that interacts with users. We said, this is a relatively low-risk but high-surface-area set of interactions, so it would allow us to build and demonstrate the power of agentic infrastructure and architectures, while at the same time allowing us to manage and calibrate risk, learn from those interactions, and see what the behaviors of these things are. So we built this thing called Chat Concierge. Capital One has this phenomenal culture of collaboration; when I first came in I wondered whether we could pull that off, but now I'm totally inspired by the natural instinct to mobilize across the company to bring things to life. We worked very closely with our partners in financial services; they built the application, backed by this agentic infrastructure. Chat Concierge is now available at many dealerships across the country. The general takeaway we have had from that, Alex, is that casting what seem like generative AI problems into agentic problems has allowed us to bridge the gap between the lab and production. What that has done is let us bring in the power of reasoning, very specific to the application, and the power of specialization, which is very specific to the company and how we want to do things.
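The reasoning-plus-specialization split he describes can be sketched as a toy loop: a planner decomposes a request into named tasks, and a registry of specialized handlers executes each one. This is an illustrative sketch under my own assumptions, not Capital One's architecture; in a real system the planner would be a reasoning model and each handler a customized model, not a lambda:

```python
from typing import Callable, Dict, List

# Specialization: each handler does one narrow task well.
SPECIALISTS: Dict[str, Callable[[dict], dict]] = {
    "understand_need": lambda s: {**s, "need": "family SUV, budget $30k"},
    "search_inventory": lambda s: {**s, "matches": ["SUV A", "SUV B"]},
    "schedule_visit": lambda s: {**s, "appointment": "Sat 10am"},
}

def plan(request: str) -> List[str]:
    """Reasoning step: break a complex workflow into simpler tasks.
    Hard-coded here; a real agent would derive this per request."""
    return ["understand_need", "search_inventory", "schedule_visit"]

def run_agent(request: str) -> dict:
    """Agentic loop: plan, then dispatch each task to a specialist,
    threading shared state through the steps."""
    state = {"request": request}
    for task in plan(request):
        state = SPECIALISTS[task](state)
    return state

result = run_agent("I'm looking for a car for my family")
```

The design choice worth noticing is the separation of concerns: the planner never needs to know how a task is executed, and each specialist can be swapped or improved independently.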
>> Okay, so tell me a little bit about this Chat Concierge application. Is it that I walk into an auto dealer, I see a car that I really like, I wonder if I qualify for financing, and so I hop onto Chat Concierge and tell it a lot of my attributes, like how much I make, where I live, whether I've gone bankrupt in the past, [laughter] and then it will tell me whether I qualify for a loan? What does it do, exactly?
>> We live in a very online world today, as you know, Alex.
>> So our customer experience doesn't start when we walk into a place. It starts when we look that business up on the web to see what they have. Chat Concierge starts online, when customers are looking for cars and can say what kind of vehicle they're looking for. And cars are so central. One of the things I learned from Sanjie, the inspiring leader who heads our auto division, is how central cars are to people being able to live full, successful, economically productive lives in the US. It's just incredible. So there's a mission focus here on making sure we can help people get into their cars. But the experience starts online. People will go to a dealer website and say these are the kinds of cars I want, and it might interact and ask, okay, what's the size of your family, what kinds of trips do you take, and so on. All of that experience is now fronted by Chat Concierge, because it can actually engage with you in an interactive understanding of your needs. Usually we think of it as just determining the customer's intent, as in the classical chatbot, and then fulfilling the intent. We're now in a paradigm where we say we really want to understand your need. Let's interact: what is it that you really need? From that we say, here are the choices you have, and then, would you like to schedule an appointment? And at some point we hand off. Obviously the most satisfying experiences always have a human in the loop at some point, and so the idea is to make this available at scale, so that people can figure out what they want and are prepared, so that when they come in to the dealership they've already established a connection with that human and can go forward. In fact, after this call I'll have folks send you a link to Chat Concierge; you can try it out at your favorite dealership, perhaps.
>> And so, Prem, obvious question: why is a bank building this? Is it because you help dealers close more deals, and that will therefore lead to more financing opportunities on the bank side? Where does the bank part of the business come into play here?
>> You know, given the interest in that, Alex, I'd say at some point you should definitely sit down and talk with Sanjie and understand the overall auto finance business. One of my great joys in this job is that I have a lot of line-of-business partners who have all these needs to advance their businesses, and then we get to figure out how we're going to bring, or enable, the magic of AI for them. But I'll tell you, ultimately we are in the business of having very happy customers, right? And so to me the motivation I care most about, Alex, is how I am helping to make our end customer most satisfied with the services and products we're providing. Each line of business will have its own specific business plans and its own reasons for being in certain things. But from a technology perspective, from where I sit, my job is to make sure I'm enabling all of those plans as best I can, with a relentless focus on a happy customer.
>> Yeah, that's fascinating. So Capital One in a way becomes a technology clearinghouse for its end customers and helps them grow their businesses through these applications. You mentioned earlier that Capital One brings its data to the models to improve how those models function. So let's go back to this experience with Chat Concierge, where people are chatting with it on dealer websites, maybe to find the type of car they want. Where does the data that Capital One has come into play in this experience?
>> On this one, as I said, we started off with an application where the risk surface is relatively small and the interaction surface is large. [clears throat] As you're interacting with users: what kinds of questions do they ask? Where have some of these conversations, especially historically, been full of friction? And how would we address them in a more flexible, more capable paradigm? That was one set of data we have. The other set of data, of course, is all the inventories for the cars, what's available, and so on. In that sense, because this was an initial beachhead for us to prove out this capability, we were able to bring both of those. There's also an important learning aspect to all of this, Alex. Building something in your engineering environment and then taking it to production is one step. What I've come to recognize is that it's the first step in the AI stairway to heaven. Right? [laughter] A lot of the action, a lot of the learning, is actually in ascending the rest of the staircase: what are your post-production learnings? What do your observability and monitoring tools tell you about where the customer might be experiencing friction? That data comes in, and we ask how to improve. While we don't usually get into specific numbers in terms of improvements, as you might imagine, what I can tell you is that the post-production improvements are pretty substantial.
>> That's the other part of why build versus buy. In a very different use of the word agent: you as a company want to have agency in being able to improve every part. A lot of these complex technical architectures and products are all about different layers of the stack working in tandem with each other. So we are, in my mind, past a system-integration view of the world, where you simply say, I bring this in here, I bring that in there, I tie them together, and it works. I'll give you an example that's very old, from speech recognition and machine translation, from 20 or 25 years ago, when I was very heavily in the DARPA world; a lot of my work was sponsored by DARPA programs. One view was: you take a speech recognition engine and use it to transcribe the speech, then you bring in a capable machine translation engine that takes the output of the speech recognizer and translates it. It turns out that if the vocabularies of the speech recognition engine and the machine translation engine are not synchronized, you can lose a lot in the gap, even in the standardization of the vocabulary terms. So you lose a lot of accuracy. Now scale that to where you have so many layers in your stack. Unless all of them are playing from the same orchestral song sheet, it's very hard to create something that's totally in tune, a pleasing orchestral arrangement. That's why you really need to build: all of these things require so much interplay and joint optimization throughout. This post-production improvement is actually an example of that: you don't know exactly which part of your stack you need to improve in order to address a particular class of friction that you observe. Being able to change any part of your stack is a tremendously liberating and empowering thing for our product managers, our engineers, and our scientists, because everybody can now mobilize to improve the product and the experience.
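His speech-recognition-to-translation example can be made concrete with a toy pipeline: if the recognizer emits surface forms the downstream stage's vocabulary doesn't contain, or normalizes differently, content is silently lost in the gap between the two components. The vocabularies and tokens here are invented purely for illustration:

```python
# Toy two-stage pipeline with unsynchronized vocabularies.
# Stage 1 (the "recognizer") emits these surface forms:
asr_output = ["twenty-five", "dollars", "on", "oct.", "5th"]

# Stage 2 (the "translator") only knows these surface forms:
mt_vocab = {"25", "dollars", "on", "october", "5th"}

kept = [tok for tok in asr_output if tok in mt_vocab]
lost = [tok for tok in asr_output if tok not in mt_vocab]

# "twenty-five" vs "25" and "oct." vs "october" fall through the gap:
# the information existed upstream but never reaches stage 2.
print("kept:", kept)   # ['dollars', 'on', '5th']
print("lost:", lost)   # ['twenty-five', 'oct.']
```

Jointly optimizing the stack, in this framing, means agreeing on one normalization so nothing falls through, which is only possible when you control both stages.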
>> Right. So, okay, you mentioned that this is a beachhead. Talk a little bit about what the shore is like. What's the ambition here? If this works well, then what comes next?
>> The ambition. What comes next will be really slow and steady; again, we work our way through risk. Our ambition, of course, is to use this as broadly and as effectively as possible, across a wide range of application areas. But our initial ambitions are around how we empower our associates to deliver the best services we can to our customers. One of the first applications, actually the first application we started developing, back in 2023, was agent assist. We have a lot of customer servicing interactions that happen on a daily basis, and as you know, when you call in, there's always a little bit of a wait time. When you call in to the call center and you've been waiting, the human agent who picks up the call is very aware of the fact that you've been waiting and that you're in a heightened state of anticipation of a quick and correct answer. But then they have to deal with the systems they have at the time to find the answers. Sometimes the answers come from multiple different sources; they have to synthesize the answer and give it to you. And we customers, being impatient, often need the answer quickly. So if we can build AI systems that reduce the cognitive burden on the human agent at that moment, not only does it translate to faster and more accurate responses for the customer, it also greatly improves the lived experience of our human agents, who are the front line of our interactions with our customers. When we started off, we built an initial version that works a certain way, but when we look at how much more power casting these problems as agentic problems gives us, we're already starting to see how we can improve the performance of those systems for human agents.

The other example I would point to is software. We are a tech company at heart, like I said, and our software developers are already benefiting from generative AI, but that's going to be another big fillip to our work. We live in a world where our ideas and aspirations often exceed our capacity to execute and deliver on them in any finite amount of time. So we are very excited that all of these things will unlock the speed with which we're able to develop and deliver things to our customers. I think this will impact every one of us. I've always felt that one of the noble aims of AI, Alex, is to transfer the cognitive burden from the human to the system at the time when the human feels that burden to be heaviest. And the more I think about AI, the more I think the next few years are going to be a real blast, bringing that power of AI to all of our associates at Capital One.
>> Definitely. Okay. So let's talk a little more about the architecture here; I just want to double-tap on this one more time. Going back to being able to build this experience that you've built: you started with open source and then customized it and brought the data in. What's the advantage of doing that versus using the off-the-shelf models? Because in my conversations with some of the AI lab leaders, like Dario from Anthropic, something he said is that they are starting to mirror open source in a way, where that customization is possible and bringing the data in is possible with their off-the-shelf models, meaning Claude from Anthropic. So why go open source and customize? Is it just a greater degree of control, or what do you get there?
>> Well, when we say a greater degree of control, we're actually saying a greater ability to improve performance. It's not control for control's sake; it's really about the performance.
>> Let's go back to Chat Concierge for a moment. Think about the different steps that something like Chat Concierge goes through, even in simple interactions. You want to confirm the needs with the user. You then want to simulate the plan that you're going to work against. You want to validate that plan, make sure that it's right, and so on. Each one of these steps is a combination of the data that is required to improve the execution of the step, plus the reasoning capability and the interaction with the user. How does the business want to execute it? You may have certain rules where you say, if these things happen, I want this to go to my human in the loop; other businesses may have different rules. So there's a fair bit of customization, both of the UX and of the performance itself. In this context, one of the things we care about most is speed, or latency. You know this from your favorite home assistant: if it takes three seconds to respond, it's way less satisfying than if it takes two seconds, which is way less satisfying than if it takes one second. We just like the speed of that interaction, and it's a big driver of customer satisfaction. So if you want to improve the speed of these things, you have to do things like distillation. One of the other benefits of specialization is that the models can be made much more compact while delivering the same kind of performance relative to the size of the model, because now they're specialized. That means they can run much faster, which means you get not only efficiency but also a much more satisfying experience for the associate or the customer. These kinds of things, at least right now, are much more doable by anchoring on open source models and then bringing your data to those models in a very deep customization, and then getting the full benefit of it. And I don't think we're the only ones finding that.

There will be areas, like software development, where so much is horizontal in terms of how it is practiced across the world, where I do think these other approaches, where what you bring is the context of your software environment, will hold, at least for now. Some of these closed-source models, with their amazing ability to consume the right context for certain tasks, will, I think, continue to provide a lot of value. As AI gets into every area we're interacting with, Alex, I think there's going to be room for almost every paradigm to contribute. But in our area, what we're finding is that building on open source and bringing our data very close to the models is what's making the difference. And we constantly benchmark. While we build rather than buy in many cases, we don't do that in a vacuum.
>> We're constantly looking for the best way to do something, and right now all of our experiments are pointing us in this direction.
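Distillation, the technique he names for making specialized models compact and fast, trains a small student to match a large teacher's softened output distribution. A minimal sketch of the standard soft-target distillation loss; the logits below are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature > 1 softens the distribution, exposing the
    teacher's relative preferences among non-top answers too."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's and student's softened
    distributions; minimizing this pushes the compact student to
    mimic the large teacher at this token position."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Invented logits for a single prediction:
teacher = [4.0, 1.5, 0.5]
student = [3.0, 2.0, 1.0]
loss = distillation_loss(teacher, student)
```

In training, this loss (often mixed with the ordinary hard-label loss) is minimized over the student's parameters; the specialized student can then serve the narrow task at a fraction of the teacher's latency.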
>> Do you have an open source model that you favor?
>> No. We actually use more than one, depending on the application. Again, like I said, we have this benchmarking. We take a look at these models, and we favor US-based models in general. But beyond that, we're constantly looking for which model provides the best tradeoff for different classes of applications. At the same time, we're not a place that's going to support every model in the world, but we do borrow heavily from multiple open source models.
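The per-application benchmarking he describes can be sketched as a small harness: run each candidate model over a task's evaluation set, score accuracy and latency, and compare the tradeoffs. Everything here (the stand-in "models", the scoring) is hypothetical:

```python
import time

def benchmark(models, eval_set):
    """models: name -> callable(prompt) -> answer.
    Returns per-model exact-match accuracy and mean latency
    over eval_set, a list of (prompt, expected) pairs."""
    results = {}
    for name, model in models.items():
        correct, elapsed = 0, 0.0
        for prompt, expected in eval_set:
            start = time.perf_counter()
            answer = model(prompt)
            elapsed += time.perf_counter() - start
            correct += (answer == expected)
        results[name] = {
            "accuracy": correct / len(eval_set),
            "mean_latency_s": elapsed / len(eval_set),
        }
    return results

# Hypothetical stand-ins for two candidate model endpoints:
models = {
    "model-a": lambda p: p.upper(),
    "model-b": lambda p: p,
}
eval_set = [("abc", "ABC"), ("xyz", "XYZ")]
scores = benchmark(models, eval_set)
```

A real harness would add task-appropriate metrics (not just exact match) and run against served endpoints, but the shape, candidates times eval set times metrics, is the same.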
>> Fascinating. I'm about to have Mr. come in studio, and one of the things I've been thinking about is this: when the DeepSeek moment happened, it was like, okay, open source had not exceeded the closed-source models, but it seemed like it had caught up to the point where people started to take it really seriously. Back then, something people were telling me was that open source was going to be at par, because instead of having OpenAI working on its own and Anthropic working on its own, you had this whole community of open source effectively working together, building on each other's innovations, and they would soon exceed the closed models. I don't know if that's happened yet, and I'm curious to hear your perspective on the state of open source, because as I'm thinking through this, DeepSeek happened at the beginning of 2025, we're at the end of it here, and I'm curious what the race looks like.
>> Yeah, I think there's more than one race going on, Alex. Part of it is that we would like it all to be one race, right? The best model on some benchmarks. But as you know, even the benchmarks that have been used, and in some sense continue to be used, to measure the performance of these models are themselves in question, because what happens is that even the people developing these models see them improve on the benchmarks, then they put them out there, and the real world is like, meh. Was that really a difference? That's one of the challenges any time you have a race: unless you have a very clear metric that correlates to real-world perception of the experience, it's hard to determine progress. The race that we are in is to build the best possible enterprise AI solutions, ones that enable the most capable enterprise AI products and services, which are then provided to our associates and our customers. When we look at it from that race, it's no longer just about the models alone. It's the entirety of the stack.

You've heard this by now: a lot of the value that's unlocked also comes from your ability to consume the right context at the right time, which is why you're seeing the emergence of all of these protocols as well. But one of the best ways to bring the right context to the models is actually to do a deep customization of the models. Not all context is dynamic. A lot of context changes at a much slower rate. That context being baked into the model makes the models much more capable to start with. Then, when you bring your dynamic enterprise content to it, you already have something like this, Alex: somebody who's been working at a particular company, take Capital One, for a while. When they have to do a new task there, they have learned a lot about how things operate and the styles in which we work, and they bring that to the new context and are much more capable. It's not very different from that when the model is deeply customized with more and more of our data and how we do things. We've seen this in the past with models: when you train the same model to do more tasks, it turns out it becomes better at every task, which is kind of interesting. It's the same here. When we train and customize it to do more and more of our tasks, it learns more and more about [clears throat] Capital One, and then we bring new context to it. So in that sense, we take a full-stack view. Models are a pretty critical and foundational element of that stack, but they're not the only element.
>> Oh yeah, that makes a lot of sense. All right, a couple more for you.
>> AI pilots. You've run a couple, and look, there are some disputes about the percentage that work or not. There was that MIT study that said 95% fail. I don't fully believe that, but the idea that the majority of pilots don't get out into production certainly resonates with what I've heard. So talk a little bit about what helps an AI pilot get into production. What makes it profitable, or what makes it successful once it's there?
>> Yeah, you know, maybe one way to look at it... first, let me just say that it doesn't align with our experience, by the way. But again, going back to our history: we have been invested in tech, and especially in AI and machine learning, through multiple generations of machine learning, and through that you learn certain approaches, methodologies, and practices: how to qualify up front which use cases are worth pursuing, what might take more development, and so on. So I do think your probability of success is conditioned by the preparation you've had before you engage in something. It's like... I've played a fair bit of street cricket growing up; that hasn't really prepared me for American football.
>> Okay.
>> Little different sport.
>> Yeah. You can say, yeah, I know, I love games, I love playing, but then I go into football and I'm like, wait, what are the rules? In that same sense I ask: how long have you been preparing for this world? It's beyond just technology. What are your processes like? What is the talent that you have? This is a very 360 game, a very multidisciplinary game. You need awesome product managers. You need awesome engineers. You need awesome scientists. You need awesome business thinkers. You need awesome risk management people who don't just say, I understand risk, but are also learning new things and asking, how do I apply my experience of risk to this emerging area? We're blessed with a collection of talent across all of these things that allows us to do this, so I recognize that's a somewhat privileged position I'm in. But acknowledging all of that, I'll still say our experience is that when you follow the right thought process, you surface the use cases, you analyze the readiness to develop those use cases, you have a lot of qualifying steps along the way, and people have an open conversation that also relies on trust between the constituents. Without giving a number, I'd say that study you're talking about is about as far away from our experience as it could be. I really think it's about the preparation you have before you engage in the game, if you will.
>> Yeah. All right, last question for
you. So, you were on the ground, like I
mentioned in our opening, with Amazon
Alexa at the very beginning. Just tell
us a little bit about what it's been
like watching AI go from there to where
it is today, and where do you think it's
heading?
>> Yeah. Uh Alex, if it's okay, I'll take a
further step back.
>> Sure.
>> Because I do think we have to
acknowledge the tremendous contribution
of DARPA and the community it sponsored
through decades of its existence,
because almost all of the foundational
work in AI and machine learning, a huge
fraction of it, has happened with DARPA
sponsorship. Having spent a lot of my
life in a world that benefited from
DARPA, I want to acknowledge it, whether
it's speech recognition or machine
translation. DARPA was investing in
explainable AI before other folks were
talking about it. DARPA was talking
about trustworthy AI before other folks.
So I think that part has to be
acknowledged. What I've seen, though,
be acknowledged. What I've seen though
as a trend and especially where I think
the work at Alexa was kind of um uh took
it to a new level at the time that it um
that it arrived was we were building all
these component technologies one at a
time and we're trying to put them
together in certain applications you
know but the scale at which it it came
into a retail experience in everybody's
home at one time and and became kind of
a consumer consumer AI that not just
adults but children could interact with,
right? That was the magical
transformation at the time that it
happened, right? Which is, you know, I I
used to see my daughters like uh like
you know they have their friends over
and and they're like play this song and
play that song and then this thing and
then there's a competition I can make a
change to this song etc. that that sense
that it created that I can talk to my
environment, it talks back to me or it
responds to me that to me is is the real
uh was was the real magic. Uh I had a
you know fantastic time uh both in my
DARPA world. Actually every I was at USC
for a while uh University of Southern
California. Fantastic fantastic learning
experiences there too. Uh and now um uh
you know Amazon and then Capital One.
Each of these experiences has kind of
been super satisfying uh in its own way.
Um and um Alex, you know, I have to say
uh if you want to contribute to changing
[clears throat] the future of um uh AI
and finance and finance itself and you
want to be part of the world's best AI
organization in finance,
please come join us.
>> Okay. And if people are interested in
learning more about Capital One's AI
efforts, or actually decide to join the
effort, where do they go?
>> We maintain a pretty solid website
presence that talks about a lot of our
AI work, so I would say go to our AI
blogs and the AI part of our website.
Because website URLs can change, I would
simply say do a web search for Capital
One AI, and you will see a lot of our
work. And if you have any questions,
there's contact information there; reach
out. And there's always LinkedIn.
>> That's right. Well, Prem, fascinating
stuff. It's wild to see how far you've
gone in building from the ground up, and
specifically outlining why you've taken
that path, as opposed to buying off the
shelf, has been really fascinating for
me. There are a lot of ways to do this,
but it's really interesting to see your
path, and I'm looking forward to
learning more about what you have coming
down the pike. So, thank you again.
>> No, thank you, Alex. A lot of
thought-provoking questions there, and I
loved this interaction.
>> Yes, likewise. All right, everybody.
Thank you to Prem, and thank you to all
of you for watching. We will be back on
the feed with another video shortly.
Thanks again, and we'll see you next
time.