Build Dynamic Products, and Stop the AI Sideshow — Eliza Cabrera (Workday) + Jeremy Silva (Freeplay)

Channel: aiDotEngineer

Published at: 2025-07-23

YouTube video id: CB-4NKDYnRs

Source: https://www.youtube.com/watch?v=CB-4NKDYnRs

[Music]
All right. Well, thank you for joining us. We are here to talk AI products, and specifically dynamic products, which we'll unpack in the next 20 minutes or so. A little bit about us before we jump in. I am Eliza Cabrera. I'm a principal AI product manager at Workday. I'm currently building our financial audit agent with an incredible team. I also led go-to-market for our policy agent, as well as early access for our assistant, which is more like a co-pilot, as well as some of our early gen AI features. My name is Jeremy Silva. I come from a data science and machine learning background and have been building language models since the dark ages, pre-GPT-3 that is. Now I lead product at a company called Freeplay, which exists to help teams build great AI products. Awesome. Let's get into it.
So, if there's one thing we want you to take away from this session, it's to stop the AI sideshow, which I know sounds a little counterintuitive. We're all at an AI conference. All of us are talking about agents and AI. It's all over the place, right? So, what exactly are we talking about here? Having AI lead your products, your go-to-market, your strategy. This was a really great approach when we were all trying to communicate that we were at the forefront of this technological disruption, right? But now everyone is saying the same thing. And if we look at the different products that have resulted, in hindsight (and hindsight's 20/20, right?), we're able to see that what we've been doing is building product to try to figure out what these different technological breakthroughs can do for us.
So, let's unpack what we're talking
about here a bit.
So let's go back maybe post chat GPT
maybe for some of us in the room preGPT
but whenever your sort of aha moment was
with LLMs trying to figure out what you
can do with the tech what you can't what
the boundaries are we ended up using
chat UIs content UIs existing
applications right to be able to really
test the boundaries of these LLMs we
were also using multimodal to see what
different kinds of inputs and outputs we
could use the technology for.
Then we realized we could ground the models. We had vector databases and RAG. We were trying to get to accuracy and truth, if we can agree on that. We had larger context windows and increased memory. We also weren't super comfortable with having AI do work for us, so everything was a co-pilot, right? A buddy next to us who can help us get things done, but we don't want to be taking anybody's jobs away. We don't want to be automating work. Until we realized it might actually be kind of nice to have agents that can do things for us, reason, and use tools and various APIs to orchestrate across different business problems. And this is the state and space, I would say, that we're in right now.
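To make that grounding idea concrete, here is a minimal, hypothetical sketch of the retrieval-augmented generation pattern the speakers are describing. The `embed`, `vector_store`, and `llm_complete` helpers are stand-ins for whatever embedding model, vector database, and LLM client a team uses, not any specific vendor's API.

```python
# Minimal RAG sketch: ground the model's answer in retrieved documents.
# `embed`, `vector_store`, and `llm_complete` are hypothetical stand-ins.

def answer_with_grounding(question: str, vector_store, embed, llm_complete, k: int = 3) -> str:
    # 1. Embed the question and retrieve the k most similar documents.
    query_vector = embed(question)
    documents = vector_store.search(query_vector, top_k=k)

    # 2. Build a prompt that constrains the model to the retrieved context.
    context = "\n\n".join(doc.text for doc in documents)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Let the model generate an answer grounded in that context.
    return llm_complete(prompt)
```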
We're not saying that these different approaches are wrong, but they're an approach to understanding the technology, and they're not going to build you a differentiated strategy, because everyone is doing the same thing. So, why do we see these kinds of bolt-on, non-differentiated AI products persist?
Working across dozens of enterprise companies at Freeplay, we've noticed a common trend emerge, which is that companies know, rightly, that they need to prioritize AI. But the way they do that is by creating this sort of centralized AI strategy. And what happens is this centralized AI strategy starts running as a sidecar of their core product strategy rather than the two being deeply integrated. There are different initiatives, sometimes even different teams, and then naturally these bolt-on, non-integrated AI features and products start to proliferate.
So, what are some of the causes of this AI sideshow we're talking about here? The first is that companies seek to mitigate the risk associated with AI by quarantining it to specific corners of the product. Admittedly, there is new risk here, right? There's this new reliability question you have to ask yourself, which is: can I even get this feature to work reliably enough to drive value for customers? Second, we see teams prioritizing the technology over their customer needs. They become the hammer in search of the nail. Rather than trying to solve their customer problem by harnessing the technology, they're just trying to find any manifestation of that technology. And we see this manifest in a bunch of predictable ways. We see teams building chatbots because chatbots demonstrate AI capability, not because customers are actually struggling with support. We see companies building document summarization, again because it demonstrates capability, not because their users are suffering from information overload.
And finally, we see companies pushing solutions out from the top down rather than setting that top-level strategy and letting the bottoms-up discovery process be the manifestation of that priority.
So, how do you avoid the AI sideshow here? The key is to integrate and align your AI and your product, and integrating AI risk into planning is a critical part of that. There is this new risk we're talking about, but instead of shying away from that risk and trying to quarantine AI to specific corners of the product or specific teams, you need to deeply integrate it into your product planning. And this will require some new muscles, right? You need to build systems for evaluation and for testing, because if you're doing good prototyping and testing, you can at least wrap your arms around that risk and know how to handle it. Second, start with the customer problem. If you're inventing new problems to go solve with the advent of AI, you're probably going to stray here. And finally, like we talked about, enable that bottoms-up discovery process for AI products. It's likely your product folks, who are boots on the ground every day, who understand the right solutions here. Give them the space to experiment, prototype, and, importantly, fail fast, but set that topline strategy and then allow the bottoms-up discovery process to take place. This is how you ultimately manifest AI products that feel like a natural and cohesive part of the product experience rather than feeling bolt-on. And that's ultimately the hallmark of good, successful AI integration: AI products that need not announce themselves as AI, but rather just solve the customer problem better than what came before.
So the north star that Eliza and I are talking about today is AI products that are deeply and dynamically integrated into your product ecosystem. But the only way you get there is by aligning your strategy, your teams, and your roadmaps accordingly, and, importantly, avoiding the AI sideshow. This is admittedly an audacious north star, and especially if you're stuck in this sideshow model, how do you find your way out? This is where we think this crawl, walk, run approach comes into play.
We're all new to building generative AI products. To some degree or another, we're all building the plane while we're flying it. The most successful teams we see here are those that crawl, walk, run their way into this new era of generative AI products, because that allows you to build the capability iteratively while laying the foundation of that AI functionality throughout your product suite. So I want to walk through an example here. We'll take a customer support SaaS company. Let's say they have a shared inbox feature that customer support teams work out of, a mature product, but they want to start integrating AI. So in this crawl phase, you're starting to build embedded AI experiences. You're likely not building a whole lot of new product surface area in this phase. Rather, you're just adding AI on the back end and starting to accentuate and accelerate the existing functionality you have. If we take that customer support example, that might look something like building a feature that uses semantic search to surface previous, similar questions to help the user ground their response when they're replying to their customer.
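As a rough illustration of that crawl-phase idea, here is a hedged sketch of semantic search over past tickets using cosine similarity. The `embed` function and the ticket structure are assumptions for illustration, not a description of any particular product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similar_past_questions(new_ticket: str, past_tickets: list[dict], embed, top_k: int = 5) -> list[dict]:
    """Surface previously answered tickets most similar to the new one.

    `embed` is a hypothetical function returning a vector for a string;
    each past ticket is assumed to carry a precomputed 'embedding' plus
    its original 'question' and 'answer'.
    """
    query = embed(new_ticket)
    scored = [(cosine_similarity(query, t["embedding"]), t) for t in past_tickets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in scored[:top_k]]
```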
And then in the walk phase, this is where we're starting to build more contextual and personalized AI experiences. Here we actually are starting to build new product surface area, but we're probably not at the point yet where we need to fundamentally rethink our core app architecture and our UX. If we go back to that example, that might look something like building a feature that will suggest a draft ahead of time, so that when the user comes in there's already a draft there, ready to go for them to start from.
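A minimal sketch of that walk-phase draft suggestion, assuming a generic `llm_complete` client and a retrieval helper like the one above; it's an illustrative pattern, not any specific product's implementation.

```python
def suggest_draft(ticket_text: str, similar_tickets: list[dict], llm_complete) -> str:
    """Pre-generate a reply draft so the support agent starts from something, not a blank box."""
    examples = "\n\n".join(
        f"Customer: {t['question']}\nAgent: {t['answer']}" for t in similar_tickets
    )
    prompt = (
        "You draft replies for a customer support team. Using the past "
        "resolutions below as reference, write a draft reply to the new ticket. "
        "Flag anything you are unsure about for the human agent to verify.\n\n"
        f"Past resolutions:\n{examples}\n\nNew ticket:\n{ticket_text}\n\nDraft reply:"
    )
    return llm_complete(prompt)
```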
then finally where we land when we
really start to run. This is where we're
building those dynamic interoperable and
integrated AI experiences throughout our
product suite. This is the stage where
you do start needing to like re
fundamentally rethink your UI, your UX
and your app architecture because now
your AI features like if we go back to
our customer support example, it might
look like an autonomous agent that can
triage issues, respond to customers, but
importantly it's operating across the
product and and feature set. And in
order to incorporate that kind of
functionality, you do need to start
rebuilding core surface area and like
starting to revisit your UX. But
importantly along the way, you're not
throwing out functionality. It's
building on top of each other. It's that
functionality is building as you go,
right? You're just extending on it. And
importantly, even at the crawl phase,
you're still building embedded
functionality, not this sort of like
bolt-on non-integrated functionality. So
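For the run phase, here is a heavily simplified, hypothetical agent loop showing triage across tools. The tool functions and the `llm_choose_action` planner are assumptions, and a production version would add evaluation, guardrails, and human escalation.

```python
# Hypothetical run-phase sketch: an agent that triages a ticket, pulls
# context from across the product, and either replies or escalates.

def triage_agent(ticket: dict, tools: dict, llm_choose_action, max_steps: int = 5) -> dict:
    """`tools` maps action names (e.g. 'lookup_account', 'search_docs',
    'send_reply', 'escalate') to callables; `llm_choose_action` asks the
    model which action to take next given the ticket and gathered context."""
    context: list[str] = []
    for _ in range(max_steps):
        action, args = llm_choose_action(ticket, context, list(tools))
        result = tools[action](**args)
        context.append(f"{action} -> {result}")
        if action in ("send_reply", "escalate"):
            return {"resolution": action, "trace": context}
    # Fall back to a human if the agent can't resolve it within budget.
    return {"resolution": "escalate", "trace": context}
```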
So, I'll pass it to Eliza now. Yeah. So let's walk through a tangible example here, because there's a lot to unpack.
This problem space, I feel like everyone knows. I've been living and breathing it for a few years, but HR service delivery, or employee self-service: all of us work in jobs, or you're running a company, and your employees need to be able to get their questions answered quickly. And if they can't get those questions answered, they need help from a support person, whether that's through a case, a live agent, etc. So we've spent a lot of time working in this space. This is also where some products have found product-market fit, especially with early gen AI solutions.
So, where we started to crawl with the technology was within our Help product. Help has two components: there's a knowledge base solution, and there's also a case management solution. And so in the early days we took a look at the tech and said, where can we use gen AI to really effect change with customers? I know the knowledge base has become like the backend for GPTs and just a best practice, but at the time we said, okay, we've got content generation in here, we've got translations in here. This is the content that's fueling the answers to all of those questions that employees are asking. And so there were two key features. One was actually for content authors. They might come into an editor like this and upload, say, a policy doc. Imagine a benefits policy, 20-plus pages long. They don't necessarily want to write that article, right? But they could have the AI ingest it and create an employee FAQ. In this case, we had talking points for managers, and they're able to get a consistent format. The other thing I would mention is that we're thinking about content at scale. This isn't for small SMBs; this is large enterprise, with content teams of, say, three to 15 people. And so you need to have a unified voice around the content that's coming out.
On top of that, the other feature we put in was translations, which you can see in the GIF here. In just a couple of clicks, I can go in and translate into one of the 34 different languages that we support. And you can see we added, on the left-hand panel here, the ability to actually manage versions as well. So I might have my base article, I'm generating talking points in English, and then I want to translate into French and Spanish, maybe Japanese. And you can see that you're managing those versions as well.
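As a loose illustration of that authoring flow described above — ingest a long policy doc, generate an FAQ in a consistent format, translate it, and keep every version tied to the base article — here is a hedged sketch. The `llm_complete` client and the version-record structure are hypothetical, not Workday's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ArticleVersion:
    language: str
    audience: str            # e.g. "employee_faq" or "manager_talking_points"
    text: str

@dataclass
class Article:
    base_document: str       # the uploaded policy doc, e.g. a 20-page benefits policy
    versions: list[ArticleVersion] = field(default_factory=list)

def generate_faq(article: Article, llm_complete, audience: str = "employee_faq") -> ArticleVersion:
    prompt = (
        f"Summarize the policy below as a concise {audience.replace('_', ' ')} "
        "in a consistent Q&A format.\n\n" + article.base_document
    )
    version = ArticleVersion(language="en", audience=audience, text=llm_complete(prompt))
    article.versions.append(version)
    return version

def translate_version(article: Article, source: ArticleVersion, target_language: str, llm_complete) -> ArticleVersion:
    prompt = f"Translate the following article into {target_language}, preserving formatting:\n\n{source.text}"
    version = ArticleVersion(language=target_language, audience=source.audience, text=llm_complete(prompt))
    article.versions.append(version)  # every translation stays linked to the base article
    return version
```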
A couple of things I want to call out here. Yes, we're using gen AI in translations, but this isn't in-your-face sparkles and chatbots and text fields all over the place. This was built for users who didn't know about gen AI; this is 2023. They wanted to be able to get in and use the features without actually understanding the underlying functionality. And it also keeps that human in the loop. We want to have the disclaimer around AI, so we make sure we've got enough little purple sparkles to let them know what they're using, but it's not the entire experience here. This allowed us to go GA in 2024 R2, or August I should say, sort of crawling with the functionality.
So that's our content teams. Then we moved into what I would call walking. Now we have our content drafted, but we actually need to solve the self-service problem. So as a manager, I might need to come in. Elaine, in this case, is trying to do a location change to San Francisco, and she knows a lot of the fields, but not all of them. And so she now has this contextually aware co-pilot, Workday Assistant, that lives across Workday and that she can prompt. A lot of us are familiar with this functionality, but there are a couple of points I want to make here. One, we have the contextually aware suggestions, so it knows what's happening when I'm on the page. Also, around the data processing: if you're looking at a help article, it's generally customer content, which is sensitive but not nearly as sensitive as PII, or personally identifiable information. Think about these tasks as more on, say, pay or compensation, things that are really sensitive, where employees are putting really sensitive information in. So this is a next level of walking with the capabilities. The other piece I'd mention is that this was a platform capability, meaning we had to be working across our suite. We have HCM and Financials: think benefits, procurement, core HCM, etc. And so there's a higher level of top-down and bottoms-up alignment that had to happen to get these capabilities out the door.
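To illustrate those two points about context awareness and data sensitivity in general terms, here is a small, hypothetical sketch of how a co-pilot might derive suggestions from page context while routing sensitive fields differently. The categories and the handling policy are assumptions for illustration, not Workday's actual data-handling design.

```python
# Hypothetical sensitivity labels per task type.
SENSITIVITY = {
    "help_article": "customer_content",   # sensitive, but not PII
    "location_change": "pii",
    "compensation": "pii",
}

def contextual_suggestions(page: dict, llm_complete) -> list[str]:
    """Offer next-step suggestions based on what the user is doing on the page.

    `page` is a hypothetical dict like {"task": "location_change",
    "filled_fields": [...], "missing_fields": [...]}.
    """
    level = SENSITIVITY.get(page["task"], "pii")
    if level == "pii":
        # Hypothetical policy: describe field names only, never field values,
        # before anything leaves the trusted boundary.
        context = f"Task: {page['task']}; missing fields: {', '.join(page['missing_fields'])}"
    else:
        context = str(page)
    prompt = (
        "Given what the user is doing, suggest the next two or three actions "
        "they likely need help with, one per line.\n\n" + context
    )
    return llm_complete(prompt).splitlines()
```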
Then, finally, running. Extending the same use case here: you may have seen a few months back that we announced our agent system of record. A subset of that functionality, targeted again towards those employees and managers, was really around the agentic capabilities behind the Workday Assistant. So again, our users don't necessarily want to know about, or have the technical expertise around, agents, but we still have that work happening behind the scenes, where our assistant becomes a lot more autonomous and proactive, listening for policy changes and notifying us with suggestions as well. And so, just thinking through this at scale, you can see there's a much higher level of top-down strategy with bottoms-up execution that then happens, threading the needle across these different product experiences.
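As a final hedged sketch of what "proactive" could look like mechanically — an assistant subscribed to policy-change events that turns them into suggestions rather than waiting to be asked — with the event shape and the helper callables as pure assumptions:

```python
def on_policy_change(event: dict, affected_employees, draft_suggestion, notify) -> None:
    """Hypothetical event handler: when a policy changes, proactively notify
    the people it affects with a suggested action.

    `event` might look like {"policy_id": "...", "summary": "..."};
    `affected_employees`, `draft_suggestion`, and `notify` are stand-in callables.
    """
    for employee in affected_employees(event["policy_id"]):
        suggestion = draft_suggestion(employee, event["summary"])
        notify(employee, suggestion)
```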
So you can see here we've gone from a single product within a SKU all the way across our core platform, which, as some of you may or may not know, serves around 60% of the S&P 500. So, a pretty broad group. So, where we would land with all of this: when we talk about not making AI a sideshow, we're not telling you to stop working on agents or stop caring about AI, but understand that these are stepping stones in terms of teaching your organization, training up your organization, on what it means to actually build impactful AI experiences. And as you mature as an organization, ideally where we want to get to is building dynamic products. I'm hearing some of this today in some of the talks, if you heard Sarah or Brian talking earlier about building purposeful, vertical-specific products. I think it's really interesting when we start thinking about dynamic products in terms of new problem spaces. I don't know if anyone else feels this way, but sometimes I feel like we're solving yesterday's roadmap with just a much more powerful technology. And so as we digitize new data and new inputs in terms of our environment and spaces, we can see the problem space of the products we're creating really extend. I think especially with multimodal, this is where it gets really compelling as well, when we have frictionless multimodal experiences that interoperate. I would say interoperability and RL are still pretty relevant within the agent sphere, but when we think about dynamic products that are responsive to your environment, this is where we really start to see the next generation of products come into play.
So hopefully this sparked a few thoughts and maybe some questions. If you want to connect, feel free to scan our QR codes; happy to connect if you want to drop us a note. We'll also be around the rest of the week, so happy to chat. And we are right at time. Look at that. Okay, thanks everyone.
[Music]