SAP's Muhammad Alam: AI's Real Employment Impact, Path To Genuine ROI, Is Hype Good?

Channel: Alex Kantrowitz

Published at: 2025-12-09

YouTube video id: 3lFEc47el9s

Source: https://www.youtube.com/watch?v=3lFEc47el9s

Let's talk about Generative AI's real ROI, whether the technology is really taking jobs, and how good data is the key to it all. We're joined today by Muhammad Alam, the head of product and engineering at SAP and a member of the company's executive board, in a conversation brought to you by SAP. Muhammad, welcome. Great to see you again.
>> Thank you, Alex. Thanks for having me.
>> So, let's start. Why don't we go right to the heavy stuff? There's been so much conversation and speculation about whether generative artificial intelligence is going to lead to job loss, and you and I have spoken recently about this topic. You brought up some really interesting data to me, something along the lines of: you have 35 to 40,000 people working underneath you, but capacity for 200,000 people. To me, this is the key to it all, which is that if we think AI will just take jobs and people will be satisfied with what they're doing today, they're missing the other half of the picture, which is that there's so much room to do more work that people like yourself need. So talk a little bit about the staffing that you have today, and how AI, if it's able to automate a lot of work, would change that picture.
>> Yeah, it sounds good. And I think it's
a very popular question, Alex, as you
can imagine, not just because now you're
asking it as well, but it's a question I
get internally quite a bit, because there's a lot of what I'd call noise, feedback, and opinions out in the system in terms of what it's going to lead to, particularly in the software development space. If you think about one industry that generative AI deeply, deeply impacts, it is software development, where there is potential for significant uplift, from an efficiency and automation perspective, in how work gets done today. And
we're not going to be unique in terms of
it not impacting us from an SAP
perspective. But really the way I think
about it, and the way it's playing out for us even this year, where we have actually rolled out tools broadly to our 35,000 to 40,000 developers, as you pointed out earlier, is that it's already giving significant efficiency in development, in execution, and in the engineering systems and pipeline management that we do internally. But what that hasn't led to for us is job reduction. The point that I made when we had the fireside chat is: if I look at
our backlogs, our backlogs are pretty
massive. Like the kinds of systems that
we build from an SAP perspective, from a
front office perspective, from finance,
from supply chain and others, the changes that are happening in the industries are massive, and we need to be able to both keep up and innovate in them. So our product teams have a lot to go do. If we can find efficiency, it's about how we can deliver more innovation faster. Our backlogs, again, are to some extent never-ending, and the needs of customers continue to increase. So what we look at
from a generative AI perspective is how we can use the technology that's out there to accelerate the innovation. The way I talk about it with my team is: listen, we are looking not just for the 10% improvements, the 20% improvements; we're looking for 5x, 10x improvement, because what we believe today is there is tech out there that can enable all roles in a software development life cycle, from PM to developer to QA to UA and others, to drive significant efficiency. And with that,
what we're looking to do now is not just operate at the throughput of 40,000 people, but at the throughput of 5x that, 200,000 effective staff, to deliver
even more innovation for our customers.
And honestly, in '26, barring any unforeseen external circumstances, we might even grow a little, to be able to keep pace with the innovation that our customers are looking for. That is our point of view, which I believe is aligned with what we're seeing in our customer channel, if you will. Now, what that doesn't mean is that none of the roles will change, or that how we allocate capacity won't change. We'll probably see some shifts in which roles are invested in more, in the skill set and capability of each role, and in where we bring the roles in. So we'll see some adjustments, but in aggregate we expect to actually grow modestly, not significantly, because we think we can drive significant efficiencies and acceleration through the use of the tech that's out there.
>> And I have so much I want to speak with you about: how those roles might change and, of course, getting to AI ROI. But I think this is such an important point, and there have been few people that have made it as clearly as you have in my conversations with folks. And I'm saying this not as an AI booster or as an unbridled optimist; I think it's just important to get to the truth of the matter, which is exactly
what you're saying. If you were, and this is why these conversations about automation when it comes to AI are so incomplete, if you were to stop and just say, all right, we're going to automate as much work as we can, not really build out the road map, and just focus on profitability, I imagine for someone like yourself the short-term numbers might look better: oh, look at our division, we're highly profitable. But in the way that you're talking about, given that we have this fast-moving economy, it's just not a winning strategy, right, to say, all right, let's just automate and be more profitable, as opposed to keeping up with what's
>> There's a competitive flavor too, right, Alex? I mean, what I tell my team is, our portfolio is pretty wide, but if you go to one solution area, and within one solution area a single product, you can argue we could get to teams of a thousand or 2,000 even that are focused on one product area. And I talk to these teams and I say, "Hey, listen. In today's world, with the tech that's out there, a 200-person startup can actually operate at the throughput of what we operate with 2,000 or 1,000."
Like, where we maintain and sustain our advantage is if we take our 2,000 and now operate at the pace of 8,000. That's going to be hard for somebody to do with 50 or 100 people. So for us it's as much about keeping the staff, maybe even growing it, so the pace of innovation is just going to be hard to match for anybody that comes up now with new tools, generative coding, generative development, and says, hey, now we've got this app and that app. We need to be able to keep that same distance, from an innovation throughput perspective, if that makes sense.
>> It makes total sense, and this is what I always think about in
conversations like this when I see
headlines that say, as generative AI takes off, there's job loss. There may be a correlation, but very few headlines actually have causation, and I appreciate you bringing this really important data point, because it's good to get clear examples of the fact that, yes, there's this powerful technology, but no, we're actually looking to hire. It reminds me of the Mayo Clinic, which has this big AI program. Their head of AI is a radiologist, and of course, years back, we all heard that AI can read scans, and therefore we were not going to need radiologists anymore. Yet a radiologist runs this AI program within the Mayo Clinic, they have 11 AI models running within that division, and they still can't hire enough radiologists. At least in the short term, that just seems to be what we're going to see. But I do want to... sorry, go ahead.
>> I was going to say, I think there are two things here. One is: listen, you take on one hand what we just discussed, right, and then on the other hand you look at all the news that comes out from tech companies about reductions and restructuring, and how do you reconcile the two? Because in some cases what you'll hear is, hey, some of this is related to AI. Some will actually go on to clarify: no, no, no, it's not really about AI; we're doing this because we had some structural issues. But I do think, if you decompose some of that, it's not so much that the capacity you need is going away. It's maybe that the needs are shifting, as we discussed earlier, the roles and so forth. But in some cases, they're also fixing things that just needed to be fixed pre-AI. They're fixing it post-AI, and because it's happening post-AI, it gets attributed to AI, in effect. And this phenomenon that we're discussing is obviously for software development; as you take it into different functions, like finance, customer service and others, the impact levels are different as well. I
think this Mayo example is a great one too. It's a different industry, but even there, the application of AI, given how much the business is growing and how much the need is there, doesn't necessarily lead to a capacity reduction; it allows you to scale even more.
>> And we've danced around it a bit, or you've mentioned it a couple of times, that roles are going to change. And you're in a very interesting position in this economy and in the business world, because
you run both a product and an engineering team within SAP, and I'm curious what you think the future of these different teams is going to be. Are they going to become closer together, or is product going to effectively subsume engineering? Because everybody's talking about how it's become so much easier to code, people are vibe coding, and it used to be that the product team would maybe steer the way the engineers build something. Now, from what I'm hearing, product leaders, even CEOs, are showing up to engineers with working prototypes that they've built in things like Replit and saying, here, improve this, or, it should work like this. So, how do you think the interplay of those two groups is going to look?
>> I'll try not to predict, Alex, and it might be slightly disappointing to you. I mean, I don't have a crystal ball, so I try not to predict the future, but I will
>> How about, how about now?
>> How about now? So what we're trying to do is push the envelope in every direction possible, to
be able to really think about how the
world will change. And I will tell you
as we discussed if you look at the role
of a product manager I think gone are
the days where a product manager is just
writing a a spec and sort of giving a
set of instructions or requirements for
somebody else to go build. Like a
product manager necessarily now can be
empowered to generate an app and get it
to 4050 in some cases depending on how
good the models are. or if you have some
fine-tuned models 70 80% and then hand
over in collaboration to a set of senior
engineers now that take it to the last
mile if you will exactly the same
example that you give with CEOs
generating apps on themselves if you
will and as part of that process sort of
core set of design and user research
constructs are built in and of course
all the fundamentals and the foundations
of uh test automation um as well as
documentation sort of comes out of it as
well. So I think we are actually at SAP
piloting many different models. So
across our many different product areas, we have a bunch of what we call front-runner teams that we've given the freedom to innovate on their own. So we're not pushing down a point of view, top-down, to say, hey, thou shalt have this role that does this function, and this many engineers, and this many UA and QA, if you will. What we're saying is: listen, here's a set of smart people. You can pick any tool that you want in the market. We'll obviously take it through the right privacy and security scans and things like that as well, that our customers expect of us. But then you can
look at what model works, and we're seeing some amazingly successful stories, where teams are coming back, and in some ways are slightly apologetic, saying, hey, listen, we used this tool and that tool, and we were 7x better than what we delivered in the last sprint. I'm like, well, then why are you so sad? They're like, but we spent a lot of time learning; we think in the next sprint we're going to be 12x better, if you will, from that perspective. So, if you look at that as a potential: another team, as part of these front-runner teams, came and said, "Hey, we delivered almost all of what we did in the last quarter in this last two-week sprint." And that is phenomenal acceleration, if you will. And that gives me a lot
hope that as you sort of empower these
smart teams, I think we'll start seeing
some trends. And the trends and the patterns are going to be different by the state of a product, the life cycle of where a product is. If you're building a new product, I think there's a lot more runway to go leverage and generate the app. If it's an existing app that you're adding to, or fixing a bug in, it's a bit different. So I think what we're going to see is different patterns evolve that will lead to both different team mixes and different kinds of evolution for each role, if you will. But we're seeing some really exciting results from the experiments that we're running.
>> That's very interesting that you've seen it within teams. I have this belief, this philosophy, that we're going to see maybe the age of the empowered individual, right? An individual today can do much more than they were able to do previously. I was working on this story about Anthropic and speaking with people using their coding tools, and one engineer told me that they had Claude Code running seven different agents, and then was checking their work and coordinating. You could never really do that as just one person previously, but now you can orchestrate in a way. So for me it was always: we're going to see much more enhanced productivity among motivated individuals that really learn these tools. But what you're saying is something maybe even further, which is that if you have groups that really learn how to use them, they can push forward too.
>> And they'll help define the needs and the roles that need to be played, right? Like how the role of a PM maybe will shift from being a product manager to a product builder, and then there's a senior engineering team that takes it the last mile.
>> And then maybe the role of a single engineer wouldn't exist, because everybody would need to, by definition, be an engineering manager, even if they don't have any humans reporting to them, as you said.
>> They'll have a lot of agents working for them, and then you need to have the same skill set to be able to understand the output of those agents and how you can build upon them. So I do think some of these shifts are pretty evident already, like the shift in the product manager role, or how I think every engineer will need to learn the skills of being an engineering manager, because you'll have a set of agents, if not humans, working with you to deal with code that, from a size and scale perspective, is exponential compared to what you were able to do with just yourself or a set of humans underneath you.
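The "engineering manager of agents" pattern described here can be sketched in a few lines. This is a hedged illustration, not any vendor's actual tooling: `run_agent` stands in for a call to a coding agent, and all names are invented.

```python
# Minimal sketch of one person fanning tasks out to several coding agents,
# then reviewing each result before accepting it. The agent is stubbed out;
# in practice it would be a call to a coding-agent API or CLI.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> dict:
    """Stand-in for one autonomous coding agent working on a task."""
    return {"task": task, "patch": f"diff for {task!r}", "tests_passed": True}

def review(result: dict) -> bool:
    """The human-manager step: check the agent's output before merging."""
    return result["tests_passed"]

tasks = ["fix login bug", "add CSV export", "refactor billing module"]

# Fan out: several agents work in parallel, one per task.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

# Fan in: the orchestrator reviews every result and keeps only approved work.
approved = [r for r in results if review(r)]
print(f"{len(approved)}/{len(tasks)} agent results approved")
# prints "3/3 agent results approved"
```

The review step is what makes this a management role rather than pure automation: throughput scales with the number of agents, while judgment stays with the person.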
>> Fascinating. And let's get to the ROI question. This is a great lead-in, because clearly there's an ROI with coders: the Claude Code user I was talking about previously was spending like $200 a month and running seven agents in parallel, so he was getting his money's worth on AI. But there is this debate about whether AI is delivering an ROI to businesses that are using it. In fact, there are two dueling studies that I love to cite. MIT said 95% of companies that have built AI pilots or products are not seeing an ROI, and then Wharton said 74% of businesses using generative AI are seeing an ROI. So we'll talk about ways to get better ROI from these tools, but first of all, I'm curious to hear your perspective: which one of those studies was closer to the truth? Not asking you to agree completely with either one, but who has the right idea, and which one might have been a little bit off?
>> So I think from my perspective, it's a bit nuanced, right? It depends on both the use case and the function you're talking about. If you take software development at SAP, you can argue I'm running the supply chain function of a software development organization. And for our supply chain, because of some of the tools that exist, we're definitely seeing value, and we're still seeing a lot of untapped value. While I think we're generally 10 to 20% better already with the tools we've rolled out, with the potential of what we just discussed a few minutes ago, I think the potential is multiples of what we deliver today from a throughput perspective. So I think in
this space it works. Now if you change domain and maybe get into finance, or really hardcore supply chain, if you will, or front office and others, then you get, by industry, into a mixed place. And our point of view from SAP here is very simple as well. We tried to articulate it at a couple of events that we had in the fall, and it resonated very well with customers and industry analysts alike, which is: listen, we believe that for AI to stick, you need seamlessness of AI into the core of what you do, into the flow of things. And that happens when the applications that you're using, particularly the class of applications we're in, which is core financials, supply chain, your HCM capabilities, or your spend, are the applications that run your business and create the data on top of which you need AI to run. As those three things happen seamlessly, and that AI is embedded in the flow of where you work, value gets realized. The more these things are bolted on top of each other, the application layer separate, somebody figuring out a separate data layer where you're throwing all the data and trying to harmonize it, and somebody else in the organization trying to figure out AI on top of that, it gets so splintered that the value becomes marginalized, if you will. So, you
know we've got three core beliefs here.
That's one: a seamless app-data-AI layer really allows you to realize value that otherwise would be very hard to go get. The other one is: listen, this app-data-AI flywheel makes sense, but you have app A here from one vendor, another app from another vendor, and a third app over there. How many of these disparate app-data-AI flywheels can work? Because ultimately you're optimizing for the local, not for the global. And if you have to optimize for the global, then you have to go through the same complexity again: take the data out, throw it somewhere, have a bespoke AI layer, and then you run into the same value-realization problem. So we believe the broadest context creates the biggest ROI and value: if you have finance data, spend data, supply chain, HCM, front office, all harmonized with AI on top of it, that will generate the global maximum for you. So that's the second ingredient. And the third one
is we believe AI has to start with
people first. There's a lot of talk
about agents, hundreds of agents,
thousands of agents, in some cases
billions of agents. Right now the question is: how do you make sense of that? Today it's people that run organizations, and most organizations have a human-in-the-loop policy for AI anyway. We have to empower the roles that are there, to make them smarter, more productive, more efficient. And as they build confidence, then you get into the autonomous execution layer as well. So we believe that the framework to realize ROI from AI is pretty simple: the seamlessness, the breadth of the context, and, as you focus on the individual, things happen. That's what we see in real life with our customers, and our customers seem to react positively to that as well.
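The progression Alam describes, empower the person first and let trusted agents run autonomously later, can be sketched as a simple escalation policy. This is an illustrative assumption about one way such a policy could work, not SAP's actual mechanism; the class name and thresholds are invented.

```python
# Hedged sketch of "human in the loop first, autonomous later": an agent
# starts in review mode, and only after its proposals have been accepted
# often enough over a recent window does it earn autonomous execution.
from collections import deque

class EscalatingAgent:
    def __init__(self, window: int = 10, threshold: float = 0.9):
        self.history = deque(maxlen=window)  # recent accept/reject decisions
        self.threshold = threshold           # acceptance rate needed for autonomy

    @property
    def autonomous(self) -> bool:
        # Require a full window of history before trusting the rate.
        if len(self.history) < self.history.maxlen:
            return False
        return sum(self.history) / len(self.history) >= self.threshold

    def handle(self, proposal: str, human_approves) -> str:
        if self.autonomous:
            return f"executed: {proposal}"       # trusted, runs unattended
        accepted = human_approves(proposal)      # human in the loop
        self.history.append(accepted)
        return f"{'accepted' if accepted else 'rejected'}: {proposal}"

agent = EscalatingAgent(window=5, threshold=0.8)
# A reviewer who agrees with every proposal; after five approvals the
# acceptance window is full and high, so the agent flips to autonomous.
for i in range(5):
    agent.handle(f"accrual proposal {i}", human_approves=lambda p: True)
print(agent.autonomous)  # prints "True"
```

The key design choice is that autonomy is earned from observed acceptance, mirroring "as they build confidence, then you get into the autonomous execution layer."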
>> So, as a product guy, you're going to like this. Can you walk us through a use case of how this works according to plan? What is the ideal scenario where someone using SAP's tools with AI baked in is actually able to get value?
>> Yeah. I'll give you a couple of examples. I tend to be a little bit verbose, and I'm trying to keep myself short so we can cover more ground. So if I talk too much, Alex, let me know.
>> I'll let you know.
>> But as you can tell, this topic is obviously exciting. So let's start with finance. In
finance, there are roles that exist on the accounts receivable side, there are controlling roles, and the construct exists; those are the people running businesses today. And if you take accounts receivable or controlling, there are functions that they do in terms of dispute management or accruals management. So what we're doing is saying, hey, for our accounts receivable and our controlling colleagues, we're creating an AI assistant: a Joule assistant for accounts receivable, or a Joule assistant for controlling. And what these assistants then have is n number of agents and capabilities underneath them. So for a controlling agent, they can go in and do accruals management: they can reason over the data and the history, understand the patterns, predict based on those patterns what the next accrual should be, and propose that accrual for you. That could be a very laborious exercise at every month-end and quarter-end, but we provide that efficiency and that capability to reason over the data for people. Dispute management is another one: you've got a bunch of accounts receivable invoices, and you need to understand what's the best way to understand and resolve those disputes.
Now, these capabilities ladder up to an assistant that makes that role 20% more efficient to begin with. As we add more, it becomes 50% more efficient, then 60%. Eventually it will get to a place where you can say, hey, most of the recommendations, eight or nine out of ten that come out of Joule or AI, are ones that I just agree to, because they're actually the right recommendations, what I would do. And then you can move to autonomous, and say, I might change this agent to continue to execute, because I don't need to check every step. And then you get into an autonomous financial close, for instance, that goes through the steps once you have enough confidence and trust built in with the people running it today. So this is the laddering up: making the human efficient and smarter, and as the trust builds, you can allow the agents to run autonomously, to get to an autonomous function, if you will. You can take that in finance. You can take that in customer service, where customers are doing that already: a case comes in, you look at what the issue is, you reason over it, you come up with a recommendation based on your knowledge base, you send the recommendation to the customer, the customer provides feedback, and you can close the case. There you can take some percentage of your cases that are simpler and run them through this autonomous service execution, or touchless process. So that's how it ladders up, with real-life scenarios that we're working to help land with our customers.
>> You know, it's interesting, because I
because I think a lot of the
conversation around AI has been hyping
up the models and talking about how
they're going to be these omnisient
large language models that you like
throw a lot of data at and they'll
figure it all out and you don't really
need uh much structure underneath. And
what you're saying is if I'm hearing you
right, basically like yeah, you can get
an ROI from a large language model. Uh
but there needs to be a structure built
in otherwise it's going to I I guess
hallucinate and get things wrong and not
do what you want. Is that the right
read?
>> Yeah, I think that's the right read. That's one, right. And I think we need to be pragmatic, because that end state where everything is going to be run by agents, billions of agents, I think is just an unrealistic, hyped-up end state right now that we're not ready to get to. And there are a lot of other people now saying the same thing as well: it's not going to be the year of agents, it's going to be the decade of agents, because it's going to take a little while for us to figure out the accuracy, the trust, and the reliability of it. We believe the same thing, but we have to go through those steps. And one of the things I like to do is make fun of ourselves in some ways: we're not going out there announcing thousands of agents. Somebody came to me and said, "Well, you guys only announced this many agents. You could have sent an email. You didn't need a conference for it." But this is where our point of view is: listen, we want to do the stuff that's pragmatic, that's grounded, that's creating value, and not just the hype that sits on a shelf somewhere that nobody really gets any value from.
>> Right. SAP has made some very interesting choices on the technology side as well. And correct me if I'm wrong, but I don't see SAP going out there raising billions, or a trillion, in debt and working to train your own LLMs. So what is your philosophy? Build your own large language models, or work with the state of the art, with the frontier labs? Talk through the strategy on that side a little bit.
>> Yeah, it's a hybrid strategy. Early on, our strategy was grounded in, you could say, predicting that the world of large language models was going to get commoditized anyway; you're going to see a better model come up every day, if you will, as the flywheel takes effect, and we're seeing that. So we intentionally wanted to be both decoupled from and agnostic of the large language model underneath, so we could use the best one for each use case. That's the platform we've built with our generative AI hub and our AI Core that's available in Joule: you can pick the right model for the use case you're building. Now, that was a clear strategy: we're not going to go build our own large language model. The thing we are doing to complement that is we take the right large language models and fine-tune them with data that we have, either with consent from our customers, or from our application codebase that isn't available in the public domain, to make sure that we have a set of fine-tuned models, so that when you interact with a fine-tuned model built on one of the base models, you get much better, more accurate results, if you will. We're doing that in multiple domains. That makes our build tool far more aware of
when you need to create an extension for an SAP application, or when you interact with a model, with Ariba or S/4, so that it gets the right results, better than a public model would, or a public model that you'd have to provide a lot of context to, as part of your natural-language query, to get to the same answer. The third thing that we are doing, which we announced at TechEd a couple of weeks ago, and which we're super proud of, is our RPT-1 model. We are now taking the thing that we believe we uniquely have, which is the thousands of customers that have given us permission to use their aggregated, anonymized data, to build a fine-tuned tabular model that allows us to do predictive modeling based on tabular data, on numbers. That, as a foundational model on tabular data, alongside a large language model based on text, now creates some significant potential use cases that, again, are geared towards delivering ROI that makes sense for customers.
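The model-agnostic idea behind a generative AI hub, decoupling the application from any single LLM and routing each use case to the best current model, can be sketched roughly like this. All model names and use cases here are made-up placeholders, not SAP's actual catalog or API.

```python
# Illustrative sketch of use-case-based model routing: applications ask for
# a capability, and a registry picks the currently-preferred model, so the
# app never hard-codes a single vendor's LLM. Swapping in a newly
# commoditized, better model is a one-line change to the routing table.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    model_id: str
    fine_tuned: bool  # e.g. tuned on private application/codebase data

# Routing table: each use case maps to the currently-preferred model.
ROUTES = {
    "code-extension": ModelChoice("codegen-ft-v2", fine_tuned=True),
    "document-summary": ModelChoice("general-llm-a", fine_tuned=False),
    "tabular-forecast": ModelChoice("tabular-fm-v1", fine_tuned=True),
}

def route(use_case: str) -> ModelChoice:
    """Return the model to call for a use case, with a generic fallback."""
    return ROUTES.get(use_case, ModelChoice("general-llm-a", fine_tuned=False))

print(route("code-extension").model_id)   # prints "codegen-ft-v2"
print(route("unknown-task").model_id)     # prints "general-llm-a"
```

The fine-tuned entries reflect the complement Alam describes: base models tuned on private data get routed to the use cases where that context matters, while generic tasks fall back to a commodity model.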
>> Okay, last one for you, Muhammad. There's a lot of hype in the AI world. Is it good or bad?
>> I think it's both. I'll start with the good, and I think the good is that at least it's creating an awareness of the potential and the possibilities of what AI can bring, and it's leading to real conversations with customers: hey, how do we go apply it? That's the good part of it; there's intention, there's willingness, and in some cases there are significant budgets to say, how do we make that real? Now, the part that obviously isn't good is that when there's so much hype and the reality doesn't hit, you go into the burst of a bubble, if you will, which we're starting to at least hear about in the news. To me, that's the part that's not good. And that's where we, from an SAP perspective, always take a very pragmatic approach: we don't want to go right up to the top of the hype cycle. We want to deliver stuff working with customers hand in hand, to create that value, just to make sure that this journey that we're collectively on delivers results.
>> Well, Muhammad, it's always great to speak with you. I really appreciate your grounded expertise, shall we say, right? Telling us what's actually happening in the AI world, and helping people know how to get real results from it. So, it's always great to speak with you. You're always welcome.
>> Thank you, Alex. Appreciate it.
>> All right. Thank you, Muhammad. And thanks everybody for watching. We'll be back on the feed with another video later this week.