Will AI Make Its Biggest Splash In Industrial Use Cases? — With Mark Moffat, IFS

Channel: Alex Kantrowitz

Published at: 2025-11-25

YouTube video id: EWoJSEqQPjI

Source: https://www.youtube.com/watch?v=EWoJSEqQPjI

Could AI make the biggest splash not in the corporate office, but in industry? You may be surprised. Let's talk about it with Mark Moffat, the CEO of IFS, in a conversation brought to you by IFS, and Mark is here with us today. Mark, great to see you. Welcome to the show. Thank you for being here.
>> It's great to spend some time with you, and I can't wait to get into it.
>> So, we talk about AI a lot, a lot, a lot. And the focus is always on the corporate office.
>> Yeah.
>> Then, there's 70% of workers that are not in the corporate office. 70% of work is done outside of it, in industrial settings. And up until this point, I haven't heard a word about how generative AI could be used to change their work, to help them, to make these processes better. But you have a belief that it's actually going to be more impactful in industry than in the office.
>> Yeah, well, if you think about
it based on what we've just discussed,
70% of the world's workforce is not behind a desk; they're running industrial operations, the very industries that support economic development and prosperity. Think about the nature of the industries IFS supports:
construction, engineering,
manufacturing,
aerospace, defense,
telecommunications, energy, natural
resources. These are all fundamental to
driving industrial progress and economic
development. And most of these
organizations have workforces that are
in the field, in the operation
day-to-day. So, it stands to reason you
don't get the full benefit of AI until
you enable that workforce.
>> But hold on, because let's say I'm a lawyer, right? Like, hypothetically. I'm not one in real life, but in this example, I
>> I'm an accountant by background, by the way.
>> Okay. All right, so we can even talk about the accounting example. You can take reams of data and drop it into ChatGPT, and then in natural language, you can query it, and it will give you answers today.
But if you're working on the line in a factory, or if you're managing operations in some industrial setting, it's hard for me to fully picture how generative AI can be applicable.
>> 100%.
>> So, how do you do it?
>> Well,
think about a real example, a real use case we're developing with Boston Dynamics, with Eversource Energy, and with Anthropic, okay? Three parties orchestrated as one to make manhole duct inspections operate at a different level. So, what happens in practice? We send a Boston Dynamics Spot robot along a 5-km manhole duct. Eversource is required by law in Massachusetts to inspect that manhole duct on a periodic basis, okay?
Now, that robotic dog is picking up
lidar, picking up video, image, gas
sensors, heat, temperature, pressure,
all the way through that manhole duct,
okay? It's spotting issues and fractures and problems in that environment that a human will often miss, or might not capture with the same level of accuracy. Say Spot detects a stress fracture on a transformer: GPS coordinates are immediately captured, immediately triggering a work order, looking for the spare part, dispatching a crew. That's all happening pretty much instantaneously.
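The detect-to-dispatch loop described here can be sketched in a few lines of Python. This is a minimal illustration, not IFS's actual workflow engine; the event schema, field names, and spare-parts lookup are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    """One finding reported by the inspection robot (hypothetical schema)."""
    kind: str    # e.g. "stress_fracture"
    asset: str   # e.g. "transformer"
    lat: float   # GPS coordinates captured at the moment of detection
    lon: float

def create_work_order(event: SensorEvent, spare_parts: dict) -> dict:
    """Turn a flagged defect into a dispatchable work order."""
    part = spare_parts.get(event.asset)  # is a spare part known for this asset?
    return {
        "summary": f"{event.kind} on {event.asset}",
        "location": (event.lat, event.lon),
        "part": part,
        # a crew can be dispatched immediately only if the part is available
        "status": "dispatch_crew" if part else "order_part",
    }

event = SensorEvent("stress_fracture", "transformer", 42.36, -71.06)
order = create_work_order(event, {"transformer": "bushing-kit-7"})
```

The point of the sketch is the shape of the flow, detection to structured event to work order, not any particular field names.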
Now, you think about the alternative,
when the humans do that, often they
don't want to do that work, it's
difficult to resource it, and they might
not spot that, and a catastrophic
failure might arise, and therefore you
see issues in uptime, you see issues in
transmission, all sorts of problems. So, the AI in that equation obviously transforms the operation.
>> By the way,
this is why I'm so happy to be speaking
with you, because these are use cases,
again, I spoke about it in the intro, we
don't hear about. We're about three years after the introduction of ChatGPT, and we're starting to see these applications of how natural language and LLMs can be used to take data that was already being collected and actually make it useful. And we talk all the time
about the ROI of AI, and this seems to
be a place where you can actually have
an ROI. So, just talk through the tech
stack that you would have in the example
that you discussed. So, you have,
obviously, the robot Spot, which looks
like a dog, it's making its way, you
know, through this environment, yeah,
and then is there machine learning with
some visual intelligence picking up on the different things that it sees, and then it sends it into an LLM that makes decisions?
>> Correct. Also
orchestrated by thousands of workflows
and data that IFS has transacted for
other utility companies. Knowing what
parts to deploy, knowing what supply
chain to activate. But, the LLM, you
know, a generic LLM is not going to
discover what's needed in that
environment. It needs specific training
and adaptation to that environment in
question. So, with the application of
Spot the dog into the entirety of
Eversource's operations, together,
hopefully, with other utility companies
as customers in North America, together
with any OEM information we might have,
or any other data that gives us a sense
of patterns and repeat probability, you
put all of that together in a
specialized set of models, then the
ability for you to, you know, deliver an
increased ROI is just off the charts
different. I want to take a moment to
talk a little bit about IFS, because, of
course, there's a lot of moving parts
here, right? There's the robot dog,
there's the Anthropic large language
models, but there has to be some sort of
system in the middle to route these
signals to the people on the line. So,
talk a little bit about the role that
IFS plays here.
>> Yeah, what I talked about this morning at the event we're hosting today, which we've called Industrial X Unleashed. Industrial X Unleashed is all about what I describe
as bringing the dimensions of the X, you know, the four dimensions, or engines, as I call them, of progress and possibility. They all need to come to bear. You've got the models, you have the infrastructure and the data, you have the robotics, and then you've got the reinvention partners. We often work with
partners, advisory firms, top-tier
consultants, because when you're fundamentally changing operations and teams and skills and capabilities, with
all of that coming together, you have to
move an organization from A to B. It
doesn't happen by magic overnight.
There's resistance to change, etc.,
okay?
IFS has been supporting the industries
we serve for decades. The entire
organization is geared toward understanding the intimacy of every industry we serve. And we've been
developing workflow for field engineers,
for asset maintenance, for ERP in these
industries for over 40 years. So, we've
got know-how built in, we know how to
orchestrate value chains. So, IFS is
sitting in the middle of those four
engines of progress and possibility. Our
platform, the application stack, it
doesn't go away.
Because ordinary workers in the field,
they still need coherence. They still
need to understand how they're expected
to execute their tasks on a day-to-day
basis. We're not asking engineers who, in another example, are up a telephone tower or a power tower to use LLMs on the fly. They need to be using
and digesting and consuming AI in the
natural flow of their work. That's IFS's
role.
>> Okay, and so I want to talk to you about the enhancement that large language models have brought here. Because IFS in the past, and correct me if I'm wrong, had been working on predicting when something was going to go wrong.
>> Yeah.
>> I don't know, is the Boston Dynamics piece a new thing, or have you always been working with these robot dogs?
>> New thing. We've looked at this market evolving, and we've recognized we can't do this by ourselves. That would be
arrogant in the extreme. And we
recognize there's so much innovation
happening in the field of robotics, with
all the capital that's flowing out there
and the expertise that's available
there. And particularly Boston Dynamics, because they're not developing humanoids, or they are now, but they've come from a world where they've developed different form factors for industry challenges. And Spot is an incarnation of that, okay? So, we
recognize we have to work with those
industry participants if we're going to
be able to access the level of
innovation that we believe our customers
need.
>> So, okay, now you have this new data point as an input. It goes into your system, and to me, the interesting thing is, without an LLM, it
maybe would just sit there. But, what
you can do now is be proactive to people
out in the field, and get them to work
on things fairly quickly now that the
data points can be translated into
natural language and conveyed to people that are there working.
>> It's a really good point, because what we see
from an LLM perspective and a natural
language perspective is we've just
introduced a completely new engagement
layer for our technicians. So, they can
use voice, they can use video, they can
use photography to engage with, you
know, all the intelligence that sits in
the platform that draws in all these
other capabilities to do the job at
hand.
So, if an aircraft engineer, to use
another example, is faced with something
they haven't seen on a jet engine as
they're stripping back the jet engine to
make a repair that's required by the
maintenance that's set by the OEM, then
they're able to use natural language to
engage with all the intelligence that's
built up to help them in the task at
hand. And if you take it one step further, we're exploring the use of Nvidia Jetson technology and the Metropolis platform to give augmented reality to those technicians, so that when the technician is performing a task, you know, with that edge-based chip and an LLM that can run on that chip, still with a billion parameters, they're able to get real-time guidance on the torque setting for a part of an engine. It's just mind-blowing in terms of possibility.
Now, I want to talk to you about
Anthropic, right? Because Anthropic is a
big partner. We're here again at this
Industrial X Unleashed event that you've
held here in New York City. New York City, great city to hold events in.
>> I agree.
>> And Anthropic is providing an important layer right here with the LLM. So, talk a little bit about how they fit into this picture.
>> You know, we're always looking at how innovation is happening in the market. And of course,
we've been tracking quite carefully the
frontier models.
And I was super attracted, listening to how Dario and the other co-founders of Anthropic have set out their strategic position.
They're clearly focused on the
enterprise. I think that's less of a
focus than some of the other frontier
models based on our experience and based
on what we've seen. I'm sure it's got
some relevance, but you know, Dario and
the team seem to be very focused on the
enterprise and they see the potential to
unlock value in the enterprise. But
they're also focused, I think, uniquely
on the balance of risk and possibility.
And I think that's so important with
industries that we serve. You can't
afford to get things wrong. You need
99.99999%
reliability on the work execution.
Otherwise, supply chains grind to a halt, aircraft are grounded, or ultimately, there's risk to life. You know, that's the type of operations that we're dealing with. Mission-critical
operations. So, them understanding
there's risk in the broadest context as
well from a societal perspective is
really encouraging. And I think that's
different based on others that we see.
So, those two components, I think, taken
together made Anthropic a very obvious
partner for us. And we approached them, and very rapidly we got into conversation about introducing a kind of world's-first partnership for someone like IFS in the industrial sector, to bring that capability to bear together.
>> So, how much trust do you put into them? Because
>> Huge amount. Huge amount.
>> But there's
varying levels of trust, right? The
first bit of trust is I'm going to trust
them to take a signal that I'm seeing in
the data. Like going back to this robot
example, there's a signal. I'm going to
send it to someone. I'm going to trust
them to get that signal accurately to
the people I'm working with. The other
level of trust is I'm going to trust the
large language model to actually make
decisions that would previously be made by the company. Where do you stand?
>> I mean, we're not relying on it blindly. These things don't get into production without regular stress testing in all sorts of ways. One of the ways in which we think about stress testing the capability that ultimately makes decisions in the field is running millions of scenarios. Because the more scenarios you run and the more times you run queries, the higher the probability that you're getting it accurate. So, the availability of compute allows that to happen. So, I think we recognize, and Anthropic recognizes, the job to be done, the nature of the operations; they are mission-critical in nature. So, we will find ways together to get the level of trust we jointly need, because our brand's on the line, Anthropic's brand will be on the line, and the nature of what we do is so significant. So, it's not blind trust,
but the nature of the partnership and how we are as organizations, I think, creates that environment for trust to develop in that way.
>> So, you will have to make choices eventually. Ultimately the technology will have to make the choices.
>> But the other thing that I would say is that we are making sure that we balance any agentic-based capability with a human in the loop. So, where are the checkpoints? What are the escalation points? What are the break points in a process that would still require you to have some level of human intervention? I don't see that going away. Where we draw that line, I mean, we've yet to discover.
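Those checkpoints and break points amount to a policy gate in front of each agentic step. A minimal sketch, in which the confidence score, the threshold, and the `safety_critical` flag are all illustrative stand-ins for whatever a real deployment would define per process:

```python
def gate_step(step: dict, confidence: float, threshold: float = 0.95) -> dict:
    """Decide whether an agentic step runs automatically or escalates to a human.

    Safety-critical steps always escalate; other steps escalate when the
    model's confidence falls below the threshold.
    """
    if step.get("safety_critical") or confidence < threshold:
        return {"step": step["name"], "action": "escalate_to_human"}
    return {"step": step["name"], "action": "auto_execute"}

# a routine, low-risk step runs on its own; a risky one goes to a person
routine = gate_step({"name": "log_reading", "safety_critical": False}, confidence=0.99)
risky = gate_step({"name": "shut_valve", "safety_critical": True}, confidence=0.99)
```

Where exactly the threshold sits, and which steps are marked safety-critical, is precisely the "where we draw that line" question.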
>> And you've actually already seen some pretty impressive results with them. As someone who's from Scotland, I think this is an example close to your heart. You were speaking with a Scotch company, or some distillery, Grant's, yeah? They were actually only encountering most of their issues in an emergency setting. But with some more proactive notifications and translation into natural language, they've been able to get ahead of some of the problems that they would have seen previously. Can you expand upon that a little bit?
>> Yeah, no, 100%. They invested hundreds of millions into a new facility. The team call it a chemicals manufacturing facility; I call it a distillery. They were obviously trying to disguise that they were working with Hendrick's when they were complaining about how much time they were spending on site in the facility. I only joke, as you know.
But, you know, they were experiencing a lot of problems in that facility. They'd invested a huge amount of money. And we began a discussion with them, and that led to the development of a new field capability where we were able to analyze equipment, failure points, and issues, using the Anthropic model to index the issues at hand. That has resulted in a more proactive set of maintenance capabilities.
>> So, how would that work in
practice? Like, can you help flesh it out a little bit for someone trying to think about what this would look like on the ground?
>> Yeah, so somewhere in the distilling process, if there was a temperature reading or a valve or a throughput that wasn't reading accurately, the team were able to get on top of that super quickly through the telemetry. They're able to use video and photography to look at the issue at hand very quickly, and to put that into the context of the engineering drawings and diagrams, to be able to diagnose end-to-end what was happening through, effectively, the value chain of creating the spirit at hand.
>> That's happening with a large language model?
>> Happening right now today.
>> With an LLM?
>> Correct.
>> I keep asking because, to me, when we think about this technology, and I keep going back to this, we never think about it for these applications.
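The distillery telemetry check described above, a reading drifting outside its expected band and being flagged for follow-up, could be sketched like this; the sensor names and limits are hypothetical:

```python
def flag_anomalies(readings: dict, limits: dict) -> list:
    """Return the names of sensors whose readings fall outside their band."""
    flagged = []
    for name, value in readings.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            flagged.append(name)  # out of band: candidate for investigation
    return flagged

# one invented snapshot of still telemetry: temperature is running hot
readings = {"still_temp_c": 97.0, "valve_pressure_bar": 2.9, "flow_lpm": 41.0}
limits = {
    "still_temp_c": (80.0, 95.0),
    "valve_pressure_bar": (2.0, 3.5),
    "flow_lpm": (30.0, 60.0),
}
alerts = flag_anomalies(readings, limits)  # → ["still_temp_c"]
```

In the scenario Mark describes, a flag like this would then be put into the context of the engineering drawings and diagrams for the diagnosis step.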
>> You certainly don't think about it for making whiskey, that's for sure. No. And part of the issue is the term LLM, large language model. It goes way beyond language now. It goes across all sorts of data. So, I think the very description of LLM needs to change. In
fact, I think now we're talking about
world models. I think world models are a
more accurate description of what we're
talking about.
>> Right. And the inputs that you're
getting, the video, the photos, that all
feeds into these models' perceptions of
the world that you're placing them into.
>> Think about the wide modality of data you can take on board from a manufacturing facility, this being one example: flow rates, temperature, gas, vibrations, throughput measures. There are so many ways in which you can capture, catalog, and ingest data. The hard bit is how you put it all together and how you make some meaning of it.
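The "putting it all together" step usually starts with normalizing each modality onto one event schema. A minimal sketch, with invented source names, units, and payload fields:

```python
def normalize(source: str, payload: dict) -> dict:
    """Map heterogeneous sensor payloads onto a single event schema."""
    converters = {
        # each converter knows its source's native units and field names
        "vibration": lambda p: {"metric": "vibration_hz", "value": p["hz"]},
        "thermal": lambda p: {"metric": "temp_c", "value": (p["temp_f"] - 32) * 5 / 9},
        "flow": lambda p: {"metric": "flow_lpm", "value": p["litres_per_min"]},
    }
    event = converters[source](payload)
    event["source"] = source  # keep provenance for later diagnosis
    return event

# a Fahrenheit thermal reading becomes a uniform Celsius event
e = normalize("thermal", {"temp_f": 212.0})
```

Once everything is in one schema, the downstream model sees comparable events rather than a pile of incompatible feeds.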
>> Now, let's talk a little bit about labor here. Because these questions always come up when, for instance, you have a Boston Dynamics robot like the one I saw today walking around the factory floor looking for spills, probably very efficiently, through computer vision, noticing, "Hey, there's a spill," and maybe saving a lot of time, or a potential batch of product. But previously, maybe that was something that a person did. So, how do you think this will impact labor moving forward?
>> So, I'm an optimist.
And I read with great interest, and agreement, that all we're going to experience is growth. Economic growth, for sure. The predictions are a 1-percentage-point increase to global GDP growth in the coming years.
And I look at research like the World Economic Forum's Future of Jobs study from the beginning of this year, and they concluded that 170 million new jobs will be created by 2030. Okay? 92 million existing jobs will be displaced. My mathematics tells me that's 78 million net new, incremental jobs and employment by 2030.
Okay? And what we're experiencing is, in
all senses, economic growth. So, you
think about growth, you think about some
of that research, and you think about
some of the previous general-purpose
technology shifts that we've been
through, albeit this one is different.
All of them have resulted in growth and
more labor and more employment and more
business models, and I think the same is
absolutely true here.
When I put that into the real world, so
put that to one side for a second,
every customer that we deal with,
whether it's in North America, whether
it's in Europe, whether it's in Asia, in the industries that we serve, is, to a greater or lesser extent, dealing with a labor shortage today.
And that's driven by aging workforces.
It's also driven by reindustrialization
of Western economies. It's driven by
global supply chains moving. You know,
the reindustrialization in the United
States, manufacturing coming into this
country. You know, these plants need to
be built, these data centers need to be
built, transmission grid networks need
to be improved. I mean, you look right
across the footprint of the industries
we serve, it's growth, it's demand on
capacity. The only way we're going to be
able to deal with that is by using
digital workers and by robotics, in my
view, as well as increased growth of
labor. It's almost as if the Japan
example is playing out for everybody.
Like, Japan has been the one that's been
way ahead of this because it has an
aging workforce, it has a shortage of
high-skilled labor, and they have really
tried hard uh to automate, and they're
far ahead of everyone, I think, on
automation and robotics. They love
robots over there.
>> They do. They're good at it.
>> Is that a preview for what's going to happen with us?
>> I haven't studied it in
detail, but directionally, from what I
understand, I think so. And and I think
again, we come back to an environment
where I genuinely believe there's going
to be employment growth over the long
term. The nature of the roles will
fundamentally change. No question. And I
think, you know, that's a responsibility
that falls onto, you know, governments,
higher education to be thinking about what's the shape of the labor force to come, and how one plans for it. The nature of the jobs will be
different. And there's no question,
World Economic Forum said 92 million
jobs will be displaced. Uh and those
individuals in those roles will need to
be thinking about how they enhance their
skills and how they develop to take on
new roles.
>> Right. And going back to this question about high-skilled labor, that's a theme that comes up all around the world: there's more work to be done by high-skilled laborers than there are high-skilled laborers.
>> Agreed.
>> And so, is this where this could sort of step up, fill that productivity gap, so companies are able to make more, deliver what the market actually needs, and that's where you see the growth?
>> Yeah,
and if you make it super practical and get right down to basics, with an aging workforce you've got individuals who've got 40 years' worth of what I call fingertip experience. You know, they've just got innate knowledge that in some cases you can only really develop over 40 years. So, how do you ingest that into modern technology? I mean, through super basic means. You interview these people, you know, for days on end. You follow them in their day-to-day work. You have cameras on them. You have an ability to ingest all
of the reports, all the write-ups, all
the work they've ever done. You've got
to suck as much knowledge as you can
into LLMs, into models. And I think if
you do that, depending on the nature of
the industry, with any other data you
can get from OEMs, etc., then you are
finding the way to create models that
are very specific to that role to be
done. And, you know, I often talk about
doing the hard yards and doing the hard
work and getting down to first-principles thinking, and that's often what that requires.
>> Okay, but there is this quote.
I have to bring it up because I did hear
it here today.
Uh the quote is, "In the factory of the
future, you're going to need a man and a
dog. The man will feed the dog and the
dog will make sure nobody touches the
machines."
>> I didn't hear that. I wasn't in the room for that one. I'm assuming that was in the Boston Dynamics session.
>> Right. Yes, it was. So, do you think that that's the future, or is that kind of a joke?
>> No, I think in some cases it absolutely is. I was on stage with Mohamed Kande, the global chairman of PwC, this morning, and he talked about a personal
recently where he went to a factory to
observe the manufacturing process and I
don't know what company it was or what
line of work it was, but literally as
they arrived, they turned the lights on,
he could see the whole operation
automated with robots, with drones, with
automated production lines. So, it's categorically going to be the case in many industries, and that would be an example, perhaps, of the 92 million that I talked about.
>> Right.
>> But it's not going to be true of every single industry.
>> So, as somebody who works very close to industry, you obviously are keenly aware of the need for power and how much these large language models are planning to take out of the ecosystem. What do you think the future is going to be on that front?
>> Well, I think the future is going to be, well, on two fronts. Obviously, increased power generation is
one way to solve the energy crunch. The
other way to solve the energy crunch is
by getting more out of what already exists. And I don't know if you were listening to Sabine from Siemens, the CEO of grid software at Siemens, but the stat that caught my eye was that there's $150 billion of output a year that's lost in the US as a result of outages, right? So, if we
deal with those outages, and I don't
know what the power consumption issues
are there, but if we deal with those
outages with those types of numbers,
something's telling me that we've got
more energy available. So, how do we
look at existing infrastructure and
squeeze more out of the existing
infrastructure? How do we avoid
downtime? How can we get the throughput
increased on existing infrastructure?
And then overlay that with new power
generation, and there's all sorts of
ways clearly you can do that. You know, the introduction of new nuclear technologies is obviously one that tends to be more carbon efficient. There are clearly carbon-intense hydrocarbons that can be used. There's a proliferation of different things. I also feel renewables need more attention. Some of the technological developments with solar, predominantly coming out of China, make the cost of generating wattage with that technology significantly lower than it's ever been. So, it's going to take a
very wide array of things to deal with
the energy crunch.
>> Yeah. I was speaking with someone recently, and this is completely different, but we were talking about the brain-computer interface, and the brain-computer interface in a way can be a parallel track that helps alongside medicine. Like, you have the typical medicine that we've had with doctors, and the brain-computer interface can be something that helps. For instance, if you're losing sight, you have two options: get a contact lens or glasses, or maybe there'll be a future where you can have a small probe put in, and it will deliver that information to your visual cortex, and you'll be able to see. It's weird, but maybe there's some parallel here, where in industry you can either make more power the traditional way, or you can optimize. You can use technology in out-of-the-box ways, ways that haven't been thought about until recently, and make everything in industry more efficient, and there you go.
>> Yeah, multimodal.
I mean, everything. Pushing everything, at all times, to solve problems. I think that's the way I would think about it. What you began to describe there just blows my mind. I mean, you can bet I'll be the first in line when that happens.
>> Yeah. It's already in production, but I think it's a long way until it will just be easy to pop it in.
>> Yeah, I might wait until a few more
people have tried it.
>> Me too. I mean, I did meet the first Neuralink patient, Noland Arbaugh,
>> Wow.
>> and we played a video game against each other.
>> Yeah.
>> He's quadriplegic, so he can't use a computer with his hands, but he was able to think, and on the screen the cursor would move and click.
>> Wow.
>> And he beat me in that game. And one interesting thing that he said was that he can think about motion and the cursor will already be there. So, his prediction is there'll be a league in video games of people who actually have this ability, because people who have to actually move the thing with their fingers will never be able to compete.
>> Do you
think you're beginning to predict the IO device that's coming from OpenAI?
>> Look, if OpenAI eventually wants to plug into our brains, that might be the time where I'm saying that they actually are too big.
>> [laughter]
>> I agree.
>> Okay, but one thing they do talk about is AGI. Do you think that we're anywhere close to that? And does it even matter, honestly? Because people think that to get real economic value out of AI, you need to have AGI. Do you believe that?
>> I mean, I think for the immediate opportunity in front of the customers that we see, AGI isn't a necessary step right now. Actually, let's get to the practical reality.
I was with a customer on Monday of this week, a commercial aviation customer, and they were talking about what they've got today in their technology stack.
And they've got IFS in the core helping
them with their maintenance, their
planning, their repair, their overhaul.
And then they've built up like 80
boundary applications, organically
built, all serving mission-critical
purpose, all dealing with regulatory
requirements with the FAA and others,
okay?
And that, you know, core technology
isn't yet on the cloud.
Okay?
>> Wow.
>> So, you know, that's representative of many customers that I meet. I met another customer this week: the CEO, the CIO, and the COO of a ports terminal business. Very similar picture. Still got some of their estate on-prem, right? That's the hard-nosed reality of where a lot of companies are today. So, to get them from legacy application architecture to cloud, to enable the availability of AI, and then to go to AGI, is way out there.
>> Yeah.
>> So, I think the reality is, whilst
it's super exciting and it's got all
sorts of positive implications for
mankind, for businesses,
you know, a lot of businesses that we
encounter, they've got some basic stuff
to get right first.
>> So, let's end where we started, which is the ROI on AI. When we talk about AI and the ROI of this technology, a lot of people have wondered, well, is AI going to have to replace all coders for there to be a justification for the massive valuations we're seeing? And when I hear you talk about the way that AI can be applied and deployed in these industrial settings, I'm like, wait a second. This might be the place where the ROI comes. If it makes factories and people working in industry much more efficient, that's invaluable.
>> I couldn't agree more. And the reality is that most
frontier models right now, I think,
based on what I can deduce, are making
most of the revenue from consumer
applications. With the exception of
Anthropic, who are distinctive, I think,
and focused on the enterprise, and I
think that's one of the reasons we're
working with Anthropic.
>> All right, Mark, thanks so much. Great speaking with you, and I appreciate you opening my mind to this new application of the technology.
>> Well, thank you for entertaining me and engaging and being here at Industrial X.
>> My pleasure.