One Registry to Rule them All - Sonny Merla, Mauro Luchetti, & Mattia Redaelli, Quantyca

Channel: aiDotEngineer

Published at: 2026-04-10

YouTube video id: VXfRt_H-V08

Source: https://www.youtube.com/watch?v=VXfRt_H-V08

What happens when you have dozens of teams across three continents all building AI agents, each one wiring up their own connections, reinventing their own security model, and deploying their own infrastructure? You get chaos.
Hi, I'm Sonny Merla, Global Data Science and AI Manager at Amplifon, and I'm here today with Mauro Luchetti, AI Center of Excellence Manager, and Mattia Redaelli, AI Engineer at Quantyca, the team that designed and built the technical solution we are about to describe.
Today we are going to show you how Amplifon tackled this problem by launching its Amplify program, and specifically how we designed an enterprise-grade registry system for MCP servers and A2A agents.
For those who don't know Amplifon: Amplifon is the world leader in hearing care solutions. We operate across 26 countries around the globe, we are more than 20,000 people, and we run over 10,000 stores worldwide.
We are in the middle of an AI transformation. Right now we are experimenting with AI solutions and technologies, so we are facing challenges like building solutions that are stable over time and understanding how to make them scale responsibly, according to guidelines that are defined centrally.
So, how did Amplifon decide to adopt AI at scale? In January 2025 we launched the Amplify program. It's a global, cross-functional program designed to set the rules for AI adoption, and it is basically composed of an operating model and an execution plan. The operating model is based on two main bodies: the control tower and the committee. The control tower is a limited set of people, including the chiefs, who decide the guidelines for security, legal, and technology, but also the focus of the strategy and therefore which use cases to develop first. Then there is the committee, which has the responsibility for running the strategy in the countries as well as on the corporate side, prioritizing the use cases at a more granular level so that value is released to the organization.
What are the main focus areas of the Amplify program? We have three of them: governance, platform, and factory. With governance, we want to ensure alignment with AI regulation and with the strategy and guidelines we define centrally. It's also a matter of making people aware of the existence of the program and of the rules of the game, and then keeping everyone informed about how we deliver and roll out value. Then there is the platform side: we have to set up the infrastructure on which we operate as developers and implementation teams, certifying infrastructures and ways of working, to deliver the processes and services that let AI applications scale. Finally, there is the factory. This is the most practical part of the story: the development teams need to focus on rolling out solutions to the market, also taking care of the rollout across countries, which is very important for Amplifon, and thinking of each solution as something scalable and reusable across different domains.
So, what are the main problems we see as an organization trying to roll out AI at scale, pervasively across the organization? For sure we foresee maintenance and operations problems, and governance and compliance problems, but also enterprise scaling: how developers need to work so that the solutions stay stable over time.
Starting from the maintenance and operations side: given the short life cycle of the LLM models at the core of the AI applications and agents we develop and roll out, we want to be sure we can track where each of these LLMs is used across our use cases. We want to be ready to act promptly every time we see a disruption in a model used by a use case.

On the governance and compliance side, we need to know where we use AI in the organization and what the main use cases are, both from a regulatory point of view and to understand usage across the organization. So we want a catalog, and we want a way to understand the assets used by each use case, what they implement, and what they consume, so we can build a sort of lineage of this information.

Then there is the way we develop AI solutions across multiple teams operating on different infrastructures. We want development to be centralized, at least in terms of governance, with clear guidelines, and reusable across different infrastructures and teams. The goal is to make developers' lives easy so they can focus on the business logic inside the use cases, avoiding reinventing the wheel every time they need to take care of security, deployment, and maintenance.

So now I'll let Mauro introduce how we addressed these topics at Amplifon.
Thank you, Sonny. Let's try to bring a more technical point of view to this. The first component we built in order to address those problems is an AI gateway.

First of all, it gives us unified access: every developer who wants to use a model connects to the gateway, points to a single unified endpoint, and can use all the models Amplifon has in its catalog. Then there is the security aspect: to use the models you have to connect to the gateway and authenticate yourself, and we have done this with Entra ID integration. Then there is a budgeting aspect, because Amplifon has lots of use cases, and when a use case asks for budget to use those models, you can set a budget in the AI gateway, monthly, weekly, and so on. As the developers consume the models, the budget is eroded, and the gateway reports back the remaining amount, so they can keep it under control. Finally, there is the monitoring aspect: all the requests made to the LLM models, and their responses, flow through a central auditing, monitoring, and analytics tool connected to the AI gateway.
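As a rough illustration of what unified access looks like from the developer's side, here is a minimal sketch in Python. It assumes an OpenAI-compatible gateway endpoint and an Entra ID client-credentials flow; the URLs, scope, model name, and header names are placeholders, not Amplifon's actual configuration.

```python
# Minimal sketch: calling models through a central AI gateway.
# Assumptions (not Amplifon's real setup): the gateway exposes an
# OpenAI-compatible API and accepts an Entra ID bearer token.
import os
import requests

TENANT_ID = os.environ["AZURE_TENANT_ID"]
CLIENT_ID = os.environ["AZURE_CLIENT_ID"]
CLIENT_SECRET = os.environ["AZURE_CLIENT_SECRET"]
GATEWAY_URL = "https://ai-gateway.example.com/v1"  # placeholder endpoint

def get_entra_token() -> str:
    """Client-credentials flow against Entra ID (placeholder scope)."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "api://ai-gateway/.default",  # placeholder scope
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def chat(prompt: str, model: str = "gpt-4o") -> str:
    """Send one chat completion through the unified gateway endpoint."""
    resp = requests.post(
        f"{GATEWAY_URL}/chat/completions",
        headers={
            "Authorization": f"Bearer {get_entra_token()}",
            "X-Use-Case-Id": "demo-use-case",  # hypothetical budget/attribution tag
        },
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize our hearing-aid return policy in one sentence."))
```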
Moving to the governance part: the gateway is the entry point, the top layer, I would say. Below it we have three different registries. The first one is the MCP registry: as you can imagine, all the tools, all the integrations with Amplifon systems, all the functionalities we want to expose to LLM models are published through this MCP registry, which is the central catalog of all available tools. Then there is the A2A, agent-to-agent, registry. It is a full catalog of all implemented and available agents; it uses the agent card standard, exposes the agent cards, and gives developers the ability to connect to agents that have already been built. And then there is the use case registry, which is the registry that ties all this information and metadata together and brings out the real governance and lineage functionality.
Let's go into more detail about each of those registries. I obviously don't want to explain what MCP is, not at this conference, but we started from the official MCP registry maintained by the community. That is the public, community-wide catalog of all available MCP servers, and we essentially built on top of it. Amplifon has built its own private MCP registry as an extension, both in functionality and in the enterprise context we want to attach to each registered server. It contains two main things: the custom internal servers that Amplifon teams have built for specific systems, integrations, and tools, and a curated set of public servers that have been approved and certified by Amplifon for Amplifon use cases. Both kinds of servers registered in this catalog are enriched with additional enterprise metadata.
Let's see what that metadata is. First of all, ownership: each server has an owner, meaning which team, which use case, which project is responsible for that specific server. Then the environments the server is running in: dev, test, prod, and so on. Then the authentication model: how I, as a developer, can use that server and which mechanisms I need to put in place. Then cost attribution: this is linked to the AI gateway's budgeting functionality we described before, so we can see what each server is spending. And finally use case linkage: which use cases are actually using that specific server. These are not simply nice-to-have metadata. This is what really enables impact analysis, governance, and auditability, and gives us a complete trail of what AI tooling exists and how it is being used by Amplifon developers.
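As a sketch of what such an enriched entry could look like, here is a small Python model. The field names are illustrative, not Amplifon's actual schema.

```python
# Illustrative shape of an enriched MCP registry entry.
# Field names are hypothetical, not Amplifon's real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class McpServerEntry:
    name: str                      # server identifier in the private registry
    owner_team: str                # ownership: who is responsible for the server
    environments: List[str]        # e.g. ["dev", "test", "prod"]
    auth_model: str                # how a developer authenticates against it
    cost_center: str               # cost attribution, tied to gateway budgets
    linked_use_cases: List[str] = field(default_factory=list)  # use case linkage

entry = McpServerEntry(
    name="crm-tickets-mcp",
    owner_team="customer-care-platform",
    environments=["dev", "prod"],
    auth_model="entra-id-bearer",
    cost_center="CC-1234",
    linked_use_cases=["ticket-optimization-with-ai"],
)
```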
Then we have the second registry, the A2A registry. This is fully based on the agent card, which describes the agent's identity, its endpoint, its capabilities, the supported modalities, the authentication requirements, and so on. We have built some blueprints, and we will talk about them later, but essentially, when an agent is deployed, it automatically publishes its agent card to the registry via CI/CD integration. In this way any other agent, and any other developer, can discover the new agent and obviously interact with it. So, in a way, we are trying to make agent development self-documenting.
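For reference, an agent card is just a small JSON document. A rough sketch of one, written as a Python dict, might look like the following; the field names only approximate the A2A agent card spec, and the agent name and URL are made up for illustration.

```python
# Rough sketch of an A2A agent card as a Python dict.
# Field names approximate the A2A agent card spec; values are made up.
agent_card = {
    "name": "store-locator-agent",
    "description": "Finds the nearest Amplifon store and its opening hours.",
    "url": "https://agents.example.com/store-locator",   # placeholder endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "find-store",
            "name": "Find nearest store",
            "description": "Given a city or postcode, return nearby stores.",
        }
    ],
}
```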
Now we will see how the use case registry connects the other two registries together. How can we use the MCP registry and the A2A registry from a business point of view? We want a use case registry to map the agents and tools onto the specific use cases adopted across the organization. That is why we designed this building block: it holds, for each use case, the assets it uses, what it implements, and which models it relies on, which also serves the maintenance topic we mentioned before. It also records how and where each use case is developed and deployed: for example, which system serves a specific use case and which other use cases are impacted by it. So, if there are connections among multiple use cases, we want to see them clearly in an interface that can act as a catalog for everyone.
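The value of stitching this together is impact analysis: for example, given a model that is being deprecated or an MCP server having an outage, you can walk the linkage back to the affected use cases. A toy sketch of that idea, with hypothetical data structures:

```python
# Toy impact-analysis sketch over hypothetical use case registry entries:
# given an asset (a model or an MCP server), find the use cases that depend on it.
from typing import Dict, List

USE_CASES: Dict[str, dict] = {
    "ticket-optimization-with-ai": {
        "agents": ["store-locator-agent"],
        "mcp_servers": ["crm-tickets-mcp"],
        "models": ["gpt-4o"],
    },
    "agent-assist": {
        "agents": [],
        "mcp_servers": ["crm-tickets-mcp"],
        "models": ["gpt-4o-mini"],
    },
}

def impacted_use_cases(asset: str) -> List[str]:
    """Return the use cases whose agents, MCP servers, or models include `asset`."""
    return [
        name
        for name, entry in USE_CASES.items()
        if asset in entry["agents"] + entry["mcp_servers"] + entry["models"]
    ]

print(impacted_use_cases("gpt-4o"))           # -> ['ticket-optimization-with-ai']
print(impacted_use_cases("crm-tickets-mcp"))  # -> both use cases
```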
Let's see now how it works in practice, with a walkthrough of the platform we developed, which implements all these registries for the organization.
Okay, so we also wanted to give you a brief overview of our platform. Here you can see the home page. From here you can go into the catalog, which is what we described in detail before: MCP, A2A, use cases. We also have the AI gateway section, where we define which LLMs are currently available in the enterprise.

Going back to the dashboard and moving to the catalog: this is the platform we are going to deploy to production soon, so here we have demo data, with six entities defined so far: use cases, MCP servers, and A2A agents. Going into the use cases part, if we open a sample use case, we can see its status, its version, its description, the assets it uses (for instance an agent and an MCP server), the AI models it is using, and the life cycle history of the use case. If we go to the create-use-case page, you can see that we can define a name, a description, the status of the use case, the ownership, and the assets linked to it.
If we move to the AI tools section, the MCP servers, you can see that we have two sample MCP servers, and the actual server.json is shown here. We can also look at the A2A agents, where we have the same thing for agent cards: for instance, here we have a LangChain test agent, with its capabilities and description taken from the agent card. We also built an inspector page, where you can select an MCP server and launch the inspector in another tab, so you can connect and check what that MCP server is providing. We have the same kind of inspector for checking compatibility with the A2A agent card, and you can do the same there.
We also have widgets: to make the developer's life easier, you can define the server.json for MCP or the agent card for A2A with a form and preview it there, instead of starting from the raw JSON in the repo, if it's your first server.
We can also check the lineage. For instance, you can open a use case you want to inspect and open its object lineage. In this lineage view you can see that the use case, here "ticket optimization with AI", is connected to one agent, then to another agent, and also has AI models connected to it. So we have the full lineage of the use case and, as Mauro said before, we can check it and make modifications when some parts of the lineage are affected by an outage or a problem, tracing back to the affected use case.
Moving to the enterprise development cycle: we talked a lot about metadata and registries, but how do Amplifon developers actually develop MCP servers and agent-to-agent servers and deploy them to production? We developed two repositories, one for MCP and one for the agent-to-agent protocol. These are template repositories on GitHub, so developers and teams can start from them and work their way up to the production environment. The idea is that these two blueprints provide the boilerplate and already include the infrastructure and tooling: for instance Dockerfiles and the package manager, and both are FastAPI servers, so they are exposed in the same way. Authentication and cost tracking are also handled inside the blueprint.
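To give a feel for what such a blueprint might contain, here is a minimal FastAPI skeleton with an authentication check and a cost-tracking hook. It is only a sketch of the idea; the header names, the token check, and the tracking call are placeholders, not the actual Amplifon blueprint.

```python
# Minimal sketch of a blueprint-style FastAPI server with auth and
# cost-tracking hooks. Header names and helpers are placeholders.
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI(title="example-mcp-style-server")

def verify_bearer_token(token: str) -> bool:
    """Placeholder: validate the caller's token (e.g. against Entra ID)."""
    return token.startswith("Bearer ")

def record_usage(use_case_id: str, path: str) -> None:
    """Placeholder: report the call for cost attribution / budgeting."""
    print(f"usage: use_case={use_case_id} path={path}")

@app.middleware("http")
async def auth_and_cost_tracking(request: Request, call_next):
    token = request.headers.get("Authorization", "")
    if not verify_bearer_token(token):
        return JSONResponse(status_code=401, content={"detail": "missing or invalid token"})
    record_usage(request.headers.get("X-Use-Case-Id", "unknown"), request.url.path)
    return await call_next(request)

@app.get("/health")
async def health():
    return {"status": "ok"}
```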
We also have an integration with Langfuse, an observability tool we deployed at the platform level, so development teams can trace their agents, run evaluations, and check how an agent is performing.
The A2A server blueprint is also framework-agnostic: it is not based on a particular framework such as LangChain or Agno or any other, but is composed of interfaces and ports, so every team can implement their own solution in the framework of their choice. The important thing is that they provide the same interface defined in the blueprint, so development stays easy for the developers and they can focus on the actual value of the agent.
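A framework-agnostic "ports and interfaces" design could look, roughly, like this: a small abstract interface the blueprint expects, with each team plugging in its own implementation. The names here are illustrative, not the blueprint's actual API.

```python
# Illustrative "port" a blueprint could expect from every agent,
# regardless of the framework used to implement it. Names are made up.
from abc import ABC, abstractmethod

class AgentPort(ABC):
    """Interface the A2A blueprint calls; teams implement it with any framework."""

    @abstractmethod
    async def handle_task(self, user_input: str) -> str:
        """Run the agent on one request and return its answer."""

class EchoAgent(AgentPort):
    """Trivial implementation; a real team might wrap LangChain, Agno, etc."""

    async def handle_task(self, user_input: str) -> str:
        return f"You said: {user_input}"

# The blueprint's FastAPI layer would accept any AgentPort implementation
# and expose it behind the same A2A endpoints.
```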
Mauro mentioned the CI/CD that is in place in the A2A blueprint and the MCP blueprint. The idea is that once you are ready with your development, you tag a branch and a GitHub Action starts: it not only publishes the Docker image to our artifact repository, but also publishes the metadata of that agent, so the agent card for the agent-to-agent protocol and the server.json for MCP, to the backend, let's say the proxy of the registry catalog.
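The publish step of such a pipeline could be as small as a script like this one, run by the GitHub Action after the image is pushed. The registry endpoint and environment variable names are placeholders, not the real pipeline.

```python
# Sketch of a CI publish step: push the agent card (or server.json)
# to the registry after the Docker image is built. Endpoint and env
# variable names are placeholders.
import json
import os
import requests

REGISTRY_URL = os.environ.get("REGISTRY_URL", "https://registry.example.com/api/agents")
REGISTRY_TOKEN = os.environ["REGISTRY_TOKEN"]

def publish_metadata(card_path: str) -> None:
    """Upload the agent card produced by the build to the A2A registry."""
    with open(card_path) as f:
        card = json.load(f)
    resp = requests.post(
        REGISTRY_URL,
        headers={"Authorization": f"Bearer {REGISTRY_TOKEN}"},
        json=card,
        timeout=30,
    )
    resp.raise_for_status()
    print(f"published {card['name']} version {card.get('version', 'unknown')}")

if __name__ == "__main__":
    publish_metadata("agent_card.json")
```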
In this picture you can also see that when an AI agent needs to call either an MCP server or an A2A agent, it goes through the AI gateway we deployed. The two proxies, the MCP one and the A2A one, look up the actual catalog of agents and MCP servers to retrieve the real URL of the backend the agent wants to call, and the agent then authenticates itself with another header against the actual server.
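Conceptually, the proxy side boils down to a registry lookup plus a forwarded, authenticated call. A toy sketch of that flow, with made-up endpoints and header names:

```python
# Toy sketch of the proxy flow: resolve the backend URL from the registry,
# then forward the call with a second header that authenticates the caller
# against the downstream server. Endpoints and header names are made up.
import requests

REGISTRY_URL = "https://registry.example.com/api"   # placeholder

def resolve_backend(agent_name: str) -> str:
    """Look up the agent's real URL in the A2A registry catalog."""
    resp = requests.get(f"{REGISTRY_URL}/agents/{agent_name}", timeout=30)
    resp.raise_for_status()
    return resp.json()["url"]

def call_agent(agent_name: str, payload: dict, gateway_token: str, backend_token: str) -> dict:
    """Gateway auth gets the caller through the proxy; the extra header
    authenticates it against the resolved downstream server."""
    backend_url = resolve_backend(agent_name)
    resp = requests.post(
        backend_url,
        headers={
            "Authorization": f"Bearer {gateway_token}",
            "X-Backend-Authorization": f"Bearer {backend_token}",  # hypothetical header
        },
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```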
So, to bring it back to the business perspective, here is what we achieved with the Amplify platform and the registries we developed. We now have a catalog that makes governance happen, so we can see the MCP and A2A servers deployed across the organization and across multiple teams. We have full traceability of the use cases, agents, tools, and models adopted across the use cases. We have production-ready blueprints, so developers start from something standard across teams and can focus on the business logic of their use cases. And we have the CI/CD pipelines that deploy the servers to production and also push the metadata into the registry. Of course, the work on this platform is still in progress, and we keep growing its capabilities. So feel free to reach out and keep in touch if you have a similar point of view, or something different you want to discuss; you are more than welcome. Thank you.