Combine Skills and MCP to Close the Context Gap — Pedro Rodrigues, Supabase

Channel: aiDotEngineer

Published at: 2026-05-15

YouTube video id: JT3OzDKrucU

Source: https://www.youtube.com/watch?v=JT3OzDKrucU

All right, I have the green light, so let's get started. Hello everyone. You might have noticed that the title changed a bit from the one on the schedule. That's because when I submitted the talk, the MCP-versus-skills debate was still going on. I think we've now settled on the fact that they're different things, each with its own role, and the debate has moved on to MCP versus CLI. So I thought it would be more useful to come and explain how we wrote our Supabase skill and the lessons we learned from writing this document, because I haven't spent more time writing a single document since my master's thesis. I know writing a skill sounds simple, but it can be very complex, especially when you have a complex product like Supabase.
For starters, I'm Pedro. I'm an AI tooling engineer at Supabase, and an MCP and AI enthusiast in general. Feel free to connect with me on LinkedIn. I'm also a co-founder of Lisbon AI Week, which will take place in late October this year. And while we're talking about me: I usually prefer doing this in dark mode. How many of you prefer dark mode over light mode? The majority — I thought so. So let's do this presentation in dark mode instead.
I think we can all agree that agents are already smart enough, right? They're very capable of doing mundane tasks by themselves. But when you present them with a task about something they haven't seen yet, or something you've updated since they were trained — your product, for example — they need the right guidance.

At Supabase, for example, we noticed that they would usually miss some of our security pitfalls, like the row-level security instructions they have to set to avoid exposing your app. They would operate on stale knowledge from their training data, and they're very lazy and very stubborn about admitting that they don't know something and need to find fresh information. We also wanted to guide them toward the specific workflows that we think are the most effective for agents working with our products.

So for starters, how many of you know skills, or have written one before?
Okay, so what I'm going to say is probably not new to most of you, but just as an introduction: skills are folders containing instructions, scripts, and resources that agents discover progressively. That progressive discovery is the main selling point of skills. They have an envelope called front matter, which carries a name and a description — this is how the agent decides when to load the skill. Then they have the actual instructions in a main file called skill.md, plus optional bundled resources: scripts to perform actions, or reference files that the main file can point to for information that doesn't have to be loaded into context immediately.
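As a rough sketch of that structure (the front matter fields follow the common skills format; the names, description, and file paths here are illustrative, not Supabase's actual skill):

```markdown
---
name: supabase
description: Use when working with a Supabase project — database schema,
  auth, storage, or row-level security.
---

# Supabase skill

Core instructions the agent must never miss go here, in the main file.

For less common tasks, read the bundled reference files only when needed:
- [references/auth.md](references/auth.md)
- [references/storage.md](references/storage.md)
```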
At Supabase we ran an experiment: we gave the same agent — in this case Claude Sonnet 4.6 — the same prompt for a simple task. We had a collaborative app, and we wanted to create a new SQL view on top of a table that already had row-level security enabled, so that users could only see the information that belonged to them. In one condition we gave the agent just the MCP server, and in the other the MCP server plus the agent skill. The result was as expected.
If you don't know Postgres: if you create a view on top of a table that has RLS enabled, and you don't explicitly pass the security_invoker = true flag, the view will bypass RLS. Basically, the view will expose data that is not exposed by default on the underlying table. The agent with the skill — with the knowledge — implemented this correctly and safely, while the one that only had access to the MCP tools did not.
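To make the pitfall concrete, here is a minimal Postgres sketch (table, column, and policy names are illustrative; auth.uid() is the Supabase helper for the current user's id). Without security_invoker, a view created by the table owner reads the base table with the owner's privileges and bypasses its RLS policies:

```sql
-- Base table with row-level security: users may read only their own rows.
create table documents (
  id bigint generated always as identity primary key,
  owner_id uuid not null default auth.uid(),
  content text
);
alter table documents enable row level security;

create policy "own rows only" on documents
  for select using (owner_id = auth.uid());

-- UNSAFE: runs with the view owner's privileges and bypasses RLS.
create view recent_documents as
  select id, content from documents order by id desc limit 20;

-- SAFE (Postgres 15+): the view checks RLS as the querying user.
create view recent_documents_safe
  with (security_invoker = true) as
  select id, content from documents order by id desc limit 20;
```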
Just like this, we wanted to enable agents to know how to work correctly with Supabase. So we're actually announcing today this Supabase agent skill that I've been working on over the past couple of months. And to make things official, I'm going to try something. Wait — I'm going to live-tweet it on stage. It's live.
All right. So what exactly is this skill about, and what lessons can I share with you? If you're building a skill for your product, you can build one like we did, and I'm happy to discuss all the details — this is free text, and we haven't converged on a standard for it yet, so happy to chat about it later.
To break down some of the principles we converged on: the first is don't duplicate information. Treat skills as documentation for yourself — you would not duplicate your own documentation, right? You already have documentation for your product; just point the agent to the most up-to-date version. You have to be very stubborn with the model: ask it to go search the web or your documentation. Provide the guidance, tell it where and how to find the documentation, but be very persistent that it actually goes and looks.
We also ran a little experiment — and this is still an experiment, we're still figuring out the details — which we announced quite recently; you can read about it on our blog. We're basically exposing our documentation through SSH. The main reason is that agents can then browse the documentation as if it were a file system. They're very familiar with file systems in general: navigating them, finding files and information using Linux-based tools. If you give them that same interface remotely, our premise is that they will find it very easy to navigate. I would love to hear your opinions on this idea after the talk, or during the conference.
The second principle I have for you is: if something can get skipped, it will be skipped. What I mean is that, beyond fetching newer information online, agents find web fetching and tool calling expensive, so they mostly default to their training data. The same is true for reference files. We noticed that even when the agent loaded the skill, it would be very lazy about loading the reference files that were there. And even if it loaded one reference file, if your problem requires information spread across more than one file, it will most likely not load two files — that's almost impossible, and don't even get me started on three or four.

So you have to be very critical about what you put in your skill.md file from the beginning. Put in the information that is unlikely to change — in our case, a security checklist for Supabase that we really didn't want the agent to miss at all. We decided this could not live in a reference file. We actually started by putting it in a reference file, and the agent usually missed it, so we moved it into the skill itself. If there is information the agent simply cannot miss — information that defines your product — it goes in the skill.md file. Do not relegate it to a reference file.
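As an illustrative sketch of that principle (the checklist items here are examples, not Supabase's actual list): critical rules stay inline in skill.md, while long-tail detail lives in reference files the agent loads on demand:

```markdown
## Security checklist (always apply — kept inline so it is never skipped)

- Enable row level security on every new table.
- Create views over RLS tables with `security_invoker = true`.
- Never expose service-role keys to the client.

## Less common topics (load only when relevant)

- Storage bucket policies: see [references/storage.md](references/storage.md)
- Custom SMTP setup: see [references/smtp.md](references/smtp.md)
```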
And lastly, the third principle when writing a product skill: be opinionated. You know your product best. You know how to work with it, and you know — or you should know — how your users are using it. Don't be afraid of guiding agents toward the workflows you think are most effective with your product. In our case, for example: managing a database schema. Actually, I haven't asked this and should have at the beginning — how many of you know Supabase and what Supabase does?

Okay, for the ones who don't know: we're basically a backend as a service. We provide storage, a database, authentication, and other features you would need to create a backend out of the box. So, as I said, we provide a database, and the agent can interact with and manipulate your schema.
We found that, for our platform, the best workflow for agents to efficiently manage the schema is this: run direct DDL operations — change the schema freely — on your development or staging database. Once you're happy with it, run our advisors, basically a linter that reports any security or performance issues the database might have, and fix them. Only then create the migration file. This prevents the agent from creating a migration file every time it changes the schema. We found this to be the best workflow for managing the schema, so for us it belongs in the skill when working with Supabase.
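Written down as skill instructions, that opinionated workflow might look something like this (a sketch, not the skill's exact wording):

```markdown
## Managing the database schema

1. Apply DDL changes directly to the development/staging database.
   Iterate freely; do not create migration files at this stage.
2. When the schema looks right, run the security and performance
   advisors and fix every issue they report.
3. Only then generate a single migration file capturing the final
   schema change.
```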
How did we test this skill? We're living in very interesting times, where we can now test free text — we can now test documents, test documentation. This would have been completely bonkers to imagine a few years ago. I've basically been testing a markdown file.

And how did we do it? Through evals. For those of you who don't know, evals — short for evaluations — are basically tests that you run much like you would on your CI, but instead of evaluating code you're evaluating an agent, an LLM, and its behavior: what tools it's calling, what its reasoning is. At an early stage we ran a set of six Supabase-specific scenarios — ongoing Supabase projects in different situations — against four different agents from two vendors, in three test conditions. We wanted a baseline (no MCP, no skills), just the MCP server, and the MCP server plus the skill. All of this was scored with a graded test-completeness score and run on Braintrust. If you don't know them, they're around — they're sponsors of this conference. Go to their booth and talk to them; it's a very cool product. So, here are the results.
The skill plus the MCP outperformed every other condition on every model we tested: Claude Code with Opus 4.6 and Sonnet 4.6, and Codex with GPT 5.4 and GPT 5.4 mini.
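To give a feel for the shape of such an eval, here is a minimal, hypothetical harness — not the actual Braintrust setup; the scenario, behavior names, and fake agents are invented for illustration. A completeness-style grader checks which expected behaviors appear in an agent's run and averages across scenarios per condition:

```python
# Hypothetical sketch of a completeness-style eval harness.
# Not the actual Braintrust setup; names and scenarios are invented.

from dataclasses import dataclass


@dataclass
class Scenario:
    prompt: str
    # Behaviors a grader looks for in the agent's transcript.
    expected_behaviors: list[str]


def completeness_score(observed: list[str], expected: list[str]) -> float:
    """Fraction of expected behaviors the agent actually exhibited."""
    if not expected:
        return 1.0
    hits = sum(1 for b in expected if b in observed)
    return hits / len(expected)


def run_condition(scenarios: list[Scenario], run_agent) -> float:
    """Average completeness over all scenarios for one test condition.

    `run_agent` maps a prompt to the list of behaviors observed in the run.
    """
    scores = [
        completeness_score(run_agent(s.prompt), s.expected_behaviors)
        for s in scenarios
    ]
    return sum(scores) / len(scores)


# Example: one scenario, and fake agents standing in for the two conditions.
scenario = Scenario(
    prompt="Create a view over a table with RLS enabled",
    expected_behaviors=["created_view", "set_security_invoker"],
)


def agent_without_skill(prompt: str) -> list[str]:
    return ["created_view"]  # misses the RLS pitfall


def agent_with_skill(prompt: str) -> list[str]:
    return ["created_view", "set_security_invoker"]


print(run_condition([scenario], agent_without_skill))  # 0.5
print(run_condition([scenario], agent_with_skill))     # 1.0
```

In the real setup the "agent" is a full MCP-equipped run and the grader is an LLM judge rather than string matching, but the baseline / MCP / MCP-plus-skill comparison follows this same structure.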
So I think we can conclude — it's pretty unanimous — that the skill was actually improving performance, the test-completeness score, because it was providing the right guidance to the agent. We already had the tools; we already had an MCP server. We just needed the right guidance on how to operate with Supabase. It's agent-agnostic, and more and more agents are adopting this open standard for skills. And, as I said at the beginning, the bottleneck is no longer the context — it's the guidance.
So, if you take one thing from this talk when building a skill for your product: point to your single source of truth — point to your documentation, basically. Be opinionated: you know your product, don't be afraid to show it. And start minimal. Any model vendor — whether that's Anthropic or OpenAI — will tell you in any blog post about skills: start minimal, start slow, then iterate and expand, and don't be afraid to create new versions of it.
If you want to know more, we actually wrote a blog post — it's live today. You can find it through my Twitter account or on the Supabase blog, or you can just run the install commands, add the skill to your project, and start using it now.

That's all. I'll be around. Once again, thank you very much.
I do have one more thing to show you. We're running a giveaway: if you want a chance to win a Mac mini, just scan the QR code, sign up for Supabase, and good luck. Do I have time for questions, if there are any?
Yeah. Okay.
I just had one thing I wanted to ask. Since we're all moving into RAG, right — I'm just wondering, how much demand do you see for vector databases on Supabase right now?
Hm. It's an interesting question. I mean, it depends — if you're asking about customers, it's been more and more as the days pass. The use cases for vectors are mainly embeddings, and there are many use cases for embeddings. The one I'm most interested in is semantic search, which you could use to provide even more context. For example, with the docs exposed through SSH, instead of naively letting the agents navigate with bash tools, you could augment those already-known bash tools with some sort of semantic search. So I do see very big potential in vectors. And in our data, customers have definitely been exploring this more. Thank you for the question.
Yeah.
Thank you for the talk, very interesting. How do you distribute the skills inside your organization? Do you just pass them around? Do you have a repository, or some package manager?
That's an amazing question, because currently one of the downsides — I would say one of the constraints — of using skills is the distribution system. There are still several players trying to claim either the registry or the way to distribute them. Vercel came up with the skills package; we're seeing plugins that you can bundle with MCP servers and other things, but those are model-specific. So distributing skills in general is a problem in itself that we haven't solved yet.

Internally, we are packaging the skills in the repos themselves. So if you want to create a plugin, you just create a .claude plugin, or a .cursor plugin, or whatever, in that repo, and then it's available — or discoverable — if the repo is open-sourced or you have access to it; you can use the skills package to fetch it. And this is how I've seen other companies distributing skills: a repo, a skill, trying to package the skills along with the knowledge.
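As a sketch of that in-repo layout (directory and file names here are illustrative, following the common .claude skills convention):

```
my-repo/
├── .claude/
│   └── skills/
│       └── supabase/
│           ├── skill.md          # front matter + core instructions
│           └── references/
│               └── storage.md    # loaded only when needed
└── src/
```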
Thank you.
Thank you.
Are there any more questions? I think we have time for one more.
Yeah.
Yes. I recently built self-improving
skills.
Should we do a collab?
Sure. Let's talk after.
All right. So, once again, thank you
very much. It was a pleasure to be here.
I'll be around.
Thank you.