Full Spec MCP: Hidden Capabilities of the MCP spec — Harald Kirschner, Microsoft/VSCode

Channel: aiDotEngineer

Published at: 2025-07-18

YouTube video id: ExeD-8gFUMM

Source: https://www.youtube.com/watch?v=ExeD-8gFUMM

Since all the questions already got asked: who built an MCP server and it didn't work? Okay, Sam, go. So, we're here to commiserate on how to actually build with the full spec: what the hidden capabilities are, why they matter, and how they light up. I work on VS Code, so this is a biased, local-MCP-for-development take, but all of it is applicable everywhere. I really loved the intro to the track. MCP is moving at high velocity: there's a lot of ecosystem growth, excitement, people working together and collaborating, but there's so much more work to do as we realize how early this ecosystem is. So none of this is a criticism of the spec or the ecosystem; it's just that we're so early, and I want to point out where we can gain more powers. Just 10 days ago, on a Friday, we had the first in-real-life gathering of the MCP steering committee, during the MCP Dev Summit. That's how early it is. We hadn't even met before; we just talk on Discord. We finally met in person for the first time to talk about how to evolve the spec and how to evolve the ecosystem,
and all the basics are hopefully covered by the previous talks. This is my first MCP talk where I don't spend half the time just explaining what MCP is. There are roots on the client. There's sampling. There are prompts and tools and resources. There's a really rich vocabulary for building dynamic discovery, persistent resources, and rich interactions, but there's a gap in how this is being implemented. There's this "MCP is just another API wrapper" syndrome happening, because people just want to ship. They want to build products, and they're actually building really excellent products with just tools. That creates a reinforcing loop: once you see how MCP works, you just reuse the same stacks and repeat the same tools-only ecosystem. And there are technical barriers. People do this because there's missing support in the clients, the SDKs, the documentation, and the reference servers.
And the clients reflect this most. If you look at the adoption matrix on the Model Context Protocol website, you see everybody goes for tools, because that's where the most immediate success is. And to be honest, for most of what resources and prompts do, you can build similar flows with just tools. VS Code did the same thing. When we launched our MCP support two months ago now, we started with tools, and we then added discovery and roots, because we're working towards actually reading the spec and implementing all of it. I'm happy to announce that with VS Code's upcoming release, v1.101 (one-oh-one), we have full spec support, and it's already in Insiders now, so download it. That's what I want to talk about here: all the other things that people are not using yet. Yes, that deserves clapping.
Okay, so the message is: if you go with full MCP spec support, you can unlock the rich, stateful interactions that the MCP vision outlines for how agents should work together.
Starting with the most obvious one: tools. I'm not going too deep here, but tools represent well-defined actions, and they mostly map easily to function calling if you're used to that.
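Concretely, a tool is just a name, a description, and a JSON Schema for its arguments. Here is a minimal sketch of the JSON-RPC shapes involved; the tool name and arguments are made up, loosely modeled on a Playwright-style screenshot tool:

```python
# A minimal MCP tool definition, as a server would return it from a
# "tools/list" request. Name and schema are illustrative, not a real API.
tool = {
    "name": "take_screenshot",
    "description": "Open a page in the browser and capture a screenshot.",
    "inputSchema": {  # standard JSON Schema describing the arguments
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Page to open"},
            "fullPage": {"type": "boolean", "default": False},
        },
        "required": ["url"],
    },
}

# The client advertises this to the model as a function-calling tool;
# the model then invokes it via a "tools/call" JSON-RPC request:
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": tool["name"],
        "arguments": {"url": "https://example.com", "fullPage": True},
    },
}
```

The mapping to function calling is direct: the model only ever sees the name, description, and schema, which is exactly why tool quality and tool count matter so much.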
And on the right side, you see Playwright: you start the server, it opens the browser and takes a screenshot. But tools often lead to quality problems, and we all struggle with that. Raise your hand if you've hit an error in your IDE because you couldn't add more tools, or the model ran the wrong tools because you had too many. There's research from LangChain that nicely underlines this and points out three vectors. First, too many tools: the AI gets confused. Second, too many domains of tools: if you suddenly have different properties and different instructions coming with each tool, it also gets confused, versus a pure "this is UI testing" setup. And lastly, repetition: the more repetitions the AI has to go through to run tools and solve a problem, the easier it is for it to get confused as well. So it's really quality over quantity, and clients handle this somewhat by giving you extra controls.
somewhat. They give you extra controls
like in VS Code uh we added actually per
chat tool selection. So there's a little
tool packer and you can actually reduce
down the tools of what you actually need
in the moment versus all the tools. It
has nice keyword accessibility. It's
really quick to set up and will persist
for the session. So that's one way we
have actually mentioning of tools. So
like sometimes you're like pull this
issue and trying to like verb out
whatever tool you're trying to invoke
like why not just use this tool and
please make up all the right parameters
to use it properly and then use the
other tool. So that's what we allow as
well. And then lastly just in this
insiders actually we're shipping
userdefined tool sets and that's more of
a reusable concept. Once you get into
the mode like these are all the tools I
need for a front-end testing flow, then
you just put those into a tool set and
said use my front-end testing flow. So
that's coming as well. So these are all
user controls, but actually that spec
has dynamic discovery built in. And that
means on the fly a server can say but
actually that spec hack are going to
give you these other tools. And on the
right you see GitHub mudmc. It's on
GitHub. You can check it out. And this
starts with a chat mode that I created
that puts the agent into a game master
prompt and it has the Mud MCP installed.
So now, with the mode active, I can go into the agent, switch to MUD, and play the game. What dynamic tool discovery does here is make the toolset aware of which room I am in. It's a dungeon crawler: you walk from room to room, you can go east and north, you can pick up stuff, and if there's a monster, I can battle it. But the tool for battling shouldn't be there when there's no monster. Eventually, I advance through the game, I finally find a goblin I can battle, and the battle tool appears. I can battle the goblin. So imagine this for the MCP servers you want to work on: these capabilities give servers and clients a little bit more than just tools and actions. Take the add-context case: you don't want to return a giant file from your server; you want to return a reference to the file, something the LLM can follow up on or the user can act upon. The other use case is giving files to the user: if you take a screenshot via Playwright, you want to expose it to both the LLM and the user, and resources provide that semantic layer. Or think of a server that wants to understand your Python environment, maybe look at your settings and how you set things up, so it can customize itself. That makes it more dynamic and stateful out of the box.
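As a sketch of what that looks like on the wire: a tool result can embed a resource instead of dumping raw content into the chat, and the client can also list and read resources directly. The URI scheme and contents below are made up for illustration:

```python
# A tool result that embeds a resource reference rather than inlining
# a giant payload. The "screenshot://" URI is invented for this sketch.
tool_result = {
    "content": [
        {"type": "text", "text": "Screenshot captured."},
        {
            "type": "resource",  # embedded resource, per the MCP spec
            "resource": {
                "uri": "screenshot://pages/example-home.png",
                "mimeType": "image/png",
                "blob": "<base64-encoded image data>",
            },
        },
    ],
    "isError": False,
}

# The client can also enumerate and read server resources directly,
# which is what exposes them to the user, not just the LLM:
list_request = {"jsonrpc": "2.0", "id": 2, "method": "resources/list"}
read_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "screenshot://pages/example-home.png"},
}
```
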
The other one: if a server can look at the actual packages and libraries you have installed, that's a great way to customize itself to a React setup versus a Svelte setup, really acknowledging what the user is looking at instead of constantly asking "what framework are you working on?" You're working in my folder, so just look at it. And lastly, think of the whole CI/CD pipeline: that's where MCP servers really shine, connecting the end-to-end of the developer experience. And you can read all of that out.
Sampling. Who has heard about sampling? Who is really excited about sampling? Okay, so you understand what I mean. Sampling is one of the oddly named primitives; if it had a better name, maybe more people would use it. But it's actually now implemented in Insiders, and it's so much fun to use. It allows the server to request LLM completions from the client. What I'm showing here on the right is the permission dialog that pops up to allow the server to access the LLM. Right now it's wired up by default to GPT-4.1. There are more spec improvements coming, like structured output; there are some ideas out there. So there are a lot of things that could make it better, but until now almost nobody had implemented it, so there wasn't really a need. The implementation is here now, so please use sampling. It's a nice progressive enhancement: maybe by default you return the kitchen sink, and once you have sampling, you can do interesting things like summarizing resources into more digestible things. You can format a website you fetched into Markdown for the LLM, or you can even think about agentic server tools that run via the LLM from the client.
Let's look beyond the primitives; there are a few more interesting things. So far we have roots, tools, resources, and prompts, and with dynamic discovery you can update them at any time. The client will send new roots as the VS Code workspace changes; the server can send new tools and prompts as things update and change. So it's already a really dynamic environment. But there are more pain points to address to make these servers really powerful.
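Under the hood, that dynamism is driven by `list_changed` notifications in both directions; a minimal sketch (the root path is made up):

```python
# Notifications carry no "id", so no response is expected. When the
# toolset changes server-side (say, a monster appears in the MUD), the
# server notifies the client, and the client re-runs "tools/list":
tools_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

# The same pattern runs the other way: when the workspace changes, the
# client tells the server its roots changed, and the server re-requests
# "roots/list":
roots_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/roots/list_changed",
}

# A root entry, as it appears in a "roots/list" result (path invented):
root = {"uri": "file:///home/me/projects/my-app", "name": "my-app"}
```
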
One is the developer experience. Who's been struggling with working on MCP servers: debugging, logging, everything? Yeah, I wanted to see hands go up. Yes. Apparently it's really easy, so maybe it's not a problem. Okay, so we now have a dev mode in VS Code, a little dev toggle, and you already get the console, which always works for all MCP servers; once you hit a snag, that's just there. And now it's in debugging mode, so it actually has the debugger attached. Once I run the prompt, which is dynamically generated on the server, I can hit the breakpoint and step through it. That's usually really hard, because your server isn't owned by any process that you run manually; it's owned by whatever client and host is running the MCP server. Because VS Code is both, it can just put the server into debug mode and attach its debugger, and that works for Python and Node right now out of the box. Super exciting, and it has definitely changed how I work on MCP servers.
The latest spec was already called out; I just want to call it out again, because it's so important that people stay on the tip of the spec, follow what's coming, and understand what's in draft. Things in draft only become stable because people provide feedback that they're useful and working. If they're in draft and nobody provides feedback, they will still go to stable, and they might need revisions, like the auth spec did. The updated auth spec, on the right, gives you enterprise-grade authorization. There's a talk tomorrow about building protected MCP servers that I can highly recommend; the speaker actually worked on the auth spec. So if you want to talk to one of the people behind it and dive really deep into OAuth, you can do that. Then, streamable HTTP has been working in VS Code for two releases as well, but it's been really hard to test because there are barely any servers out there. If you work on hosting, you should be really excited about streamable HTTP. Get everybody who is hosting your MCP servers onto it and stop using plain SSE. SSE is still possible as part of streamable HTTP, so you get both benefits, but you avoid the really stateful churn on your servers.
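As a rough sketch of the transport, assuming a made-up endpoint URL: a streamable HTTP client POSTs JSON-RPC messages to a single endpoint and must be prepared for either a plain JSON reply or an SSE stream on that same response:

```python
# Streamable HTTP in a nutshell: one endpoint, POSTed JSON-RPC, and the
# server chooses per-response between plain JSON and an SSE stream.
# The URL and session id are invented; header names follow the MCP
# transport spec.
endpoint = "https://example.com/mcp"

headers = {
    "Content-Type": "application/json",
    # The client must accept both forms: a single JSON response, or an
    # SSE stream the server can use for progress and its own requests.
    "Accept": "application/json, text/event-stream",
    # Optional session continuity, echoing the id the server assigned
    # during initialization.
    "Mcp-Session-Id": "example-session-123",
}

body = {"jsonrpc": "2.0", "id": 5, "method": "tools/list"}
```

This is what removes the long-lived-connection requirement of plain SSE: the server can stay stateless between POSTs and only open a stream when it actually needs one.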
Last one, already mentioned: there's a community registry happening, and I think that's the other big pain point. If I build a server and nobody finds it, what is the discovery experience? Do I send JSON blobs around for people to discover my server? There's a lot of community work to make discovery easy, so a big shout-out to everybody on the steering committee, the community working groups, and everybody involved here. If you want to check it out, it's at modelcontextprotocol/registry on GitHub, and it's all happening out in the open. And lastly, I'm really excited about elicitations. That's actually coming in the next draft (spec reference, spec draft, release, whatever). It's a way for tools to finally reach back out to the user when they need more information. Right now, tools are all controlled by the LLM and get all their information from it. But when a tool actually needs more concrete, specific input from the user, you can throw the user into another chat exchange and ask for it; why not just give them an input field to provide it directly? So again, it's more statefulness in the tools, on top.
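Based on the draft shape of elicitation at the time of this talk, a request might look like the following; the message text and schema fields are made up for illustration:

```python
# Elicitation lets a tool pause mid-call and ask the user for structured
# input via a server-to-client "elicitation/create" request. This follows
# the draft shape; the deployment question is an invented example.
elicit_request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should I deploy to?",
        # A flat JSON Schema describing the input form the client renders:
        "requestedSchema": {
            "type": "object",
            "properties": {
                "environment": {
                    "type": "string",
                    "enum": ["staging", "production"],
                }
            },
            "required": ["environment"],
        },
    },
}

# The user can accept (supplying content), decline, or cancel:
elicit_result = {
    "action": "accept",
    "content": {"environment": "staging"},
}
```

The three-way result (accept, decline, cancel) is the important part: the tool can distinguish "the user said no" from "the user dismissed the dialog" instead of guessing from a missing answer.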
So, your help is needed. Progressive enhancement in MCP is possible. I think we want more best practices out there, maybe even in the reference servers, to show it off. And everything is now ready to be used: there are clients supporting the latest spec that you can run and test against, and those clients are used by real users. As more users showcase how great these stateful servers can be and outline these best practices, the interoperability gap will close and clients will catch up. It's a very fast-moving ecosystem; people complain "oh, you shipped this two weeks after the other one", but it's all coming together, and as people use these features, learn, and bring feedback, it gets better. So make action-oriented, context-aware, semantics-aware servers using the full spec. And lastly, contribute to the ecosystem. If you have the time, read up on some of the open RFCs I shared, like namespaces and search, to see what's coming. Make sure they get into the SDKs you're using by following the issues, and share back on your experience. I think a lot of people misunderstand how much influence they have on clients, SDKs, and everything else, just by filing issues and providing feedback. I help triage a lot of the MCP issues coming into VS Code; we read all of them, we learn from them, and that really drives our roadmap. That probably happens on every other client team out there too. So make your voice heard, along the lines of "everybody should support sampling". There's transformative potential in MCP that we can all unlock with the spec that is already there, so the ecosystem catches up to the spec.
With that, let's go. Feel free to hit us up at the Microsoft booth; there are two VS Code people there, Tyler and Rob, that you can talk to, or talk to me, or talk to your friendly MCP steering committee members.
Thank you.