AI Didn’t Kill the Web, It Moved in! — Olivier Leplus (AWS) & Yohan Lasorsa (Microsoft)

Channel: aiDotEngineer

Published at: 2026-04-10

YouTube video id: XZ0boOjtbNo

Source: https://www.youtube.com/watch?v=XZ0boOjtbNo

Hey folks, welcome to this session, where we'll discuss the web and what the recent AI innovations change for us as web developers.
So, let's take some time for a quick introduction. My name is Yohan. I work as a developer advocate at Microsoft, and I'm also a GDE for Angular. I'm here today with Olivier.
Yeah, I'm Olivier. I'm a developer advocate at AWS, and I'm also a GDE, but this time for web.
So, no surprise here, we'll talk about AI. More specifically, in the last six months the rising quality of the models has kind of changed the game for web developers. And it's not just the models, it's also the integrations all around them. AI is now there at every stage of the life cycle of our web apps: for development, of course, but also for debugging and improving performance, and it's natively integrated in browsers.
It's even coming full circle, as agents are increasingly using our web apps alongside humans. That also means we have to adapt our web applications for them.
So, the plan for today is to cover some of the latest progress at all these different stages: coding, of course, but also debugging and tuning our applications using the new local AI APIs that have started to appear in our browsers. And finally, Olivier will show us how to upgrade your web apps for the new agentic web era.
Now for the teaser. It's 2026. The question is no longer "can I code my web app with AI?" but rather "how do I get the best results out of AI coding agents?" I still hear some folks arguing from time to time that they can't get good results with AI, or that it's never exactly what they want when they ask their coding agents. The truth is that today it's mainly a matter of skills. But don't get me wrong: I mean the kind that you install and use with your favorite coding agent.
So, if you've never used them, skills are lightweight plugins described in text format, based on an open specification that's supported by most coding agents nowadays. Basically, they're useful for adding domain expertise for something very specific to your use case, to what you're developing: new capabilities that are not built into your agent. We'll see a bit more about that, especially if you need to customize them. And something very important in this era of coding agents is building repeatable workflows. So, we'll see a quick demo now of why it matters to you.
So, moving over to my VS Code. Let's start with something very simple. Actually, before jumping into the code, let me show you the example application that we've built and that we'll use for most of our demos today. It's called Seine, the name of a French river. It's an e-commerce website we use as an example. Here we have a product page with a description, some reviews, and the possibility to add your own review. And to this product page, I would like to add something new. So, let's try
a very simple prompt. What I'm asking is just to look at the open issues in the repo, and I'm asking my coding agent to implement the first one. This will take quite some time, so meanwhile I'll show you what happens behind the scenes. As you can see, I didn't specifically ask what to use, and it's already trying to run the GitHub CLI.
Let's move over to GitHub. This is the repo for this application. I've created one issue, which is to add a contact page, and I'm describing what I want to see in this contact page for my website. So basically, when I asked to implement the first open issue, it used the GitHub CLI to pull that information, and now it's trying to implement it.
So, I mentioned skills. If I look into my repo, inside the .agent/skills folder you can see that I have a few of them. You've already seen that it used the GitHub CLI to access the issues; this is the skill that implements that. It's not something I wrote myself, it's already available. If you look into the skill.md file, you can see that each skill has a name, basically matching the folder name, and a description. The description is the important part, because skills are not always loaded into your coding agent: they're pulled in depending on what's needed for the current task. So the description is there to tell your coding agent when the skill is useful and when it needs to bring the skill's information into its context. And then you have the body of the skill, explaining what the GitHub CLI does, with a lot of example commands telling your agent how to use it. You can see that I have a few skills here in my repo. For example, I have one that enables better front-end design.
That's something important for us as web developers. I have one that enables using the Playwright CLI; you'll see later that it will record a video of the feature, hopefully. I've also built a few custom skills using the skill creator skill. So yes, there's a skill that can help you create and customize your own skills, and I've built two, actually. One is called public tunnel. What I wanted is this: when a feature is implemented by the coding agent, I want to be able to test it on my smartphone. To do that, I need a local tunnel between my dev machine and the smartphone. And to make things easier, I want to receive the URL directly on my smartphone. So I've built this Telegram send skill, and I've asked the agent to send me a message with the URL so I can test the application directly on my smartphone.
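As a rough sketch, a custom skill like that Telegram one might be described in its skill.md along these lines (the name/description frontmatter follows the open skills format; the bot command and environment variable names here are hypothetical):

```markdown
---
name: telegram-send
description: Use this skill when you need to send a message or a URL to the
  developer's Telegram chat, for example after starting a preview tunnel.
---

# Telegram send

Send a message via the Telegram Bot API:

    curl -s "https://api.telegram.org/bot$TELEGRAM_TOKEN/sendMessage" \
      -d chat_id="$TELEGRAM_CHAT_ID" \
      -d text="Preview ready: $TUNNEL_URL"
```

The description is what the agent reads to decide whether to load the rest of the file into its context.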
So, this is my workflow, and it all works through skills. What I just explained is simply described in my agents.md file, which is now a standard file used by almost all coding agents. I just put in there that every time the agent makes a change to the website, I want it to record a short video (it will use the Playwright CLI for that), run the dev server, create a local tunnel, and send me the URL on Telegram. And basically: don't close the GitHub issue until I confirm it's done.
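Those instructions could look something like this in agents.md (free-form markdown; the exact wording and skill names here are illustrative):

```markdown
# Agent instructions

After every change to the website:

1. Record a short video of the feature using the Playwright CLI.
2. Run the dev server.
3. Create a local tunnel with the public-tunnel skill.
4. Send me the preview URL on Telegram using the telegram-send skill.

Do not close the GitHub issue until I confirm the feature is done.
```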
So, it's still running. I can see that it has already run the Playwright CLI, and it's moving on to creating the tunnel. I'll show you the notification when it's done. Back to the slides for now; I'll come back to it when it's finished. Now, moving on to Olivier. I'll hand things over so he can show you the next step in developing with agents.
Yes, let's see how we can use agents to develop our application. Right now, most of you already do this: you write tests to test your application. Or you do what a lot of people do: you go to the website, open the dev tools, and try to make it work from there. The thing is, the Chrome DevTools are amazing. There are so many things you can do: inspect console issues, animations, the computed layout, the performance. You have access to all these tools there.
Wouldn't it be amazing if an MCP existed for that, so an agent could call it? That's exactly what the Chrome DevTools MCP does. If you go to the Chrome DevTools MCP repo on GitHub, you have the information on how to install it.
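The install configuration is a small JSON entry following the usual MCP server pattern, roughly like this (check the repo's README for the current package name and version):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```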
Basically, an MCP server is a server that hosts tools that can be called by your agent, and it's very easy to set up in any IDE or CLI, whatever you use for your agents. This is what you have to add to install the Chrome DevTools MCP, and that's what I've done here: if I go to my mcp.json, I have my Chrome DevTools MCP entry. And you can see that in my IDE here, I have it, and it has access to all these tools. So everything I can do in the Chrome DevTools or in the browser, I can do from here: click, fill forms, get console messages, network requests, Lighthouse audits, navigate pages, take screenshots, resize the page, and so on. So, let's say for example I open my agent window here and I ask:
"Okay, you know what? Can you run the application and see how the main page works in the Chrome browser?" If I do that and open my tools here, it's going to look at the agents.md and so on, but then it's going to say, "Okay, to run the application, I need to start it." So it's going to have a look at my package.json. That's just a basic agent doing its job: it's opening a new terminal and running the application. Now, to test it, it needs to open Chrome, and that's when it calls the MCP tool. It says, "Okay, I can navigate to a page, I can list the pages." And there, it's opening Chrome; I haven't touched anything. You can see it's a Chrome instance being controlled by automated test software. Now that it has opened Chrome, it's going to test the app, and it's taking some screenshots.
My prompt here was very basic: I just asked, "Can you test the main page?" So it goes there, takes a screenshot, and lists what it finds. Then you can do anything you want. You can say, "Okay, you know what? I want to do something else." Let's kill it, since I don't need it anymore. I can open a new window and say, "You know what? Run the application on 3G, 2G, and a fast internet connection, see how the application performs, and where it slows down." Now the goal is for it to control the Chrome DevTools, including the network panel and the performance panel. So, let's see. Again, this prompt is very basic; you can be way more specific.
Let's give it a minute to see what it goes through. Sometimes, even though I didn't ask for it, it will also generate screenshots or JSON report files here. Okay, it's launching the application and navigating to it. And now you can see test one: it's using the performance start-trace tool, testing on fast internet with no throttling first. That's good.
It captured the baseline, and now it's doing some performance analysis. Every time it calls a tool, you can see which tool it calls in the details. Now it's testing on fast 3G. Remember, I asked for three tests with three different connection speeds. Let's give it another minute. It emulates the network and starts a performance trace. Okay, it has it: performance analysis. And then it's going to do the last one, hopefully. Let's see if it generates a report at the end. Sometimes it does, sometimes it doesn't; it depends. I haven't asked for one, so let's see what it comes up with.
Okay. And I still have my window here. It doesn't open the Chrome DevTools UI, but it's using the tools in the background, doing some performance analysis. LCP image and bundle size, okay. Let's give it a few seconds to see what it's doing. It's checking the images, so it must have noticed that there's something we can optimize with the images, I guess; it's checking their size. And it's analyzing the code. Usually it doesn't go as far as analyzing code.
So, I'm going to give it 15 more seconds. Okay, now it's done. You can see that it analyzed everything, and it gives you a report saying: for every connection speed, here are the LCP, the CLS, the critical path latency, and the render-blocking savings. Then it gives you some guidelines. The headphone image is too big, so downloading it over 3G is not good, and it gives you ways to improve it: you can set the image's fetch priority to high, you have the size of the CSS and some issues in the JavaScript, and you can preload some JSON. So you have all these findings that you can play with to improve your application.
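Those kinds of suggestions usually map to small HTML tweaks; as an illustration (the file names here are made up):

```html
<!-- Prioritize fetching the LCP hero image -->
<img src="headphones.jpg" fetchpriority="high" alt="Headphones">

<!-- Preload data the page needs early in the critical path -->
<link rel="preload" href="/api/product.json" as="fetch" crossorigin>
```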
Yohan, do you have any news on your notification demo?
Let's have a look. Yeah, I did not receive anything yet. We can check; sometimes the agent gets stuck. Ah, it failed because it couldn't find the token. I do have it in a local file; I'm not sure why it sometimes fails to find it. Use it and let me retry.
Yeah. Sometimes agents don't work, especially during demos. You know that.
That identifier is for the messaging application, so it knows which channel to use to send me the Telegram message. So, yeah. Okay, well, let's continue and we'll come back to that later. Oh, and there it goes.
So, let me show it to you. Yeah, I received the notification on my smartphone. We'll open it so you can have a look; let me try to focus in there. You can see here a preview of the video it recorded showing the feature. I'll open the link so we can see what the contact page looks like. And yeah, this looks like a great contact page.
I can also show it to you here, since I have it running locally. You can see we now have this contact page. So it basically did the thing, and I can just test it and look at the video. This was a very simple feature, but you can see why this can be useful for more complex features: it basically allows you to review everything directly from your smartphone, even without having to run anything if you're just looking at the video. I've sent it to myself using Telegram, but you could, for example, use Slack to send it to your coworkers, or anything else you can imagine. It's your workflow.
So, yeah, it's pretty awesome, actually. Moving on to the next nice thing. We've seen with Olivier that your dev tools can be controlled directly by agents through MCP. But did you know that you can also use AI directly inside your browser's dev tools? You can get all sorts of insights right there, from your browser, to help you diagnose and fix issues. And instead of just describing all the kinds of things you can do, let me show you.
So, moving back to the website, let's go to the about page, for example, and open the dev tools. We can see that we have an error. First thing: if you want all the new AI features, you have to go to the DevTools settings and make sure that everything is enabled under the AI innovations tab. It's not enabled by default; you have to do that yourself if you want this AI assistance.
Once you've done that, you can see, for example, that I have a common error in the console saying that some requests have been blocked by CORS policy. And you can see this little icon there: I can click on it directly to get an AI explanation of the error, and even a suggested fix: why this error happens and what to do about it. So directly in your console, you don't have to copy-paste errors into your coding agent or ChatGPT or whatever you're using; it's right there inside your dev tools. And you can see it's pretty extensive, giving you all sorts of hints to try to fix the issue. Let's see where else it can be useful, for example in the network tab. Let's reload the page. You can see one of the failing requests, with a 400 error. Again, we have this "debug with AI" icon. I'll click on it, and you can see that my failing request has been added to this chat interface. I can, for example, ask why the request is failing. So I have a sort of built-in AI chat that can directly access the context of my running application from the dev tools.
So, again, it's analyzing why this request is failing and giving me some hints about what I might try to fix it. Okay, it's telling me it's a bad request, most likely because it's an old endpoint that's no longer there. So yeah, I think it's kind of cool that you can do that directly from the dev tools inside your browser.
You can do all sorts of other fun stuff. For example, let me move over to GitHub; this will be more meaningful there. I'll open the performance tab. This time I'll click on the record-and-reload button there, and it will gather some insights about the web page: how long the different things take to run, and some metrics about them. So I have this trace; it's a bit similar to the kind of test Olivier did earlier with the DevTools MCP server.
For example, I can open the LCP breakdown, and again you can see that I have this "ask AI" button. So I can ask it, using the trace I just recorded on the GitHub website, to help me optimize my LCP score. Running it, it will analyze the whole trace from the website and tell me: okay, the LCP, the largest contentful paint for this page, is okay this time, and here's what's causing most of the delay. And it suggests investigating render-blocking issues. Sometimes it's more verbose, sometimes not; again, this is AI. So, yeah.
It can give me a few hints about what I need to optimize. Here it was telling me it was mostly because of CSS. So you can get some hints if you're not sure where to start, for example to optimize the performance of your website.
But it can do more than that. Let's go back to our Seine website. This time I'll select, for example, this "About Seine" element. I'm in the CSS panel; you can see this is the H1. So here I'm navigating the DOM, and you can see again that I have this "debug with AI" button. Selecting it, you can see that it has added this specific H1 element to the context. And I can ask things; for example, let's say I want to change this boring title and give it a nice gradient. I'll just tell it to make the Seine text have a nice CSS gradient, and I want it in line with the existing color theme, because I'm already using CSS variables.
So, let's continue; I agree that the page can be modified. And you can see that I have a very nice gradient, in line with all the other colors on my website. And there's something even more interesting: from there, I just modified the CSS live in the web page, but those are non-permanent changes. What would be more useful is to modify the source code directly. You can see that I have this "unsaved changes" tab there, with the CSS that was added to my web page, and I can apply that change directly to my code using "apply to workspace". What it asks you, basically, is to add your source folder to the dev tools, and it will then let the dev tools write this CSS modification back to your source code file. So it's not only able to do live modifications on your DOM, CSS, and JavaScript, but also to apply those changes back to your original source code. I think this is very interesting, especially since, as a web developer, I tend to fight with CSS most of the time. Right, Olivier?
>> [laughter]
>> Yeah, and after you do a lot of changes in the Chrome DevTools, you don't remember which lines you have to copy-paste back into your CSS file, and you're like, "I can't find it again," because it refreshes anyway. We've all been through that.
So yeah, I found that very interesting. It also reduces the back and forth you sometimes have between your browser, when debugging or tweaking the CSS, and your coding agent, because you basically have everything in one place.
So, now,
>> Okay, we've seen how to code and how to debug with AI. Let's see how we can include AI in our applications. You may have used AI APIs, either directly from an AI provider or from a cloud provider, but that requires making calls over the internet, usually paying for it, using tokens, and so on. The good thing is that there's a new set of built-in web AI APIs coming directly in the browser. It's still a draft, I think under the W3C. But you can see that we have a lot of different APIs: a summarizer API, writer and rewriter APIs, the prompt API. So basically, let's see how we can use them in our browser. The goal is to have a model running directly on our machine, in our browser.
So, here I have the application that you've already seen from Yohan's demos. I have some reviews, and I can add reviews. You may have seen on some websites now that they give you a summary of all the reviews, so let's see how we can implement that. I'm going to go to my code here, and I have three different demos that we're going to do. The first one is summarize: when I click on this button, I want it to give me a summary of all the reviews here. Right now it does nothing except call this function. Let's start by checking whether the API is available, because not all browsers have it yet. Then I'm going to create a summarizer with Summarizer.create.
Then I'm going to give it some options. First, the type: here it's key-points, which is the default. If you go to the documentation, you can see the different types you can have: tldr, teaser, key-points, headline. For each of them you can choose a different length, expressed in sentences or words, so you can decide which one you want. I'm giving the expected input language of my reviews and the expected output language. You can see this is an array, so you can have several input languages. Then I'm going to give it the context; I'm saying, "These are reviews for an article, given as stringified JSON. Give me a summary of what people think."
Then I'm going to give it a monitor. This monitor function is here to monitor the download of the model. The way this API works is that it downloads the model to your computer, and it downloads it only once, which is good because it's around 4 GB right now. But once it's done, it's done for every website. If your computer is running low on storage, Chrome will delete it, but by default it keeps it.
So I have this monitor, and then I call it: summarizer.summarize, giving it the data, and I return the response, as I'm doing here.
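Put together, the flow just described looks roughly like this; a sketch assuming the current shape of the experimental Summarizer API (option names may still change):

```javascript
// Sketch of the built-in Summarizer API flow (experimental, Chrome-only today).
// Option values mirror the demo; check the spec for current names.
async function summarizeReviews(reviews) {
  if (!('Summarizer' in globalThis)) {
    throw new Error('Summarizer API not available in this browser');
  }
  const summarizer = await Summarizer.create({
    type: 'key-points',              // also: 'tldr', 'teaser', 'headline'
    length: 'medium',                // 'short' | 'medium' | 'long'
    expectedInputLanguages: ['en'],  // an array: several input languages allowed
    outputLanguage: 'en',
    sharedContext:
      'These are reviews for an article, given as stringified JSON. ' +
      'Give me a summary of what people think.',
    monitor(m) {
      // Reports model download progress; the model only downloads once per machine.
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${Math.round(e.loaded * 100)}%`);
      });
    },
  });
  return summarizer.summarize(JSON.stringify(reviews));
}
```

In a browser without the API, the guard makes the returned promise reject instead of crashing the page.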
All right, now let's see what we have. Going back to the site, I click on summarize, and now it's calling my function. And it should give me the result in a second. You can see that the model was already downloaded for me, so it goes from 0% to 100% directly, because I don't have to download it again. And here I have a summary of all my reviews: customers are really praising the headphones for their sound quality and battery life, blah blah blah.
A few things here. For this to work, you have to enable some flags in Chrome. Just search for "AI" or "Gemini" in chrome://flags and you'll find the prompt API, the proofreader API, the writer API, the rewriter API, and so on. Enable them so you can use them in your application.
You also have the on-device internals page in Chrome, where you can see a few things. You can load a model, or load the base model, and just talk to it: you can add images and audio, and play with top-k and temperature. You can see all the event logs; these are the event logs of what I just did with the summarizer. And you can see the model status and how many tokens have been used for each call of the API. So this is a good way to debug. Now, let's look at another API: the proofreader. Let me save.
I'm checking if I have access to the API, and I'm creating a proofreader. The monitor is the same: I'm just monitoring whether it downloads the model. Then I'm giving it the list of expected input languages; here I'm saying it's going to be English, because I want it to correct English. And then I'm calling the proofreader and returning the result. This is useful to fix spelling issues. Let's say I have my write-review box here, and I type "this is is a very good article", all in lowercase, with mistakes.
So I'm writing this, and when I leave the focus, you can see I'm calling the API, which runs the analysis. You can see that it corrected it to "This is a very good article." All the mistakes I made, you can see that it changed them. And actually, you know what, we have a little bit of time, let me print the result with console.log.
Yeah. That was bad.
>> [laughter]
>> "This is good products." Again, leaving the field, and clicking: you can see that it corrected it again. And it also gives me the corrected output and all the corrections, with the start index, the end index, and what it changed. So you can even build correction features on top of your input if you want.
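The proofreader flow, sketched in the same style (again an experimental API; I'm assuming the result fields `correctedInput` and `corrections` from the current explainer):

```javascript
// Sketch of the built-in Proofreader API flow (experimental, Chrome-only today).
async function proofreadReview(text) {
  if (!('Proofreader' in globalThis)) {
    throw new Error('Proofreader API not available in this browser');
  }
  const proofreader = await Proofreader.create({
    expectedInputLanguages: ['en'], // we want it to correct English text
  });
  const result = await proofreader.proofread(text);
  // result.correctedInput: the fixed text
  // result.corrections: [{ startIndex, endIndex, correction }, ...]
  console.log(result.correctedInput, result.corrections);
  return result;
}
```

The per-correction indices are what let you build inline correction UI on top of the input field.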
So those are two examples. I'm going to let Yohan do the really cool demo.
Yes, let's add to it. What Olivier showed you are the very focused APIs, for summarization and proofreading, but we also have access to a more general API, like the ones you may have used with, for example, the OpenAI API or whatever AI provider you use, where you just send it your prompt. So we have this LanguageModel.create API. You can set what kind of input you expect, because you can send it text, and in our case I want to be able to send it images. You can also have audio as an input type, so it's multimodal.
Now that I have that, I basically build my general prompt. What I want is auto-writing of a review, based on just an image that I upload. So what I say is: this is an image of the product, and I want you to generate a description for a review, mention the condition in one sentence, tell how you felt when receiving the product, and also generate a title. And I want the result to be a JSON object with the title and the review content. So this is my prompt, just like when you're using a regular AI API.
The next step is to run the prompt API, session.prompt, to get the response. The input is basically a list of messages. Here I'm using the user message to send our prompt and the input image; in the content, you can see you can mix and match different kinds of content.
>> I want to have JSON as an output, so I need to add some constraints to the response: my response needs to follow a specific schema. Let's define the schema there. It's just a plain JSON schema saying that I want an object with a title that's a string and a description that's also a string. And now I should have everything, except to return the result and also print it in the logs.
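Putting the pieces together, the prompt call looks roughly like this; a sketch assuming the current experimental Prompt API shape (`LanguageModel.create`, `expectedInputs`, `responseConstraint`), which may change:

```javascript
// JSON schema the response must follow: a title and a description, both strings.
const reviewSchema = {
  type: 'object',
  properties: {
    title: { type: 'string' },
    description: { type: 'string' },
  },
  required: ['title', 'description'],
};

// Sketch of the built-in Prompt API flow (experimental, Chrome-only today).
async function writeReviewFromImage(imageBlob) {
  if (!('LanguageModel' in globalThis)) {
    throw new Error('Prompt API not available in this browser');
  }
  const session = await LanguageModel.create({
    expectedInputs: [{ type: 'image' }], // multimodal: 'audio' is also possible
  });
  const response = await session.prompt(
    [
      {
        role: 'user',
        content: [
          {
            type: 'text',
            value:
              'This is an image of the product. Generate a review ' +
              'description: mention the condition in one sentence and tell ' +
              'how you felt when receiving the product. Also generate a title.',
          },
          { type: 'image', value: imageBlob }, // mix text and image content
        ],
      },
    ],
    { responseConstraint: reviewSchema } // force JSON matching the schema
  );
  console.log(response);
  return JSON.parse(response);
}
```

The `responseConstraint` option is what guarantees the model's output parses as the JSON object the form expects.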
So let's test what we just did. Moving over to the browser, let's give it a bit more space. I want to try this prompt API to write the review in my place, so I'll upload this image. This is the headphone I received: as you can see, not in pretty good shape. Let's see what review it comes up with. If I select analyze, and I just want to show the console there, oh, that may be new: it's saying that I didn't specify the language. So yeah, as you saw, Olivier always specified in which language the request was sent.
So, this is the result. You can see the JSON, and you can see that it filled in the fields for me: "devastatingly damaged, broken headset upon arrival," and of course I was disappointed and frustrated when receiving this kind of product. So I can just submit the review, and you can see it saves me some time: just upload the image and have the AI write everything for me. And again, recalling what Olivier already said, this is all using a local model running on the client machine, entirely in the browser. Nothing goes to any remote API; it's all within the browser. So I think that's pretty cool, and this is just a single example of the kind of things you can do, because this local model, as you can see, is multimodal: it can understand images, audio, and text, and you can do all kinds of stuff without having to pay for an external API.
Yeah, these APIs are still new. If we can move to my screen: I just opened the Summarizer API on the MDN documentation, and you can see that it's still highly experimental, but it seems Opera is already implementing it, you have Chrome, and Edge is coming. So the APIs can still change. What we just saw on Yohan's screen is actually new: we didn't have this language exception a week ago when we tried it. So just be careful, these APIs can change. If you implement them in your website, now you know.
So yeah, very experimental, but I think also very exciting for the possibilities it opens for web developers.
I mean, the fact that you can take an image and come up with an explanation of what's in the image. We've seen a Pictionary demo where people draw on the screen, and then it compares the drawing to the image and gives you a percentage: does it look like the same thing? So it's actually pretty cool, without relying on any cloud, any online model, any tokens you have to send or whatever. Pretty cool.
Okay, what do we have next? Back to the slides.
Last but not least, this section is about the fact that AI agents do a lot of work for us nowadays, but we also have to do some work for them.
>> [laughter]
>> This is the part where you actually need some humans to upgrade your websites for agents. Agents are capable of browsing the web, and the new thing is that you have to optimize your website not only for humans (usability) and for good SEO (being discoverable in search engines): now we also have to think about the way agents can consume and use your web apps. That's a brand-new thing.
First, we'll start with a very simple
proposal. We already have robots.txt,
adopted for the search engines that have
been crawling the web for years, which
gives crawlers some rules about how to
navigate your website. We also have
sitemaps, which improve how the content
can be discovered and navigated. Now we
have this new llms.txt proposal, which
is basically a mix of both: it's used by
agents as a map to discover where they
can find the information they need on
your website. It takes the simple
text-file idea of robots.txt and the
role of a sitemap. Let me show you an
example.
I already told you I'm an Angular GDE,
so I'll show you the llms.txt for the
angular.dev website. This is what you
get: a markdown file with a bunch of
links. So if an AI agent wants to search
the documentation, it doesn't have to go
through each web page to try to find the
information it needs. For example, if I
want something about animation, it will
directly guide the agent to one of these
documentation pages, depending on what
it's trying to do. That's the basic
llms.txt premise: making it easier for
agents to find the content they need.
But it can go a bit further, because
there's the llms-full.txt variant, which
basically brings all the content of your
website into a single file. We have one
for Angular too. This one is pretty
extensive, as you can see from the
scroll bar. And if you scroll a bit, you
can see that I even have some code
examples in there. It's all the content
of the latest Angular version gathered
into one single text file that you can
feed to your agents.
For example, one difficulty with coding
agents is that their training cutoff is
sometimes based on an older version of
the framework, because you can't always
retrain the model on the newest content.
Sometimes there are months or years of
delay, so the model doesn't know how to
use the brand new latest version of your
framework, say Angular. So if I want to
code an application using the very
latest Angular features, and make sure
the agent uses the most up-to-date
reference, I can feed this llms-full.txt
file to my coding agent. Then it has all
the up-to-date information, and I won't
get old features from the training data,
like AngularJS examples from 10 years
ago. So this is kind of cool and
helpful.
This example is for a coding library,
but it can be translated to any kind of
content your website provides, so agents
always have the latest information about
it.
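To make the format concrete, here's a minimal llms.txt sketch following the proposal's conventions (an H1 title, an optional blockquote summary, then sections of markdown links with short descriptions). The project name and URLs here are made up for illustration:

```markdown
# My Framework

> Documentation for My Framework, a hypothetical web framework.

## Guides

- [Getting started](https://example.com/docs/start.md): installation and first app
- [Animations](https://example.com/docs/animations.md): animating components

## Reference

- [API reference](https://example.com/docs/api.md): the full public API
```

An agent reading this file can jump straight to the page it needs instead of crawling the whole site.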
Now, last but not least: Web MCP.
Moving back to the slides, I'll hand
over to Olivier for this one, the very
last and fun demo that we have.
Yeah, we went from experimental APIs to
very, very highly experimental ones.
So, Web MCP. Can you share my screen,
please? Yeah. If you want to see how
experimental Web MCP is, go to its
website. This is the website right now.
So, not much.
But the idea is this: agents can browse
the web. As Yohan mentioned, they go and
check the llms.txt files. But more and
more, we're seeing AI embedded in the
browser, what we call an agentic
browser: basically, it's going to browse
the web for you, on your behalf.
Right now we have some tools that can do
that. They open a browser, they click
and browse. But the way they do it is by
trying to mimic human interactions. They
look at the page, either by taking
screenshots or by inspecting the DOM,
and decide, okay, there's a button here
that says I can click on it; they take
its coordinates, go there, and click.
Same for a form. Basically, they're
trying to mimic human behavior, because
websites have been designed for humans,
not agents.
And this is exactly what the Web MCP
proposal is trying to fix. The goal
would be to have something like an MCP
server running in your web application,
so you'd expose tools. And we know that
agents understand tools: they can call
them because they have access to the
name and the definition, and they know
when to call them.
So let me show you. I have the
application here again, with an
add-to-cart button; you can see that I
have something in my cart now. But if an
AI had to do the same, it would have to
open Chrome, try to guess that there's a
button here, get its coordinates, and
click on the button.
But what if it could actually have
access to a tool, like an AI tool, and
just call it? So here I have Chrome.
Chrome is not yet an agentic browser,
but I have an extension here that shows
whether any tools are registered on my
page. So let's see how I can register a
tool on my page. I have a cart tool file
here that basically just imports
addToCart, the function that's called
when I click "add to cart"; it just adds
something to my cart.
And so, I'm going to create a tool. If
you remember creating tools before we
had all these SDKs, this is how we used
to do it: with JSON.
You give it a name.
You give it a description; here it's "a
tool to add items to the cart".
And you give it a schema: it takes an
object with the item name and the
quantity. I don't have a database or
anything, so it's just the name and the
quantity.
Then I have the execute function, which
holds the business code. I get the
arguments, retrieving the item and the
quantity. I loop over the quantity,
because my addToCart only adds one item
at a time; it doesn't manage quantities.
So I loop, adding to the cart what I
was given, and I just return something
like "quantity of item has been added".
And the other thing to do is to
register my tool. So I created my tool,
and I registered it on the navigator
object of my page. Now, if I go back
here and refresh, you can see that my
extension sees that I have an
add-to-cart tool, and I can call it. So
I can say, okay, I want to add, I don't
know, water bottles, and I want five.
And when I execute the tool here, you
can see that it added five water
bottles. It's basically as if an AI did
it on my behalf, by calling the tool
that is registered on my page.
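Pieced together, the tool from this demo looks roughly like the sketch below. This is hypothetical: the descriptor shape mirrors classic MCP tool definitions (name, description, input schema, execute), but the registration surface (`navigator.modelContext.registerTool`) is an assumption about a highly experimental API that has already changed and may change again, and `addToCart` stands in for the app's existing cart function:

```javascript
// Hypothetical sketch of registering a Web MCP tool on a page.
// The app's existing cart logic, simplified here:
const cart = [];
function addToCart(item) {
  cart.push(item);
}

// A classic MCP-style tool descriptor: name, description, schema, execute.
const addToCartTool = {
  name: "add-to-cart",
  description: "A tool to add items to the shopping cart",
  inputSchema: {
    type: "object",
    properties: {
      item: { type: "string", description: "Name of the item to add" },
      quantity: { type: "number", description: "How many to add" },
    },
    required: ["item", "quantity"],
  },
  // The business code: addToCart only adds one item at a time,
  // so loop over the requested quantity.
  execute({ item, quantity }) {
    for (let i = 0; i < quantity; i++) {
      addToCart(item);
    }
    return `${quantity} x ${item} added to the cart`;
  },
};

// Registration, guarded because the API is experimental and may not exist.
// navigator.modelContext.registerTool is an assumed name, not a stable API.
globalThis.navigator?.modelContext?.registerTool?.(addToCartTool);
```

An agent that discovers this tool gets the name, description, and schema, so it knows when and how to call it, instead of hunting for a button's coordinates in a screenshot.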
Let me see, we have a bit of time, so
I'm testing whether I can call it
directly from my IDE here. Yeah, I know,
I have a lot of things open. Let me see.
So basically, I could even do it from
here. Here I have my agent. First, I'm
going to remove the dev tools so I don't
have too many tools. Then I can say:
add three, I don't know, three laptops
to my cart.
So, I'm going to run that, and if
everything works well, it's going to
call the tool that is registered on my
web page in Chrome. Let's see. It said
that there is an add-to-cart tool...
oh, what is it doing?
Okay, so it's calling the tool: add to
cart, laptop, quantity three. And if I
go back to my web page now, let's see:
I should have three laptops here. So
now I've called it either from an
extension or from my IDE, but the goal
would be that your browser can do it
for you, because it's an agentic
browser: it can navigate tabs and act
on your behalf.
So you can see that my tool is running
on my web page, and it's not much code.
But there's another way, I don't know if
it's a better way, but let's say you
don't want to add new JavaScript to your
application, because you're
transitioning, and you just want to test
these MCP tools on your application
without changing the code much.
So let's say I have this form here; the
form is for writing a review. What I can
do is add a tool name, "write review
tool", and give it a tool description;
here I'll say "add a review to the
product". And by doing that, I've
transformed my form into a tool. It
takes all the different inputs and turns
them into arguments. I can even add a
tool param description, like "rate the
product", to describe individual inputs.
For example, on the title input here:
tool param description equals "add a
title for the review". I can add these,
but I don't have to. If I go back, you
can see that I now have my write-review
tool, and this is the schema that was
generated. It's basically the same thing
we had here, when I added the schema
manually.
And you can see that I have my tool:
write a review for the product. It took
all the input options here, the rating
from one to five; the review title with
my description, "add a title for the
review"; but also the review text and
the review photo. It automatically
generated descriptions for the rest, and
to do that it took the nearest label in
my HTML. If I go to the photo input, you
can see I have a label, "Add photo
(optional)", and that's what it put
there by default. So let's see if it
works. I'm going to call the write
review tool.
I don't have a photo. Let me inspect so
I can see the form here. I can set the
title, "awesome review", and the review
text, "perfect product". And I'm going
to execute the tool.
So, is it doing something? No, it's not
supposed to add it there anyway...
What's happening? Ah, my file wasn't
saved; now it finally is. Okay, let's
try again: "awesome product", "love it".
I'm going to execute it. Oh yeah, and
you can see that it filled the form for
me: "awesome product" here, and it added
"love it". I don't have a picture, but
it could have one. But you can also
say: this is good because it filled the
form, but I also want it to submit the
form. So if I go back to my form here,
I'm going to add "tool auto submit". By
doing that, it's going to both fill the
form, let's say three, "perfect", and,
don't pay attention to the typos, when I
click execute tool here, it's going to
fill the form but also submit the whole
form itself. It doesn't even require any
human interaction: it fills and submits.
So, this is highly experimental again.
The API actually changed like 10 days
ago.
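For reference, the declarative version from the demo would look something like the sketch below. The attribute names (`toolname`, `tooldescription`, `toolparamdescription`, `toolautosubmit`) are written as the speaker dictated them, and since the proposal has already changed at least once, double-check the current spelling before relying on them:

```html
<!-- Hypothetical sketch: turning an existing review form into a Web MCP
     tool via declarative attributes. Attribute names may have changed. -->
<form toolname="write-review"
      tooldescription="Add a review to the product"
      toolautosubmit>
  <label for="rating">Rate the product</label>
  <select id="rating" name="rating"
          toolparamdescription="Rating from one to five">
    <option>1</option><option>2</option><option>3</option>
    <option>4</option><option>5</option>
  </select>

  <label for="title">Review title</label>
  <input id="title" name="title"
         toolparamdescription="Add a title for the review" />

  <label for="text">Review text</label>
  <textarea id="text" name="text"></textarea>

  <!-- No toolparamdescription here: the nearest label becomes
       the generated description, as in the demo. -->
  <label for="photo">Add photo (optional)</label>
  <input id="photo" name="photo" type="file" />
</form>
```

Each input becomes a tool argument, and with the auto-submit attribute the agent can fill and submit the form with no human interaction.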
So be careful, but think of it like
responsive design back in the day: you
had to adapt your website for mobile,
and if you didn't, the competition did,
and people wouldn't go to your website
on mobile. I tend to think this is the
same. Make sure your website is prepared
for when all these agentic browsers come
to the market. You can start
experimenting, again, highly
experimental, but start experimenting so
you're ready when all these new browsers
come out.
And the demo worked!
>> [laughter]
>> Nice. Yeah, I expect this is a glimpse
of what we'll have to do as web
developers in the near future, because
agents are coming to the web very fast,
and this specification is already moving
very fast too. If you looked at its
state a few months back, it wasn't even
usable yet, unlike what you've just
shown. I especially like the feature
that upgrades existing forms into MCP
tools, because it makes life very simple
for developers like us, and even for
agents implementing it for us.
Yeah. So, in the end, what we've seen
during this session is basically that AI
makes the life of web developers like us
easier. Whether it's writing the code,
implementing better workflows, debugging
(very important, of course), or
improving the performance. But we also
have to help AI tools make better use of
our websites and web apps. It's a bit
early in the process, but you can
already start thinking about it.
llms.txt is already widespread nowadays,
MCP as a norm is already widespread, and
Web MCP is hopefully coming as the next
big thing. So yeah, you have to prepare
for that, and hopefully it will just
make better web apps in the end.
Yes.
And thanks for attending this session.
We have a QR code here with basically
all the resources: the code for the
demos and the links to the different
resources we've shown during this
session. If you have any questions, you
can ping us on LinkedIn. In the
meantime, have fun! See you. Bye-bye.