Building the platform for agent coordination — Tom Moor, Linear

Channel: aiDotEngineer

Published at: 2025-07-28

YouTube video id: UG9IAdmi2Dg

Source: https://www.youtube.com/watch?v=UG9IAdmi2Dg

So, I'm Tom. I lead the engineering team at Linear. Today I'd love to talk to you a bit about our story with AI, how we think about AI as a company, some of the features we've built, and then how we see software development going from here and perhaps Linear's place in that future.
For anybody in the room who hasn't heard of Linear: Linear is a product development tool. It's disguised as an issue tracker, we like to say. We've spent the last five years obsessing over speed and clarity, removing friction, making it the best tool for ICs to use in their work every day. It started as a simple tracker, and now we think of it as an operating system for engineering and product teams to build their products. We're used by OpenAI, Ramp, Vercel, and thousands of other modern software companies you've heard of to keep track of their work.
A little bit of history of our AI journey, as it were. We spun up an internal skunkworks team in early 2023, which was about the GPT-3 era, if I remember rightly. Our initial focus was on summarization and similarity; we were looking at embeddings. Nobody on the team had any AI experience, so we were just jumping in and figuring it out as we went.
One of the things we realized really quickly was that many of the features we needed to build required a really solid search foundation. Almost everything needs to first find the relevant stuff, right? We had Elasticsearch at the time, and they didn't have a very good vector offering; I think they actually had no vector offering back in 2023. So we looked around, and this was a moment when a hundred startups suddenly came out with vector databases: there was Pinecone, there was this, there was that. We evaluated a few, and they all had a ton of trade-offs. After experimenting with a bunch of things, we literally just ended up generating OpenAI embeddings and storing them in pgvector on GCP. It was the most classic Linear decision ever, because it was so pragmatic: just use the solid things.

On that base we shipped some features. We shipped a v1 of similar issues, where we suggest related issues. In hindsight, two years later, it was so naive: we were just doing simple cosine comparisons of embeddings against the vector database.
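That v1 approach amounts to a nearest-neighbor lookup by cosine similarity over stored embeddings. A minimal sketch in Python (the vectors and issue ids here are toy stand-ins for illustration; the real system queried pgvector):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similar_issues(query_vec, issue_vecs, top_k=3):
    """Rank stored issue embeddings by similarity to the query embedding."""
    scored = [(issue_id, cosine_similarity(query_vec, vec))
              for issue_id, vec in issue_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [issue_id for issue_id, _ in scored[:top_k]]

# Toy 2-d vectors standing in for real embeddings
vecs = {"ENG-1": [1.0, 0.0], "ENG-2": [0.9, 0.1], "ENG-3": [0.0, 1.0]}
print(similar_issues([1.0, 0.05], vecs, top_k=2))  # → ['ENG-1', 'ENG-2']
```

In production this ranking would be a single `ORDER BY embedding <=> query` over pgvector rather than a Python loop, but the scoring is the same.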
We also shipped natural language filters. I actually think this is one of the better ones: you can just type, in natural language, "bugs assigned to me in the last two weeks that are closed," and it will produce the filter. It's very one-shot, very naive in comparison, but pretty useful, and kind of hidden, I would say, as well.
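A one-shot feature like this typically prompts a model to emit a structured filter and then validates the output before applying it. A hypothetical sketch, where the prompt, field names, and example model response are all invented for illustration and are not Linear's actual schema:

```python
import json

# Illustrative prompt template, not Linear's actual prompt
FILTER_PROMPT = """Convert the user's request into a JSON filter.
Allowed keys: type, assignee, state, updatedAfter (ISO date).
Request: {request}
Respond with JSON only."""

def parse_filter_response(raw: str) -> dict:
    """Validate the model's JSON: keep only allowed keys, drop anything else."""
    allowed = {"type", "assignee", "state", "updatedAfter"}
    data = json.loads(raw)
    return {k: v for k, v in data.items() if k in allowed}

# Pretend model output for "bugs assigned to me in the last two weeks that are closed"
raw = ('{"type": "bug", "assignee": "me", "state": "closed", '
       '"updatedAfter": "2025-07-14", "extra": 1}')
print(parse_filter_response(raw))
```

The validation step matters: constraining the output to a known set of filter keys is what keeps a one-shot LLM call safe to wire directly into the UI.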
We also have another feature where, if you create an issue from a Slack thread, we don't just pass the text from the Slack message; we try to produce the right issue from it automatically. That was so seamless and hidden that I think a lot of people didn't even realize it was happening.

And we never shipped a copilot. We tried; it was copilot season, but the quality wasn't there. We have this quality bar, and it did not reach it. I don't know if it was a lack of imagination on our team, because we weren't AI-pilled enough at the time, or the capability of those early models; a bit of both, to be honest. But I think this was the right approach at the time, in a way. A lot of people on Twitter noticed. They were like, "Oh, these are very seamless features; you're not slapping AI in our faces." There were literally toothbrushes that said they had AI. I think it's probably much worse now, to be honest, but people appreciated this approach of small, pragmatic value-adds.
Fast forward to 2024. We'd added a few things in between, but it really feels like at the end of 2024 we hit a turning point: o3 coming out, the planning and reasoning models, multimodal capabilities becoming available in the APIs, context windows going through the roof. You have million-token context; you can do crazy things with that. DeepSeek, of course, made a splash. We felt like some of our experiments started to become a lot less brittle, and things actually felt smart. Things clicked for the team a little bit more; we saw how deep this could go.
So the first thing we did was rebuild our search index again. I don't know if you've ever backfilled hundreds of millions of rows of embeddings; it takes a while. We moved to a hybrid search approach. This was something we had really felt was lacking over the year and a half that pgvector sat on its own; we didn't put it in our main database because it was so huge, so it sat in its own thing. We moved to turbopuffer. If you've not heard of turbopuffer, it's a really cool search index; I'd highly recommend giving it a look. And we moved our embeddings over to Cohere after doing a comparison; we felt they were a lot better for our domain, at least, than OpenAI's.
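Hybrid search combines a keyword ranking (e.g. BM25) with a vector ranking. One common way to merge the two result lists is reciprocal rank fusion; this is a generic sketch of that technique, not necessarily how Linear or turbopuffer combine them:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of doc ids; higher fused score ranks first.

    Each doc earns 1 / (k + rank) per list it appears in; the constant k
    damps the influence of top ranks so no single list dominates.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["ENG-12", "ENG-7", "ENG-3"]   # BM25 order
vector_hits = ["ENG-12", "ENG-9", "ENG-7"]    # embedding-similarity order
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# → ['ENG-12', 'ENG-7', 'ENG-9', 'ENG-3']
```

Documents that both rankings agree on float to the top, which is exactly the property you want when keyword and semantic relevance each catch things the other misses.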
So this filled a gap in the search. It actually just finished rolling out in the last two weeks, because the backfill took such a while. But now we thought, okay, we've got a really solid search foundation; what are we going to do with it?

The first thing is a feature we're building called product intelligence. This is basically similar issues v2. Instead of just doing simple cosine matching, we now have a pipeline: it uses query rewriting, the hybrid search engine, reranking of results, and deterministic rules. Out the other side, what we get is a map of relationships from any given issue to its related issues: how they are related and why. What we're able to do with that is expose it in the product: suggested labels, suggested assignees, possible duplicates, and, on things like projects, why this might be the right person to work on this issue or the right project for it. We're working with the OpenAIs of the world; they have thousands of tickets coming in, and they really need as much help as possible to churn through them and get them into the hands of the right engineers.
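The stages of that pipeline can be sketched as a simple composition. Everything below is a stub: the stage functions stand in for real model calls and search queries, and the "open means duplicate" rule is invented purely to show where deterministic rules slot in:

```python
def rewrite_query(issue_title):
    """Stub for LLM query rewriting: just normalize the title here."""
    return issue_title.lower().strip()

def hybrid_search(query, index):
    """Stub for the hybrid (keyword + vector) search: naive substring match."""
    return [doc for doc in index if query.split()[0] in doc["title"].lower()]

def rerank(query, candidates):
    """Stub reranker: order candidates by crude token overlap with the query."""
    overlap = lambda doc: len(set(query.split()) & set(doc["title"].lower().split()))
    return sorted(candidates, key=overlap, reverse=True)

def apply_rules(issue_id, candidates):
    """Deterministic rules turn ranked candidates into typed relationships."""
    relationships = []
    for doc in candidates:
        if doc["id"] == issue_id:
            continue  # never relate an issue to itself
        # Invented rule for illustration: open candidates flagged as duplicates
        kind = "duplicate" if doc["state"] == "open" else "related"
        relationships.append({"id": doc["id"], "kind": kind})
    return relationships

def related_issues(issue, index):
    query = rewrite_query(issue["title"])
    return apply_rules(issue["id"], rerank(query, hybrid_search(query, index)))

index = [
    {"id": "ENG-1", "title": "Login button crash", "state": "open"},
    {"id": "ENG-2", "title": "Login page slow", "state": "closed"},
    {"id": "ENG-3", "title": "Dark mode flicker", "state": "open"},
]
print(related_issues({"id": "ENG-9", "title": "Login button crash on mobile"}, index))
```

The value of structuring it this way is that each stage can be swapped independently: a better reranker or a new rule changes one function, not the whole feature.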
Oh, I think I skipped one. The next one is customer feedback analysis, something we're working on right now. One of the other features of Linear is that you can bring in all of the customer feedback from all of your channels and use it to help decide what you're going to build. Obviously, one of the steps there is: okay, we have hundreds of pieces of feedback; how do we figure out what to build from them? LLMs are, of course, great at analyzing text, and our head of product actually said that our analysis was able to beat 90% of the candidates he talks to in the interview process, in terms of what they can do. So we're able to churn through hundreds or thousands of customer requests and figure out, for a given project, how we might split it up and what features might be created from it, which is pretty cool.
Another feature we've already shipped is a daily or weekly pulse. It synthesizes all of the updates happening in your workspace and creates a summarized pulse from them. We also produce an audio podcast version, which is pretty cool because you can pull open our mobile app and listen to it on your commute. I hope we have an RSS feed for it soon; I really want to just subscribe to it in a podcast player. So although I put "podcast" here, it's not quite a podcast: you have to have the mobile app or the desktop app. But this is great; over breakfast you can catch up on what the team has been up to while you were asleep.

Oh, sorry, that's the visual of it. Apologies.
One other feature I'll go through here is issue-from-video. So many bugs come in as video recordings from customers, right? Drop in the video; we'll analyze it, figure out the reproduction steps, and create the issue for you from that. Maybe this isn't the finest example of the feature, but it's another one that's seamless yet very powerful, and it saves a bunch of time.
Of course, we're baking as much into the platform as we can, but there's a limit to that. We can't put in everything; we don't know every team, and every team is shaped differently. So we want to make this pluggable, and this is where agents come in. The way we're thinking about agents is as infinitely scalable, cloud-based teammates. We launched a platform for this two weeks ago. We figure we're already doing a pretty good job of orchestrating humans; we are a communication tool for humans, after all. And if agents are going to be members of your team going forward, then they should also live in the same place where all of the human communication happens.

So first, hopefully, if the internet holds up (I'm tethering), I've got some videos.
Codegen is one of the first coding agents that integrated with us. Is this going to play? Cool. You can assign Codegen, you can mention it inside of Linear like any other user, and it will produce plans and PRs. You can see here it's going to pop in. This is sped up, by the way; it took four minutes, not 20 seconds. It will produce the PR, and then you can go and review it like you would for any other team member.
This is really powerful, by the way. Because it's an agentic system in the background, you can also interact with it not just from within Linear but from within Slack or other communication tools. You can say, "go and fix this ticket," give it a Linear issue, and it will know how to connect it all up. Or you'll be able to interrupt it partway.
Bucket is a feature flagging platform that integrated with the first version of our agents platform. Let's see if this is going to... oh no. Alrighty. In this case, you can just mention the Bucket agent and tell it to create a feature flag; it will create one for you. You can roll it out and check the status of things, all within here. And because it's agentic, you don't have to go command by command: you could say "create a new flag, roll it out to 30% of users," and things like that.
Charlie is another coding agent with access to your repository. It's really good at creating plans and doing root-cause analysis of bugs. In this case, we have an issue with a Sentry issue attached. We can just mention Charlie and ask it to do some research. It can go and look at your recent commits, look through the codebase, and figure out the cause of the issue. You can imagine immediately how many minutes of engineer time this saves: they can come in and immediately see possible causes and regression reasons for the issue.
The examples I've shown so far have been living in the comment area of an issue. Obviously that's not quite where we want to be in the long term, so we're building additional surfaces for this in the product, so that agents aren't just the same as human users on the team. They're kind of better, because you can see what they're thinking, and I can't see what my teammates are thinking a lot of the time. So we'll have this surface where agents can send you their observations and their tool calls; you'll be able to go behind the scenes of the agent, and you'll be able to interrupt it. And this is consistent across the whole workspace: you have different coding agents, you have PM agents. One other company building an integration with us right now is Intercom, with their Fin agent. You'll be able to say things like, "Hey Fin, I fixed this bug. Can you go and reply to the hundred customers that reported it?" How much time did that just save? We're building this interface out right now and expect to have it in a couple of weeks.

I've been using these features a ton; I've been hammering this for months, and I think it really changes the game. Take the bugs sitting in companies' backlogs: we take for granted that you have this giant backlog you're never going to get to the bottom of. I think there's just not going to be an excuse for that anymore. The agents can tackle it for you. There's nothing to stop you assigning every single issue in your backlog out to an agent and having it do a first pass; maybe 50% of them will be fixed by the end of the week. So we're really in this world now where you can build more, you can build higher quality because more of the grunt work is being done, and you can build faster.
How much time have we got? I'll talk a little bit about the architecture of this. In Linear, agents are first-class users: they have identity, they have history, you can see everything they do, and there's a full audit trail of those events. You install them via OAuth, and once they're installed, any admin on the team can manage that agent and its access. They work fully transparently.
We have a very mature GraphQL API at this point, which basically enables agents to do anything in the product that a human could do, with granular scopes. And we added brand-new webhooks specifically for this: if you're developing an agent with Linear, you'll get webhooks when events happen that are specific to your agent, like somebody replying to your agent, or your agent being triggered on an issue.
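A sketch of what consuming those agent webhooks might look like. The payload field names here (`action`, `issueId`) are hypothetical, not Linear's documented schema; the point is the pattern of acknowledging immediately and dispatching by event type:

```python
def handle_agent_event(event: dict) -> str:
    """Dispatch a hypothetical agent webhook event; return an immediate ack.

    A real handler should acknowledge the HTTP request quickly (the talk
    mentions emoji reactions for this) and do the actual work asynchronously.
    """
    action = event.get("action")
    issue_id = event.get("issueId", "unknown")
    if action == "mentioned":
        return f"ack: replying to mention on {issue_id}"
    if action == "assigned":
        return f"ack: starting work on {issue_id}"
    if action == "replied":
        return f"ack: continuing thread on {issue_id}"
    return "ignored: unrecognized action"

print(handle_agent_event({"action": "assigned", "issueId": "ENG-42"}))
# → ack: starting work on ENG-42
```

Keeping the dispatch table small and explicit also makes it easy to honor the later best practice of responding precisely to what the user actually asked.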
We also added some additional scopes you can opt into, to choose whether your agent is mentionable or assignable. And as part of the future UI I just showed, we're working on a new SDK to be released at the same time, which will make this really easy. Right now you can build all of this on our existing API, but you have to figure out a bit more yourself, I would say. So we're building this abstraction layer, this sugar, so you can very easily integrate with the platform.
I'll finish with some of the best practices we've found working with these partners over the last couple of months. It really felt like we're on the cutting edge here; we were building this while the agents themselves still hadn't launched in a lot of ways. Google and Codex only just launched theirs within the last couple of weeks.

First, respond very quickly and very precisely when folks trigger your agent. If I mention your agent, it should respond as fast as possible. A lot of what we've seen is people using emoji reactions for that right now.
Then respond in a way that reassures the user that the agent understood the request. If you say, "@codegen, can you take care of this?", the response should be something like, "I will produce a PR for this specific thing you asked." Then the user knows: okay, you understood what I meant, great.
Inhabit the platform. This example is a little Linear-specific, but I think it applies anywhere. We really expect that these agents are not Linear agents; they are agents that live in the cloud, and one of the ways they interact is through Linear. It's just a window into their behavior, and hopefully a really well-structured one where they get a lot of context. But if you're working within Slack, you should use the language of that platform, not confuse things, and put great effort into that. One of the things we expect to happen inside of Linear is that if you're working on an issue, you should move that issue to In Progress; don't just leave it in the backlog. You expect that of your teammates, and we expect that of agents as well. And again, natural behavior: if somebody triggered you and then replied in that thread, they shouldn't need to mention the agent again to get a response. It should be natural that if you reply to the agent, it will respond.
Don't be clever: clarify your intent before acting. We see a lot of attempts at one-shots. One pattern we're seeing right now, coming out of a lot of the coding agents, is that they'll form a plan before doing anything, communicate that plan up front, and get clarification on it. That's something we definitely expect to happen.
And finally, be sure you're adding value. LLMs love to just produce tons of text. We don't want to see splats straight out of OpenAI into comments, issues, or any other surfaces. Be concise, be useful, be like a good teammate would be. You can always fall back on asking what a human would do in this situation, and try your best to achieve that.
Cool, that's it. Thanks for listening, and if you're interested in working with us on this platform or integrating with Linear, let me know.