Taste & Craft: A Conversation with Tuomas Artman, CTO of Linear, & Gergely Orosz, @pragmaticengineer

Channel: aiDotEngineer

Published at: 2026-04-21

YouTube video id: wjk0ulMAkbc

Source: https://www.youtube.com/watch?v=wjk0ulMAkbc

Awesome. So we didn't count, but hands up if you use Linear. And hands up if you've heard of Linear. And hands up if you want to use Linear.
Awesome, great to see. So we could be talking about Linear, but we're going to talk about something a bit bigger, which is a bit of a new trend. Tuomas, we were saying that things are trending the wrong way right now. What is trending the wrong way?
So, what happens when agents are capable of doing everything immediately for you? The pendulum may have swung too far in the wrong direction, where if you get a feature request, you might now be in a position to just immediately ship it, and that might be the wrong thing to do. I reckon that, hopefully, half a year or a year from now, we'll understand that shipping things without much thinking is a bad thing. What will happen is that because you have this enormous power of effectively shipping every single request that comes in, or every single thing that pops into your head, you will effectively ship software that is not great. Steve Jobs back in the day said that great products come out of saying no to 999 things and yes to one thing, and with AI we might be in a place where it's just too easy to say yes, try things out, and ship it, and get to a very convoluted place where the software doesn't actually work nicely for the end customer anymore, or the user experience gets confusing. We previously had a thing that gated us from doing this, which was that the actual engineering used to be hard. We used to think about the features and the applications we wanted to build before we actually started engineering, because engineering was such a time sink, and it took a long time to ship something. So, yeah —
>> But I want to challenge you a little bit on that. Did we not see this happen before AI, where some companies were already just shipping and stacking a bunch of features? What are you seeing that's different right now? Are we actually seeing more companies do more of this, I don't know, feature-factory thing?
We had a common experience at Uber, where we worked together, where we went through hypergrowth. The thing about Uber was that it was a winner-takes-all market, and Uber was going up against the competition, back in the day, in the US, and you just had to ship immensely and outpace the competition at all costs. And what I saw at Uber was hypergrowth that I never want to go through again: at all costs, fighting fires, keeping the infrastructure running, scaling as quickly as possible, trying out everything, and trying to come out as the winner on that front. I see the analogy to AI nowadays, because when everybody has the capability of shipping tons of functionality, you are always in a competition with somebody else. Your competition might be a small team, or even one person, that is very capable of using AI to ship and build a product that has the same feature set as you do. In that world, I think it becomes important to stand out in a way where you build tasteful software and high-quality software, and thus maintain some sort of competitive advantage over your competition.
>> So at Linear, even before AI came out, you were building tasteful software and focusing on those things. But then these tools came out and they became more powerful, specifically since Claude Code came out, and now we have Opus 4.5. You should be able to ship faster. Your engineering team — you're the CTO — should be able to ship a lot faster. What are you telling them? What should they be doing inside Linear with this capability? Should they be slowing down? No, right? What's going on inside Linear? Tell us.
>> Well, yes and no. We still think about every single feature that we put out. We don't go down the route of just trying out prototypes. We want to maintain that design angle that we have and think about the user experience, and we still say no to a lot of customer requests. A lot of the time hasn't gone into just engineering; it's about figuring out what the customer wants. We do get a ton of feature requests. We usually never ship them as such. What we really want to do is get a lot of feedback from our customers, talk to our customers, figure out what their actual problem is, then group that together and figure out what is actually the root cause of these feature requests, and come up with a solution that is perfect for that particular group of feature requests. And that takes time. AI can help you so much — obviously it can go through all of those requests, give you a summary, and maybe point you to different groupings — but it still takes time to figure out what the right thing is. And then you go into design and figure out how to implement a great UX around the functionality you want to build.
Yes, we want to move faster, and we are moving faster. There are certain aspects of building a product that have accelerated a lot. One is, for example, fixing bugs. Every product has bugs, the inflow of bugs is effectively constant, and those are much easier to fix now. About 10% of our bugs are automatically fixed by a single-shot AI instance: when a bug comes into Linear, be it from our engineers reporting it or a customer reporting a problem, 10% automatically end up with a PR that is automatically landed without an engineer doing anything. Over time that will go up; I do foresee a future where it gets closer to 100% in the next few years. So that's somewhere you can accelerate your building: hand off these tasks that don't really require much thinking, design expertise, or thinking about functionality to agents.
>> You care about quality — you can tell with Linear, and you always have. Let's talk about Claude Code. What do you think about Claude Code? It's a safe space.
Yeah, hopefully it's a safe space. Anthropic said that all of the functionality in Claude Code has been coded by Claude, and I think it shows. If you truly use Claude Code, either the CLI or the desktop application, you can spot problems and small bugs — I would say they're not really just quality fixes, they're actual bugs — in effectively a few seconds. It is a bit slow. It might be functioning in a way that you don't really see. And to me, that's a side effect of moving so fast. Obviously, they're again in a competition — with OpenAI — and they need to ship features and move really quickly, because it might be a winner-takes-all market again. And the side effect of that is that the quality just isn't there.
>> Yeah. Well, this was not a great acquisition pitch, so I don't think you're going to get there. But I absolutely agree that you can see some of these things. How do you measure quality, though? We talked about this just before we started: how we tried to measure quality at Uber, and how that influenced what you learned about what you can measure and what you cannot.
Uber is a good example of where it is immensely hard to measure quality, and therefore you sort of don't. At Uber, we had these five big metrics that everybody was looking after and looking to improve. The big one was revenue: it is effectively a transactional application, and the more revenue you generate, the better.
>> The other ones were, like, trips taken.
>> Trips taken.
>> I think the quality of the ride was one as well.
>> Also time to first trip, from sign-up to the first trip. So we had a few golden metrics.
>> Right, but the revenue one was what everybody looked after. So when you shipped a new feature, or shipped something totally new like Uber Pool, for example — I don't know which one came first, Lyft Pool or Uber Pool; I think Uber started it and then Lyft came around — obviously, if you ship a new feature that makes the price of taking a trip lower, it will increase your revenue. So how do you measure quality in that? You simply don't. If there's no other platform that provides you with a pool ride that is inexpensive, then you don't really need to have quality. And that was my feeling throughout. At Uber we had engineers that cared — at least in the beginning, we had engineers that cared about quality. It was up to us to figure out whether something we shipped was great or not.
I still remember when I joined, in 2012, I think. I put up a first PR, and back then the Uber application used to have this pin in the middle of the screen with an ETA of when your trip is going to arrive. And I made some changes to the margins of the map. And the PR came back from an OG engineer who was on the team from the get-go — I think he was the first iOS engineer — and he was like, "Oh, this pin is off by two pixels." I was like, "You measured it?" "Oh yeah, sure, I measured it." And I measured again, and yes, it was two pixels off. So I had to move it up by one pixel. And that was the thing: nobody would really care, nobody would see it, but people were keen on upholding the quality. And that's why, at least in the beginning, the Uber application was pretty performant and of the highest quality. But then, once you have a big enough team and you've got these incentives of just increasing revenue, you ship new features as quickly as possible, and quality is a thing that doesn't affect your revenue
until it does. So what happens? Uber ships Uber Pool; Lyft comes ahead and ships Lyft Pool as well. So you've got two competing products that effectively have the same price point and do the same thing. You can choose either one of the applications, and over time — this is my theory, and it's why we want to build Linear into this high-quality tool — people will pick the one that is of higher quality. It might take a while. People might stick to Uber and then try out Lyft once a year or something, open it up and go: oh, this user experience actually feels better; I feel like I'm getting the car faster, even though the price and the product they sell is the same. So over time you will start losing your users. And it will be a gradual slip. There will be no A/B test that you can run to figure out whether you should invest in quality. It'll just happen over time. And that's the danger of it. If you build a bad-quality product, you open yourselves up to being leapfrogged — well, not leapfrogged, slowly overtaken — by the competition.
>> You do something really unique at Linear related to this that I've never seen before. It's called Quality Wednesdays, and I sat in on one of your Quality Wednesdays. The whole engineering team gets together — it's a fully remote team, so everyone just dials in — and it was 30 minutes, and every engineer — I think there were about 25 engineers on that call — would show one fix that they did that was quality. It went from a one-pixel change — it was literally one pixel — to "oh, I just made our back end way more efficient, using fewer resources," and it was just boom, boom, boom. I think it took about 37 minutes for the 25 people, but each one was less than two minutes. How did this start?
>> And was this you?
>> It was me, yeah, for sure. The big one — to go back, I think it was three or four years ago — we have this thing in the application, and if you use it, you can spot it: every single highlight needs to highlight instantaneously when you hover over it, because that makes the application feel fast. But when you hover out, there needs to be this very quick fade-out of the button, because that makes the application feel smooth. It has to be this instantaneous highlight, and then, over 150 milliseconds, a fade-out, because that adds a bit of quality to the user interaction. And that was in place since the beginning, the early days.
Then I got frustrated, because I had to keep pointing this out to engineers: if you're not looking for that very small, minute detail, you're just not going to find it. You implement new functionality and you just forget to implement it, or you don't even see it if you don't know what you're looking for. So what I did, at one of our off-sites, because I got frustrated with reporting these, was: let me show everybody what they should be doing and how they should be implementing these small quality fixes. I took a very small portion of the application, where I had noticed the highlights were missing, brought the team together, and told them, let's spend an hour trying to figure out what's wrong with this particular view — in my mind it was just the highlights — and everybody dug in. And in one of the view option menus, we found 35 problems with that tiny UI. And I was like, holy crap. I didn't see those. I had no idea we had all these small problems that you wouldn't notice when you're not really looking. So from that, what I thought we would want to do is have everybody always chime in and try to find problems in the product, because apparently we were full of small quality problems. If a small menu has 35 things to fix, then the rest of the application has thousands. And to date, we've probably fixed 2,500 or 3,000 of these small, minute details in the application. And that's how it has become better and has the highest quality bar.
That was the start of it. But then we realized there's a nice side effect to this. What we told people is that every Wednesday you have to find a problem yourself — we won't hand them to you; you have to go into the product and find it. So people started doing that every single week, finding a problem. In the beginning it was easy; then it became harder, because the number of quality fixes went down, but people kept on finding problems in the product. And the side effect was that whenever they were building something, even a totally unrelated feature, they were always on the lookout for these small quality fixes, because they knew they had to come to the next Wednesday meeting with a fix.
>> To have a fix to show. Yeah.
>> Yeah. So they were always looking for those, and that meant they were introducing fewer and fewer regressions — these small quality regressions — into the product anyway. So if you think about quality all the time, and you are aware of quality, then you're bound to make fewer mistakes.
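The hover rule Tuomas describes — highlight instantly on hover-in, fade out over roughly 150 ms on hover-out — maps to an asymmetric CSS transition. Here is a minimal sketch; the class names and values are hypothetical, not Linear's actual stylesheet:

```typescript
// Asymmetric hover transition, expressed as a CSS fragment.
// Highlight appears instantly on hover-in ("feels fast"), but fades
// out over 150 ms on hover-out ("feels smooth"). Illustrative only.
export const hoverCss = `
.menu-item {
  background-color: transparent;
  /* Applies when leaving the hover state: the highlight fades
     back out over 150 ms. */
  transition: background-color 150ms ease-out;
}
.menu-item:hover {
  background-color: rgba(255, 255, 255, 0.08);
  /* Applies while hovered: a zero-duration transition makes the
     highlight show up instantaneously. */
  transition: background-color 0ms;
}
`;
```

The trick is that the `transition` declared under `:hover` governs entering the hover state, while the one on the base class governs leaving it, so the two directions can have different durations.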
>> I mean, this practice — I haven't seen it elsewhere, and it seems both awesome and also pretty aspirational. So if you're a small startup, you should probably try it out if you can, because especially nowadays, with agents, it shouldn't be that difficult to do.
>> If you're a big startup, you should try it out even more.
>> There we go. But one thing that is not as aspirational, and a lot easier to do — especially now, though you've been doing it since before agents — is the zero-bug policy. Tell me about this. What does a zero-bug policy mean for you, and what does it mean in practice? You surely have bugs, right? I'm just playing devil's advocate here.
>> Sure. Zero-bug policy literally means that if a bug gets reported, it gets assigned to somebody automatically, immediately — using agents, obviously; they will find who created this bug or who has been working in this area — and that becomes your highest priority. You drop everything else. The morning you wake up, you go to your My Issues list and see a bug assigned to you: that's the first thing you pick up, and you fix it. Or you can also decide not to fix it — that's important; not every bug gets fixed. If it's super hard or gnarly and it applies to one out of 100,000 users, you probably shouldn't waste your time on it. But every single bug gets addressed immediately.
The start of this came from the idea that bugs are created at a constant rate at every company. When you create features, when you create functionality, when you engineer, you will be creating bugs. And most companies — and we, prior to our zero-bug policy — put them in a backlog: when we get some time, we'll fix them. And what happens over time is that your product gets worse and worse, and at some point you go, oh man, we've got 500 bugs in the backlog; we need to do something about it. And that's when you start fixing from the top.
And what happens is that the rate at which you have to fix bugs is, again, constant. It doesn't matter whether you fix them two months from now or immediately: once you hit that threshold of "we've got so many bugs," you're effectively fixing all the bugs that come in anyway — just two months later. So with that small notion in mind, there's a very small trade-off you have to make to get to zero bugs. If the rate at which you have to fix bugs is constant, all you need to do is stop development of new features for as long as it takes to bring your bugs to zero, and then enforce that you're going to keep fixing your bugs — because it's not more effort to fix bugs immediately than to fix them three months from now, if you care about the overall sum of your problems. For us, that meant we spent effectively three weeks not working on any new functionality, just fixing bugs, getting the count down to zero. And from there on out, every bug gets fixed within seven days — usually within two or three hours. And what that means for users: users get super excited when they report a bug and two hours later get an email saying, oh, we fixed it; if you refresh your browser, we've got it covered for you. That makes your users super happy, because you don't really have that experience too often with companies.
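The auto-assignment step — route an incoming bug to whoever has been working in the affected area — can be sketched as a lookup over blame data. Everything below (the types, the blame map, the heuristic) is a hypothetical illustration, not Linear's actual agent pipeline:

```typescript
interface Bug {
  id: string;
  files: string[]; // files the report or a stack trace points at
}

// Hypothetical blame data: most recent author per file.
type BlameMap = Record<string, string>;

// Pick an assignee: the author who most recently touched the most
// affected files. Under a zero-bug policy, that person drops
// everything and either fixes the bug or explicitly declines it.
export function assignBug(bug: Bug, blame: BlameMap): string | null {
  const counts = new Map<string, number>();
  for (const file of bug.files) {
    const author = blame[file];
    if (!author) continue;
    counts.set(author, (counts.get(author) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestCount = 0;
  counts.forEach((count, author) => {
    if (count > bestCount) {
      best = author;
      bestCount = count;
    }
  });
  return best; // null → fall back to manual triage
}
```

For example, `assignBug({ id: "BUG-1", files: ["menu.ts"] }, { "menu.ts": "amy" })` resolves to `"amy"`; with no blame data, it returns `null` so a human can triage.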
>> Okay, curveball question. If I'm working at Linear and there's a Quality Wednesday coming up and I get assigned a bug, does that count?
>> No, that does not count. That's a defect. You have to find a quality fix.
>> Oh man.
>> Bugs are separate; they get handled immediately. And now, with AIs being capable of at least pointing you to where the problem is and helping you immensely in fixing bugs, I think literally every company should have a zero-bug policy. It doesn't make sense not to have one.
>> One thing: when we talk and think about AI agents, we think about speed, about code generation. We rarely use "quality" and "AI agents" in the same sentence. Why is that? With the tools getting better, should AI agents not be able to have feedback loops? They can write unit tests. Should they not be able to produce better code, better features, better UIs even?
>> No.
They don't feel; they have no taste. They simply don't — they are not human beings. I think the last bastion we have to tackle at some point — and maybe we'll get there, maybe we won't — is tasteful AI: being able to create UI that is purpose-built for the specific feature you're building, for the product you're building, that has great design, and that has the ability to figure out what a user feels when they use your application. To give you an example, AI doesn't have a concept of time, and currently, how it interacts with your browser is effectively timeless: it takes screenshots, or it looks at the DOM. If you ask it to create a very high-performance application, yeah, it can go back and look at all the things that have been written about it — go to Vercel to host your Next.js app, use caching, or whatever — but it won't be able to use your application and get frustrated because a click took two seconds. It knows that one second is better than two seconds, but it doesn't know whether two seconds is too slow.
The other aspect is that it doesn't really see, and it doesn't know what, for example, a good UI animation is. Emil, one of our design engineers, just yesterday posted on X this trial of having agents build certain animations for certain functionality — bringing up a pop-up, highlighting a button, moving things around — and the agents were totally capable of doing all of it. Then he took a manual step: well, if I now take it and just improve it and make it feel good, here's the outcome. He has it up on his site, so you can try out what the agent did and what he then fixed. And at least to me — and I hope to everybody else — his animations just feel natural; they feel well designed. Whereas the agent did all the right things but used an ease-in on an animation, or did it a bit too slowly or too quickly, and it just felt unnatural.
>> I wanted to talk a little bit about the culture at Linear: what it's like working there, how you created this team that really cares about quality and good customer experience. What are things that you do specifically there? Can we talk about what engineers are exposed to when they join the company, from day one?
>> Yeah, we
hire for that specifically. We have a specific hiring process where we make sure we get people that think like us and want to build high-quality software that is beautiful. Most of our engineers are product engineers. We obviously do have technical challenges — we have a synchronization engine, we have scale, we need to scale our infrastructure — but what we wanted is to have most of our engineers just be focused on the product, building features and functionality for customers and engaging with customers at a very high level. So first of all, we hire for that. And we have a work trial that we do with every single hire, which lasts several days, right?
>> It's a full week.
>> It's a full week. We obviously pay for that effort, but we work with the person for a full week. They usually implement a greenfield project, or product, or feature. Sometimes they even ship it after that week, which is pretty amazing. But what we want to get out of that experience is just to see them drive a product from start to finish and figure out what is needed.
>> So, a pushback here would be: hang on, a whole week? You pay for it, sure, but someone has to take time off for it. A bunch of great people will say, no, I either cannot or will not do that.
>> Well, that's totally fine. Those people didn't want to be here in the first place.
>> So it's self-selecting. But after you go through this pretty rigorous hiring process — it's a lot longer than, I think, any other process; I mean, you have a day-long process at most places, or interviews stacked across a few days —
>> Did you see any different result than, for example, when you hired at Uber? When you were hiring at Uber, you did the usual five or six interviews and so on. What was the difference in outcomes that you're seeing?
Certainly. We've had very few misses. Sure, among the people we've hired there are always a few where we just missed something — and going back through the loops, there were inklings of us being a bit uncertain, and we went ahead and hired the person anyway. But those are just a handful of people. I think most of our engineers are really excellent, and our engineering bar is super high and constantly increasing.
>> And then, once those engineers join, you told me something interesting about the Slack channels and customers, right?
>> We do have Slack channels with all of our big customers. They're open to anybody; anybody can jump in, and most people do. You browse through customer requests; you browse through what problems people have. And we also record every single meeting that we have with customers — and we have a lot of meetings, not only on the CX or support side; our PMs are constantly talking with our customers to figure out what we should be building next. All of those are recorded, and any interesting points are tagged, so anybody can go in and look at them, and even search for certain functionality and figure out what customers are saying and what they want. So everybody gets exposed to customer needs, and that is super critical if you want a great product.
>> It's almost like, if you join Linear, you get this fire hose of customer feedback, and you cannot really escape seeing and feeling the customer pain, or joy, or whatever it is.
>> Certainly, yeah. Because we build it for customers. Linear started off as a product we built for ourselves, right? We as engineers were the primary customer. We've grown out of that: we build it for larger corporations and enterprises, and we're no big enterprise. So we have to build things that we wouldn't use ourselves, and the only way to do that is to talk with your customers and figure out what they need.
>> If you had to look a year ahead — you sometimes have strong opinions, so let's bring those out — how do you think the role of the software engineer, or product engineer, will change? Because we do have these powerful tools; they're getting better in certain areas, and maybe not so much in others.
>> I think everybody will become a product engineer, in some sense. If you think about how AI has progressed — go back four years: it wasn't able to write a single line of code, and now it's commandeering code bases. Go four years ahead, and if you still believe the exponential growth is there and we don't hit a wall — which I don't know if we will — but if it keeps on growing like this, you won't need engineers that just pipe data from one place to another. You will still need engineers who know what a customer wants, what a good feature looks like, and what a good user experience looks like. So I think engineers will have to become product-oriented and product-focused. They will have to be sort of mini PMs who talk with customers, engage at that layer, and then can implement the functionality that their customers want.
>> Oh man. So, you know, I remember in the 2000s, as a programmer, you could just use one language. Then it was multiple languages; then you got the QA job; then you got the DevOps job; and now you're saying we get the product job, and the customer support job as well.
>> Oh, everything else has been dropped now. You just need to do the PM job.
>> Okay. And as closing advice: you are hiring for product engineers — you said you actually hire for that now. Not everyone has the opportunity to work in a product engineer role right now. But if you're a software engineer, what are things you can do to grow this product sense, to change your work to be closer to what a product engineer does?
>> I mean, it's all about getting closer to your customers, whether you're working at a company or just building stuff. The best way to learn is to actually get your hands dirty: try something out, build it for yourself. That's the easiest part. You can think about what you need, you can build it, and you learn from that experience. Then you ship it to the world; hopefully somebody else uses it as well, and then you've got your first customers that you can learn from about whether you're building the right thing or not. Obviously there's literature as well. You can read through Apple's Human Interface Guidelines — that's the best book if you want to do good UX; just follow what they say and you'll be good. And yeah, those are the two big things.
>> Awesome. Well, Tuomas, thank you so much.
>> Thank you.