The Bitter Layout or: How I Learned to Love the Model Picker — Maximillian Piras, Yutori

Channel: aiDotEngineer

Published at: 2025-07-21

YouTube video id: BZtD0yYAgCQ

Source: https://www.youtube.com/watch?v=BZtD0yYAgCQ

The Bitter Layout, or the alternative name for this talk: How I Learned to Love the Model Picker. And hopefully you will too.
So the idea for this talk started when I was perusing all the AI-first apps I use all the time and realizing how similar they're all starting to look; there's a very consistent layout between them all.
And it's not just the chatbots and the answer engines. It's also the creative tools: coding assistants like v0, and even Canva. They're all starting to use this very similar layout. They've got an input field, they've got this turn-by-turn UX, they've got this dropdown with just way too many models to pick from. It all feels like they're retrofitting stuff into this chatbot UX.

But don't worry, this talk is not about whether chat is the future. I think we've all heard that enough times; at least I have. And swyx did a good job of humbling all designers with a tweet where he basically called out the design thought leaders who say chat is not the future, show off their cool demo, and then shortly after just go back to using ChatGPT. Fair point.

I think right now we're in a bit of a dualistic future of AI UX, so I've called this first section Schrödinger's Chat. All designers know how many usability issues chatbots have, and yet we all still use them every day. So it's as if, obviously, they are the future, but at the same time, obviously, they shouldn't be. I won't go into my thoughts on whether they should be, but I'll do some anthropology here to get everybody up to speed in case you're not familiar with the great chatbot debate.

I'm not sure people realize how long we've been debating this, but I can track it all the way back to 2022, to this post from Linus Lee. It's a great post, by the way; it still holds up today. He essentially says he doesn't think exposing the raw text completion is really the right paradigm long term. Note the date: May 2022. A couple of months after that, ChatGPT arrives, essentially saying "hold my beer." If that's not the right UI paradigm, it certainly didn't bother them, and I think a lot of other designers have taken note of the escape velocity of that.

Still, the next year, in May 2023, we saw other great posts from people like Amelia Wattenberger, and the next month Maggie Appleton, making great arguments about why chat is not the future. I think these held up pretty well. But at the same time, you obviously have other designers arguing how intuitive chat is, and as ChatGPT hits escape velocity, we see everybody start to meme the defense of chat, which is: just look at the chart. It's obviously working, right? And I think there's something interesting about that. If you can come to the debate with a meme, it means there's something a bit intuitive about your argument. So, fair enough.
But still, even in March of this year, people like Julian Lehr were writing very good reasons why chat should not be the future. He shows clock speed relative to the different interface paradigms, and it's very convincing; he pretty much says chat is a bottleneck. But at the same time, I'll probably still go back to using chat after this.
So Schrödinger's chat remains. That segues into the next section, which is called Models and Modes. This is about the model picker, the other UI paradigm that's been developing alongside chatbots' popularity. I'm sure everybody knows what it is, but to be clear, it's the dropdown where you have to select from a million different models.

I made this section in memory of Larry Tesler. If you're not familiar with him, he's kind of a big deal: he invented copy and paste, among other things. Another thing he was famous for was apparently saying, "Don't mode me in." I don't actually know that he ever said it, but the quote is attributed to him, and he certainly hated modes. If you're not familiar with modes, a mode is a setting in a UI where, once you flip it, your inputs suddenly map to drastically different outputs. Tesler thought modes were unintuitive and wanted everything to be modeless. To give a clear example of a mode: caps lock. You hit caps lock and now your keyboard performs differently.
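As a concrete illustration (hypothetical code, not from the talk), here's why modes are disorienting: the same input produces different output depending on hidden state.

```python
class Keyboard:
    """Minimal model of a modal input: caps lock changes how keys map to output."""

    def __init__(self):
        self.caps_lock = False  # hidden mode state the user must track

    def press(self, key: str) -> str:
        if key == "CAPS":
            self.caps_lock = not self.caps_lock  # flip the mode, emit nothing
            return ""
        # Identical input, different output, depending on the mode.
        return key.upper() if self.caps_lock else key.lower()

kb = Keyboard()
out = kb.press("a")        # "a"
kb.press("CAPS")           # flip the mode
out_modal = kb.press("a")  # "A": same keypress, different result
```

The usability problem is exactly that hidden `caps_lock` flag: nothing about the input itself tells you which output you'll get.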
And then there's a more recent mode. I'm not sure how many people would agree with me on this, but I think the model picker is a bit of a mode selector as well. We're dealing with stochastic output in generative applications, so it's not a discrete change in a setting, but once you switch to a different model you get a whole step change in output. To me, that's a mode selector.

Here's a quick video to illustrate the point. It's an older version of ChatGPT, but you can see I'm trying to use certain modes and the model doesn't support them, so I have to go through this menu and find which model allows me to use this mode. The argument here is that we're putting modes on top of modes: you now have to match models to modes, and it's not super intuitive.
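That matching exercise can be sketched in a few lines. Everything here is hypothetical: the model names and the capability table are made up for illustration, not any provider's real API.

```python
# Hypothetical capability table: which modes each model supports.
CAPABILITIES = {
    "fast-mini": {"chat"},
    "omni": {"chat", "vision", "voice"},
    "reasoner": {"chat", "deep-research"},
}

def models_for_mode(mode: str) -> list[str]:
    """Invert the table: which models support this mode?
    This is the lookup the user is currently doing by hand in the menu."""
    return sorted(m for m, modes in CAPABILITIES.items() if mode in modes)

models_for_mode("voice")  # → ["omni"]
```

The dropdown forces the user to run this lookup in their head; the interface could just as easily run it for them.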
They've actually done a great job of redesigning this lately, so this is certainly not throwing shade at OpenAI; I think they have a great design team. I'm just trying to illustrate a moment in time when this was super frustrating to me. You kind of just want to tell the model, "Here's my use case; which mode and which model should I use?" But who knows, maybe the model would just pick itself.
This really illustrates the flexibility-usability trade-off, a design principle that says you're constantly deciding how well you understand user needs. If you can pinpoint them, you can create a very usable, optimized UX for them. But as you try to make a more flexible system, usability tends to decrease, because you're increasing the number of edge cases, the complexity, and the requirements. I think this trade-off doesn't get talked about enough in the "is chat the future" debate. We generally talk in absolutes, but it's really less of a yes-or-no and more a question of time frame: what trade-offs are we talking about?
So I'd like to segue into the next section, which pushes the idea that when we design interfaces, we really need to consider the zeitgeist we're working in. What are the trade-offs of the time? What constraints are we working with? What time frame could an interface be good for? The subtitle of this one is "the context of all in which we live and all that came before us," an Easter egg for anybody who remembers that.

To get into this section, I'd like to lean on a theory from The Innovator's Solution, the follow-up to The Innovator's Dilemma. The book offers guidelines on how to approach building products through the idea of product architecture. The architecture is, generally, how the components in a system interact with, or interface with, each other. Once you understand how they interact, you can map them onto two distinct ends of a spectrum: integrated architectures and modular architectures. Integrated architectures are more common in early-stage disruption; you have proprietary, highly optimized, interdependent technologies, and they allow vertical scaling. Modular architectures, by contrast, generally appear when technologies start to commoditize; components become interchangeable, which allows horizontal scaling.

The key point of the theory is that you're never really on one side of the spectrum or the other. You're bouncing between the two, and different parts of an industry's tech stack are commoditizing and de-commoditizing all at once. So as a designer, or anybody else building, you pretty much have to figure out where the strategic points are to be integrated versus modular. Their theory uses IBM as an example: IBM started out making mainframe computers, very integrated; then shifted to personal computers, which it made more modular; and of course the computer itself ended up commoditizing at some point, and IBM got out of that business.
So thinking through which parts of the AI industry are commoditizing and de-commoditizing today can help us think about how to design interfaces. The main question everybody asks when you pose this prompt is: are the models themselves commoditizing? This is the big topic everybody debates, and it brings us to The Bitter Lesson from Rich Sutton. If you haven't read it, I definitely recommend doing so; it's not wise to build in AI today without knowing this lesson. The TL;DR for this talk is that we shouldn't assume computation is constant as long as scaling laws are in effect. And as long as the next model is still important, which you can see it still is (every time a new model comes out, everybody drops everything to check it out), we can assume the models themselves are not commoditized.

The bitter lesson leads us to what I'll call the bitter design lesson, why not? It's the idea that if the basis of competition is inference performance, then the UI itself must be primarily focused on conforming to the next model. Said plainly: if the model is not commoditized, it's actually the interface that's the commodity now. That will hold until models overshoot user needs, when you no longer need a rocket scientist of a model for whatever use case you're doing in ChatGPT; then we can start to explore different integrations within the interface. Until then, the primary job of every interface has to be figuring out how to conform to the next model's capabilities.

It's a bitter design lesson because then you get layouts like this: the bitter layout. Pretty uninspiring, not super usable, just not very cool at all. But the one thing this layout does really well is absorb the next model's capabilities. As soon as that next model comes out, jam it into the bitter layout, update your model picker, and your app is more intelligent. I hate this design, but as a designer you can't really hate that ROI: you add one line item and now your app is N times more intelligent. It's kind of hard to argue with as a design decision. So that's the bitter design lesson.

The takeaway, I think, is that I'm not ready to eat my words on saying chat is not the future. But it's quite clear that one attribute of chat, the fact that it can conform to the next model very easily, is a key feature we need to keep in mind. So until models do commoditize, the future of AI UX must conform to the next model. That's the bitter design lesson. But how do we go from bitter to sweet? What comes next? As with most things in life, Bret Victor has already given a talk on this, so you should just watch that one; it's much better than this one.
It's called The Future of Programming, and in it he explains all the mistakes we've made over the past decades in thinking about programming. For this context, he uses the example of how hard people found it to go from binary to SOAP: the binary programmers could not accept that you could give up control to the machine and use these abstraction layers efficiently; they liked to hand-code everything. In retrospect, of course, making that mind shift was really important. His lesson was to stop thinking in terms of procedures and start thinking of programming in terms of goals and constraints, guiding programs with these higher layers of abstraction.
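To make "goals and constraints" concrete for generative UI, here's a small, purely hypothetical sketch: a classic "as a persona, I want a goal" user story plus design-system rules compiled into a system prompt. The function name, prompt format, and constraints are my own illustration, not anything from the talk or a real product.

```python
def story_to_system_prompt(persona: str, goal: str, constraints: list[str]) -> str:
    """Compile a user story and design-system rules into a system prompt
    for a generative-UI model. Hypothetical format, for illustration only."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are generating UI for a {persona} whose goal is to {goal}.\n"
        f"Stay within these design-system constraints:\n{rules}"
    )

prompt = story_to_system_prompt(
    "first-time user",
    "compare two pricing plans",
    ["use only tokens from the design system", "max one primary action per view"],
)
```

The point of the sketch is the shape of the artifact: instead of specifying every screen procedurally, the designer specifies who the user is, what they're trying to do, and the constraints the output must respect.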
Designers, I think, are actually pretty well suited to do this. It will be a mindset shift to jump from thinking procedurally to thinking in goals and constraints. We like to be very detailed in how we design user flows and consider all the edge cases, and that's important, but we're likely going to have to move up a layer of abstraction as apps become more stochastic, more dynamic, more probabilistic. We can't envision every possible path anymore, so we have to think more in terms of what constraints and goals we can set to keep people on the happy path.

Design systems are already something we use to guide developers and designers toward goals and to set constraints for them. So what happens when we start using them for generative UI? If we start to collaborate with a model, do we have a design system that keeps it within the right constraints? Quality assurance: is that like reinforcement learning? I'm not really sure, but maybe it is. When you critique a design a model has created, will that be a reinforcement learning loop? And user stories: as designers, we're very good at envisioning what the user is trying to do via user stories, and these are kind of like system prompts. Can the model also become a partner in helping set the goal for a user? We could translate some of these user stories into the system prompts themselves.

This is all pretty speculative, but I think it's a nice prompt for the future of UI design in the AI age. I'd like to end on a quote from Dario Amodei, who says he feels that generative AI systems are more grown than they are built. I like that as inspiration to help us shift mindsets and start to think about how the future of UX might be less a process of construction and perhaps more a process of gardening. If we can embrace these kinds of lessons and start to think about design in a new way, at a little higher level, then perhaps we can move beyond the bitter layout. Thanks a lot.