tldraw.computer - Steve Ruiz, tldraw

Channel: aiDotEngineer

Published at: 2025-07-21

YouTube video id: 1C2TdPkj6aQ

Source: https://www.youtube.com/watch?v=1C2TdPkj6aQ

My name is Steve, Steve Ruiz. I'm from a company I started called tldraw. tldraw started as a couple of things. It started as a digital ink library that Christopher then had me implement in Excalidraw. While I was working on that, I thought, you know, there should probably be a really good SDK for building these types of things, and I'd already worked on a couple of projects that were going in that direction. So I did. It turns out that if you build a canvas that other people can use, people will build cool stuff with it. So today I'm going to be talking about some of the stuff that we've done with AI, playing with our own toys on our own canvas. Here I am in tldraw.com.
It's a free whiteboard. You can come in, use it, make your diagrams, make your slides. It's very similar in use case to Excalidraw, actually, but there are a few things that are special here, and I'll show you real quick. So again, this is tldraw.com, a free end-user whiteboard application. We also have tldraw.dev, which is the SDK website. If you wanted to build stuff with tldraw, you could go to tldraw.dev and learn all about the code, the documentation, and how to do that.

The cool thing about the canvas, and I'll skip this one for a second, is that it's just normal web stuff. It's React all the way down. So, for example, I can do things like play YouTube videos and still interact with them, still draw on top of them. Every one of these little shapes works that way, including some pretty cool stuff: I have a whole code editor here, which is just CodeSandbox embedded in tldraw.com. This one is Figma, embedded in tldraw.com. And if you really like Excalidraw, you can even use Excalidraw inside of tldraw.com. I'm pretty sure, and I hope this doesn't break my slides, that if I paste tldraw inside of itself, then, let me see if I can draw inside of tldraw. Hang on a second.
There we go. We're modifying the inner tldraw from the outer one. I'll let you think about how that works. It also has a lot of little details. I'll show this really quickly: nice arrows that perfectly follow the different shapes of things, and boxes where the corners always stay in the corner, right? That's part of our value proposition: we take care of all these little details. Make sure the corners are right, make sure the arrows are right, stuff like that.

We built a couple of different AI things on top of this, and some of them are going to work and some of them are not. Did we find out, is fal in the room here? Okay. Well,
in 2023 we had a lot of success with Make Real. I'll skip this one for now. Make Real came from the idea that people were using tldraw for whiteboarding as well as for drawing wireframes. And the idea was: what if we could take the diagrams and wireframes that we were drawing and just kind of make them real, right? What would be involved in that? So when the vision models came out, like GPT-4 with Vision, that's annoying, I'll have to do it myself, we realized you could just send a screenshot to the model and say, "Hey, model, you're a web developer. Your designers just gave you this lo-fi thing. Can you actually prototype this? Can you build it?" And the models could do that really well.
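The core of that idea can be sketched in a few lines. This is illustrative, not the actual Make Real source: the prompt wording and function name are made up, but the message shape follows the widely used chat-completions vision format, where an image travels alongside text as a content part.

```typescript
// A minimal sketch of the Make Real idea: send a screenshot of the
// canvas selection to a vision model and ask for a working prototype.
// Prompt text and names here are illustrative assumptions.

type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

interface ChatMessage {
  role: "system" | "user";
  content: string | ContentPart[];
}

const SYSTEM_PROMPT =
  "You are an expert web developer. A designer has given you a " +
  "low-fidelity wireframe. Reply with a single self-contained HTML " +
  "file that implements it as a working prototype.";

// Build the request body for a vision-capable chat model.
function buildMakeRealRequest(screenshotDataUrl: string): {
  model: string;
  messages: ChatMessage[];
} {
  return {
    model: "gpt-4o", // any vision-capable model would do
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Turn this wireframe into a real, working prototype.",
          },
          { type: "image_url", image_url: { url: screenshotDataUrl } },
        ],
      },
    ],
  };
}
```

The returned HTML is then dropped back onto the canvas as an embedded, interactive shape, which is what makes the annotate-and-iterate loop below possible.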
As usual, I'm going to give this a second to load while we... all right, they're really running with this input. Well, the models have since become very ambitious. Here's another good one. Let's say I want a stop-motion application, where I have a feed from my camera and I want to be able to take pictures. I want to be able to see all those pictures, but I also want to be able to play them in series, using only this drawing as the input, right? I won't even give it a title, just to be fun. The model will spin off on that, and we can watch it generate, but it will eventually come up with this. I just did this during the last talk. That's my app: I can take pictures, this is doing the onion skinning, and there's my GIF, right? And that's not surprising. I mean, you can add images to Cursor and tools like that, and it just works really well.
But the fun part, I'm going to stop this, is that because this is back on the canvas, you can actually annotate on top of the website and use that as the next prompt. You click this and it generates the next one. I've done this already, but you can see that, sure enough, it made the button solid like I asked it to. So using these drawing tools not only for generating stuff but for annotating and iterating, you can get some pretty wild results. This came out at the end of 2023. It was one of the first tools that let people who couldn't program, who couldn't create software, kind of do it, and it was pretty remarkable. So this being the input leads to an app. You might have seen it do a little flash of green there: there was a bug involved. So I just took a screenshot of the bug, sent it together with the original source, and said, hey, can you fix that particular bug? And yeah, it did. So it's pretty cool, if I don't crash my browser. All right.
So that's Make Real. We also did this one called Draw Fast, which may or may not work; I'm just going to see if it does. This used a technique called latent consistency models, basically "create an image for me as fast as possible." We'll see if I can wake up the server here. Oh hey, look, this normally doesn't work. So you have a drawing, and you have an image being created from the drawing, and as I change the drawing, come on, do it, the image is going to change as well. You can even take these things and flatten them, and then I can interact with the images directly: say I rotate one, or stretch it out really big. In good circumstances this stuff works almost in real time, but you'll have to accept the one moment of it working as the best we're going to get.
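The "almost real time" part is less about the model and more about scheduling. A common pattern for this, and a reasonable guess at what a Draw Fast-style client needs, is latest-wins coalescing: while one generation request is in flight, keep only the newest canvas snapshot and submit it the moment the current request finishes, dropping stale intermediate states. The class below is a sketch of that pattern with the model call stubbed out; in the real demo the generate function would POST the rasterized drawing to a hosted latent consistency model endpoint.

```typescript
// Latest-wins request coalescing for near-real-time image generation.
// While a request is in flight, only the most recent snapshot is kept.

type Generate = (snapshot: string) => Promise<string>;

class LatestWins {
  private inFlight = false;
  private pending: string | null = null;
  public results: string[] = [];

  constructor(private generate: Generate) {}

  // Called on every canvas change; stale intermediate snapshots are dropped.
  async submit(snapshot: string): Promise<void> {
    if (this.inFlight) {
      this.pending = snapshot; // overwrite any older pending snapshot
      return;
    }
    this.inFlight = true;
    let current: string | null = snapshot;
    while (current !== null) {
      this.results.push(await this.generate(current));
      current = this.pending;
      this.pending = null;
    }
    this.inFlight = false;
  }
}
```

If the user scribbles ten strokes while one frame is generating, only the last stroke's snapshot goes to the server, which keeps the image tracking the drawing instead of falling further and further behind.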
I'm going to need to use two hands to do this. And if I just make a whole bunch of people, will they... oh, they're running because they're all sideways, right? I got it. Anyway, that's Draw Fast. But the one that I'm going to talk about mainly is tldraw computer.
So this is, well, I'll just do it. This is a graph full of these little components. I'm going to type "AI engineer, MCP, observability, I don't know, whatever, conference," and I'm going to draw a picture too, maybe a big top hat or something like that, with some playing cards in the brim. Got it. "Write a short commercial" is the instruction here. I'll even add a "please," and run it.
Okay, so a couple of things are going to happen all at once here. This graph is going to execute. Right now, the instruction component is creating a script for itself, and then it just executed the script. Sorry, this goes fast. It wrote the text; now it's generating speech; it's also generating an image based on this. Each one of these blocks accepts inputs and produces outputs. So this image is based on our text, which was based on this instruction, which was based on these inputs. And it's creating speech right now, which is going to be whatever... "the AI engineer conference is innovation." You got it. And then I can keep piping it on, and this time it'll make it sad and serious and create an image based on that, right? So this is cool.
Each one of these things, like I said, has a script for how it should use its inputs and what it should produce based on them. For this "write a short commercial" component, it's tiny, I'll read it: "Analyze inputs looking for guidance on the product, services, style, or other requirements for the commercial. Based on the inputs, write the text for a short commercial script. Output the result." It'll repeat those same instructions with whatever I give it, and it'll pipe the result out in the same sort of data that's acceptable as inputs by the next thing down the line.
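Structurally, that execution model can be sketched in a few lines. This is an assumption about the shape of the thing, not the actual tldraw computer source: each component holds a short natural-language script, execution is one model call that turns serialized inputs into serialized outputs, and chaining components is just feeding one block's outputs in as the next block's inputs.

```typescript
// A sketch of a tldraw-computer-style block: script + inputs -> outputs.
// The data format, prompt wording, and names are illustrative guesses.

interface DataItem {
  kind: "text" | "image" | "audio";
  value: string;
}

interface Component {
  script: string; // e.g. "Analyze inputs ... write a short commercial ..."
}

type Model = (prompt: string) => DataItem[];

// Run one block: serialize the inputs, combine with the script, call the model.
function runComponent(c: Component, inputs: DataItem[], model: Model): DataItem[] {
  const prompt = [
    "You are one node in a dataflow graph.",
    "Your script: " + c.script,
    "Your inputs:",
    ...inputs.map((i) => `- [${i.kind}] ${i.value}`),
    "Produce outputs in the same item format.",
  ].join("\n");
  return model(prompt);
}

// Outputs pipe directly into the next component down the line.
function runChain(chain: Component[], inputs: DataItem[], model: Model): DataItem[] {
  return chain.reduce((data, c) => runComponent(c, data, model), inputs);
}
```

The "make it sad and serious" step from the demo is then just a second component appended to the chain; it never sees the original hat drawing, only the commercial text the first component produced.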
We did this in collaboration with Google. They came to me and said, "Hey, we have Gemini 2 coming out. We want to launch with a bunch of cool demos and a bunch of cool partners. Do you want to be a part of that?" I'm like, awesome, does that mean we get early access to the new models? They had shown that real-time stuff, using your phone to ask things like "where did I leave my keys?", and they're like, no, you've got to work with what you've got. So, cool, all right, we'll do that. And so we did. Gemini 1.5 was out, which was pretty cool, but Gemini Flash was also out, and Flash was fast and pretty good and multimodal. That was the inspiration for this. As we worked on it more... that's good: "sad and serious AI engineer conference." Yeah, it's good stuff.
As we worked on this more, we realized that you could kind of do computer stuff with it. You could take an instruction that says something like "add up all your inputs," then give it some inputs, like 2 and, it's hard to do this, 11, and it will come up with what you'd expect: 13. But the execution here is not being done in code. The execution is being done by a language model, and language models are capable of this kind of nonlinear thinking. So if I gave it 2 and "octopus" as the inputs and asked it to add them up, well, an octopus is not a number, but "if you were forced to," which is what we say in the prompt, "infer a number from whatever you get." So maybe it's 8, and 8 and 2 make 10, and there you go. And if one of the inputs was a camera feed, and it's me, I'm going to try to do this, hold on a second... is it going to be 14? Maybe. Yeah, there we go. So it's able to use that too. Thank you.
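The trick is that the adder is a model, not arithmetic code. A minimal sketch, with the model stubbed out (the real demo used Gemini, and the "infer a number" phrasing here is a paraphrase of the prompt described above):

```typescript
// "Add up all your inputs" where inference, not parsing, supplies the
// numbers. Anything that isn't already a number gets a number inferred
// for it: an octopus becomes 8, a photo of raised fingers becomes a count.

type InferNumber = (input: string) => number;

function addUpInputs(inputs: string[], infer: InferNumber): number {
  return inputs.reduce((sum, input) => sum + infer(input), 0);
}

// A stand-in for the model call: parse real numbers, otherwise "infer."
const stubInfer: InferNumber = (input) => {
  const n = Number(input);
  if (!Number.isNaN(n)) return n;
  // The real model infers from meaning; this stub only knows one case.
  if (input === "octopus") return 8;
  return 0;
};
```

With the stub, `addUpInputs(["2", "octopus"], stubInfer)` gives 10, matching the demo; swapping `stubInfer` for an actual multimodal model call is what makes the camera-feed version work.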
[Applause]
And it shouldn't be that surprising, you know? It's just a multimodal model: take a bunch of inputs, produce outputs. We went further with this. I'm going to have to jump to, and by the way, the killer use case for this, if it's not immediately obvious, is turning your daughter's drawings into pictures and stories and piping them all around, right? But, where is this one? This is a good one. I was also playing a lot of Factorio at the time. Oh no, this is the wrong one, hang on, all the way down at the bottom, and grab this one.
So, the idea is having these machines that even include cycles and loops and will just operate forever. In this one, it comes up with a random pop song, adds it to a list, and feeds the list back in so it doesn't repeat. It asks, "is this song about love?" and then sorts it accordingly. Well, again, we're working with language models, so we have a boolean value of yes, no, or maybe. Then it feeds back around and keeps piping, and I can just leave this running forever, spending the credits that Google gave me to burn.
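The structure of that loop is worth spelling out, because it is what distinguishes this from a one-shot prompt. A sketch, with both model nodes stubbed and an iteration cap standing in for "just leave it running forever" (the node names and shapes are my assumptions, not the actual graph format):

```typescript
// The looping graph: a generator fed its own history so it doesn't
// repeat, and a classifier whose "boolean" is yes/no/maybe, because
// language models hedge.

type Fuzzy = "yes" | "no" | "maybe";

interface SongModel {
  nextSong(history: string[]): string; // "random pop song" node
  isAboutLove(song: string): Fuzzy;    // classifier node
}

function runLoop(model: SongModel, iterations: number) {
  const history: string[] = [];
  const sorted: Record<Fuzzy, string[]> = { yes: [], no: [], maybe: [] };
  for (let i = 0; i < iterations; i++) {
    const song = model.nextSong(history); // the list feeds back in
    history.push(song);
    sorted[model.isAboutLove(song)].push(song);
  }
  return sorted;
}
```

Feeding `history` back into the generator is the cycle from the demo: the model sees everything it has already produced, which is how it avoids repeating songs without any deduplication code.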
This is really fun. tldraw computer got pretty popular, not necessarily as popular as Make Real, but it's pretty amazing what you can do, and it really rewards creativity, to put it lightly. I have seen people using this for actual multi-stage prompting and decision-making analysis, and you can imagine it being asynchronous and somewhere up in the cloud. Maybe that's what we do next, where we say: take this CSV of email addresses of people who've engaged with our product, email them all, get a response, do sentiment analysis on whether they liked it, do something next, and so forth; then wait for me, text me and ask, "should I really email this person again?", and maybe I say yes. For a big, long-lived, asynchronous process that could be run in parallel, this would be a great interface for designing that, and everyone seems to get it.
When we originally did this, the creative prompt, the philosophy of the project before I went home and prototyped it, was: I want a computer that works the way I thought a computer worked before I knew how a computer works. Right? Where I just have some stuff, and I want to do this to it, and then I want to take the results and go over here. So that's tldraw computer.
I wasn't going to show teach, but I will: "create a flowchart that begins with AI and ends with engineer; incorporate existing shapes." When you have a really hackable canvas, an SDK for a canvas with a runtime API, it plays really well with other AI tools. Even though these models aren't great at this yet, you can quickly get one working with the canvas as a kind of virtual collaborator. You can get it to do stuff. I mean, the demo that I always show, and I'll do it really quick, is the whole "draw a cat" thing.
Somewhere on this page is a pelican riding a unicycle, but there are a lot of silly drawings here. Anyway: draw a cat, and it'll draw a cat. But it's doing this not as an image. It's not painting pixels the way Midjourney would. It's doing it as text: it returns a structure that I can map onto shapes on the canvas. So I can work with those shapes myself, I can correct it, and it can work with my stuff as well.
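That "structure mapped onto shapes" step might look something like this. The JSON schema below is an illustrative guess, not the actual teach format; in the real SDK the parsed shapes would be handed to the editor's shape-creation API.

```typescript
// A sketch of the "draw a cat" mechanism: the model answers with a
// structured list of shapes as text, which is validated and then
// mapped onto canvas shapes. Schema and names are assumptions.

interface ModelShape {
  type: "ellipse" | "rectangle" | "line" | "text";
  x: number;
  y: number;
  w?: number;
  h?: number;
  label?: string;
}

// Parse the model's reply and keep only entries we can actually draw.
function parseShapes(reply: string): ModelShape[] {
  const raw = JSON.parse(reply) as unknown;
  if (!Array.isArray(raw)) return [];
  const allowed = new Set(["ellipse", "rectangle", "line", "text"]);
  return raw.filter(
    (s): s is ModelShape =>
      typeof s === "object" &&
      s !== null &&
      allowed.has((s as ModelShape).type) &&
      typeof (s as ModelShape).x === "number" &&
      typeof (s as ModelShape).y === "number"
  );
}
```

Because the result is shapes rather than pixels, every piece stays editable: you can move the head, recolor the tail, or draw next to it and send the edited scene back as the next prompt, which is what makes the round trip below possible.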
So if I draw something orange here and say, "make the cat blow out the candle," I didn't tell it that it was a candle, but let's see if it can do it. I don't think cats can actually blow, but I don't know that for sure. This one is using Claude as the back end. If you want to know how this works, definitely catch me afterwards. Yeah, right on.
Hey, and we get smoke as well. That's wonderful, right? A lot of this stuff, and in fact I would say our advantage over the bigger companies in this space, is that "shitty but amazing" is definitely on brand for tldraw. If this seems like a good problem that you might want to work on, definitely talk to me, because we have some tools that make it easier. People build all sorts of crazy stuff with tldraw. This is Grant Kot's liquid simulation that's using tldraw as the geometric control layer, the authoring layer on top of it. Companies build really cool stuff with tldraw too; Observable is building with tldraw now. It's incredible. I think we're not even scratching the surface of what can be done with this paradigm and these tools.

Please build something amazing. I've got the canvas; we have the technology. So, that's my talk. Thank you very much.