Small AI Teams with Huge Impact — Vik Paruchuri, Datalab

Channel: aiDotEngineer

Published at: 2025-07-15

YouTube video id: K-iYKDMFKhE

Source: https://www.youtube.com/watch?v=K-iYKDMFKhE

Okay. My name is Vik. I'm the CEO of Datalab. And today I'm going to talk about how we got to 40K GitHub stars, seven-figure ARR, and trained state-of-the-art models with a team of three.
So I spent the last year training these models, like Britney mentioned, Marker and Surya. I also built repositories around them. I left my AI research job, started a company, and raised a seed round. I did not get enough sleep. That's important.
And this is Datalab. We made our first hire in January; we're now a team of four. Faraz is new enough that he's not pictured. We've grown revenue 5x since January. We're at seven figures, and our customers include tier-one AI labs, universities, Fortune 500 companies, and AI startups, including Gamma, which I used to make this presentation. So today's focus: I'm going to talk about how we've grown with a small team, about my philosophy on building teams, and about why I think we're at an inflection point in how we think about building teams. And I'm really going to talk about this idea that headcount does not equal productivity. There's this really persistent notion in Silicon Valley that you raise money, you hire a bunch of people, and you build more, but in my opinion it almost never works out that way. All right,
so my last company was called Dataquest. I'm very fond of the data prefix, apparently. We scaled to 30 people and $4 million ARR, bootstrapped, during COVID. It was an online education startup. And then, unfortunately, we had to do two rounds of layoffs post-COVID, when online education tanked. We went from 30 to 15, and then again from 15 to 7. It was obviously awful for the people we had to lay off. But I noticed something really interesting: productivity and happiness increased a couple of months after both layoffs, to the point where we were actually much more productive after both cycles than we were at the beginning. And I started to wonder why that was. How could reducing the team so much actually improve productivity? I came up with four hypotheses. One, we'd hired a lot of specialists. As you scale, like Grant mentioned in the earlier talk, you end up building very specialized functions and teams, and those specialists often can't flex across the company to solve its key issues. Two, we were a remote team, which required a lot of intentional process and heavy syncing, which eats into your time and makes it really hard to get on the same page. Three, because of that we had a lot of meeting overload. Especially once we got middle management in place, people whose job is professionally to manage, we ended up with a lot of meetings on people's calendars and not enough time to actually work. And four, senior people. We hired a mix of experience levels, like most companies do: junior, mid-level, senior. And the senior people ended up getting tied down managing the more junior people. We actually had a case where we cut a three-person team down to one, and the team got much more productive, because it freed up the senior person's time.
And every company, I feel, goes through this journey. There's an initial golden period when everyone is aligned. You're on the same page, you're building amazing stuff, and that's really when you build the core thing of your company, like Google with search or Microsoft with Windows. It's when you figure out your business model. And then you hire a bunch to fill out the edges around it: a bunch of enterprise sales, a bunch of marketing, a bunch of engineers who sit in very small boxes to build very small features. I had a friend at Amazon who worked there for two years and built a shopping cart button. That's fine, right? But at that scale of org, that's the tiny box you get fit into. And you end up with a lot of bureaucracy, a lot of syncs, a lot of unclear priorities. This pattern is unfortunately very common. But I started to think: what if that golden period just lasted forever? Why does it actually need to end? As I started working with Jeremy Howard at Answer AI, I got to understand his philosophy for building a company a little better. His idea is basically: hire fewer than 15 generalists, people who can really do everything across the stack and understand all aspects of the company, and fill in the edges with AI and internal tooling. Jeremy's invested a lot recently in FastHTML and things like MonsterUI because he sees them as building-block libraries for the other tools the company is working on. And then use simple, boring tech. You don't need to get too fancy. You don't need a Kubernetes cluster when you're a three-person company.
But this requires a high cultural bar. You need people who really want to, and can, understand everything you're doing. You need engineers who talk to customers. You need go-to-market people who actually build. And that's not easy to find. You need high trust: people who are in it because they're building something together, not for other reasons like politics or personal advancement. And everyone needs to really care about the customers and focus on them. I think these are the prerequisites for this kind of team, this sub-15-person team of generalists, to work.
I'll give you a quick example. We recently trained a model, Surya OCR 3. We've shipped it but haven't announced it yet. It's 500 million parameters, supports 90 languages, and hits 99% accuracy on our challenging internal benchmarks, which include math. It also does things no other model does, like character-level bounding boxes, and it uses PDF text as grounding at a line level. So it was a very challenging model to train, and to do it, Darun, who's a research engineer at Datalab, and I had to handle the entire process end to end. That included talking to customers and figuring out what they wanted. It included reading a bunch of papers and figuring out the right architecture, prototyping, and doing the model training itself, which you always hope is 90% architecture but is always 90% data cleaning. So: building a data pipeline library and building out the datasets. Then we had to write the inference code, connect it to our repos, get the inference written for all our customers, and integrate it into our products. This is a scope that in a big company would probably be spread across a lot of teams, four, maybe ten. And every time you hand off between teams in a traditional company, you lose context. The people who talk to the customers lossily communicate to the people who build, who lossily communicate to the people who train the model. It becomes very inefficient. You eat a lot of time just syncing context, and it never gets fully synced. You're not able to build a great end-to-end experience as a result, and you have very slow feedback loops: you talk to a customer today, and it might impact your model training in months. Whereas if you have generalists who can work across the stack, you get seamless context. You never need to do inefficient syncing. You get really tight integration between all aspects of the company and very fast feedback cycles. And the reason we were able to do this is that we used AI to take on the easy, low-leverage pieces, like building a data pipeline library, or helping us figure out how to integrate the model into the API, while we did the higher-level work in each of these silos.
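To make the "90% data cleaning" point concrete, here is a minimal sketch of what a composable data-cleaning pipeline can look like. The Sample shape and stage names are invented for illustration; this is not Datalab's actual pipeline library.

```python
# Minimal sketch of a composable data-cleaning pipeline for OCR
# training samples. The Sample shape and stage names are invented
# for illustration; this is not Datalab's actual library.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Sample:
    image_path: str
    text: str  # ground-truth transcription for the image

def drop_empty(s: Sample) -> Optional[Sample]:
    # Discard samples whose ground-truth text is blank.
    return s if s.text.strip() else None

def normalize_whitespace(s: Sample) -> Optional[Sample]:
    # Collapse runs of whitespace so labels are consistent.
    return Sample(s.image_path, " ".join(s.text.split()))

Stage = Callable[[Sample], Optional[Sample]]

def run_pipeline(samples: Iterable[Sample], stages: list[Stage]) -> list[Sample]:
    # Apply each stage in order; a stage returning None drops the sample.
    cleaned = []
    for s in samples:
        for stage in stages:
            s = stage(s)
            if s is None:
                break
        if s is not None:
            cleaned.append(s)
    return cleaned
```

The nice property of this shape is that each cleaning rule is a small pure function, so a generalist, or an AI assistant, can add one more stage without touching the rest.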
So if you get one thing from this talk, it's this: more people does not equal more productivity.
All right, and how do you make this work? How do you operationalize it? The first thing you have to do is hire senior generalists. And senior, to me, does not mean years of experience. It really means maturity. You need people who can look at a problem and say: I'm going to figure out how to solve this, I'm going to do what it takes, and I care enough to iterate with the customer to solve it. You need to avoid overcomplication. I'm an engineer, a lot of us are engineers, and we love overcomplicating things: hey, let me deploy this Kubernetes cluster and multi-stage pipeline to solve a data extraction problem. In reality, you need people who can set aside the fixation on shiny tech and just do the simplest possible thing, which usually means: I'm just going to write a shell script and run this on one machine. There's that famous Hadoop-versus-shell-script blog post from a few years ago, where a whole Hadoop cluster could be replaced with a single 64-core machine. You need people who appreciate that ethos. And you need to work in person. I personally think remote is great for a lot of reasons, but it's not great for a small team that needs to move fast, because you need to set up a lot of process, and process, to me, is the death of this really fast collaboration and tight feedback loop.
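That "simplest possible thing on one machine" ethos is easy to sketch: instead of a cluster, fan the work out over local cores with a process pool. The word-count task below is just an invented stand-in for whatever the batch job is.

```python
# One big machine instead of a cluster: fan a batch job out over
# local cores with a process pool. Word counting is just a stand-in
# task for illustration.
from multiprocessing import Pool

def count_words(line: str) -> int:
    return len(line.split())

def total_words(lines: list[str], workers: int = 4) -> int:
    # Map the per-line work across worker processes, then reduce.
    with Pool(workers) as pool:
        return sum(pool.map(count_words, lines))
```

A 64-core box running something shaped like this will chew through a surprising amount of data before a distributed system earns its complexity.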
And then how do you do it architecturally? I alluded to this a little bit, but you have to reuse components aggressively. We reuse a lot of components between our on-prem and our API deployments. We keep our technology super simple: we don't use React, we don't use any fancy front-end frameworks. It's all server-rendered HTML with a light layer of HTMX and Alpine, plus super clean, modular code that AI can add to very well. We rearchitected our Marker repo to be extremely modular, easy to work with, and well documented, and that makes it much easier to use AI to actually add to it.
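As a rough illustration of that server-rendered approach (invented routes and names, not Datalab's actual code): the server returns a small HTML fragment, and an HTMX attribute on the page swaps it into the DOM, with no front-end framework involved.

```python
# Rough illustration of server-rendered fragments in the HTMX style.
# Routes, IDs, and names are invented for illustration.
# Client side, the page would contain something like:
#   <button hx-get="/status" hx-target="#job-42">Refresh</button>
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status_fragment(job_id: str, state: str) -> str:
    # A fragment, not a full page: HTMX swaps this into the target element.
    return f'<div id="job-{job_id}" class="status">{state}</div>'

class FragmentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_status_fragment("42", "done").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To actually serve it:
#   HTTPServer(("", 8000), FragmentHandler).serve_forever()
```

The point of the pattern is that all state and rendering live on the server, so there's no separate front-end codebase to keep in sync.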
All right. So basically, keep everything simple. Code: clean, readable, maintainable. Architecture: as few moving pieces as possible; minimize your surface area. Process: minimize bureaucracy, high trust, continuous discussions. If you feel like someone's going to need a lot of management, don't hire them. You need people who can move fast without being managed.
All right. And then, how do you fill in the edges with models? A challenge we're going to face as we scale is that we're a document processing, document intelligence company, and every customer has a slightly different way they want to parse their docs. If you go back to the last generation of OCR companies, the way they solved this was to hire a bunch of forward deployed engineers. You sat at a client site and iterated with them until it was good enough. But in the future, you can really train a model to handle this complexity. We can train a model to essentially loop over customer outputs until it gets to the right state, and replace that entire forward deployed engineering side of the org. Now, when does this model fail? We're early; I don't know exactly when this model falls apart. But Gamma, as we just saw, is a great example of a small team with very meaningful growth in ARR. I think the key is being able to say no. A lot of these edges are choices. You can choose to hire a bunch of forward deployed engineers and put them at your client sites, or you can choose to solve it a different way. Maybe that different way is slightly less efficient in terms of revenue, but it might be more efficient in terms of your long-term company trajectory and health. So it's really unknown if this will work forever. But in my opinion, it's your choice: you can choose to make this model work, or you can choose the less efficient let's-scale-to-hundreds-of-people model.
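One way to picture that "loop over customer outputs until it gets to the right state" idea, purely as a sketch: `parse` and `score` below are invented stand-ins for a real model and a customer-defined quality check, not any actual API.

```python
# Sketch of the iterate-until-good-enough loop that could stand in
# for a forward deployed engineer's manual tuning. `parse` and
# `score` are invented stand-ins, not a real model or API.
from typing import Callable

def refine(doc: str,
           parse: Callable[[str, int], str],   # (document, round) -> parsed output
           score: Callable[[str], float],      # parsed output -> quality in [0, 1]
           target: float = 0.99,
           max_rounds: int = 5) -> str:
    best = parse(doc, 0)
    for round_num in range(1, max_rounds):
        if score(best) >= target:
            break  # good enough for this customer
        candidate = parse(doc, round_num)
        if score(candidate) > score(best):
            best = candidate  # keep the better attempt
    return best
```

The open question, as the talk says, is exactly where a loop like this falls apart.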
All right. So, LLMs are surprisingly bad at generating Venn diagrams, which explains why this slide is not so well done. But basically, we have three core roles, and the responsibilities overlap a lot. Everybody talks to customers. Everybody builds product in some way, and research engineer and full-stack engineer overlap quite a bit. And then go-to-market is really your traditional sales, marketing, and support functions all collapsed into a more generalist role. And really, I feel like politics are the death of small teams. We want people who only care about the work, the people around them, and the customers. Minimal ego: you need some ego to advance your own ideas, but not so much that you're willing to fight for them to the detriment of the health of the company. We pay top-of-market salary. It's always weird to me that startups pay $150K or $200K when they've raised $20 million. You should be able to hire fewer people at higher salaries and get more done; at least, that's what I've seen. Meaningful work: big challenges and scope. If you come in, you get to work across the stack and ship things end to end. That's very exciting for some people and not exciting for others, and they self-select. You really need a good way to screen for low ego and GSD: people who will ship, not talk about shipping. That's another downside of remote culture, in my opinion; it gets very hard to tell the two apart. And then patience. The worst hires I've personally made have all been when I thought I had to fill a role very quickly. All of my best hires have been when I said, "Okay, let me find the best person and hire them," even though I may not have a role for them today; they're a great generalist. This is actually a big debate in NBA and NFL drafting, too: best player available versus drafting for fit.

All right. So really, I think the thing to think about as you scale is: how do we scale productivity, not headcount? And you can do that in a few ways. You can raise salary bands as the company grows, so you hire more and more experienced people into the same role. You can invest more in compute: one researcher with access to eight GPUs is less productive than one with access to 64 GPUs. You can invest in AI tools that multiply productivity; there are so many tools out there now that are worth paying for, and that can abstract away a lot of these edges for you.

And finally, I'd be remiss if I didn't say: if this culture sounds interesting to you, drop me a line. Those are all my socials. We'd love to chat.
All right. Yes. I think we do the microphone for questions, right?
So, when you went from 30 to 15 to seven: my takeaway from this whole talk is that the human touch points are really what slowed things down, right? Was there any additional focus on reducing the domains you were focusing on, or your capability sets? Or was it basically the same product offering, just with fewer folks focused on it?

Yeah, that's a really good question. At a very high level, we offered the same product, but we cut some features that were less relevant. We'd built up a lot of those edges that you end up building over the years, and we ended up slicing a lot of them off. I think what happens when you hire a lot of people is you don't have enough work, so you start making work for people, and they end up building all these edges that actually aren't that useful to the customer. But when you have a tiny team, there's so much work that you have to ruthlessly prioritize. I think you always want to be in that zone, and that's where we ended up back.
Oh, sorry. No worries.

So, a hypothetical question for you. We take you and drop you in the middle of a giant company that's been around for a hundred years: hundreds of thousands of employees, lots of bureaucracy, lots of ego, super comfortable with a revenue stream. And they're clearly folding over on themselves with too many people. How do you change that culture?

Yeah, I'm not the right person for that; I've never done that before. I would say the people who want to change the culture should go start a small company and build the same thing, just build it better. That's a common pattern, right? That's a common disruption growth cycle. I think that's the best way to do it. Once a culture gets ossified (I've worked at the State Department, Pepsi, UPS), once a culture gets ossified enough, you're not going to change it. It just is what it is.
Generally with that pattern, what happens is these companies recognize that they're a target, and they start to buy up those small startups and crush them.

Yeah, sometimes that happens. But Google is a great example of where that didn't happen, right?
So, you haven't talked about how you source these really good generalists, I think.

Yeah, that's a great question. Well, one way is this.

Okay.

Another way is just open source and Twitter; those are great ways to hire. A lot of our best candidates have actually come from Twitter, which is weird. I refuse to call it X. It's still Twitter. But yeah, I don't have a great answer to that. I think if you do good work, put it out in public, and talk about how you're building, that seems to attract people who really care about this mission and want to build the same way. At least that's been my experience.
Thank you.
Yeah. Well, actually, it's related. How do you structure your interview process and recruitment? What does it look like? Do you maybe do a trial period, or...

Yeah, that's a great question. So, three steps. Step one: people come in and we do a short chat. It's really like talking to a peer: here's a challenge I'm having, let me talk it through with you and see if we can solve it together. If that goes well, step two is: let's think of a project we can build together. So we do a paid project, usually around 10 hours, and we pay $1,000. It sounds like a lot, but it's actually a tiny amount of money to figure out whether someone's a fit or not. Then we review the project, and if it's good, they come in and we just do a culture fit: how does it feel when we're all just interacting as humans and people? If it feels like a good fit, it's a hire.
Yeah. And what is your success rate there? Maybe 10% of the people that go through that process get...

Oh, that's an interesting question. Usually, once we get someone to the beginning of the process, we have high confidence; we don't want to waste anyone's time. But of the people we've interviewed, I think we've ended up hiring about 40%. Yeah.

Nice.

Thank you.
All right, I'm out of time. Thank you, folks. This is great.