Structuring a modern AI team — Denys Linkov, Wisedocs

Channel: aiDotEngineer

Published at: 2025-07-24

YouTube video id: SbUxRluVRwk

Source: https://www.youtube.com/watch?v=SbUxRluVRwk

All right, thanks everybody for joining today. My name is Denys Linkov. I lead the machine learning team at Wisedocs, and I'll be talking about hiring a modern AI team.
So, who's heard this message before? "We are now an AI-first company." We've seen companies like Shopify, Duolingo, and Zapier all make these announcements saying that they're AI-first companies, with new expectations: before you hire a person, you need to make the case that you can't hire an AI agent or use AI instead.
We're now seeing big tech companies and
many companies in general sharing how
much code is being written by AI systems
and how this is going to lead to the
extinction of the software engineer.
So now, if you're in a position to hire people, you're left asking: now what?
So I'm going to talk about three main
themes today. The first one is the
anatomy of an AI team. The second is the
evolution of a generalist and the third
is the question of hiring.
So let's start off with the anatomy of a
team.
So there's a spectrum of different companies, and this is where we should start. We have technology companies, where technology is the core value proposition being offered; we know the big tech companies, as well as many startups. There are also verticalized solutions or services companies, such as Palantir or the company I work at, Wisedocs. And there are tech-enabled companies, where the core product is not technology, but it's something that benefits dramatically from good tech: think banks, retailers, small and medium businesses.
So who here is in each group? Who here
is at a tech company? Who here is at a
verticalized or services company? Who
here is in tech enabled? Okay, pretty
good mix.
Now, each of these different companies has different challenges. Typically, where we've seen blunders in a technology company is a lack of domain knowledge when launching a product; there's usually some kind of business misalignment with the technology. In the middle, either everything goes right or everything goes awfully poorly. And on the tech-enabled side, there's usually some kind of tech challenge, right? Because tech is not your core value proposition.
So, you make different decisions based on this, right? You typically buy data or buy expertise if you're a tech company; you go to a vendor and say, "Give me labeled data." In the middle, you either have everything or you have nothing. And as a tech-enabled company, you usually buy technology, either through a service provider or an end-to-end solution.
Now, I bring this up because every company, every organization, and every person has a different perspective on the role of technology in solving our world's problems.
This is my stance. I think we have 90%
of the technology to solve the problems
of humanity. Now, this might be a
controversial perspective, but I'll show
you why.
The fax market still exists.
Billions of dollars are spent on faxes,
and the market is growing.
In 2017, only 3% of payments in the US were contactless. That number is now higher, but we're still paying in archaic ways. Checks are still a massive part of the market.
And it took 40 years after the introduction of personal computing and the internet for medical systems and electronic medical records to become digital. That number is much higher now, but it takes time for technology to be adopted. Many of you might have seen this in your industry and in your job as well: technology is not the thing that's stopping you from achieving success.
So this is the core question: is technology the limitation of our success? It's not about the technology itself, it's how we use it. And the way you build your team should reflect this, by understanding the problems that you have.
So, on to the question: do you need to hire an AI researcher? When ChatGPT was coming out, every team was saying, "I need an AI engineer, I need an AI researcher." And it's not always smart to do so. Until you hit a certain scale or a certain need for specialty, it does not make sense to hire an AI researcher to work on models. Pre-training models, or even fine-tuning models of a certain capacity, is not necessarily the first thing you need to achieve the value you're after. There's a lot of transformation work that goes in before that.
Now, in certain domains, the best tech is essential. If you're working at OpenAI, or a model provider like Anthropic or Google, or some of the startups, you want the best team working on that, because that is the product, as we covered.
So here I'll propose a wager for you. Are people here familiar with Pascal's wager from philosophy? I'll give you the successor of that, Ampere's wager, if you're familiar with graphics card architectures.
Here's your trade. You trade your team
for five researchers from the top labs.
And maybe you need to throw in some cash
and first round picks for that as well.
But do you make this trade?
Do you trade your team, which has domain knowledge and has worked in the area, for five AI researchers? I want you to think about that.
So we go back to the question of what
does an AI team need to do?
There's a lot of stuff, right? We start off with defining use cases. We want to integrate with products; we're not doing greenfield everywhere. We want to measure ROI and find the right data. We want to test and refine workflows, build the interfaces we need for success, sell this product, and make our customers care.
And it's not one person who does this job. You can't just say, "AI researchers, go make me $10 million from this product," unless you're in a very specific niche.
So this means your success is not one
job unless you're a founder, but we'll
skip that.
So the goal here is that you need to
have a comprehensive AI team and you
need to figure out how are you going to
structure that.
And the thing to remember is that companies aren't just one team. It's not just "my AI team owns this small segment, this deployment, or whatever." Otherwise, you ship your org chart, and you get some weird product behaviors.
So identify to yourself what is your
bottleneck? What is stopping you from
achieving success?
Is it shipping features? Is it acquiring
users? Is it retaining users? Are you
monetizing correctly? Are there
scalability issues? Are there
reliability and observability issues?
Right? All of us have probably run into
these things as we are deploying AI
products. So, we need to make sure we
can prioritize all these things and hire
accordingly.
And these are all questions that you
need to answer when building an AI team.
The key takeaway here is: what kind of team do you need? And only you know that answer.
Let's talk about generalists and why I
think they're important.
So, in 2021, I was building my first machine learning team, and I adopted an approach where we hired generalists and supported them with automation across the board. At the time, I was hired into a conversational AI company working on a platform. Sorry, let me rephrase that: an AI agent building platform. Just wanted to make sure you understood what that meant. And I was hired with the mandate of "we want ML." That was my job description.
Change that. We want AI.
So, after working with the business and leadership teams, this was the final set of goals we set: we want to serve hundreds of thousands of concurrent models; it needs to be multi-domain; it has to be low cost; and we want to support real-time training and serving. Those are some tough goals.
So this is what we did: we wrote a custom MLOps platform for deployments to match our requirements, we mainly fine-tuned encoder models, we built RAG as a service, and as a team we owned six microservices end to end.
The three areas I focused on in building the team were model training, model serving, and business acumen. Now, you might say, "I want top grades on all these things," but that's a lot of money, right? As a team leader, as somebody who manages a budget, you don't have infinite money. So we have to pick, along this axis, where we want each of these skills to lie.

For model training, we don't want somebody at the very bottom, but we don't need somebody who can train GPT-3. Basically, we went across and asked: what are the key requirements for model training? We settled on somebody in the upper half, who knows the general architectures of models, can do encoder fine-tuning, does some data engineering, and is comfortable using Hugging Face. That was the bar we set.

On the model serving side, in the first round, I was the first engineer at the company. I spent a lot of time building the ML platform, but that was something I was comfortable with, coming from a cloud engineering background. After that, there was enough abstraction built in that we didn't need somebody who knew the intricacies of how Kubernetes worked and how we did serving or training, just the capability to use these abstractions and understand the trade-offs being made.
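As an illustration of that kind of abstraction, here's a minimal, hypothetical sketch (the class, tiers, and resource numbers are invented, not Wisedocs' actual platform): engineers describe what they need at a high level, and the platform encodes the Kubernetes-level trade-offs once, in one place.

```python
from dataclasses import dataclass

@dataclass
class DeploymentSpec:
    """Hypothetical platform abstraction: engineers never touch raw manifests."""
    model_name: str
    tier: str  # "small" | "large" -- maps to infra the engineer never sees

    def to_manifest(self) -> dict:
        # Trade-offs (replicas, GPUs) are encoded once, by the platform team,
        # rather than re-derived by every engineer for every deployment.
        resources = {
            "small": {"replicas": 2, "gpu": 0},
            "large": {"replicas": 4, "gpu": 1},
        }
        if self.tier not in resources:
            raise ValueError(f"unknown tier: {self.tier}")
        return {"model": self.model_name, **resources[self.tier]}

# Usage: an engineer only states intent; the platform fills in the rest.
spec = DeploymentSpec(model_name="intent-classifier-v2", tier="small")
print(spec.to_manifest())
```

The point of a layer like this is exactly the hiring consequence above: the team needs people who can use the abstraction and reason about its trade-offs, not people who can rebuild it.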
Where I did focus was on the ability of our engineers to get on calls with customers. We didn't need a business development rep who would cold-call people for fun, but we did need engineers who wouldn't say, "My job is coding in a basement." So we went through and understood the trade-offs that needed to happen.
Now, in 2024, I was building another team at a new organization I had joined, with a similar approach, but open source had advanced. When I was building the original ML platform, a lot of the existing platforms didn't have things like shadow deployments or A/B testing, and we had a specific use case. Since then, what's important to recognize is that all the skills you're prioritizing don't necessarily need to live in one person. They can be spread across multiple people; you just have to find a way to make the team work. So, once again, we set up similar structures.

In this case, because open source had advanced in a number of ways and commercial models had advanced, some things shifted around. On the training side, using commercial APIs, prompt tuning, and fine-tuning commercial models became important, but we also expanded our scope: we're now using both decoder and encoder models, which each have their nuances. On the serving side, because we were using an open source offering, we didn't need to write our own platform, which is nice. And on the domain side, because the nature of our business is medical record processing, there's a whole nuance to what that domain knowledge is. So that bar increased in a different way.
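The shadow deployment and A/B testing patterns mentioned above can be sketched in a few lines. This is a hypothetical illustration with stubbed-out model calls, not any particular platform's API: the shadow model always runs but its output is only logged, while a deterministic slice of users actually receives the candidate.

```python
import hashlib

def primary_model(text: str) -> str:
    """Stub for the current production model."""
    return f"primary:{text}"

def candidate_model(text: str) -> str:
    """Stub for the new model under evaluation."""
    return f"candidate:{text}"

def bucket(user_id: str, percent: int) -> bool:
    """Deterministically place `percent`% of users in the test bucket,
    so a given user always sees the same variant."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return h % 100 < percent

def handle_request(user_id: str, text: str, ab_percent: int = 10) -> str:
    # Shadow deployment: the candidate always runs on live traffic, but its
    # output is only recorded for offline comparison, never returned.
    shadow_result = candidate_model(text)
    _ = shadow_result  # in production this would go to an eval/logging store
    # A/B test: only the bucketed slice of users receives the candidate.
    if bucket(user_id, ab_percent):
        return candidate_model(text)
    return primary_model(text)
```

Hash-based bucketing is the usual design choice here because it needs no stored assignment table and gives stable, reproducible splits.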
So, now that we know what kind of skills we need for our team, we can identify this threshold and balance the budget. We can't just ask for infinite money, unless you're in a very specific subset of companies.
You might have this question: what if I already have a team of 40 people, or 100 people? What do I do? How do I reskill and upskill? How do I manage this team? So, we need to figure out what the goal of the team is, as we were discussing. I typically like to think about it through inner and outer loops. Inner loops are the daily activities that the team needs to accomplish together, every day, to be successful. The outer loop is the broader set of activities that will set you apart; you might not need constant interaction with it, but it's really important.
So, in my current team, this is how we typically structure it: we have model training, prompting, product requirements, model serving, some domain experts, and the capability to build business cases as the core nucleus of our team. Again, as you're building your team and your function within your domain, these will be different, but this is a framework for understanding your priorities. We also need expertise in our outer loop, to further differentiate our company and our team.
If you have a weak technical loop on the
inside, you're going to struggle with
the technical execution. If you have a
weak domain loop, you're not going to
find product market fit. So, you need to
make sure that you really understand
those feedback loops and the
collaboration loops that exist within
your company.
Now, depending on where you are in your AI strategy, all of us fall at a different point on a spectrum, and you win with different types of people. You win with generalists at the beginning, when you're trying to find that fit and make basic progress, until you get to the point where you exhaust that knowledge and need to move to a more specialist model. On the generalist side, most companies going through a transformation fall into that category. Once you get to a really good stage with your model training, serving, and so forth, you need specialists to push out the extra 5% of performance.
So generally, my perspective is that generalists are good because they're adaptable. And in most cases, you're good enough with a generalist who can do many different things beyond just writing code.
Let's talk about upskilling, reskilling,
and hiring.
So I think there are three main things you need to do as we continue through this AI wave: people need to learn to build, you need to become a domain expert, and you need to be human-facing.

We've talked about vibe coding and prototyping. We should go from static product requirements to functional prototypes that take those details and elicit them. We never want to have those dreaded conversations again, with PMs and engineers saying "that wasn't in the requirements" or "that was an edge case." We want to shorten that feedback loop.

We want to make sure that people are writing evaluations, that domain experts aren't just providing input and feedback, but are the ones writing and defining the use cases, with the literacy to work with LLMs directly.
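One way to make that concrete: keep the evaluation cases as plain data that domain experts can write and own themselves, and let a small harness run them. This is a hypothetical sketch, not the talk's actual tooling; `tag_note` stands in for the real LLM call.

```python
# Expert-owned evaluation cases: (clinical note, expected tag).
# Written and maintained by domain experts, not engineers.
CASES = [
    ("Patient denies chest pain", "negation"),
    ("MRI scheduled for 2024-05-01", "date"),
]

def tag_note(text: str) -> set:
    """Stand-in for the real LLM call; deliberately simple for the sketch."""
    tags = set()
    if "denies" in text or "no " in text:
        tags.add("negation")
    if any(ch.isdigit() for ch in text):
        tags.add("date")
    return tags

def run_evals(cases=CASES) -> float:
    """Return the fraction of expert-written cases the system passes."""
    passed = sum(1 for text, expected in cases if expected in tag_note(text))
    return passed / len(cases)

print(run_evals())
```

Because the cases are just data, adding one requires domain knowledge rather than coding skill, which is exactly the ownership shift being argued for.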
We need to make sure that engineers are on customer calls, so we shorten those feedback loops. If your engineers say, "Sorry, I can't talk to a customer," that's a learning opportunity.
Finally, you need somebody to sell your
product.
Now, the way my team works is that we have weekly cadences to learn. Every week, a new topic is brought to the table for 30 minutes, either by myself or another member of the team, and we learn the underlying key priorities of our team and our company. We make sure that every week we're upskilling ourselves. If this sounds intense, the consequences of not doing it are much higher.
Let's close out on hiring. When do you
need to hire? I believe that people need
to be hired for two main reasons. One is
to hold context and the other is to act
on context. So it's important that if
you have too few people on your team,
things are getting dropped and you can't
execute on your priorities.
You might ask the question, can't AI
agents with a massive context window do
this? Maybe to some extent, but you need
expertise to be able to verify that this
context and this execution is correct.
And to have expertise, you need to have context. Finally, humans should be accountable for the systems that we build. As the old IBM quote goes, a computer can never be held accountable.
So, who do you need to hire? We're hiring on a budget, and going back to everything we've talked about today, you need to know your team composition and your needs to set up this budget. If you're trying to hire the top researcher, it's going to be very expensive. If you're hiring a generalist AI engineer, it will be quite a bit cheaper.
Now, it's also important when you're hiring that you're not just following trends. Who here has heard the trend that junior engineers shouldn't be hired, and that we should just use AI agents instead? Okay, some people are asleep. The counterpoint here is: why is YC running an AI school for students and young people, with 2,000 people coming to San Francisco in two weeks? Why are they doing that? If entry-level positions were useless, they wouldn't be bringing in all these young people. So make sure you verify the trends you're seeing and think from first principles. What do I need? What is the team composition? Is it new grads? Is it people with 30 years of experience? What are the retraining opportunities? There are lots of ways to build a great team.
Now, just repeating this because I've seen so many companies do it: ask questions relevant to the job. Stop putting people through LeetCode problems that have nothing to do with the job. And now that LLMs can solve them, they're not a great way to evaluate either.
So we go back to Ampere's wager. You have the question: am I going to take five researchers from the top labs, or am I going to build my team in a domain-specific way? For example, in my company, I'd rather have the team on the left, with the domain expertise, the ability to sell, and empathy with customers, rather than just having five researchers. That's the way our domain and company are structured.
Now, you can also consider Blackwell's wager, which is: do you want GPUs or a team? But that's a story for another day.
So, overall, we have three main lessons from today. The first is that it's important to start from the beginning and ask: what team do you need to win? Once you know that, you'll start noticing that cross-functional teams will continue to be effective, but they'll be built in different ways. The overlap will be greater, but all of us will have the opportunity to work with AI systems and contribute to our product. And finally, we need to continue learning. This is a must; the world moves too quickly. We've had pelican evaluations for only the past six months rather than the past year, right? Hopefully that's an illustration of how fast the world moves. So keep up to date, keep moving, and make learning part of your culture. Thanks, everybody, for joining. These are my handles if you want to connect afterwards, and I'll be here later on if you have any more questions. Thank you.