Is the AI Boom About to Break Security? — With Grady Summers, CEO of Netwrix

Channel: Alex Kantrowitz

Published at: 2025-10-15

YouTube video id: YuyyGIihQfY

Source: https://www.youtube.com/watch?v=YuyyGIihQfY

Where does AI fit into the workplace and
how should companies manage it once it's
there? Let's talk about it with Grady
Summers, the CEO of Netwrix in a
conversation brought to you by Netwrix.
And Grady, it's great to see you.
Welcome to our studio.
>> Great to be here.
>> So, I want to speak
with you first about this discussion
about whether AI is going to cause folks
to lose jobs or potentially create more
jobs. You see it often in your position.
You see the data. So, can you weigh in a
little bit in terms of where this is
going and where it is today?
>> Yeah, so
for us at Netwrix, we've had to really
lean into AI. You know, it's a it's a
crowded industry. There are a lot of
competitors in the space and we need
every leg up we can get, right? And so
for us, it was not a question of can we
do this or not? Like I'll talk to some
peers at other companies that are like
taking a really cautious approach. You
know, hey, we can't let any AI-generated
code get into the product, for example,
or you know, we we want to be careful
about how we use AI in decisioning.
For us, we had to be really aggressive.
So, we leaned into AI a lot. Um as to
whether it's going to create jobs or
kill jobs, it's absolutely going to create
jobs. There's so much unmet demand in
our industry. Like if I could have every
developer be 10 times more efficient,
I'd hire double the developers, right?
We'd be able to move the bar ahead so
much more, and I think we'd do it at a
lower total cost per dollar of output, if you will. And
so for us, it's it's just massive
demand. Our problem is not hey, we need
to save money. You can't cut your way to
great growth or great products or great
customer experiences. For us, it's like
how can we double down on that?
And man, we'll start growing with
that. We're going to hire a ton more
people. Yeah, when people are saying,
okay, AI can do someone else's job,
I think that's just such a short-sighted
way of looking at it often because that
is assuming that a CEO is happy with
what a company is doing at that moment
>> Yeah.
>> And has no ambitions to do
anything else.
>> To freeze in place? Yeah,
you're right. If we wanted to stop what
we're doing now and just say we're going
to ship these products and we're just
going to cut costs and get another 10
points more efficient or whatever on the
EBITDA line, we could do that, but none
of us are happy with that. I mean, it's
always about growing and in our industry
especially, like I said, it's so
competitive. We have to grow faster than
others. We have to have more innovative
products. We have to go above and beyond
to delight customers. So for us, it's
very much
yeah, this isn't about like how can we
get cheaper? It's how can we get more
innovative? Right. And
there has been some interesting data
though that tells maybe not a different
story, but an interesting story about
where AI usage is going. So, Anthropic
recently put out this economic index
that said that the
percentage of use of its tools in the
workplace
has actually flipped. It began with
augmentation, maybe when the tools were
a little bit less reliable and now it's
gone to automation. So, what do you make
of that?
>> Yeah, I think that's right. I saw the
pod you do with Dario Amodei. I thought
it was it was great and I'm a big fan of
what you're doing.
I'll tell you when we started, so we had
an experiment in a way. We we licensed
AI tools, Claude Code from Anthropic and
ChatGPT from OpenAI. We put them out
there just to see what kind of uptake
there would be and there was, you know,
huge uptake. Everybody wanted to use it,
but I recently wrote a script that
analyzes the sentiment of what people
are doing with AI in the company, and it
was about 35-40% of the usage was just
rewriting things, right? So, it's
helping people get more efficient, which
was great. I didn't mind that
necessarily, but it wasn't like
innovative leaps and bounds ahead. But
then we're doing some more advanced
stuff. That was sort of the control
group, if you will. We have people who
we've given space and time to like
really get trained and really settle
like what do I want to accomplish? How
much more efficient do I want to get?
Like a more structured use of AI.
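A usage-analysis script like the one described could be sketched as below. This is a minimal illustration only: a keyword heuristic stands in for the LLM-based classification the interview implies, and the category names and keywords are hypothetical.

```python
from collections import Counter

# Hypothetical categories mirroring the progression described:
# "refine" (rewriting things), "automate" (doing the task), and
# "accelerate" as the catch-all in between.
KEYWORDS = {
    "refine": ("rewrite", "reword", "polish", "improve this email"),
    "automate": ("build", "generate a report", "triage", "handle"),
}

def classify_prompt(prompt: str) -> str:
    """Crude keyword classifier standing in for an LLM-based one."""
    text = prompt.lower()
    for category, words in KEYWORDS.items():
        if any(w in text for w in words):
            return category
    return "accelerate"

def usage_breakdown(prompts):
    """Fraction of prompts per category, e.g. the ~35-40% rewriting figure."""
    counts = Counter(classify_prompt(p) for p in prompts)
    total = len(prompts)
    return {cat: n / total for cat, n in counts.items()}
```

A real version would batch prompts through a model with a classification instruction instead of keyword matching; the reporting loop stays the same.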
And so, I'll give you an
example, or
a few really cool ones. One is every
time an inquiry comes into our customer
service desk, you know, customer service
is sort of in a way it's a ground zero
for AI, right? There's a lot of a lot of
sort of human decisioning that can be
replicated there. So, every time a
message comes in, we look up the
customer history. We have an LLM pop up,
for the analyst, the customer sentiment based on
the email history, even analyzing what
the request is. You know, what's the
problem the customer's having, and
recommended next steps. And so, this is
one where I think we go from like sort
of refinement, hey, help me write this
email better to moving up to hey, make
me faster. Like really, you know,
supplement me. Let me put on the Iron
Man suit and be the best customer
service rep ever. And then we have other
examples that are sort of replacement,
but I think they're they're for the
positive. For example, we're a very lean
company. We've never had a deal desk.
That's something a lot of big software
companies have. It's when a sales rep
has a question, they come, they ask,
hey, here's a customer. They have need
X and need Y and they want to start on
prem or they want to move to SaaS. So,
what should I recommend and how much of
a discount level can I go to, right? And
that's often like a team of people or at
least a person.
But what we were able to do is create a
custom GPT and we put all of our
documentation, our SKUs, our price list,
our decisioning criteria. We interviewed
our sales leaders and said what they
thought the right discount thresholds
were. We put that into a GPT
that now a rep can go and just ask a
question. Any bizarre off-the-wall
question. I try to trick it all the time
and it comes back with these like
perfect crisp answers. So,
I mention that as like when I think
about it, there's such a progression.
Hey, help me refine. Hey, help me get
way faster. And then it's hey, maybe you
can actually replace someone and the net
net of that is we'll add more jobs next
year because I don't have to spend some
you know, the money to have somebody
doing deal desk work. I can hire an
engineer that's doing work to make
customers happy. So, for me, it's like
it's a total win-win. And where do you
stand on building versus buying because
there was this MIT study
that said 95% of businesses
aren't getting ROI on AI investment.
We've talked about a lot on the show,
but one of the interesting things that
they brought up was companies that tend
to build in-house are struggling
compared to companies that are taking
off the shelf and you've actually
already talked, you know, about,
and correct me if I'm wrong, two
different approaches. One using
off-the-shelf, something like Claude
Code, but on the other you've written
scripts on your own. So, where do you
stand here? Yeah, so
I think there's a few ways you can do
it. We could start like hey, we're going
to start with the most foundational.
We'll build our own models. We'll train
our own models and that's too expensive
for so many companies like ourselves. Or
we can say hey, we're going to tap
directly into those APIs.
Or we can say we're going to use Claude
Code to build faster. I view those as
sort of like levels of maturity. What
I've said we won't do at Netwrix is
look, I think AI is going to be so like
instrumental, critical to the future of
the company, like how we grow. And if I
believe that it's so critical, I can't
outsource all of that, right? I mean,
who are we if in five years, or two
years, we have this amazing,
incredible automation with AI, we're
moving faster. And what I mean by that
is it can't just be we're going to go
turn on the Salesforce Einstein thing or
we're going to go to you know, every one
of our vendors and check the box and pay
them an extra 50% to get their AI
module, right? Like, it's an extra 50%;
if we're lucky, it's only 50%. So, my point
is like I don't think you can like check
box your way to greatness and outsource
your AI to, you know, Salesforce or or
Palo Alto. It's great. I love these
companies, right? But we've got to build
in-house. So, it's a lot of first-party
work. It's leveraging APIs into
ChatGPT. It's leveraging Claude Code to
build what we want but build it faster.
So, I think that's really important for
us.
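The "tap directly into those APIs" level, applied to something like the deal-desk assistant described earlier, might look like the sketch below. The model endpoint is abstracted behind a `transport` callable so a fake can be injected; the names, SKUs, and discount threshold are all illustrative, not Netwrix's actual setup.

```python
import json
from typing import Callable

def make_deal_desk_assistant(transport: Callable[[dict], str],
                             price_list: dict,
                             discount_cap: float):
    """Build an ask() function for sales reps.

    `transport` stands in for the actual model API call (e.g. an HTTPS
    POST to a chat-completions endpoint); tests inject a fake. The
    system prompt carries the documentation, SKUs, and discount rules
    described in the interview.
    """
    system = (
        "You are a deal desk assistant. Price list: "
        + json.dumps(price_list)
        + f" Never approve discounts above {discount_cap:.0%}."
    )

    def ask(question: str) -> str:
        payload = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ]}
        return transport(payload)

    return ask
```

Keeping the transport separate is also what makes it cheap to swap providers later, which matters if pricing changes, as discussed later in the conversation.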
If I could answer why I think so
many of these fail, that 95%:
What we're finding with AI
is it lets everybody go so much faster.
Like I mean, I think when we started
out, we're like can we go 20% faster?
Maybe 30% faster. I think it's like 10x
faster that we can move. So, take the
example of one of our developers
building with Claude Code. They're going
so much faster, but the problem is
even a mediocre developer goes 10 times
faster. And I'd like to think, you know,
hey, we all want A players, but the fact
is you have you have a
mixed level of competency in an
organization and you give them all a
lightsaber or like this jetpack where
they can go super fast.
But if you're like you have a jetpack on
and you're like 1 degree off on your
azimuth, you're going to end up like a
100 miles off course. And what I mean by
that is if you if you take our great
developers, they're killing it, you
know, using Claude Code to build so much
faster and so much better and scalable.
But if you have someone who's not quite
sure, maybe they're more junior, maybe
they don't understand the architecture
where we're heading,
we've had some cases where we've ended up in
pretty bad places. Sort of like crimes
against technology, I call it, where
like this is not what we were trying to
build and we spent a week building this
like monstrosity over here that we have
to back off. So, what we've done is
we've just learned a lot. It's like it's
so important that everybody gets
aligned. And I tell our developers now,
it's so tempting to be like jump in
Claude Code and be like hey, build me an
app to do this. I want a new data
classification app. Um
we're saying time out. We're going to
spend like 2 weeks just understanding
the task: interviewing customers, talking
to experts in the field, like writing
better specs than we've ever written
before before we even start to use AI to
code because we don't want to end up 100
miles off course. So, I think back to
your question like 95% of these projects
failing, I think they're just like off
to the races and before they realize it,
like they're in a completely different
part of the map. So, you can build a
lot, but it's a scope issue. You're just
building the wrong stuff.
>> A scope issue. And I'll tell you, I think one of
the most interesting things too is um
you know, there's this theory of
constraints, right? That you might have
had a class on. We all heard about like
bottlenecks and constraints in a
process.
And for so many years I've been in
cybersecurity for 25 years. I've been in
products for a long time.
I came up on the product side. The
constraint was always your developers
writing code. And what that meant is you
could get away with being a little bit
sloppy and like product management, for
example. Maybe your specs were just sort
of wishy-washy.
Product marketing maybe could come in at
the end in the last few weeks before the
product is launched to make white papers
or do positioning. Um but what we did is
we just pulled out the biggest
constraint and now the developers can
fly and that exposes other bottlenecks
in the system. So, now you're like, wait
a minute. Our PMs used to be able to sit
with developers like we're kind of
looking for this and the screen should
sort of look like this and then they'd
work on it. A week later, they'd get
back together and say, well, let's see
what you got, you know, and no, that's
not what we had in mind. We needed to
shift course.
Now you can have a meeting in the
morning where the product manager is
bringing like, you know, they built a
prototype in Replit that's already
clickable. Like, skip Figma or Balsamiq
or any of those other things we used to
use to prototype. They're bringing like
a clickable Replit
demo or mock-up.
And then our engineers that afternoon
can have like an actual version working
connected to a database. My point is
like
that these constraints have like really
changed. Now we have to say, man, we got
to get a ton better at product
management and write way better specs.
We got to get better product marketing
and be in the room from day one. And so
I think it's fascinating how it's
changing software development by taking
out what used to be the biggest
constraint in building software and that
was, you know, the developer. One more
question about this. There's been a
little bit of a debate about whether AI
makes any employee better, and it's mainly
been had about software developers: does it
make an average software developer a 10X
developer? Like does it bring up the
floor or does it basically just add
rocket fuel to those who are already
good at what they're doing? What do you
think the answer is?
>> Much more the latter, really. I think
what we can do with AI development now
is those good developers are flying and
the mediocre ones, I don't mean to be mean,
but sort of the average software
developer, you know, below average if
you will, to my earlier point, like
they're going to be one or two degrees
off. They don't have the same vision and
and so they're going to move just as
fast. It'll be rocket fuel for them, but
you're not going to love where you end
up. And that's why I think you hear so
many people say, "Oh, you know, it's so
quick to get started. Claude Code can
build me 90% of an app in an hour."
And then it's like the next 10% takes a
month to get it right. It's because you
start flying without really knowing
where you're going and that's why I
think it just super powers the good
developers. I think the future here is
we're going to be paying good developers
a heck of a lot more.
You know, we'll probably spend less on
our R&D overall, but the good developers
will make a ton more.
And it's going to let us do things like
QA or documentation like almost
automatically. So on this question about
whether people should still go to school
to be a software developer, I'm hearing
that your answer is yes if you're going
to be darn good. Yeah, yeah, yeah.
And maybe hey, we should say that about
everything in life, right? Only go after
it if you want to be the best. But my
son asked me all the time, he's 16, he
he's thinking about
you know, going to college for computers
and he's like, "Dad, is there going
to be a job left? Like is AI coming for
the jobs?" And no, I think great
developers are going to be more in
demand and make even more money.
So we've talked a little bit about your
belief in AI. You need to put it into
practice in your company.
Um
and you're finding that there's been
some uses from the current capabilities.
I'm just curious where you think this is
heading because on the show we we have
AI skeptics and true believers.
Going basically from the biggest
skeptics to the biggest believers.
Where do you think this this technology
is going to go? Where do you stand on
that spectrum? You know, it's
fascinating to be in the seat of someone
running an enterprise software company,
a cybersecurity company because in the
industry and on your pod, you know,
there's a lot of conversation about like
doomers and accelerationists and like
this sort of macro picture. And
sometimes I feel like we're in the
trenches here trying to do what we can
to like go faster and get more
innovative all the time. So like we
bring, I think, a different perspective
to it, like actually
giving AI to developers and asking them to
do more with it. Um so no, look, it's
going to be a net good. We're all in on
it at Netwrix. It's making us faster,
it's making us better. We're getting to customer
requests so much quicker. So yeah,
it's going to be a huge net positive not
just for our industry, but like for our
customers and I think for cybersecurity,
too. Do you think it's going to get much
better than what it is today? Oh, so
much better. Look how much has changed
just in the you know, I mean two years
ago we thought it was cool that ChatGPT
could like write a limerick, you know,
about someone in the office or do some,
you know, funny, you know, maybe help me
write an email.
Now we're using it to build apps that
are like better, more scalable than we
ever have. And it's just improving so
fast. So absolutely, it's changing it's
changing everything about building
enterprise software like from writing
the spec to creating the mock-up to
building the software to writing the
unit tests. So yeah, I'm super
optimistic.
>> Okay, but as someone running a company
that is relying on this technology, do
you ever worry about the business of
these AI research houses? Because if I
was in your shoes, I would say, I'm
happy to be building on top of this
stuff, but I'm seeing all the money that
they're raising. Yeah. And you have to
make a lot of revenue to justify that
and eventually profit. So
so does that factor into your decision
at all in terms of like how much to rely
on these tools because one day maybe
they could go away or maybe they could
be 10X more expensive and then it might
change the calculation. Yeah, it's a
funny question because right now I feel
like we and everybody, all of us are
beneficiaries of, you know, venture
capitalists that are helping subsidize
our AI usage, really, right? I mean,
I think you're referring to the fact
that it often costs more to service
customers than you're bringing in
revenue. I mean these you know, the big
AI houses are not profitable now, but
we're certainly the beneficiary of it
now. So we'll make hay while the sun
shines. I think for us it's about making
sure that we're so efficient and
judicious in how we use AI that it's not
ridiculous and wasteful. And so
we monitor, and we see, like, some of our best
developers might be using, you know,
$1,000 in Anthropic credits a month, and
then we find somebody who's using like
$4,000 a month. Like, "Well, what are you
doing over there?" And so we're we're
working to corral and make sure that our
usage is responsible, that we protect
our customer data, that we use it at the
right pace, that sort of thing.
>> And then outside the organization: you actually
speak with lots of customers
buying cybersecurity.
That's another place where I really
believe that this
moment is going to change things and
we're going to talk about a number of
areas where generative AI in particular
could come into play in cybersecurity.
But the sense I get having spoken with
people in the industry is that
generative AI is shifting cybersecurity
in a way. I think cybersecurity used to
always be about protecting from the
outside threats. Right? Who is going to
try to phish my employees or hack into
my systems? What's exposed? Where could
the problems lie?
Now
companies have AI just kind of running
loose within their companies and we've
already heard so many wild use cases of
like somebody speaking with a
off-the-shelf chatbot and it surfaces
the CEO's emails because the data was
not protected in the right way.
And especially, you know, you talked
about the customer service use case.
You're speaking to customers, you have
your AI speaking to customers, maybe.
Maybe they're starting to give deals
they shouldn't or share data they
shouldn't. And so actually, you know,
again going back to this idea that
so many of these projects fail, seems
like
first things first, you got to make sure
that that the security of the data
within the organization is is
in good shape so you don't have the
inside threat along with the outside
threat. Is that a right read on it? Like,
how is AI changing it?
>> You said it well. Look, it's another threat
vector, as we would say. It's another
surface area for attack. Over the
last 25, 30 years we've dealt with you
know, insecurely written software and
buffer overflows and SQL injection on
websites and I mean, gosh,
just years ago it was Office macros
that are being abused and that sort of
thing. Now it's another surface area.
And just in the last few weeks we've
seen these really innovative attacks
where somebody will open up, you know,
something that's got instructions to an
LLM so it's processed within an LLM, but
you know, it's hidden. It's like white
text on a white background so the human
doesn't see it, but the LLM can process
it and and you know, exfiltrate
information by by calling out to an
outside source. So you're totally right.
It's like every CISO, every chief
information security officer, is now
thinking, what's the threat? I was
just speaking with a CISO of a defense
contractor who's like, "We need a time
out a little bit on the AI while we
really assess the threat." So look, I'm
net really optimistic on it. Our
industry is always changing. There's
always a new threat vector. AI is a new
one.
I think we're going to be fine. We're
going to figure out how to properly
secure it. We have to.
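A first line of defense against the hidden-instruction attacks just described is to scan documents before they ever reach the model. The sketch below is deliberately crude and hypothetical: real scanners inspect rendered styles and far richer patterns, but the shape of the check is the same.

```python
import re

# Heuristic patterns for hidden or injected instructions. White-on-white
# text and zero-size fonts are the tricks described in the interview;
# the phrase list is a tiny illustrative sample, not a complete filter.
HIDDEN_STYLE = re.compile(r"color:\s*#?fff|font-size:\s*0", re.I)
INJECTION_PHRASES = re.compile(
    r"ignore (all )?previous instructions|send .* to http", re.I
)

def flag_document(html: str) -> list[str]:
    """Return a list of findings; empty means nothing suspicious."""
    findings = []
    if HIDDEN_STYLE.search(html):
        findings.append("hidden-text styling")
    if INJECTION_PHRASES.search(html):
        findings.append("injection phrase")
    return findings
```

Flagged documents can then be quarantined or stripped before the LLM processes them, which addresses the exfiltration path Grady describes.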
That is true. Yeah.
One area of AI
security that I'm particularly
interested in is voice. Oh, yeah. I
think that voice has gotten so good. And
I'll say this with the disclaimer that I
have
licensed my voice out, and it reads you
my Big Technology stories on
certain apps out there.
Um
I'm a believer; I think that's good.
I'm happy; I think that's a constructive
use case.
But I do think that when you combine
the ability to like really like have
human-like conversations, like Turing
test passing conversations, and then be
able to layer a voice interface on top
of that that sounds human, sounds like
people we know. Yeah. I would be
freaking out, I guess, if I was in a
company and I knew that there was a
chance some company could take my CEO's,
let's say I'm a public company, take my
CEO's voice from an earnings call and
start calling people and getting
information from the company. How do you
protect from that?
>> It's a huge risk. By the way, is this
where we tell everyone that we're
AI-generated and this is all fake? You've
got to be careful about that, because
the truth is it's getting so hard to
to see that that like sometimes you're
just like, well, I don't know. Like even
the CEOs of Zoom and and Klarna are
doing earnings calls as as video
avatars. It's crazy, right? By the way,
this is real. Yeah, yeah, yeah. This
is flesh and blood. Reach out. Look at
that. There you go.
Um
No, you're totally right.
We've actually, I've worked with
companies that do a lot of these spoofs,
like they'll spoof their CEO's
voice and have them call out as, like,
you know, fake phishing
tests, or vishing or whatever they call
it now, to make employees aware
of how good it can be. Um yeah, it's
gotten so good. It's way past the Turing
test. I was
on vacation out west this summer, had
some like long, long drives through the
desert. My whole family fell asleep in
the car and I just like pulled up
ChatGPT in voice mode and we were like
talking about the, you know, hey, what's
that landmark over there? And where am
I?
And so my wife gives me a hard time
that, you know, she thinks I'm becoming
too friendly with AI. There's a South
Park episode about this.
I know the one you're talking about.
But
yeah, it's like I said, it's another new
threat vector. We're just not going
to be able to trust anything that we
see. We'll always have to verify. You
said that there was a company that you
know of or a few that have done testing.
>> Yeah, yeah. Is the voice AI more
effective at getting employees to fall
for these scams than tech savvy
>> Yeah, it is, until employees know to
be looking for it. And it's like it's
very much along generational lines. You
won't be surprised to hear this. I mean
it's
you know, my parents are kind of the baby
boomer generation, and if you talked to
my mom, they're a little bit
in awe of what's going on, like, did
you see that thing on Facebook, and I'll
be like, mom, that was fake, you know.
She's very discerning in other things
but sometimes that the AI stuff gets
her. Meanwhile I was playing something
for my kids the other day and they
immediately like that's fake AI voice.
It was a famous person saying something
like that's not real. I'm like how did
you know? You know but they're 16 18
years old like they're in tune with this
stuff. So you know
With cybersecurity, it feels like
every time I have a conversation about
it, it
feels more complex.
>> Yeah.
I'm just curious to hear from someone
who's working at it every day: why has
it become so hard? It just seems, and
maybe I'm being ridiculous here, it seems
like there are
some simple things that you can do. It
seems like once you find a way that
people attack, you just sort of close it
up. So,
for me, why is it so hard?
>> I'll give you
maybe the industry answer and then I'll
give you what I really think, where I'm
maybe a little cynical. But the real
answer is it is probably the only like
domain that we deal with professionally
where somebody's like actively trying to
exploit you all the time and they're
evolving their tactics and so the threat
landscape has changed and it has gotten
more complex. That's all very true right
we have to evolve defenses along with
it. My maybe more cynical take
is the industry has just gotten like
over complexified I think and there's so
much money in it now as someone who's
been going to like the RSA conference
out in San Francisco for decades
and I'm going to sound like a really old
person when I say this but it's just
it's it's changed so much there's so
much money in it now that you have
people who are buying products because
like they got to cuddle with a puppy in
a booth for 15 minutes or they got a
That seems like a reasonable reason to
buy
>> I agree like yeah
Yeah it's better than I got my picture
with a monster truck
>> most of my decisions. There's a puppy
involved. Yeah no the puppy could be a
good decision criteria. Yeah but I guess
if you're making cyber security
decisions it might not be the best.
>> Yeah, you know, you're trading your
info away for a can of IPA or
something at the booth happy hour. My
point is, like...
>> This actually happens at
cybersecurity conferences?
>> Yeah, I know,
I'm not exaggerating: people give their
PII for a beer.
>> Yeah absolutely. Yeah you know hey can I
scan your badge
>> I think I see the answer.
>> Yeah, so it's
it's just
and we've convinced everyone they need
new things. Look, you can be
successful building a security program
with just three things and I've been
saying this for a long time. It's do you
know what you have? Okay that could be
computers it could be servers it could
be cloud assets it could be identities
it could be files and data.
So do you know what you have?
Do you know how you want to handle that
thing? You and me might say policy do
you have a policy for it but it could be
as simple as like look sensitive data
should never
be put in a public share. Like that's
really simple right? So what do you
have? Do you have a policy for it and
are you enforcing that policy? Like it's
that simple. And there's a lot, you
know, there are other things, like
incident response if you are breached
what do you do but I'm just saying to
build a great foundation you just have
to do that and you can do that like
there are tools that will help you do
that I mean that's part of what we do at
our company but it doesn't have to be
overly complex. You don't need all these
like new four letter acronyms all the
time. So I don't mean to be cynical I
love this industry I love the people in
this industry I love the innovation here
but I think sometimes we've obscured it
like look there's only a few basic
things you need to do just know what you
have know how you want to treat it and
make sure you're handling it
correctly.
>> With AI, I think that's a
huge issue, right? Because
the governance side is really
underdeveloped right now.
>> Yeah, you're
exactly right, it's really
difficult. I mean, just understanding how
our employees are using AI requires,
like I said earlier like writing custom
scripts. So, it's getting better. I mean
it's incredible to see as we talked
earlier the pace of innovation with with
these AI companies so I
I know we'll get there but it's a little
wild west now.
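The three-part framework from a few answers back (know what you have, have a policy for it, enforce that policy) can be sketched as a minimal audit loop. The asset model and the single policy below are illustrative only, standing in for whatever inventory and rules a real program maintains.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One inventoried thing: a server, file share, identity, etc."""
    name: str
    kind: str
    public: bool
    sensitive: bool

def violates_policy(asset: Asset) -> bool:
    """The example policy from the interview: sensitive data must
    never sit in a public share."""
    return asset.sensitive and asset.public

def audit(inventory: list[Asset]) -> list[str]:
    """Step 1 is having `inventory` at all; step 2 is the policy
    predicate above; step 3 is this enforcement check."""
    return [a.name for a in inventory if violates_policy(a)]
```

The point of the sketch is that none of the three steps requires a four-letter acronym: an inventory, a predicate, and a loop.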
Good guys obviously have more
capabilities bad guys have more
capabilities so have you seen the
um
the sophistication of attacks increase
and the volume of attacks increase
thanks to generative AI? For sure you
know it's
at our company now I'm not directly
dealing with threats but I've worked at
companies where we were. I look back at
the things that we had to deal with just
a few years ago when we were doing
incident response and the attacks that I
see even directed at us now and it used
to be you know take a phishing email
it's the example that everybody can
relate to, right? We've all gotten
phishing emails. They used to be sort of
broken English, clunky, you look at the
URL you'd be like that's not real.
They've gotten so convincing now that,
as someone who's been doing this for a long
time I still have to do a triple take
and say is this thing real or not I find
myself now especially you know you have
to be really careful in my position so
I'll I'll call the CFO or the general
counsel and just say hey does this thing
look real like let me bounce it off you.
So, I don't know, we
all have to be a little more careful,
a little more cynical of what we
receive. Yeah I mean I just got an email
from a bank or a letter from a bank
being like we have some of your like
retirement savings from another employer
and Uh yeah. I called them and it sounds
pretty legit but I still don't want to
give them my social security number.
>> Yeah yeah.
>> maybe just keep that money.
Obviously it's not worth it. It's not a
right course to pursue but yeah it's one
of those things where it just helps to
trust but verify.
>> Absolutely, we got to do that.
>> So, within
organizations,
is AI at a point
now where it's already wreaking havoc if
you let it sort of loose and, you know,
let the modern-day agents sort of go
do their thing? Like, are there already
cases where they're accessing the
wrong information and sharing it that
you've seen, or is it a future problem?
>> No, it's a now problem for sure,
and, a
different example, but I've spoken with a
CISO recently like they had some data
exposed that shouldn't have been. You
know part of the challenges with these
AI models when you let them go in an
organization the best ones
and you know I won't name names but you
can pick some of the the ones you hear
about every day they'll respect your
permissions right so you only get to see
things that you have access to. The
challenge is
you don't know everything you have
access to, right? Somebody at one point
might have given you access, put you
in a group that had access to some
payroll data and that happened five
years ago I mean some of these
organizations we're seeing you know
people who have been in an organization 10, 20
years, and you accumulate permissions,
right? Now you have access to stuff you
didn't even know about; you wouldn't have known
to go to that particular OneDrive
folder and access it. But if you say, hey,
uh,
you know, ChatGPT, or
whatever, hey, LLM of choice, you know, show
me all the salary info I have.
seen actual real instances where it's
like, sure, here's the salary info from your
department when you're not supposed to
see it. And so you would not believe how
often this happens. Almost every CISO
I'm talking to now is is concerned about
giving LLMs access to data for that
reason not that the LLM is going to do
something wrong but it's going to let
people search and find things way more
powerfully and join information they
didn't even know they had access to. So
it's a it's a concern but I will go out
to say like you know we can't stop and I
don't say that because it's like
inevitable and we must just give in but
I remember the early days of the
internet being at a large Fortune 10
company where we said like we can't let
people got to block all the fantasy
football sites because everyone's going
to be wasting their time. We got to
block YouTube because people are going
to spend you know all day watching this.
My point is it was a very like knee jerk
reaction to a absolutely
transformational technology and that the
right reaction is hey we're going to
make a few mistakes along the way but we
got to let our people like get
comfortable with this and use it in
their daily workflows and I feel like
we're at that point with AI. We have to
be careful we don't want to be
ridiculous and clumsy and fall over
ourselves but we have to know there's as
as an industry there will be some bumps
along the way but but it's going to be
worth it.
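[Editor's note: Grady's scenario can be sketched in a few lines of Python. This is a hypothetical toy model, not any real assistant's API; the document names, group names, and `permitted_search` function are all invented for illustration. The point it demonstrates is his exact claim: the search strictly honors access-control lists, yet a stale group membership from years ago still lets a user surface payroll data.]

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    content: str
    allowed_groups: set  # group names that may read this document

@dataclass
class User:
    name: str
    groups: set = field(default_factory=set)

def permitted_search(user: User, query: str, docs: list) -> list:
    """Return only documents that match the query AND that the user's
    groups are allowed to read. This mirrors how an assistant can honor
    ACLs perfectly and still surface data the user forgot they could see."""
    return [
        d for d in docs
        if (d.allowed_groups & user.groups) and query.lower() in d.content.lower()
    ]

docs = [
    Document("q3_roadmap", "Product roadmap for Q3", {"engineering"}),
    # Stale ACL: a project group from years ago still unlocks payroll data.
    Document("payroll_2025", "Salary info for the whole department", {"finance-2019"}),
]

# Alice was added to 'finance-2019' long ago and forgot about it.
alice = User("alice", groups={"engineering", "finance-2019"})

print([d.name for d in permitted_search(alice, "salary", docs)])  # -> ['payroll_2025']
```

No permission was violated here: the assistant only returned what Alice's groups allow. The exposure comes from the forgotten grant, which is why audits of effective access, not the LLM itself, are the fix Grady is pointing at.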
>> So don't block the chatbots.
>> Can't block the chatbots. I think it's here to stay; I don't think we're going to stop them.
>> I had a funny thing happen to me a little while back where a PR agency accidentally shared a Google spreadsheet with me. It was basically their CRM: a list of hundreds of reporters and notes about their interactions with them, coded green, yellow, and red in terms of the relationship with the company.
>> Yeah, yeah.
>> I guess I got lucky to see it. Unlucky for them. I clicked on it, but imagine, I'm sure there are others that I've missed that my chatbot could go find.
>> Yeah, whether it's your Outlook Copilot or your Gmail. And again, it's not the fault of these companies. They're building really effective tech that honors your permissions, but now you're seeing things you didn't know you had.
>> Fascinating. I just want to end on a personal note.
>> Yeah.
>> I heard through the grapevine that you have an interest in one day maybe leaving tech and becoming a farmer, or blending the two interests. So talk about that.
>> Yeah, I'm fortunate to live on a farm now. It's my wife's grandparents' farm, so my kids are the fourth generation on it. I lease a lot of it out to a farmer who farms it, so I get to watch him on his tractor, thinking someday maybe I can be out there too. I love it. At first I thought it was a little unusual, like, why am I interested in farming? I love planting trees; I've planted thousands of them, and it's my hobby. I get to get my hands dirty. But I've talked to so many people in tech who feel similarly, right? I think it's that we all look at a computer screen all day, dealing with cybersecurity or building software, and sometimes it feels really great to just dig your hands into the dirt a little bit and touch real living things. So it's a fun change of pace for me. I was going to say you can't hack a cow, but I should say you can't hack a cow yet.
>> It's only a matter of time until those things are chipped, at the pace of change. Yeah, for sure.
I think it's a good point that you bring up, and a good note to end on: there's this meme of, you've got to go touch grass.
>> Yeah, yeah.
>> And you actually really should. It's a good practice to get outside and get away from the screen.
>> Yeah, we're lucky, all of us, to be in technology. It's such a great space, but you've got to have perspective sometimes, take your eyes away from the screen, and look up at the world. We live in an amazing place, right? You've got to take advantage of it.
Definitely. Well, Grady, can you tell us a little bit more about where people can find Netwrix and how they can learn more about the company?
>> Yeah, definitely. We're at netwrix.com. You can check it out, and check out our new brand that's just being launched. And look, like I said earlier, we help companies understand what they have and whether they're properly protecting it. What we do is pretty simple, but it's really powerful. We have about 14,000 customers who rely on us. So it's a lot of fun; it's a new place.
Grady Summers, CEO of Netwrix, thanks for coming in today.
>> Thanks so much.
>> Thank you, everybody, for watching, and we'll see you next time here on the channel.