Security in the Age of AI: Vanta CEO on Compliance and Risk

Channel: Alex Kantrowitz

Published at: 2025-04-10

YouTube video id: uwpRnpEG-a4

Source: https://www.youtube.com/watch?v=uwpRnpEG-a4

Let's talk about how AI and automation
are creating a new generation of
security risks and how the technology
itself might prove useful in fighting
these new threats. We're joined today by
Christina Cacioppo, the CEO and
co-founder of Vanta to dive into these
issues in an exclusive interview
presented by Vanta. Christina, great to
see you. Welcome to the show. Thanks so
much for having me. So, I want to ask
you about the security risks that we're
seeing from AI and automation. Now, so
much of the conversation around these
technologies is all about, you know, how
is it going to progress? Is it hitting a
wall? Uh, but I, as far as I understand,
the current technology today is actually
pretty powerful in and of itself and is
already upending a lot of the ways that
the internet works today. So, what do
you see the role of these technologies
uh in the online experience? Uh, and
then what are the risks? Yeah, I think
there's a ton of excitement about these
technologies. I mean, some of the way
I think about it is they're letting
builders create these magical
experiences that people have thought of
before but have never been able to do.
So specific example in Vant's context is
uh security questionnaires are just a
part of selling software to enterprises.
Basically you get this long spreadsheet
of questions you have to answer. Um
everyone has always wanted to build a
product that just answers the questions
correctly the first time. That wasn't
possible prior to some of these
foundation models, but now it is. And so
just an example of of kind of the
magical product experiences that can be
built now, but couldn't even 3 5 years
ago.
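The questionnaire-answering idea described here could be sketched very roughly as follows. This is an invented illustration, not Vanta's implementation: the questions, answers, and matching threshold are all made up, and a real system would use a foundation model rather than string similarity.

```python
from typing import Optional
import difflib

# Hypothetical library of previously approved questionnaire answers.
APPROVED_ANSWERS = {
    "Do you encrypt data at rest?":
        "Yes, all data stores use AES-256 encryption at rest.",
    "Do you have a vulnerability management program?":
        "Yes, vulnerabilities are triaged and patched on a defined SLA.",
}

def draft_answer(question: str, threshold: float = 0.6) -> Optional[str]:
    """Return the approved answer for the closest known question, or None."""
    best_q, best_score = None, 0.0
    for known_q in APPROVED_ANSWERS:
        score = difflib.SequenceMatcher(
            None, question.lower(), known_q.lower()).ratio()
        if score > best_score:
            best_q, best_score = known_q, score
    return APPROVED_ANSWERS[best_q] if best_score >= threshold else None

# A repeat of a known question is reused; an unrelated one gets no draft.
print(draft_answer("Do you encrypt data at rest?"))
print(draft_answer("What is your office address?"))  # None
```

The point of the sketch is only the shape of the workflow: answer from a vetted source where possible, and decline rather than guess when nothing matches.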
Okay. So you have your automation tools,
being able to fill out forms, being able
to basically take action on your behalf,
and I guess being grounded in really
reliable information. Uh so that's the
plus. Um, but then I imagine you also
have an army of bots that are going out
and maybe doing this for nefarious
purposes. Or even, um, if we're going to
have this increase in bot traffic on the
web, it seems like it's going to make
things more difficult for just, like,
normal companies going about their
day-to-day. So, um, we
talk about the opportunities, but what
do you see as the risks? Yeah, there
there definitely are risks. Um, the one
stat I found interesting is over half of
the Fortune 500 companies in their most
recent uh, annual financial reports
cited AI as a risk factor. Um, and so,
right, it's it's kind of prevalent. I
think I think about it in two ways. One,
just general technology deployment. AI
is a new technology. We're still
figuring out its parameters. Anytime
there's something new, there's a ton of
excitement, but there's also just a ton
of risk as we figure it out. So, there's
something just general there. You know,
there's a story of mobile 10 years ago,
maybe cloud 15 years ago. It's AI now.
Um, and then there's a specific piece of
these tools can create incredibly
accurate text and video that looks like
it's real but, you know, might not be.
And so there's a whole host of attacks
that are now, again, much easier. It's
sort of a different magical product
experience, one we might all find
less magical once we see the result.
Um, but those have also become much
easier as well.
Yeah. So what are these new attacks
like? Yeah. So I mean we see some of
these at Vanta too. There's a lot of
impersonation. Um, it is just easier
now. The old joke at Vanta was that if
you joined the company, you would get
three text messages from me asking you
to buy me gift cards, but things were
probably somewhat misspelled and you
probably wondered, you know, does the
CEO really need me to go out and buy
gift cards? Again, it was sort of this
attack that was easy to laugh about. Um,
that one still happens, but there's
versions of that that are much more
compelling. So, CrowdStrike recently
talked about how an executive at one of
their customers, I think it was the
CEO, was impersonated in this credible
way and, you know, asked employees, I
think over video, to go do something
for them. And so, again, you can see
some of these same attacks, but just
way more compelling.
Yeah, it's crazy that that's actually
happening. I mean, the gift card
attack, you know, we all know about
that. We generally know it's probably
not a good idea to go out and buy gift
cards just because your CEO asked; and
if you do, hand them over in person
rather than sending them off. But the
fact that we're now seeing video
impersonation, pretty convincing
impersonation, is wild. And, yeah, I
guess to be able to do video, you need
really good audio, and that stuff is
wild, what we're seeing today. Yeah, I
think you're
totally right about the, you know, audio
quality getting quite good, the video
quality quite good. And I think maybe
just taking a step back, one of the
things that, um, is so exciting about
this period is that the technology has
gotten so good it's opening up lots of
new use cases and products. So again,
one of those is maybe phishing somebody
else's
employees and trying to get their
information, which probably not a good
thing for society or markets or really
anyone. But another one is we're seeing
companies build like wholly interactive
experiences that again help them deliver
their products and services even better.
Um and do things you know for good in
the world rather than again try to try
to scam people out of their money. Um
but but these technologies can be kind
of used for both ways. I feel like this
is a lesson we've really learned over
the past 10 years of social media is
there's a bunch of great things that can
come out of new technologies. There's
also some things that are less great or
um less expected. Um, and I think we're
really seeing that sort of parable play
out in the age of AI as well. Yeah. And
you know, it's so interesting that you
bring up social media. I covered social
media for a long time and no one would
ever suggest that like the ways to
handle social media vulnerabilities was
just more social media or you know
different social media. I mean maybe
people people tried alternative social
media platforms but they were never cred
credible alternatives or credible
solutions to some of the problems that
we saw on social media. It's almost
like we needed the platforms to reform
themselves or to change uh to be able to
address those those issues. But with AI,
uh, very different because this
technology that's available to all the
people that we're talking about using it
for bad uses, uh, is also available to
people who are trying to head off
security concerns. Um, and also
available basically for people who are
trying to be good citizens on the
internet. And I think that is exactly
where Vanta sits, being able to help
them use these products. It would be
great if you could tell our viewers a
little bit about how you do it. Yeah.
So, Vanta was started under the dual
mandate of helping companies um
initially build out their security
program and over time understand it and
improve it and then uh that's all a
bunch of hard work, right? And get
credit for that hard work of building
their security program uh with the
market, with their customers, with
executives, with their board. And so,
part and parcel, we've always tied the
security work we help companies do to
again customer outcomes. And so the
first place we started doing this were
compliance reports. So we help
companies get SOC 2, ISO 27001, GDPR,
HIPAA, um, tons of standards like these.
We help with the security
questionnaires. We help security teams
demonstrate all of that hard work
they're doing. Um that helps customers
build trust and ultimately grows their
company's revenue.
And now are you using artificial
intelligence tools to be able to you
know put together these reports or to be
able to address some of the concerns
that we were discussing earlier on? We
are, in a bunch of different ways.
Um we started building AI in from the
ground up across our product. So
everything from helping a company decide
uh which security controls are best for
them given their maturity, given what
their customers are asking for. Um,
when there's a, you know, error or
vulnerability, how do you fix it? AI is
helpful for giving a first pass or a
suggestion for the team to take action
on. And then a lot of the process of
these audits
or requirements is just a lot of
documentation. Back to the security
questionnaires I talked about at the
beginning: you have to do the work, and
then you have to tell people about how
you did the work. When I talk to
security professionals, they usually
really like doing the work; that's why
they got into the job. They generally
don't like having to tell people about
the work in a bunch of different forms.
So, you know, an audit,
a screenshot, a security questionnaire,
and a conversation. And so, one of the
places we've seen AI be really effective
is again to take all that work that was
done, summarize it, and put it in the
right format. Um, so the teams can focus
on actually doing the work. Yeah. And
can you talk about how that happens? So
is it basically a generative AI model
that's monitoring uh the people as they
work and then being able to sort of
translate that activity into natural
language. Talk a little bit about how
that how they operate that way. So it
is definitely generative AI models, um,
kind of the foundation models plus some
post-training, and a ton of quality
improvements to make sure there's high
accuracy. Security compliance is a
place where I think there is less
tolerance for creativity and true
generation, and, you know, more desire
for accuracy. But basically, take these
models and observe not what people do
but the outputs. So, what have they
done? What was there before, and what
did they do? Look at the output rather
than the work itself. Then take those
outputs and say, okay, maybe you went
and, I don't know, changed a bunch of
configurations in your cloud
infrastructure in a way that's more
secure. Okay, we can take that new
configuration and turn it into a policy
document. So there's a standard kind of
document, written in sort of legalese
or compliance language, that describes
what's there. Take that configuration
and translate it to answers for the
security questionnaire. So the next
time you're asked about your cloud
infrastructure configuration, you've
got the answer there direct from, you
know, the reality of what's in the
system live. And then finally, take
that same information and map it to a
bunch of compliance frameworks. And so
if you're going through an audit or
going for a particular certification,
all that information is there, up to
date. And again, no one had to keep it
up to date or remember, okay, after I
went and changed all those
configurations, now I need to go update
the documentation. The documentation is
just there for you. Okay. Now I'm very
curious how you handle hallucinations
because uh I think that's it's it seems
like AI is getting better at not
hallucinating but as we see even like
the better models come out there are
still hallucination
errors that come in. And there was this
great story by Benedict Evans, the
analyst, about how he wants to use Deep
Research, OpenAI's research tool, but
if it doesn't get him to 100% accuracy,
it's sort of less useful for him as a
researcher. Now, for you and your
clients, if you have something that
hallucinates security questionnaire
answers, I would imagine that's going
to, you know, sort of put the entire
reason for that product's existence on
the rocks. So tell me a
little bit about how your team handles
hallucinations and if there's a way that
you've been able to get around them. I
don't think we're doing anything that
other leading AI companies aren't
doing, honestly, but a couple of
things: a bunch of post-training, a
bunch of prompting around, like, you
know, just do not hallucinate. There
are parameters inside some of these
models you can set where you can sort
of tell it how creative you want it to
be, and we generally say please don't
be creative at all. Um, and then we
just keep humans in the loop, on both
sides actually. There's the Vanta team
on the post-training side: we have this
golden data set, and we're constantly
checking how we're doing on it, from
the data we see (not customer data, but
data we generate). It's a metric we
keep track of. And then our customers
are also in the loop. So, um,
sometimes I describe this and people
think, oh, you just fill out the
questionnaire and then send it back to
the customer and no one sees it. And
that would be a magical experience if
these things worked, but given the
domain we're in, that is not a magical
experience; that's a very scary
experience. And so, you know, the way
it works is we'll try to answer all the
questions. We put it in front of the
end user and say, "Hey, what do you
think of these?" And we actually look
at that score. They can go change
things, they can go send it, they can
go about their day. And then our system
gets smarter, because we're learning
from the places where, you know, our
end user accepted an answer versus
rejected one. So, humans in the loop in
a couple of places, given the domain we
operate in.
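A toy version of those two loops, the internal golden-data-set metric and the end-user accept/reject signal, might look like this. Everything here, from the golden set to the toy "model," is invented for illustration and is not Vanta's actual system.

```python
# Invented illustration of the two human-in-the-loop mechanisms described
# above: scoring drafts against an internal golden data set, and logging
# end-user accept/reject decisions so the system can learn from them.
GOLDEN_SET = [
    {"question": "Do you encrypt data in transit?", "expected": "yes"},
    {"question": "Do you allow shared admin accounts?", "expected": "no"},
]

def golden_accuracy(model) -> float:
    """Fraction of golden-set questions the model answers as expected."""
    hits = sum(1 for item in GOLDEN_SET
               if model(item["question"]) == item["expected"])
    return hits / len(GOLDEN_SET)

feedback_log = []  # end users accepting or rejecting drafted answers

def record_review(question: str, drafted: str, accepted: bool) -> None:
    """Log whether the end user accepted a drafted answer."""
    feedback_log.append(
        {"question": question, "drafted": drafted, "accepted": accepted})

# A toy "model" that says yes only to encryption questions.
toy_model = lambda q: "yes" if "encrypt" in q else "no"
print(golden_accuracy(toy_model))  # 1.0 on this tiny golden set
record_review("Do you encrypt data in transit?", "yes", accepted=True)
```

The design point is that the accuracy number comes from data the vendor generates itself, while the feedback log captures real user judgments without exposing customer data to training.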
Well, you just said something that I've
never heard before, and I'm
pretty fascinated by it. You can tell
the machine not to hallucinate and it
kind of knows what you're saying like be
less creative and it says okay, I'll
stick to for instance the spreadsheets.
That's really effective? It is
reasonably effective. It's not 100%
failsafe, but it's pretty good. And
again, it's
I think sort of a nice customization
that some of the foundation models have.
Um, I think we'll see more stuff like
this over time. Honestly, I'm just
guessing, but as a product person,
right, it helps them serve a variety of
use cases. Like, if you,
know, again, if you're in security or
compliance or legal, you like don't want
all that much creativity about the
facts. Um versus if you're like a design
tool, you actually want a ton of
creativity. And so having a dial, um, I
think just makes these models work
better for more use cases. That's pretty
cool. That's why we love speaking with
people who are actually building the
stuff on the show and not just the
people philosophizing about it, because
they're seeing how this works in
action, and it's great to be able to
bring that to viewers. So you also
mentioned that the technology can help
people outline or build out their
security plan. Can you talk a little
bit about how that works?
Yeah. So basically I think it's this
again it's this experience that we can
now build that actually I wanted to
build in 2017, 2018, when Vanta was just
getting started but it was it was a bit
too hard but basically I think the
experience you want is you have some
combination of like hey here are the
best practices for a company of this
size. Maybe we're a 2,000 person company
maybe we're a 20 person company but like
here's some general guidance on best
practices. We used to call this like
don't be a doofus right? You just want
you want to be doing like generally the
right stuff. Yep. And then you want that
matched with like specifics about your
company. So again, maybe you're 20
people, not 2,000. Maybe you're fully
remote. Maybe you have, I don't know,
all your people in the, you know, spread
across the world and there's no offices.
So you don't want to be asked about key
carding into the office, whatever it is.
But you kind of want to combine, right,
best practices and company specific
things. And that is what the best
security and compliance professionals
do. they kind of know, they know the
ground rules, they know the regulations,
best practices, and then they adapt them
to to the specific company. Um, and I
think these models are kind of a really
interesting way to do that at scale.
In early Vanta, we built in some rules
and heuristics, but they weren't
totally what you wanted. Better than
nothing, but it wasn't like a 10 out of
10 experience. Um, you know,
again, and if you have an individual
person on the call doing that setup and
scoping, you get a great experience, but
you have to do the call. You need the
people, you have to do a lot of these
calls. And so these models kind of let
you again they understand the best
practices. You can get them to
understand the regulations and then they
can ask you some questions about um your
company or you can tell them about your
company and they can kind of merge all
of that into a set of okay here's what
you should do for your company given
stage, maturity, and your goals. Um,
and we're now able to build that in a
way that, I guess, six or seven years
ago we weren't.
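That merge of general best practices with company specifics can be sketched as a tiny rule, in the spirit of the early rules and heuristics mentioned above. The control names and rules below are invented for the example, not Vanta's actual logic.

```python
# Invented sketch: start from a best-practice baseline, then adapt it to
# company-specific facts, e.g. dropping office badge controls for a fully
# remote company. Control names are made up.
BASELINE = {"mfa", "access_reviews", "endpoint_encryption", "badge_access"}

def scope_controls(company: dict) -> set:
    """Adapt the baseline control set to a company's specifics."""
    controls = set(BASELINE)
    if company.get("fully_remote"):
        # No offices, so no key-carding / badge-access questions.
        controls.discard("badge_access")
    return controls

remote_startup = {"fully_remote": True, "headcount": 20}
print(sorted(scope_controls(remote_startup)))
# ['access_reviews', 'endpoint_encryption', 'mfa']
```

A model-driven version would replace the hard-coded rule with questions the model asks about the company, but the output shape, a baseline adapted to context, is the same.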
Yeah. Yeah, it is amazing what all the
new technology is enabling uh companies
to do. Yes, I guess there's the one side
of it which is that it's kind of a gift
for bad actors. But if you're trying to
do things the right way, now it does
seem like there's a real chance that
you can actually do it much more easily
and effectively by relying on these
models, which is great. Now,
you're working in a space that's always
changing. Uh and of course the AI world
is always changing no matter where you
are but you're also working with
companies to help them remain compliant
with let's say AI regulations and that
is something that is just moving at an
extremely fast pace uh in places like
the US where we know that AI is being
talked about within the White House and
especially Europe, where there's this
new EU AI Act. So I want to hear your
perspective on the impact of
regulations like the EU AI Act, what
it's doing for your clients, and then
sort of how you
have to adapt as well. Yeah, because
this is our business, I think we are
paying much closer attention to a lot
of these than many of our customers.
It's not to say our customers don't
care; they definitely care. It's just
that they have a separate business,
right?
And they want to do, you know, do the
right thing per these regulations. But
right now, a lot of these regulations
are still a bit up in the air, right?
Nothing has entirely landed. Nothing has
been implemented. There's uh very few
kind of implementation guidelines. Um,
you know, the US's position on the issue
has changed with the new administration.
And so, I think a lot of our customers
are in a like, okay, we're sort of
paying attention, but we're really
waiting for clarity, and then when it is
clear what we need to do, given the
markets we're operating in, we will
absolutely do it. For us at Vanta, we're
paying closer attention. And I think
honestly have a bit of the almost luxury
of of being able to because it's our
business. Um, and I think what we're
seeing is lawmakers and policymakers
still trying to work through some of
the implications here. I think
everyone, you know, has a high-level
goal of: we want to protect data, we
want to do the right thing, and we
don't want to stifle innovation, and
they need to balance those two sides of
the scale. Um,
what else do I think? I think um it it
does seem like Europe will take a
stronger stance here than the US
especially especially within the new
administration. Um I think we've all
learned from the implementation of the
GDPR over the past seven years or so,
right? And seen um I think one of the
primary critiques there is that the
regulation is so open-ended that it's
actually quite hard to figure out what
you need to do. Um and again I think
that probably came from good intent. It
came from a desire for flexibility, for
not stifling innovation. But where it
sort of ended up is there's just a lot
of uncertainty and it's hard to move
through that. Um, and while you know the
European courts will clarify that over
time, it's just a lot of risk to ask
companies to take on. So when we think
about these new AI regulations, I think
we'll probably see a little bit more
guidance than with the GDPR in some
ways, but again, still trying not to be
so prescriptive as to, you know, be out
of date as soon as the ink on the paper
is dry, proverbially. Yeah. What
could be coming with the new AI
regulations? Any sense as to what they
might touch?
Ooh, um, if you force me to bet, I
think a lot of it will focus on data
security. So it'll be kind of under
the, you know, banner of AI, but it'll
probably be a lot of the things we've
talked about with data security. And I
think with that, you know, think
through the best practices of
understanding, as a company or a
service provider, what data you're
collecting, why, and where it goes when
you send it to a third party. Um,
one analogy a security professional gave
me uh years ago was you should think of
data um as sort of like toxic waste. Uh,
you have it, you might need it, but you
want to keep it contained and if it
starts leaking out, it's very hard to um
kind of clean that up. Uh, and again,
you know, someone said this to me years
ago, but I think it's true.
So I think you're going to see these
regulations try to like fence in that
data much much more so than um again we
were doing 10 years ago as software
builders. Yeah, it is interesting
because with AI you go from sort of
more deterministic software, where you
run a query and it hits one spreadsheet
and
then gives you an output. And with large
language models it's probabilistic. So
you have this whole wealth of data that
you're touching and then you're trying
to spit out an output, but you're never
quite sure what it's going to go into
and what it's going to share, even if
you have rules. And I imagine that's
going to um produce some very
interesting headaches for people. You're
nodding your head. I'm curious what you
think. Yeah. I also think it's just a
different way of building software. One
thing we've noticed internally is,
when it's deterministic, you have a
system and it generally runs; you do
the monitoring and observability, but
your system is kind of your system
unless, I don't know, something like an
input to it changes. With AI, that's
just not true. You can not change the
code and still get very different
answers. And so it's almost a different
mentality on building. When it's
deterministic, you ship it and you can
kind of move on. You want to keep an
eye on it, but it'll keep doing
whatever it did when you first shipped
it. AI systems will do different things
at different points whether or not you
changed anything about them. So you
have to keep much closer track of the
quality. It's just different. It's a
different way of thinking about
software development and
maintainability and all of that.
Definitely. Can you bring us
into, like, the head of one of your
clients? You don't have to name the
client, but how do they prepare? How do
they deal with all this stuff? I mean,
they have to watch out for this
regulation; they're already probably
doing GDPR compliance and a bunch of
other compliance that we've touched on
at this point. Is it just constantly
monitoring for what might come next?
What is it like in the life of your
clients trying
to keep up with this stuff? Yeah, it's
really hard. I mean, our customers range
from like CISOs of you know, large
global organizations to like a first
security or compliance hire. So someone
who's, like, trying to do it all at a,
you know, small to medium-sized
company, to a founder who's just
getting started, and
you know each of those folks kind of
thinks about or like does different
things with their day but I think all of
them are just trying to figure out like
what's going on what do we have to do
how do we keep track of all this work
and how do we not let it slow the
company down or our customers and
revenue down and so I think over time
you know people try to find ways to
unify all these concerns. There's a lot
of um uh sort of building or using
off-the-shelf frameworks for all the
things you have to keep track of and
then that at least gives like a
structured way to think about it all. Um
I think the folks who have you know kind
of larger company experience tend to do
that. I think founders tend to be
more um urgency based. I say this with a
smile, as a founder myself. Um, but
I think it's kind of this dual mandate
of like what's going to happen next or
what could happen, how bad is that? And
then how do I again not slow the company
down, and hopefully even, you know,
support the growth, versus being the
"no" person or the "well, you can't do
this yet" person,
right? And in the broader tech industry
there's a debate about the wisdom of all
these regulations. You know, we have,
like you mentioned, GDPR, which is
something people debate about. There's
a lot of security compliance and
regulation, which I think is probably
more universally acclaimed. And now we
might have AI regulation that's coming
You see the impact of these regulations
firsthand, and as you mentioned, the
security professionals are trying to
work within their organizations to, you
know, make sure that complying with
them doesn't slow companies down.
So, just on a whole, do you think that
all these tech regulations are are
worthwhile?
I think so. There's a, if you're, uh, if
you're familiar, there's this great web
comic called XKCD. It's like 15 20 years
old. Stick figures. Okay. Yeah. You
know, XKCD. So, of course, there's an
XKCD about standards. Where basically,
you know, you need a standard for
something, and the old ones don't work,
so the answer is in fact to make an
18th standard; it is never to go fix
the other ones. In true XKCD fashion, I
think this is in fact how the world
works, whether or not it's how the
world should work. Um, people are
constantly
coming out with new standards. There's
always a reason for them, right? There
is a real problem they're trying to
address, but it does just make the
proliferation really hard. Um, I think
some of the more effective efforts in
the past couple of years have really
focused on trying to create like an
umbrella over multiple standards or at
least a bunch of mapping. So, it's like,
okay, fine, you're doing the 18th, but
here's how it relates to, you know, 10
of the other 17. Um, so it's not like
you're asking for 100% net new work
because that's just that's just really
tough for everyone.
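That mapping idea, relating a new standard's requirements back to controls already evidenced under existing frameworks so that only the net-new work surfaces, could be sketched like this. All the control IDs here are invented for the illustration.

```python
# Invented illustration of cross-framework mapping: requirements of a new
# standard are checked against controls already satisfied elsewhere, so a
# team only sees the genuinely net-new work.
NEW_STANDARD = {"encrypt_at_rest", "incident_response", "ai_model_inventory"}

# Controls already evidenced under existing frameworks (e.g. SOC 2, ISO 27001).
ALREADY_SATISFIED = {"encrypt_at_rest", "incident_response", "access_reviews"}

def net_new_work(new_standard: set, satisfied: set) -> set:
    """Requirements of the new standard with no existing mapped control."""
    return new_standard - satisfied

print(sorted(net_new_work(NEW_STANDARD, ALREADY_SATISFIED)))
# ['ai_model_inventory']
```

The umbrella approach described above is essentially this set difference done at scale: the more of the 18th standard maps onto the other 17, the less net-new work anyone is asked to do.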
Definitely. Okay. I don't want to leave
here without hearing a little bit more
about the uh Vanta founding story and
you know where you think the tech
business is going today. You co-founded
it. You have a very interesting story,
and you're right in
the thick of things with a growing
company um that's working with other
growing companies. So why don't we end
with that part? For sure. Um, so I
started Vanta several years ago. The
joke that's not a joke about Vanta is:
I think if you want to start a security
company, you should start a compliance
company. The founding thesis was that
security on the internet is
increasingly important; we'll be more
conscious of it in the future, not
less. It's also something that has
real-world implications, even though
internet security is, again, just kind
of online. But one of the reasons I
think getting companies to have better
security, quote unquote, is hard is
because that can sound like a
nice-to-have, or it's a nice-to-have
until it's too late. Um, I think
compliance, in its best form, unlocks
new markets. It
unlocks new revenue when you get new
certifications. You know, you can sell
to larger customers, you can sell to
Europeans, you can sell to healthcare.
Um, and so it was sort of a way to say,
hey, security team, you're doing all
this really good work. How do we tie
that to direct business value? So, you
get credit for that, you get more
resourcing, you can do more of that good
work. Um and so uh that that was kind of
like the thesis behind the company was
again compliance uh it's kind of
misunderstood but it's this great way to
tie the security team's work to real
revenue and real business impact. Um,
it was a nice idea on, you know, a
piece of paper. It turns out it's
working quite well for us now. So the
company today: we announced 10,000
customers around the world, and we have
been growing revenue at quite a good
rate. Nice. And if people
want to learn more about working with
Vanta and Vanta itself, where do they
go? Vanta.com. We are always open for
business on the internet. Easy enough:
vanta.com.
Christina, great to speak with you.
Thank you again for teaching us about
what it's like putting this technology
into play in a practical way and also
some of the things we should all be
paying attention to in terms of the
downside risk and how to mitigate that.
So, great having you on the show. Thanks
again for being here. Thanks so much for
having me. All right, everybody. Thank
you for watching. We'll be back again
with another interview soon and we will
see you