Is Generative AI a Cybersecurity Disaster Waiting to Happen? — With Yinon Costica

Channel: Alex Kantrowitz

Published at: 2025-09-23

YouTube video id: lATVlT5F-P0

Source: https://www.youtube.com/watch?v=lATVlT5F-P0

What type of novel security threats are emerging as AI advances? Let's find out with Wiz co-founder Yinon Costica, who is here in studio to speak with us about what's happening. Yinon, great to see you. Welcome to the show.
>> Thank you, Alex.
>> All right, so AI is doing amazing things. It is producing lines and lines of code for engineers and helping people build things faster than they ever could before. The other side of it is, I imagine, it's helping bad guys produce lines and lines of code and attack faster than they ever could before. Now, you are the co-founder of Wiz, which is in the middle of a sale to Google for $32 billion.
>> Correct.
>> Okay. So you're the perfect person to discuss this, because Wiz is a cybersecurity company, and we've never had a cybersecurity expert like you on to talk about what's happening, especially as generative AI rises. So just give us a little bit of a state of play here in terms of what this explosion of the ability to code has done to cybersecurity.
>> Yeah, it's interesting. I think the ability to code is just one aspect of AI. When we think about AI as a whole, first I'm thinking about a whole new stack that is being created. We are now in an era that is the big bang of technologies, and we are reinventing a whole array of capabilities and technologies that are being brought into play, whether it's the prompts, the models, the infrastructure, the platforms, and they're all playing together in order to allow customers to leverage AI. AI can be used, let's say, by an employee, like a ChatGPT query. It can be part of a SaaS product, like Cursor or GitHub Copilot. And it can be your own developed AI, where as an enterprise you're starting to develop applications. All of these are leveraging these new technologies. Now, as with any new technology, it's based on software, and software by itself can obviously have vulnerabilities. So when we think about AI, first we need to understand that it's code, and code has vulnerabilities like any other software we have shipped before. And it's interesting: just two weeks ago there was Pwn2Own. You know Pwn2Own? It's an amazing event where they bring together the best researchers, and this year, for the first time, they had an AI category. What does an AI category mean? They're basically running a contest to find vulnerabilities in certain technologies, and the more impactful the vulnerability is, the bigger the bounty you get. So this time, for the first time in this Pwn2Own event, there was the AI category, and six technologies were presented. Out of these six technologies, four were actually researched and found to be vulnerable to what we call the highest-impact vulnerability, which is remote code execution, RCE, which means you can do anything with that technology. The learning we take from this is that AI is very new as software, and the fundamentals apply. It can be vulnerable, and you can actually use it to run your own code on it, like any other technology and software we've shipped before. So that's the first layer.
>> But before you move to the second layer, I just want to make sure I'm understanding. This was a research competition, and there were six technologies used to build AI applications, and of those six, four were so vulnerable that a bad actor could use those vulnerabilities to remote in and run that software any way they wanted?
>> Correct. There were six technologies that are used to build AI applications, like NVIDIA's tooling, like PostgreSQL, like Redis. These are common technologies that are used to build these applications. Out of the six, four actually had critical vulnerabilities of the highest severity. And by the way, AI is where the most vulnerabilities are being disclosed right now. Our research team itself competed in this Pwn2Own contest and won first place. But we need to understand that this is an area of active research. It's a new technology, hence it has a lot of maturing to do in order to get to the level of, let's say, trust in the software.
>> So it wasn't just the output of an AI-enabled technology that was vulnerable. It was the actual building tools that are used to build AI itself that were vulnerable. All these companies, all these engineers who are relying on artificial intelligence tools, because the tools are so new, they may not know it. But what you're claiming is that bad actors could basically be hacking into the code they are writing with these tools, and then not just controlling the tools themselves but the outputs as well.
>> Exactly. And this is what we see. It's scary. It's a new technology stack. It's being built, it's maturing over time. We're learning about it, we're securing it, we're improving it. But we also need to remember that, as with any technology, this is new software, and it's now out there being tested by pentesters, but also by threat actors. So that's one: it's a new technology stack. And that's the most important thing to understand about AI. It's software, like any other software.
>> Five minutes in, you've already told me about something that I didn't even anticipate coming in would be a problem, which is that it's not just the code. It's the foundational tools that are used to output it.
>> You're bringing in new software.
>> Okay. And is the second thing that you're about to go on to the unreliability of AI-produced code itself?
>> Not even that. First, we're going to look at the infrastructure, right? Because in the end, you're running these new tools, but they're running on top of infrastructure like any other application. You have identities, workloads, you're using the basic components. You are storing your training data sets in buckets that can now be publicly exposed. You're using identities that can be overly permissive. You're using VMs or containers that can also be compromised or misconfigured.
>> So just to translate for a less technical audience: people are building AI programs, and then they are relying on the cloud, correct? All the tools that are used to support basically anything that runs online, software that runs online, and there could be vulnerabilities there as well.
>> Exactly. Vulnerabilities, misconfigurations. In fact, you have a program and it's running off of a cloud system, but you're exposing a huge part of the way that your program runs.
>> Yeah. Without knowing it.
>> Exactly. So if we look at incidents around, let's say, AI applications: we had an incident where a very large software provider exposed a bucket with all of their training data sets in there, a lot of sensitive data, and they didn't intend to expose it. This is a very basic security issue that we have experienced in the past decade, now applied to AI. So again, when we think about AI, it's not all new. It's new software on existing infrastructure, and when we think about securing it, the basics apply, and it's very important to remember that. The fundamentals apply: patching vulnerabilities, securing configurations, managing identities, and so on. So this is the second layer that we think about, the infrastructure.
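To make the exposed-bucket example concrete, here is a minimal sketch, assuming an AWS account and the boto3 SDK, of what proactively checking for publicly readable S3 buckets can look like. This is an illustration of the misconfiguration class being described, not Wiz's product logic.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_looks_public(bucket: str) -> bool:
    """Rough check: no bucket-level public access block,
    plus an ACL grant to a public group."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
        if all(cfg.values()):
            return False  # all four public-access blocks are enabled
    except ClientError:
        pass  # no block configured at all; keep checking the ACL

    acl = s3.get_bucket_acl(Bucket=bucket)
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GROUP_URIS
        for grant in acl["Grants"]
    )

for b in s3.list_buckets()["Buckets"]:
    if bucket_looks_public(b["Name"]):
        print("potentially public bucket:", b["Name"])
```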
>> Okay. And can I ask one more question about that? What I imagine a lot of organizations are doing right now is, when they want to build personalized AI tools for their company, they're saying: we want to build something specific to us, that does what we do, that can maybe replace our employees or augment our employees. Let's just download everything from the organization and we'll throw it into the bot. And they've been promised by some API provider that the bot will not use that data for training, or it won't spew it out elsewhere. But the thing that they miss is that as they download all this important, secure information from their company, they might store it somewhere. Because where else are you going to have the treasure trove, this motherlode of data, other than when you're starting to train a bot specified for your company?
>> Exactly. So if I'm a threat actor, why wouldn't I use the good old techniques I'm already using to expose buckets, databases, misconfigurations, in order to exfiltrate this precious data? Why would I go through complicated AI prompts and so on if I can go directly at the infrastructure? So that's a second layer we have to be mindful of, and these are all best practices that we already apply and follow, but they apply the same way to our AI applications. And actually, when we look at AI-related incidents, the majority of them is within this layer: using the infrastructure in an insecure manner that allows threat actors to just do what they were used to doing in the cloud in the past.
>> Okay. And now let's circle back to the question I asked in the beginning, because we've been building up to it, which is: how vulnerable is AI-written code?
>> So AI-written code is interesting, because yes, it may be more vulnerable if we haven't instructed it to be secure. As we use AI to build applications, we need to instruct it not only in what we want the application to do; we need to instruct it in the way we want it to be secured, right? Apply least privilege, remove data if it's not in use. There are all of these security best practices that we apply today if a human is developing it and we review it. But with AI, we now need to specify them the same way. As an example, our research team has released a rule set that you can feed into an AI code generator, and this rule set will guide the AI to build more secure code than if you didn't. And this is just one aspect. But the more interesting thing around securing code that is generated by AI is what happens if you actually need to fix it, and who is the owner of this code. There is a very interesting question that arises, because if I'm the developer of the code and now there is a vulnerability, or someone reported a security issue: I wrote the code, I know it by heart, and I had someone review it. But let's say I vibe-coded my entire application. It's funny, because in the first post introducing vibe coding, the person who posted it said you can forget that the code even exists. But no, you cannot. The code is vulnerable, and now, if something happens, you need to go back to it and fix it. But you need to know the code to do that. So there is a broader question of responsibility around code that is generated through AI, and who is going to go back and fix it if something happens. By the way, it's not only security; it's also reliability, availability, scale. How do we assure that we have the proper capability to maintain the software that we have shipped, across security, availability, and reliability, in the long term?
>> Okay. So I think what you're saying is basically that AI can build pretty secure code if you introduce the instructions, the protocols for building secure code, into the prompt.
>> Correct.
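To illustrate the rule-set idea he mentions, here is a minimal sketch of prepending security instructions to a code-generation prompt. The rules text below is invented for illustration; it is not the actual rules file Wiz released.

```python
# Illustrative security rules, assumed for this sketch -- not the
# actual rule set Wiz's research team published.
SECURITY_RULES = """When generating code, always:
- Apply least privilege to any role, token, or database account you create.
- Parameterize all SQL queries; never concatenate untrusted input.
- Load secrets from environment variables or a secret manager, never literals.
- Validate and bound all untrusted input before using it.
"""

def secure_codegen_prompt(task: str) -> str:
    """Prepend the rule set so the model is instructed on *how* to
    build securely, not just *what* to build."""
    return f"{SECURITY_RULES}\nTask: {task}"

print(secure_codegen_prompt("Build a login endpoint in Flask."))
```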
>> That seems good. But then the other side of it is that what we're seeing now is engineers starting to be removed from the code that they are writing, because oftentimes they are just shepherding that code through a process. They're vibing it, and the AI is doing the rest. Are we already seeing cybersecurity problems within companies whose developers have just vibe-coded or AI-coded applications?
>> Yeah, there are actually very well-known examples of people who have posted that they vibe-coded an application and then it got hacked hours or days after, and now they don't know how to recover, because they didn't have the skill. So the way I'm looking at it is that vibe coding is a great way to accelerate, but it doesn't remove you from the responsibility of actually knowing your code, being able to address issues within the code, and guiding the AI further into the maintenance process as we go and use the application and mature the application. So this is, I think, a maturity thing we need to go through. And another aspect is when we think about how an agent architecture would work, because you're talking about only one role, which is the developer. But if we are fast-forwarding agentic concepts into the future, why wouldn't you have a security reviewer that is also an agent? So you commit code, you're developing your code, and next you should have a security review of your code, but also performed by an agent that is minded for security, with all the security best-practice guidelines we have provided it. And someone should maybe look at the architecture, and someone should look at data privacy. And again, it doesn't exist today yet, but as we fast-forward, AI doesn't mean it's only doing the coding. It can do other things in the development life cycle that we can rely on.
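A hypothetical sketch of the agent pipeline being described: a coding step followed by a security-review step, each a separate model call. The `llm` callable is an assumed stand-in for any chat-completion client, not a specific API.

```python
from typing import Callable

# Assumed stand-in: any function that takes a prompt and returns text.
LLM = Callable[[str], str]

REVIEWER_PROMPT = (
    "You are a security reviewer. Inspect the code below for injection "
    "flaws, missing authentication, overly broad permissions, and "
    "hardcoded secrets. Reply APPROVE or list your findings."
)

def develop_and_review(task: str, llm: LLM) -> str:
    """Developer agent writes code; a security-minded agent reviews it;
    findings are looped back to the developer agent once."""
    code = llm(f"Write code for this task:\n{task}")
    review = llm(f"{REVIEWER_PROMPT}\n\n{code}")
    if "APPROVE" not in review:
        code = llm(f"Fix these security findings:\n{review}\n\nCode:\n{code}")
    return code
```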
>> But this is scaring me even more, because I thought, okay, people are going to vibe code. By the way, vibe coding is coding just by prompt. So you say, "Build me this application," and it builds an application, as opposed to writing the code yourself. I thought, okay, so now we'll have defined roles: if vibe coding becomes a thing, people will vibe-code applications, and the role of a developer will be to audit, monitor, look for vulnerabilities, and address them. So their expertise will not be actually building the things; it will be securing them, and maybe improving them in some ways the bots can't handle. But what you're saying now is we're looking towards a future where there are going to be agents, like another AI bot, whose role is to do that thing, to secure the code.
>> Exactly.
>> Okay. So then isn't that a double or triple risk? Because you're now going to have an AI bot whose core competency is to build, and an AI bot whose core competency is to secure. So if something goes wrong now, we're definitely not going to have a human with the skills to be able to diagnose and address it.
>> So the two don't actually conflict with each other. We can accelerate, or automate more, but it doesn't remove the need to have, let's say, a human in the loop whom we can hold accountable for the details when something happens. These are two separate questions, and that's why, when we started, I said vibe coding can be improved. It can be bad, but it can also be improved in various ways. But it still doesn't remove the actual need to say: who is the owner of the code, what is the operating model when something happens, and if AI cannot fix an issue, who does, and who owns this over time? And this is maybe the biggest challenge when I look at it. Technology will continue to accelerate and help us build the applications, but we still have to figure out the operating model, so that when something happens, we know who to turn to and we know we will be able to fix it. Think about it: you're an enterprise, you're developing an application. Forget about security; take basic reliability. The application goes down for whatever reason. You cannot just wait and rely on "maybe AI will figure it out." Maybe it doesn't, right?
>> Seems like a bad strategy.
>> So we need to have accountability in the end. Okay, so how do we maintain this? If I'm a developer and I shipped an application, it doesn't matter whether I built it myself or through AI; if it goes down tomorrow and I'm not able to fix it and get it up again, I'm not doing my job right. So this is a big question: how do we continue to own the application while leveraging an increasing amount of automation?
>> Okay, so we've talked about the liabilities in the tools you use to build AI, liabilities in the infrastructure used to store key information for AI, and now liabilities in AI code, and vibe coding in particular. But there's another side to this. I love the fact that people are able to build things just by a prompt now, or maybe a little more sophisticated: they prompt, they code a little bit, they build off each other, and now something that might have taken a team of ten can be done by one person. But, you know, we've called them threat actors; I call them bad actors. Basically, bad guys looking to do damage or looking to hack into computer programs. They have all these tools at their disposal as well. So are we already seeing them putting those tools to use in an attempt to hack into software? And has that increased the sophistication and the level of hacking that we're seeing already?
>> So there are several ways in which threat actors can use AI. The first is just hacking into an AI application, right? And if you think about how do I hack into an AI application and why it's appealing, there is what's called the trifecta of, let's say, risk factors in AI. One, it's exposed. Second, it has access to private data. Third, it takes untrusted content, like the chat queries that you send to it; the prompt is also exposed to the threat actor. So one, you have the layer of directly aiming at the AI application and trying to extract sensitive data via the prompt. That's one. The second thing you can do is automate and iterate more on what is already known. This is one thing we see with AI: it's really good at automating repetitive tasks. So instead of trying one type of attack or one type of vulnerability, I can automate and iterate through an AI, a purpose-built application, to try and test many more options, right? And then you have the third layer: am I able to discover new types of threats using AI? Am I able to do vulnerability research to find new vulnerabilities, because I've trained AI to do something very specific in that case? So these are the three levels we can look at, and let's try to tackle them one by one. When I'm targeting the AI application directly, it's like any attack surface. We now need to understand how to secure AI applications, the prompts, the data within the models, against threat actors. That's almost application security applied to AI: a whole new stack, a whole new type of application, and still a lot to learn about how to secure it. And yes, threat actors can do it today. I think it's very available to them today. We should assume that any prompt we expose is going to be tested by threat actors as well. So that's one. The second layer is the automation. This touches a very deep issue in security, which is the attacker-defender asymmetry. Historically, for those who are not familiar, the asymmetry means that a defender has to defend on all fronts, all of the time, while the attacker has to find only one thing that actually works in order to get in. So the asymmetry is insane. It means that the more estate we have, the more we need to secure as defenders, and the harder and harder it becomes, because we need to secure everything out there, while the threat actors can find one vulnerability and still get in. Historically, we've become better and better at improving our ability to secure the foundations, remove the risk proactively, and detect and respond. This is where security has been throughout its history: improving our ability to cope with this asymmetry. Now, with AI, the interesting thing is that the threat actors can automate a lot more, but from a defense perspective, it doesn't give me the same order of magnitude of improvement that the threat actor can gain. So in essence, there is a significant aggravation in the asymmetry we're going to face, because the attacker can try more, while as a defender it doesn't help me detect more at the same order of magnitude. This is a challenge we are going to see more and more: more automation on the threat actor side. And the reason is that from the detection point of view, from the defender's point of view, I cannot withstand a high false positive rate. From the threat actor's perspective, I don't care about false positives; I just need one hit. On the detection side, if I have a high false positive rate, I'm done. I cannot find the needle in the haystack, because even if I have a 0.1% or 0.001% rate, take it as low as you want, just multiply it by the number of attack attempts you get, and you're going to be bombarded with noise. And noise is the enemy of security.
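To put rough numbers on that: the false positive rate below is the one he mentions; the daily event volume is an assumed figure for illustration.

```python
# Assumed volume: 10 million screened events per day.
daily_events = 10_000_000
false_positive_rate = 0.00001  # the 0.001% figure mentioned above

false_alarms = daily_events * false_positive_rate
print(f"{false_alarms:.0f} false alarms per day")  # -> 100

# If automation lets attackers generate 10x the attempts, the noise
# scales with them, while the defender's triage capacity does not.
print(f"{false_alarms * 10:.0f} false alarms per day at 10x volume")
```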
>> They can basically DDoS your infrastructure with attacks.
>> They DDoS your security team, actually.
>> That's what it is. A DDoS, basically, is when you send a bunch of traffic to a website and take it down. So go ahead.
>> Yeah, they effectively overrun security. I can tell you a nice story, not technology oriented. One time I walked into a building and I saw a new-employee training for the guards of the building, and they were standing next to the fire alarm. The one giving the tour was saying: if the alarm sounds, if you hear the alarm, this is a fire alarm, this is where you go, you turn it off, and then the alarm stops. And then a lady among the new employees asked, "And what if it's a real fire?" And he told her, "Listen, it's not a real fire, it's a false alarm." Why do I love this story? Because this is exactly the challenge security teams face with false positives. If we're not really accurate in detecting high-fidelity alerts, and don't go the extra mile to make sure that if a security team sees something, it's actually a thing they need to act on very fast, we become unable to cope with the real threats.
>> So the risk is that if you're a threat actor, someone trying to hack into a system, you can automate your attacks. You can overwhelm a security team, because every false positive needs their attention, and then eventually you get past their defenses.
>> Exactly. And this is why there is a good way to tackle this: actually reducing the noise by investing more in the fundamentals and less in detection. If we are always trying to detect, it's going to be hard; we're going to be bombarded with alerts. If we proactively reduce the risk and the chance of being attacked in the first place, namely patching vulnerabilities and fixing misconfigurations, we are in a much better position to turn down the noise. So this is where security has invested a lot of time, and now it's going to get worse if AI is being applied by the threat actors.
>> Who are the threat actors?
>> Everyone. AI is very accessible as a
technology. We need to understand that
it's accessible.
It's simplified. It can be used at
scale. It's not very expensive. You
don't need to be a superpower to use it.
You don't need a lot of funds to use it
to be honest. It's accessible.
>> So is it organized crime, governments, everyone?
>> Everyone. Nation-states, organized crime, teenagers in basements. Yeah, it's easy. It's accessible. Why wouldn't you do it?
>> Well, you could go to jail if you get caught.
>> Oh, I'm saying as a threat actor, why wouldn't you try to use AI?
>> I mean, same answer. Why wouldn't you use AI if you've already decided to do something bad?
>> If you are already a threat actor, of course you're going to use AI, right? That's why I'm saying: as a threat actor, it's a new tool in your kit, and now you can leverage it to accelerate and to automate. And I think there is a thing about cybercrime: it's a business, right? I don't know if we always see it that way; it's not just bad guys, these are businesses. If you think about ransomware, it's a business. It has rules to it, right? When you pay the ransom, you get the data. It's a rule; you're not tricked, because it's a business and they need to maintain their reputation. So as a business, the same way any other business is looking into AI and thinking how we can use it to accelerate and automate, cybercrime and nation-states have the same logic.
>> Today, in the year 2025, these bad guys, teenagers in basements, whatever you want to call them, businesses, they've had access to pretty sophisticated generative AI for a couple of years now. So talk a little bit about the curve you've seen since the introduction of ChatGPT. Are we seeing about the same threats we've seen previously, or has it escalated exponentially?
>> Currently, from what we see, we're seeing more of the same, to be honest. We release research like the State of AI report, and we monitor for threats: there is a site, threats.wiz.io, where we track all of the cloud and AI related incidents and do a breakdown of what the root causes and techniques were. It's not that we are seeing a significant shift in the threat landscape. It really comes back to the infrastructure, the things that we know are working.
>> So the same vulnerabilities. But the magnitude of attacks, has that gone up?
>> They're always trying to do more. But you need to remember that on the blue side, the defender side, we are getting really good at protecting our infrastructure. I think we are seeing a trend, and it's always easy to talk about the increasing rate of attacks on the red side, and it will always be that way: they will always try more. It doesn't mean they will be more successful, because on the blue side we have major transformations in how we look at security that are helping us improve the foundations, improve the processes, improve the ability to proactively reduce risk, and the ability to detect and respond. And since the attacks are still aimed at the foundations that we know how to secure, I think we are not at the phase you're describing, where we're seeing crazy stuff happening.
>> I'm not saying it. I'm just asking about
it.
>> No, not yet.
>> We're not seeing the crazy stuff.
>> Why do you think that is? Because with every conversation around generative AI, there's always this "oh, you've got to be looking at the cybersecurity part of the conversation, because all these tools are in the hands of the bad guys now" for all these reasons. But yet, in reality, their actual ability to get through is not higher. Is your answer just that companies are doing a better job securing their infrastructure? Because if so, then it's really not a big problem.
>> I think we are in a process. When I think about the last decade, this process of automation is not new to us, right? Automation took place long before AI and will continue after AI. We are always on a journey to automate what threat actors a decade ago did on the keyboard, manually. When we look at cloud attacks, for instance, take the ability to automate what happens as soon as I get into an account as a threat actor: what do I do? Well, I can automate quite a bit. And we have seen this level of automation, for instance with ransomware. We have seen automation happening over the course of the last decade. But in response, within security, we have developed the capabilities to respond to this automation. AI is another layer that allows them to automate more. Right now, and I say this because we're not seeing the crazy new threats yet, we are at the phase where we are seeing accelerated automation of the known threats and known risks. This is a journey security has been on for the past decade, and it's only one step up.
>> So it's interesting, because what you're talking about sort of follows the progression of generative AI to date, which is that it's a very promising technology, but companies trying to put it into action have seen mixed results. And if bad guys are companies, they're businesses, so what I'm getting from you is that the same thing is happening with the bad actors. But then the question is: if these models get much more intelligent, does that open us up to bigger risks?
>> I believe there are areas where yes, if they become increasingly better.
Let's take vulnerability research as an example. Vulnerability research is one such area. Let's explain what a vulnerability is. A vulnerability, by definition, is the ability to move from one trust level to another trust level in a way that is not permitted. For instance, if I can run remote code, then I'm moving from the outside to the inside, and this is the worst that can happen, because I'm literally running code from an external position in your internal environment. So that's a vulnerability. A vulnerability can also be unauthenticated access, an authentication bypass: I'm logging in, and I have this trick where I give a false password and I'm still able to log in. That's an authentication bypass; I was able to walk into a higher trust level without the permission to do so. So that's a vulnerability. The ability to research and find vulnerabilities is kind of the bottleneck of the security space, because vulnerabilities are what allow threat actors to move from low-trust to higher-trust environments. And the ability to automate vulnerability research with AI could open up a race where you can find many vulnerabilities and be unable to patch them at the same pace.
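A toy illustration of the authentication-bypass class he describes, using the classic SQL-injection login flaw in a deliberately simplified form:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def vulnerable_login(name: str, password: str) -> bool:
    # BUG: untrusted input is concatenated straight into the query.
    q = f"SELECT 1 FROM users WHERE name='{name}' AND password='{password}'"
    return db.execute(q).fetchone() is not None

def fixed_login(name: str, password: str) -> bool:
    # Parameterized query: input can no longer alter the query's logic.
    q = "SELECT 1 FROM users WHERE name=? AND password=?"
    return db.execute(q, (name, password)).fetchone() is not None

# A false password plus a crafted suffix still logs in: the trust
# boundary is crossed, which is exactly the definition above.
print(vulnerable_login("alice", "' OR '1'='1"))  # True
print(fixed_login("alice", "' OR '1'='1"))       # False
```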
There are solutions today that already leverage AI to detect vulnerabilities. There are security companies using AI to analyze your code and find vulnerabilities in it that, until today, required very deep research, and now it's automated. That's really nice from a security perspective. Threat actors may try to weaponize the same capability, to find vulnerabilities faster and faster and automate the process of putting them into action, so that they can always use what we call in the industry zero-day vulnerabilities. A zero-day vulnerability means a vulnerability that no one besides the threat actor who found it has seen; it's under the radar. That's the meaning of a zero-day vulnerability. So if we look at what could be a risk: yes, if someone is able to create a perpetual engine that finds vulnerabilities, operationalizes them, and puts them into action, it's scary. But I think that today we're again looking at the detection mechanisms and so on.
>> Right.
>> Keeping up the fight.
>> Okay. So we're deep into the conversation, but it's a good time to really introduce what Wiz is. I mentioned in the beginning that Wiz has sold to Google for $32 billion. What your company does is basically just that, and you correct me if I'm wrong: it looks at everything that an organization has in the cloud and tries to find those vulnerabilities proactively.
>> Correct. Wiz connects very quickly to any cloud environment and assesses all of the risks to the cloud. We call them attack paths. We talked about the noise; we remove the noise. We focus on what are really the critical attack paths that threat actors can use in order to gain access to an environment, and we surface them to security and development teams so they can proactively reduce the risks before the bad guys find them.
>> Okay, so that's what Wiz does.
>> And interestingly, it doesn't happen only in the cloud. We can do the same thing on-prem, in private cloud, but also during the coding phase, which is even more interesting. This means that as you are developing, I can actually predict what would become a critical attack path, and guardrail and, let's say, help developers make the right choices while developing, before it gets to the cloud, right? So this is even preventive security. And the last pillar is detective. This is where we don't want to be, but we have to have the controls: monitoring the cloud environment and, if we see suspicious activity, detecting and responding to it. So this is a bit of what Wiz does.
>> Do you think increasing threats from generative AI played into why Google wanted to acquire the company?
>> Security, again: we've been in this battle for decades, and I think we are improving dramatically as an industry. I think generative AI increases, first, the use of software. So cloud security has become even more important today, because we understand that AI will be used at scale by every business. By the way, historically, looking at cloud, there were what I call three migration waves into cloud. The first were the cloud natives. Spotify, okay, we're recording in Spotify's studios, is cloud native: started in cloud, born in the cloud. Then came COVID. COVID brought the second wave, where businesses realized that staying on-prem meant they might be disconnected from their business, and moving to cloud was the way to go. During COVID, the majors, the banks, the financial institutions, pharma, they all moved to cloud, because they realized this is not a playground, it's a strategy. And now we are watching the third wave, which is AI. AI augments the use of cloud, it augments the use of technologies, transformation into cloud and AI, and it all piles up in the same direction. So, going back to the point: yes, AI increases the use of software, which increases the importance of cloud and AI security, what we call the fundamentals: how do we build secure applications. So it absolutely plays into how we think strategically about what's important for an organization: cloud and AI security. That's one. Second, I do think that AI requires us to innovate faster than ever before, and this is an interesting piece. We talked about the new stack, we talked about new threats, and we can talk about the pace of adoption of technologies, which is also mind-blowing. We are in an era where we are innovating more than ever before. And in order to keep up, you need solutions that help security enable the business to move forward, not solutions that block the business from using technologies. I think that when you look at Wiz, Wiz is really a technology that allows organizations to adopt technologies in a way that enables the business rather than blocking it. And this will become a critical attribute for any company out there. Otherwise, you know, AI is moving, and businesses will move with AI, and every business will have to keep up with AI, because this is going to define businesses in the upcoming years.
>> Yeah. I'll just add that obviously Google is trying to be the place where startups, anyone who's trying to build, goes. They're seeing tremendous growth in cloud, I think around 30% in the most recent quarter, and that's been the case for a while as AI has built up. And if there's a chance that these vulnerabilities actually start to show themselves and become more difficult, then I would imagine that Wiz is pretty well suited to help them out there.
>> I think there is an interesting concept for those who are not familiar with the history of cybersecurity. There was a very interesting initiative back in the early 2000s by Microsoft called Trustworthy Computing, and Trustworthy Computing was built around this concept: if humans can't trust their compute, they won't use it. That started a whole initiative within Microsoft to secure Windows, to secure all of the software they ship. And it changed the security industry forever, because it created the concept that security has to be built in in order for people to trust it. And when we think about AI, when we think about cloud, security has to be baked into these processes, so we can trust anything we leverage in cloud and AI moving forward. Security is a cornerstone of our ability to use technologies at scale. We have to solve it, right? We have to make sure we can trust cloud and AI.
>> Oh, clearly Google thinks so. I mean, I think this was the biggest acquisition in their history.
>> Correct.
>> So it tells you everything you need to know. All right, we've got to take a break, but after we come back, I want to talk about what might be the big black swan events, whether bad actors can hack into humanoid robots when we see them. And also, you had a very interesting report on DeepSeek when it came out, so we'll touch on that as well. All right, back right after this.
And we're back here on Big Technology Podcast with Yinon Costica, the co-founder of Wiz, also its VP of product. Let's talk about DeepSeek right away, because there was an interesting release that Wiz put out when DeepSeek was sort of all the rage, and it got a lot of headlines. It was here, this is from your blog: "Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History." Now, I took issue with this, because I had just met you at the Amazon re:Invent conference, and you had talked to me about how what Wiz does, as we were just discussing before the break, is proactively look through your systems and find things that might be exposed. And I saw this blog post and I was like, well, that's what Wiz does, and you could probably write similar headlines about many different companies. And I think what happened was the news cycle ran with it, and people who were afraid of DeepSeek pointed to it and said, ah, look, DeepSeek is trying to leak your information to the Chinese Communist Party, which is actually not what it was. So what do you think about my read on the situation there?
>> Okay, so let's go back to the details on DeepSeek.
What happened? DeepSeek was introduced and gained a lot of traction and interest from businesses, from media, from everyone, and from our research team as well, which was looking into DeepSeek. And the research team found, again going back to the fundamentals, that it was not a very advanced, AI-centric issue. It was an exposed database, in the end, if we simplify it, that included a lot of sensitive data from DeepSeek. They closed the exposure after we disclosed it, and then we published a blog post about the fact that DeepSeek had an open database, if we simplify. And I think that DeepSeek week had a few interesting things worth noting. One, DeepSeek adoption was something where we need to pause and understand how fast technologies can propagate within our estate. Within a week, almost 10% of organizations were using DeepSeek. Let that sink in. Within a week. And I think this is just one aspect of what, from a security perspective, we need to do as we face this rapid adoption of new technologies. In the end, DeepSeek raises interesting questions: am I going to use it to train on sensitive data? What are the questions we should be asking when we bring a new technology into play? By the way, it's not just DeepSeek; it's any technology we bring in. I think DeepSeek was interesting because of the rapid adoption on one hand; second, the questions around where it came from. It came out of nowhere, right? Nobody anticipated it, and all of a sudden everybody started downloading the models and using it. And third, we found what is a critical vulnerability that we felt we needed to let people know about. These areas are really important for teams, to ask the question: how do we enable safe use of technologies in a way that doesn't prohibit the business from using them, but does instill the security measures we need? This is really what we've learned from DeepSeek: technology is adopted faster than ever; we need to know what our teams are doing with the technology; and third, we need to apply scrutiny to the security posture of the technologies we use. Again, DeepSeek is like any other AI tool. As we talked about earlier, it's a new space, new software, being developed as we go. We need to scrutinize technologies as they're being adopted, as we have done in the past decade. It's nothing new.
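For a sense of how basic this class of exposure is, here is a minimal sketch of checking whether common database ports are reachable from the public internet. The hostname is a placeholder, not DeepSeek's actual endpoint.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Common database ports that should almost never be internet-facing.
for port in (5432, 3306, 6379, 9000, 27017):
    if port_open("db.example.com", port):  # placeholder host
        print(f"port {port} reachable from the internet -- investigate")
```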
>> But my point is, I feel like you guys find vulnerabilities similar to the one you found with DeepSeek every day.
>> Mhm.
>> And this release let the media run with it. There was already fear of DeepSeek, and the media kind of ran with it and said, see, the whole point of DeepSeek was like a data exfiltration exercise.
>> That's what the media made of it, right.
>> So yeah, I want to hear from you what it really was.
>> I think that, in general, we've focused our research on the entire AI stack over the past year and a half. We launched products to secure AI: AI-SPM, which is AI security posture management. We have released the AI State of the Union, covering what's being used and what's not. And we have also researched many, many AI technologies that we found to be vulnerable; you can read about them in our research.
>> You said earlier, four of six of these foundational technologies.
>> Exactly.
>> So were the vulnerabilities in DeepSeek any worse than what you typically find?
>> It was what I would call a typical exposure that exposed a lot of sensitive data. If we look historically, there were similar incidents, for instance with Microsoft releasing a token that had access to sensitive data in a bucket used for training AI. And there are more complex vulnerabilities, like the vulnerabilities in NVIDIA's toolkits released for developing AI. What the media does with it is one thing, but the research over the past years, the research we have shared around AI, shows the message we started with: AI is a new stack of tools, these are software tools, they have vulnerabilities, and we need to put in the effort to secure them, both the tools we use and the infrastructure we run on. It doesn't mean we can obviate the need to do infrastructure security the same way we have done it. And that's basically the message I think we should all take, including from the DeepSeek incident. This is a very common misconfiguration that happens.
>> Yeah. Exposing a database. Having spoken with you, I was thinking that as well, and I was trying to be like, come on. I think it's good that you guys talked about it. But looking back at the story, the context the media missed is that this stuff happens all the time, to everybody.
>> Yeah. To everybody.
>> All right. Let's talk about some wild stuff now, some more pie-in-the-sky things. All these AI labs are trying to develop superintelligence; they've gone from AGI to superintelligence over the course of a couple of months. One of the things you would imagine comes with superintelligence is the ability to break into anything. I was speaking with a researcher recently who gave an interesting thought experiment that I want to run by you, which is: if an AI lab built an AI system that was an expert at detecting vulnerabilities, that could get past any system today no matter how good (no offense to anyone working in AI security, but the idea of superintelligence is that you surpass what exists today), and they saw other labs getting close to building the same thing, which would eventually destroy security on the internet, would they then have a responsibility to go in and delete the competing AIs, because of what could happen if this technology proliferates?
>> You're looking at it from the threat perspective. I always try to think about it from the defender perspective: are we able to generate faster technologies that will defend against this sort of thing?
>> But doesn't the idea of superintelligence worry you a little bit? Because, A, we're not quite sure, if it happens, and it's still a big if, whether it's going to be developed by the good guys or the bad guys. And even if it's developed by the quote-unquote good guys, there could be hidden bad stuff going on in there.
>> That's true of any technology. Quantum computing is another example. There are many new technologies, and there will always be new technologies being introduced. And I think that as defenders, and in general as the good guys, we need to think about how we create the proper guardrails to secure, and build foundations that help us continue to adopt technologies in a controlled and secured manner. Now, of course, you can think of scenarios that far exceed the worst-case scenarios we have seen up until now in security. But historically, that wasn't the case. Historically, the same things were said about cloud. They were said about data lakes. They were said about containers. They were said about many different innovations that were introduced, and I think we were good enough as an industry. I'll give you the positive perspective. Most of the technologies introduced in the past were introduced without thinking about security at all. And if we think about the sad story of why we ended up with, I would say, a huge security deficit in the industry, it's because security always came later; it came bolted on. We invented autonomous cars and then figured out, okay, they can now be controlled and crashed into places, so we need to secure them. The concepts behind AI security today are actually thought through up front rather than much later, and that shows a bit of the maturity, not only of the security teams, by the way; it shows a lot about the maturity of the industry, that we are having the discussion about what safe AI is and how to secure AI architecture. Unlike before, when talking about new technologies, security was always perceived as an inhibitor to innovation: let's get it right first and then let's secure it; let's not get folks scared about it, so they won't block it. Today it's actually a much more integrated discussion: of course we're going to use AI, but of course we're also going to put the guardrails, the security, the standards in place. We're going to ask the questions. So as we do this, we also develop the muscle of securing AI at the get-go, and I think it actually shows a lot of maturity. I'm positive, actually, about this, because I think we are in a position we've never been in before. It's relatively new that we talk about a technology and the security of the technology at the same time.
>> It's great. I mean, I know you're in security, so I guess I'd be optimistic too if I were co-founding a security company, but I think you're underrating the risks.
>> I just think that, as an industry: there are always risks, and the risks are always scary, and they far exceed what ordinary people can imagine when they just see a technology. But I do think that we have strong foundations within the security industry and the development industry to build the proper guardrails, and I trust that we have established enough of a track record in securing applications and environments that we can continue to innovate. And I'm talking not only about Wiz, but about the entire industry. If I look today at the pace of innovation around AI security, look at the number of startups that are already focused on providing solutions for AI security: AI firewalls, AI security posture management tools, AI security gateways. There is a whole array of security industry solutions that are basically creating AI-aware controls, and they're already in place. It's not futuristic; these are existing solutions. Yes, we need to mature. Yes, we need to research. Yes, we need to do a lot more. But we are building the capabilities to get there.
>> What about quantum? You mentioned quantum. Everyone talks about how quantum will break encryption and break security. You have a big smile already, just hearing the word.
>> Quantum is interesting. I think it's also a what-if exercise, right? I think we're seeing a lot more awareness of post-quantum cryptography: the ability to use cryptography that will withstand quantum computing, if quantum computing happens.
>> What happens if it does?
>> I don't know; I'm not in the business of speculation. I do think it's good practice to improve, again. And it's the same thread, actually. It's interesting, because you will hear me giving the same speech about quantum computing security as about AI. There are foundations, and we need to be really good at running initiatives that secure those foundations. In AI, it can be everything we talked about; in quantum computing, it should be about using the cryptographic tools that are suitable for this era. I think the one thing that has changed around quantum computing is the thought that maybe it is closer than we think, and that threat actors are stealing data now so they will be able to decrypt it later, once they figure it out.
>> That's interesting.
>> So, you know: steal now, decrypt later. It's a thought experiment, right? We need to give it attention too, and I think there are more and more companies, especially regulated companies and governments, that want to make sure they are securing their data today against post-quantum cryptographic risks.
>> So, all right, I want to end with this. I want to talk a little bit about the threat of hacking autonomous vehicles and humanoid robots, if they eventually show up.
>> I have to ask this in a way that we don't get the same "I'm confident we'll be able to get ahead of it" response, because I actually want to know what's likely to happen if this actually goes down. I heard you speaking recently about attacks on hospitals, where their systems have broken down and they've been unable to send dispatches out and give care to people.
>> Yeah.
>> If this happens, hypothetically, with humanoid robots, which might be in people's houses or embedded in society, or autonomous cars, which we know are all over the roads now: if there's a vulnerability there, is that a level up from the typical software hack?
>> You know, there are things that happened in the history of cyber that were very surprising. For instance, what would happen if I break into your, I don't know, oven, which has an IP address, or into your remote cameras? What will happen if I just install something there? Historically, there was a DDoS attack that actually utilized all of these IoT devices in order to target one specific target, and it brought that target down, because we're talking about millions and millions of devices flooding a certain area. But who thought that this would be the use of these IoT devices? It's a very creative use by the threat actors. And that's how I think about the risks: threat actors are very creative in how they utilize these sorts of things, and it's not always the direct way of, okay, we're going to hack autonomous cars to run into people, and robots to go into people's houses and do stuff. This can cause significant outages, for instance. It can cause significantly dysfunctional, let's say, societies, if we rely on it too much or we don't have the proper ways to secure against it. So I do think that as things evolve, we need to always think about how this can be used, not only in the direct way, but also as part of other campaigns that can take place in cyber. Creativity is amazing when we innovate; it's frightening when it's used by threat actors. And we're seeing it not only in cyber; we're seeing it everywhere, right? I think we should always be, I would say, careful about the way we apply technologies to our day-to-day and about the guardrails we put into play, because the last thing we want to do is to build trust in something that breaks at some point, when we don't have the guardrails to protect against it.
>> Yeah, it's kind of crazy, because throughout this conversation we've talked about how we're relying more on AI to code, and now we're going to rely more on technology to work autonomously in our society, whether that's agents taking action on our behalf with computer use, or cars, or humanoid robots. So I think when people look at the trend of why cybersecurity is important: the more cyber we put in our lives, so to speak, the more security we're going to need.
>> Correct. The more estate we cover, the more security we need. And one concept that I do want to mention, something that was a cornerstone, at least in my experience with customers in cloud security, is the concept of democratizing security. Because if we think about just security teams doing security for the whole world, it's not going to scale. But if we think about our responsibility as employees, humans, developers, security teams, IT teams, and how we can each contribute to the resilience of our systems, the cyber resilience of the technologies that we use, this ends up in, I think, a much more scalable approach to security. One of the most interesting trends in cybersecurity is actually that democratization aspect. Another very simple example is phishing emails, right? If it's just one security team trying to filter all emails, something will get in, and in the end it will work.
>> Yeah, we haven't even spoken about people spoofing other people's voices to try to get in, also. That's happening now.
>> That's happening. We've seen live cases.
>> How do you protect against that?
>> This is another case where, as I said, all security products across the industry should become AI-aware. If I'm thinking about anti-phishing tools, they need to be able to cope with emails created via AI-generated capabilities that are very, very targeted and seem very, very credible. So email security tools should be able to find them. Phishing through fake voice messages, basically the deepfake stuff. These are the things where all of the industry has to mature, across all of them. There are solutions that are now in play. And, me being an optimist, I think security today is geared towards finding new threats and responding to them in ways we can operationalize at scale. But we have to be aware that with new technologies come new threats, right?
>> There's a human side to it also, I imagine, where when somebody is telling you they need money or something like that, you can't just believe them anymore. It's weird. Even if it sounds like a parent, a sibling, a child, you have to be like: I'm just trying to be sure this isn't an AI-generated voice; here are some things I'm going to do to check.
>> Correct. And that's our responsibility in the end, everybody as humans, employees, developers: how do we make sure that we are part of this resilient system?
>> Right. So I'm trying to think about how I feel after today's conversation. On one hand, you've helped broaden the spectrum of threats I'd ever thought about, like the ability to hack the actual tools themselves. On the other hand, it seems like the way generative AI is working today, it hasn't led to a massive uptick in security vulnerabilities or even attacks, even though we're always seeing them escalate. So I'll choose to leave with your perspective, optimistic, and we'll have to keep talking.
>> Amazing. Thank you very much for having me.
>> Thank you, Yinon. Thank you, everybody, for listening. We'll be back on Friday to break down the week's news. Thanks again, and we'll see you next time on Big Technology Podcast.