Is AI A Privacy Disaster? And How To Fight Back. — With Andy Yen

Channel: Alex Kantrowitz

Published at: 2026-03-25

YouTube video id: 63-pbAxcLfY

Source: https://www.youtube.com/watch?v=63-pbAxcLfY

Could AI expose us to a whole host of
new privacy concerns? And what should we
do about it? Let's talk about it with
Proton CEO Andy Yen in a conversation
brought to you by Proton. Andy, great to
see you again. Welcome back to the show.
Hey Alex, it's great to be back. And yeah, there's a lot to dig into here with AI and privacy. Definitely a hot topic these days.
That's right. You know, we spoke a few years ago, before the LLM craze took off, and I really started thinking of you recently when I realized what was going on in the background when I'm using applications like ChatGPT and Claude and Gemini. So, let me just give you an example.
I recently found out that we have all been opted in to AI training when we use ChatGPT and Claude, and only if you go in and toggle off those settings will they not use the conversations you're having with those chatbots to train future models. That kind of astounded me, to be honest. I probably should know better at this point, but I thought I was having private conversations with these chatbots, and the next thing I know, the material can be used for training. So, I'm curious to hear your perspective on that, and what do you think it says about the state of digital privacy today that that can happen?
>> Well, actually, it's worse than you say. It's not just that it can be used for training.
Even if they don't use it for training, they can still store it for a long time, actually forever. They can still hand it over to a government under a subpoena. They could leak it if they're hacked in the future. So, just because you've opted out of training doesn't mean you've opted out of data collection. In fact, all the information is still there on Google's, on OpenAI's, on Anthropic's servers. So it's actually quite a bit worse than you imagine: opting out is one thing, but it doesn't mean they don't collect it. That's the first point I would make. But to your point, yeah, frankly, LLMs are a privacy disaster.
And what are LLMs, actually? For most people, it's kind of a more efficient way for humans to communicate with computers. This is all AI actually is, right? People say AI is this huge new thing. No, it's a continuation of a trend over the past 30 years toward more efficiency. But efficiency comes at a cost. A lot of people today use AI almost as a replacement for a search engine; it's like a conversational search engine. The difference is, when you use search, Google knows what you're interested in, but they don't really know that much about you. When you're having a conversation with an AI chat, and you're speaking to it for months and months, with tons of data being collected, well, now it knows how you talk, the way that you converse. It has a much deeper insight into your personality. And this is why, if you share that information with AI instead of with a search product, it really supercharges their ability to understand who you are; they learn a lot more than they would from search alone. You can think of it as Google's business model, but now on steroids.
So, let me tell you one more thing that I do to try to defend myself, and you can tell me whether or not this is foolhardy. There is an option, both in OpenAI's ChatGPT and Anthropic's Claude, where you can toggle to an incognito conversation. Does that protect me at all?
Well, Google Chrome had incognito as well, but if you've been reading the news, Google paid probably several billion in class-action lawsuit settlements, and that basically showed that incognito wasn't as incognito as it was claimed. At the end of the day, you're taking their word on whether it's incognito or not. And past history shows that when Google says something is incognito or anonymous or private, it generally hasn't been the case. This is why I think any trust-based model of privacy has its limitations. So, it's probably better than using the default settings that allow training and all that, but I wouldn't consider it bulletproof, given Google's track record here.
So, I did upload my tax returns in this secret chatbot. That was a mistake.
>> Yeah, you probably shouldn't have done that. Maybe if you'd put it in Proton Lumo, that would have been better, right?
Okay. Yes. So, Proton, you have your own LLM called Lumo. We're going to talk about that in a minute. But basically, your perspective here is: when we're using these bots, don't trust them to protect our privacy. And that is scary, because, yes, they're more efficient, but as you've already pointed out, we tell these LLMs more about ourselves than we tell any technology platform, by orders of magnitude.
Well, it's a bit like having a conversation with someone in real life: you will know them a lot more than someone who just looked at what you ordered on Amazon, for instance. As humans, we actually get most of our insights about people through conversations. So just imagine you and me talking, right? Let's say we meet at a pub somewhere. I know so much more about you because I met you in person and had a conversation with you than if I was just reading your LinkedIn or something like that. And this is the element that we miss: AIs are going to get better at discerning who we are than probably even a psychologist. And some people use these AIs as psychologists, in fact.
Right. I mean, there's an argument to be made that they will know us better than a friend would, because we'll tell them things that we won't tell a friend.
>> Yes. Exactly. And that's scary when you think about it. It's completely scary.
Definitely. Now, another issue that's come up around the AI world, though I think it's actually been a pervasive concern across all tech platforms, is the use of these platforms by kids. And we know that kids have been using them, LLMs in particular, but also social media. There are currently trials going on about social media addiction and what social media leads to. And of course, there are worries that as kids get more engaged with chatbots, who knows where they're going to end up. So Proton, which again does encrypted email, docs, calendar, and an LLM, has done a survey of parents, asking them how they feel about their kids' use of technology. I thought it would be a good opportunity, since we're talking, for you to share a little bit about what you learned and what the parents told you.
>> Yeah. Well, it's quite interesting, because there are so many people today who just give kids phones, and kids have also discovered AI. In fact, I would say the younger generation is maybe even better at using AI than some of us, right?
So, to give you some top-line numbers: today, around 70% of kids have access to a smartphone by the age of 10. That's seven in ten. And of those, around three quarters are using Gmail. That's a staggering number if you think about it. A 10-year-old child doesn't really understand privacy. They don't really understand what ChatGPT and OpenAI are doing with their data. They have no clue whatsoever. It's not really taught in school, but they're already on these platforms.
And I think parents are starting to realize the risk that comes with this. If you look at the survey that we ran, if you ask parents what they think about all this, 41% of parents say that if they could start over, they would do things differently with putting their children online. And 60% would want to be able to erase all the information about their kid that is on these big tech platforms today. Now, it's obviously a little bit too late for many of these parents, but it shows there's a huge amount of what we call parental regret.
And it also leads to a lot of anxiety, right? If you look at parents, around 80% are really concerned about their kids' online privacy. These are massive numbers if you really think about it. And this is, I think, quite surprising, because a lot of parents in the last couple of years have just sleepwalked into this situation. They're now starting to realize, "Oh, maybe we shouldn't have done that."
>> Right. But let me ask you, because sometimes there will be platforms like Facebook, or maybe Google as well, that will say, "You have to be X years old to use our platform. Do you certify that you are?" Some go further: I think OpenAI has been saying recently that they have technology that can smartly determine whether you're a minor or not, and then prevent you from, I think the use case was, using adult mode in ChatGPT. Do you trust those platforms with those sorts of guardrails, or no?
I think the guardrails could work, and it's good that there are some guardrails in place. But ultimately, the business model of these platforms is to extract the data of our children. This is what they are all about. And in some sense, the guardrails run counter to their business interest. We have seen historically that when there's this conflict of interest between what is good for the business and what is good for society, we know the choice that Zuckerberg is going to make. We know the choice that Google has made in these things. So, I wouldn't really trust guardrails put in place by a company that doesn't really have the incentive to put in guardrails.
But in that social media trial, I think it was a Meta executive who said, "We make the least amount of money on our young users." So, that would maybe suggest that they don't extract as much.
>> He didn't say, "We make no money," right? He said they make less money on kids compared to the parents. But it's about building habits. What is social media? I view it as not so much different from a neighborhood pot or crack dealer. It's there to get kids hooked and addicted and create a lifetime of dependency.
And yes, children may not be your most lucrative audience today, but they do get older, they do gain purchasing power, and in 5 to 10 years, they become the cow that you're going to milk for profits. So, in the long game, yes, they do want to get children hooked, because that is how they're going to turn profits in the future. They're building out the customer base that's going to buy their crack 10 years from now.
>> Yeah, I don't know if I'm fully on board with the crack example, but I will say this: it's obvious that when you're a kid, you're not going to have the purchasing power that an adult would. So, to come out and say it's some saintly thing that you don't make as much money on, let's say, under-18s as you would on 25-to-40-year-olds, well, obviously not.
>> Yes, but it's also a long game. It's about getting an entire generation completely addicted so that they can monetize them in 5 to 10 years as they get older.
Exactly. So, we've talked about a couple of issues here: AI and privacy, and kids' use of technology and the problems there. Any solutions that you might have?
Well, fundamentally, it's a business model problem. If your business model is the exploitation of data and the abuse of privacy to generate money, that conflict of interest is always going to exist. What Proton has tried to do differently is have a business model where the only incentive we have is to protect user privacy. And that was really the motivation behind launching Lumo last year, which is the private AI. The way that we describe Lumo is that it's the only AI where your conversation is actually private. And this is ensured through very strong encryption. In fact, we don't have any ability to go in and query past conversations. This is different from what OpenAI can do. This is different from what Google does. And it's because our philosophy is that the best way to protect data is to not have it in the first place. That is only possible because we have a business model that doesn't require us to get your data in order to make money. So, it's really a business model problem, and that is also the solution.
And this is why, no matter what Google and OpenAI say about your privacy, they simply don't have a financial incentive to protect it. Proton is a very different business, because that is the only incentive we have. The only reason people pay us anything is because we actually protect privacy; without that, we wouldn't have a business. And this is different because it really aligns our interests with the interests of the customer. In my view, that is the foundation of a more ethical and, let's say, more responsible internet.
All right, but maybe cigarettes are the better analogy, right? You know it's bad for you, you keep doing it. You need an alternative, but it seems like there's no way out.
>> Yeah. And for us, there is clearly a way out, right? The whole point of the Proton ecosystem is that it's a way for you to opt out of big tech data collection, a whole framework.
Today, if you look at email as an example: people think of email as communication, but really what it is, is identity. It is the essence of who you are. It's the thing that connects everything online around you. And the way to look at this is to go back to the history of Google. The first product was, of course, search. But what was the second product? Those who are older and remember this will recall that the next thing Google released was not all the other stuff they have today. Actually, it was email. Gmail was probably Google's second product, maybe third if you count the ads product they put out as well.
And why did Google go so quickly into email? The reason is that they figured out that, in order to get all your information and correlate it to a single profile, they need you to be logged in. And the one thing you're always logged into online is your email account. So what email actually is, is a tool that allows them to have a profile linked to your real identity that captures everything you buy, everywhere you travel, all your communications. But not only that, it forces you to be permanently logged into Google's ecosystem. So when you go to any website that has Google ads, the Google cookie is there. When you go to any website that has Google Analytics, and this is like 70% of the web, all that information gets added to the profile that Google has on you. And the reason Proton started with email is that the way you opt out of Google's ecosystem is actually by logging out of Google. And to do that, you need something to replace Gmail. This is why, in our view, Proton Mail was the first product that was most effective at letting people opt out of Google's ecosystem entirely.
So then, going back to the kids conversation, I'm just going to throw this out there: I was just speaking with a friend who said, "We got our baby an email address." That is one of the more consequential decisions you're going to make for a child, in terms of whether they're going to start leaving a digital footprint from the time they can type. And it sounds like you have a counter to that. I know that Proton is going to enable parents to reserve email addresses for their kids, and I think it's a pretty compelling package that you're offering.
>> Yeah, I'm happy to talk about this. Today, at some point, your child needs to have an online existence. And what is an email, or any account, really? Your email is more or less your digital passport. And I would argue that in the 21st century it's a more important passport than even your physical passport. If I gave you the choice between losing your passport and losing your email, actually, losing your email is probably more painful than losing that passport.
So what are you doing when you get, let's say, a Gmail account for your kid? Well, you've gotten them an email identity, but you've also created for them an advertiser ID, which is going to be linked to them for the rest of their life, and is going to opt them in to Google's master surveillance and data collection ecosystem. So, all the things that are wrong with the internet today, all the things that have caused so many problems for society, for democracy, for our privacy: you're essentially enrolling your child into this super shitty ecosystem that is probably going to come back and ruin their life in some way, 20 or 40 years down the line, in ways you can't even foresee or predict.
And what we want to do at Proton is give an alternative. This is our data, right? For 76% of parents, the first email and the first account they give their children is Gmail. And this is really a mistake. I think you cannot be a responsible parent today, know what Google does, and willingly sign your child up for that ecosystem. So, what we want to do with Proton is, you know, we have this new campaign around the idea of Born Private. And what is Born Private? It's simply the reflection that all of us, when we're born, actually are private. No one is born into Google's ecosystem. You're added in through, more or less, bad choices made by your parents.
So, what if, instead of having children default into Google's ecosystem, there's an alternative where you can actually get a Proton address for your child, reserve it today, and opt them out of Google for the future? Now, reserving an email address is quite difficult. If you look at Reddit conversations, the actual suggestion is, "Hey, if you just had a newborn, please go create a Gmail account, then log in once a year so it stays active, and give it to your kid when they get old enough." Now, that's, number one, kind of a shitty user experience, because you have to log in every single year to keep that account alive. Proton's Born Private program is completely different. First of all, you're not in Google, so you have avoided the entire Google ecosystem. You get a Proton username, and we actually reserve it for you for 15 years. So, you don't have to log in every 6 months or every year, continuously, for 15 years in a row. You just get your Proton account for your child once. It is valid for 15 years, and at any point in the next 15 years, you can activate it, or your children can activate it.
And actually, in exchange, it's not going to be completely free, because things that are free tend to be abused. So there is a $1 fee. It's a very, very small amount of money, but it's a $1 fee to reserve an account for your child for 15 years. And it's really giving them a future outside of the Google ecosystem.
>> Right. And you at Proton have not only email, but competing services like Docs and Calendar.
>> Yeah. So, it's the whole thing: email, calendar, file storage, Docs, Sheets as well. Also an AI that is competing with Gemini, but of course private. A password manager, a VPN service. And I'll bet that in 15 years, by the time your kid is ready to claim that account, there will probably be a lot more in the ecosystem. Our goal in the long run is to provide an alternative ecosystem that is privacy-first and user-first, that is not built on an advertising and surveillance business model. And the Born Private program is just a way to make sure that the next generation doesn't fall into the same trap that our generation fell into.
Right. So you won't do ads? You've never thought about doing ads?
>> No, it's just not our business model. It's not consistent with our business model. Actually, ads are only effective if you can very precisely target and show relevant things to your audience. But because everything in Proton is encrypted, we don't have the ability to go in and read your emails. We can't tell what you're interested in, what you've bought, whether you're male or female, gay or straight, a Democrat or a Republican. We see none of that information. And as a result, if we wanted to do ads, we'd be terrible at that business, because probably no one would buy our ads, and we'd show you the wrong ads that you wouldn't have clicked on anyway. So that's why our business model is really not compatible with ads, and I would also say the ads business model is not compatible with privacy. This is a very clear distinction that we want to make.
>> Now, what if a parent were to say, "You know, Andy, this sounds good, but we all know that, to be competitive or relevant today, you're living in a Google world." Would it restrict my kid to not give them access to all that Google tooling?
Well, it's actually giving them the choice. Life is long, and kids take a lot of time to grow, right? It takes some 18 years before they become fully formed adults, in many cases. Once you put them in the Google ecosystem, you can't take them out, right? They're there for good. There's nothing that prevents your child, in 15 years, from deciding that actually they don't prefer the Proton ecosystem and they want to be in Google instead. But they can make that decision on their own, fully conscious of the risks that it entails. So it's not either-or. It's just an additional option for children. If they decide that they want to live outside of Big Tech in the future and have an independent future away from Big Tech, they have that possibility. And I think this is a powerful option that you're giving your children.
Now, one more question about this. How important is this authentication option in terms of allowing these companies to build profiles of us? For instance, OpenAI is building a "log in with ChatGPT" option. And I think the utility of that might be even more than "it's easy to log in with Google because you're in your Gmail." Maybe when you log in with ChatGPT, it can fill out all those annoying forms for you. What are you giving up when you log in with ChatGPT?
Well, you don't need "log in with ChatGPT" to fill out forms. We have a Proton Pass product that fills out all your forms for you, so it doesn't require AI to do that; that feature has been around for a long time. But if you use "log in with Google" or "log in with ChatGPT," what you're doing is enabling a massive correlation engine. It allows them to track your activity across multiple products and services and build a much stronger profile of who you are, your interests, what you're into. And that is, in fact, how Google has gotten its tentacles everywhere, right? By pushing things like "log in with Google." When you log in with Google on a third-party service, you're essentially giving Google full insight into your usage of that third-party service. That's effectively what you're doing.
And this is why I say: everybody who goes online today needs to make a choice about who is going to own their identity. And today your options are basically Meta, Google, maybe Apple, possibly Microsoft, but not really, right? These options are all American, they're all big tech, they're all spying on you, and they're all, let's say, not the most responsible or ethical tech businesses. What Proton is trying to do is to say: if you want a different option, one with a business model that is aligned with the best interests of users, and something that can guarantee your privacy through encryption, we want you to have option number five. You're not required to choose it, but if you want to be completely outside of big tech, you now at least have this option.
>> Okay. I didn't realize that when I'm logging in with Google, Google can see more than just the number of times I logged into that service. They can actually see what I'm doing within that service?
>> It depends on how the service is integrated, and it also depends on how often they're calling Google to authenticate. And a lot of the time these services are running Google Analytics, right? So all your visits are recorded. A lot of the time these services are also pushing user activity into Google Analytics from the back end. And a lot of the time these services are serving Google Ads as well. So it's a surprisingly large amount of information that you give up if you use the "log in with Google" option.
>> Now, I think increasingly the question will be: all right, if I want a privacy-forward alternative like Proton, it would be great, to go back to where we started this conversation, for myself or my kids to have access to a leading foundational model or leading LLM. You have Lumo, which is a privacy-forward AI, but I'm also seeing that big tech is going to spend something like $700 billion this year alone on AI infrastructure. So, what's your plan to keep pace with them, given that the one thing you could say about an ads business is that it's high margin, and they can just plow that money right back into building better AI models?
>> Well, there are some people who say that spending number is just Nvidia paying Nvidia, cycling revenue in a circle. Okay, maybe we have to leave that to the financial analysts to decide.
But what's interesting about AI, and different from past cycles, is this: if you look today at the most advanced proprietary models and the most advanced open-weight models, and a lot of LLMs are open, the gap between the open models and the proprietary models is quite small, and actually decreasing over time. This is something very interesting about this cycle. What that basically means is that even a company that isn't building a foundational model can offer an AI product that is pretty comparable to, let's say, the top-of-the-line proprietary stuff. So, today, if you use Lumo, for example, and compare it with ChatGPT, yes, there are some things it doesn't do that ChatGPT does, but it is surprisingly competitive, in fact quite close, and the gap is closing as the open-source and proprietary stuff converge.
So, that's the first thing. The second thing that's interesting is that there's also something called Moore's Law: roughly, that computing power doubles every 18 months, something that has surprisingly held true for a couple of decades. And what this, in my view, will do in the long run is commoditize LLMs and commoditize AI. What I mean is: today, if you want to train a top-of-the-line frontier model, that might cost you a couple of billion dollars. But in a couple of years, that might be 50 to 100 million. And a few more years down the line, it could be one or two million. So what this really means is that AI and language models are, in large part, going to be commoditized. And that's why I believe these billions you see being spent, yes, these numbers are huge right now, but they are also going to go down exponentially with time. So I think money is a factor, yes, but it's less of a factor as you go deeper and deeper into this revolution.
Yeah, and I think maybe something playing in your favor, speaking of Nvidia, is that they're about to spend 26 billion on open-source models, right? So that's a good investment for Proton.
>> Yes. And we benefit from that, but it's also very smart for Nvidia to do, and the reason is that Nvidia sells GPUs; they don't sell proprietary models. If there are great open models that everybody can use, then everybody is going to buy GPUs. So this is also an interesting trend: the highest-market-cap company in the world, which is Nvidia, is strongly incentivized to develop strong open models, which is going to give companies like Proton, who are doing things in a privacy-first way, a massive advantage compared to the tech revolutions of the last 10 or 20 years. And this is, I think, quite novel.
And this is great for the world, because if you ask the average person who uses AI, well, everybody uses AI. But if you ask them how many of them trust Sam Altman, very, very few actually do. And this is, I think, also the opportunity for privacy companies: everybody needs AI, but no one trusts it. And this is a problem that I think we can uniquely solve, because of the dynamics we just discussed.
Definitely. Well, look, I'm much more optimistic after this last part of the conversation than I was when we came in, thinking about how much of my personal information I've given over to these models. So, all right, Andy. And good news for those who do want that 15-year, $1 email address reserved for their kids: we're going to put the link in the show notes where they can go to sign up.
Yeah. By the way, another thing I want to mention about this: the dollar that we charge, we really don't want to charge it, but we have to in order to prevent abuse. But actually, that dollar goes to the Proton Foundation, Proton's non-profit. So you also support a good cause: you support freedom of speech, you support democracy and freedom online with that $1 donation to the foundation. And you're giving your child an option for the future. I think that's a small price to pay to give them a future outside of Big Tech.
Definitely, Andy. I think one of my core takeaways from our discussion today is just that it's important to have that option, and it's important to be building that option. So I really want to say thank you for coming back on the show, and thanks for building the option for us.
>> Yeah, thanks a lot. I'm looking forward to seeing what the future will bring, and we want to build even more options for the future.
>> [music]
>> We're at a turning point in history, where we are a generation that didn't have options, but I think the next generation will have options. And that is, I think, something very important.
>> All right. Well, Andy, always great to speak with you. Thanks for coming on.
>> Thanks a lot.
>> All right, everybody, thank you so much for watching. We will see you next time on Big Technology.