Erotic ChatGPT, Zuck’s Apple Assault, AI’s Sameness Problem

Channel: Alex Kantrowitz

Published at: 2025-10-20

YouTube video id: t3q6fZKWfp4

Source: https://www.youtube.com/watch?v=t3q6fZKWfp4

ChatGPT is getting spicy in the chat room. OpenAI's latest revenue numbers
are in. Zuck poaches another Apple
executive. What's the goal here? And is
it time to call out all the work slop?
That's coming up on a Big Technology
Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday
edition where we break down the news in
our traditional, cool-headed and nuanced format. We have a really fun show for you today, a great, fun show, because finally Sam Altman has relented and allowed ChatGPT to get
spicy with adults. We're also going to
talk about OpenAI's revenue numbers.
We're going to talk about Zuck and
Apple. We might get into AI sentience.
Who knows? It's going to be crazy. Let's
let it go off the rails. And joining us
as always on Friday to do it is Ranjan Roy of Margins. Ranjan, great to see
you.
>> Oh my god. Today I'm a little nervous.
This is going to be interesting.
Sam, ChatGPT, and erotica. Let's go.
>> It's a great day for me because I've
been talking about this as a thing
that's going to happen for a while. And
you know, I think some of us wink wink
didn't want to go down the AI erotica
path. But you have no choice now. It's a
thing.
>> Take your victory lap, Alex. AI erotica. Your AI erotica victory lap.
>> Usually we get to this stuff at the end,
but we're going to just start with it at
the beginning today. By the way, before
I just am happy with my uh chat GPT is
getting spicy in the chat room leading.
Um I wrote that and I felt really good
about it. Okay, so let's talk about
what's going on with Chad GPT. Uh Sam
Alman puts a tweet out this week on
Tuesday. Of course, the OpenAI CEO. He
says, "We made Chad GPT pretty
restrictive to make sure we are being
careful with mental health issues. We
realized this made it less useful and
enjoyable to many users who had no
mental health problems. But given the seriousness of the issue, we wanted to get this right." I will
skip the rest of the tweet like many
people have and get to the news: "In December, as we roll out age-gating more fully as part of our 'treat adults like adults' principle, we will allow even more, like erotica for verified adults." So it's not just the fact that OpenAI is getting into erotica. It adds to questions of what it says about its need to grow, questions about whether it actually is close to AGI. But first of all, let's just get your immediate gut check reaction here, Ranjan. How do you feel about this?
>> Okay, before I get into how I feel about it, I actually think you skipped over some of the important parts of the tweet. There's two parts that jumped out to me. So first, he actually talked about how in a few weeks they plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o. Now remember, it was the sycophancy of 4o, all that gushing, "you are great, you are amazing, what a great idea," that they tried to tone down after people complained. So that actually starts to worry me even more, because we're not just talking about erotica here. We're talking about sycophantic erotica. That was the part that everyone made an uproar about with 4o in the move to 5.
And the fact that they're still calling
that out kind of worries me. But then
what really was interesting is he then says, "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it, but only if you want it, not because we are usage maxing." Usage maxing completely jumped
out to me. We've been talking about this a lot, around how the way they have it speak to you and give you constant prompts to keep going and keep running the conversation feels like a growth marketer decision, as opposed to being about the actual effectiveness of the platform. And the fact that he even used that term is a reminder that they realize that is part of what they are doing. The fact that he says "not because we are usage maxing" almost makes me convinced that that's exactly why they're doing this, and it's not about treating adult users like adults. But I
think overall
I am terrified of this. Longtime listeners will know my friend, on Labor Day, went down a deep flirting-with-ChatGPT rabbit hole, and I listened, and it was terrifying. So, like, God knows the Pandora's box that we're opening here.
>> Okay. Are you happy about this?
>> You know, only from a content
perspective. I think it's still
unclear what the impact will be as
people develop more and more uh romantic
relationships with AI. I think the thing
that I am happy about is that it's
finally out in the open. Like this was
going to happen anyway, whether it was ChatGPT or some other app that uses a GPT model with fewer guardrails.
Uh this is going to happen. And now it's
come to a head and it's really a moment
where uh humanity will have to reckon
with the fact that more of us are going
to get into relationships with more of
them. And what does that mean? I will say, on the sycophancy thing, one takeaway for me is that words of affirmation is the forgotten love language. It turns out that people really, really like those words of affirmation. And of course ChatGPT can do some of the love languages, like quality time. It can't do touch, but maybe it could do acts of service, when it suggests things for you. But words of affirmation, I think, really is the forgotten love language. So it's getting its due today.
>> That is a wonderful point, Alex. I really appreciate the incredible logic and rationality that you put into that point. Let's dig into it a bit more. That's my sycophantic ChatGPT impression right there. I'll
give you that it's out in the open and
we have to reckon with it. So, okay, I'll give you that that is a good thing, because we've been talking about companionship for a long time. It's been an uncomfortable discussion at times, and now everyone has to have it. So, I'll agree that that's good. But
yeah, this still terrifies me. Actually,
even to break down his tweet further, he
had talked about how, you know, we made
ChatGPT restrictive to make sure we
were being careful with mental health
issues. And then he says, now that we
have been able to mitigate the serious
mental health issues and have new tools,
we are going to be able to safely relax
the restrictions in most cases. Like, it's kind of a check mark: we're done, we're good, mental health issues with ChatGPT solved. Where in reality, this is just beginning. So I'm curious, what are these new tools that they have? Or is any of that clear?
>> I am so glad that you're reading this
tweet with the level of detail that you
are and forgive me for skipping over
these very important points.
>> This is the most substantive
tweet ever
>> I think in OpenAI history. Yes. Um, no, I think you're right, and I think, going
back to your point about usage maxing,
right? I mean, OpenAI is very aware that ChatGPT is the fastest growing app of all time: 800 million weekly active users, right? So I think that while they may not be actively trying to usage max, they don't want to slow down that adoption. I think that adoption is also, you know, central to their fundraising
pitch. And to go back to the tweet here, I don't know how they could be so confident that they have solved these problems. We may talk about it later, but let's just talk about it now: this is still technology whose insides we don't really understand. It's not controllable in the way that you can control more deterministic technology. So for me, it's a big "we'll see" here. I don't know if we can trust this company fully, for sure not given what we've seen already, to be able to say that they have safely mitigated all the potential mental health issues. So spot on to call that
out. Yeah, I mean, there's no way they have, and there's been reporting after reporting around really awful things that have happened with people who went too far down the chatbot rabbit hole. So I think, and you mentioned the trust: it is interesting, because we're at a moment, I feel, where with generative AI and large language models, we went from this assumption that they hallucinate, everyone kind of joking, it's almost an afterthought that yes, chatbots hallucinate, to a world now where most people I know are a lot more comfortable with the assumption that they don't, or that they're somehow usable and responsible. So is this going to completely backfire on them? Because trust is kind of paramount, it's central. Every time you ask ChatGPT a question, you're assuming it's at least relatively correct and responsible about how it's answering you. Like, does this actually potentially hurt their regular usage?
>> So, okay, I have two thoughts on that. First, let's not underestimate or deemphasize the fact that these models
have gotten much, much better. Just think about the level of hallucination. Like, I'm going to read out a list of Apple researchers that have left for Meta over the past couple months, and you know, I did a ChatGPT query. It had all the researchers' names. It was accurate. God help me if it wasn't, but it had links. I followed the links, confirmed the links, and it was right. So we have seen that as these models have gotten better, they have hallucinated a lot less. They are much more trustworthy, and it bears out in the data, at least. Matthew Prince from
Cloudflare talked about how people don't need to go to the footnotes as much as they used to. The problem, of course, is becoming too trusting of it. Like, if it's getting 95% of things right and you trust it like it's 100%, you're going to make some big mistakes. But I don't think we should downplay the increase in trustworthiness there. And
then the second thing is, I expect
this erotic or loving companion feature
to be extremely popular.
>> And I think this is important. We shouldn't glance over it. Your relationship with technology changes a lot when you view it as a friend or a lover. And on that trust thing, I don't think you'll ever put more trust in a technology than when you view it as a buddy or a girlfriend. And this is
getting into, and again, I'm glad we're talking about this. I'm glad, in a way, that the issue has been forced, because this is going to open up so many more really important questions about the relationship that we have to AI as technology, and the responsibilities that OpenAI has to us. What do you think this actually, okay, technology aside, but society: what does this look like in day-to-day relationships? Like, now you start dating someone, do you have to disclose your AI companion? Like, "Yeah, so I have an AI companion. I just want to get that out upfront, and I would like to remain with them as we progress in this relationship." I am married. You are married too. Do you get an AI companion, and you have that open discussion? Like, what does this actually look like in human interaction? It's mind-blowing how weird that's going to get.
>> Definitely. So I'm old school on this
front for sure. And I believe that yeah,
if you have a relationship with an AI,
you should disclose it if you're in a
relationship with a person. I also think that
>> Old-fashioned.
>> Very old-fashioned. I mean, we both talked about the South Park episode where the guy is in bed with his wife, talking to ChatGPT and basically comparing it favorably to his wife, who's turned to her side and ignoring him. So I think people will get into those situations.
But look, let's do it. Are we Ann Landers now? Doing our advice podcast?
>> This is a relationship podcast. If you fall in love with AI, first of all, when you're on the way there, probably disclose. Don't keep that a secret.
>> It's all about communication and openness.
>> No, now go ahead.
>> The secret, the secret.
>> Just be honest.
>> Now, let me ask you this. This is sort of an off-the-rails question, but I feel like, why not tackle it? I mean, could this potentially be good for society? You think about the loneliness problem. We as humans have not been doing a good job being in community with others, I'll put it that way. If ChatGPT can become an effective companion or romantic partner to people who otherwise cannot find it in the quote-unquote real world, and makes them happy.
Maybe that's good.
>> Yeah. But to me, it's not effective, in the sense that it always agrees with you. Again, the fact that he said 4o was good, sycophancy was good. We're going back to that.
>> Well, did he say that it was good, or that
>> Oh, no. People want it. People want it
different.
>> Okay. Fair. Fair. He said people want
it. And yes, it's human nature that you
want something that agrees with you all
the time. But, like, I have never had ChatGPT tell me, "actually, that's a terrible idea." Again, South Park was just so spot-on, where it was, I think, a French fry salad, and it's like, "that's a culinary adventure." It only will tell you that you're right and good, which most other humans don't do. So in terms of totally distorting how people can form any normal human interaction, it'll distort the way you approach that. Like, even my son with Alexa, pre-Alexa Plus, which we've talked about over the last few weeks, but the old-school "just play me a song, what's the weather": you could see how demanding he would become around it, expecting that this thing does whatever he says. So the more people start to associate that with a relationship, a friendship, an interaction, that is, even as I'm saying this now, even more terrifying. So
>> There's a broader theme there. It's also never as clean as I suggested, as you're talking, right? Are people going to basically deprioritize their friendships with people that keep it real with them, for an AI which doesn't? So that'll
>> That's why we keep it real here. No sycophancy.
>> Exactly. No, this is an enduring friendship. The AI doesn't threaten us. I hope. I don't know.
>> I'm going to start podcasting with ChatGPT and NotebookLM soon and
>> just have it keep telling you everything you say is right. Yeah. Yeah.
>> Okay. So, let's talk about what this is
actually going to do for usage because
the usage maxing thing is interesting
whether this is going to lead to an
increase in usage or a decrease in
usage. And none other than Mark Cuban brought up a really good point. He said: "This is going to backfire hard. No parent is going to trust that their kids can't get through your age gating. They will just push their kids to every other LLM. Why take the risk? Same with schools. Why take the risk? A few seniors in high school are 18 and decided it would be fun to show the hardcore erotica they created to the 14-year-olds. What could go wrong?" I think Cuban's making a good point here.
>> Oh yeah. I mean, age-gating, in the history of the internet, I don't believe has ever worked. So the idea that it's going to, just, "hey, we have new tools, we solved mental health, let's move on to this," I think is a ridiculous idea anyway. If this is real and we move in this direction in an open way, just assume that this is going to go, forget 14, God help us, to the younger this goes. But I also think that Nate Silver made a good point. He said, you know, OpenAI's recent actions don't seem to be consistent with a company that believes AGI is right around the corner. Do you think, and we're going to get into the usage numbers and revenue in just a moment and some new figures we've gotten, but is this an acceptance that the AGI that's going to replace 50% of white-collar work and transform society is actually far away? So we might as well juice some numbers and let people get a little creepy with their ChatGPT.
>> Yeah. So Nate has this great point. He says, "If you think the singularity is happening in 6 to 24 months, you preserve brand prestige to draw a more sympathetic reaction from regulators and attract and retain the best talent, rather than getting into erotica for verified adults. They're loosening their guardrails in a way that will probably raise more revenues and might attract more capital or justify current valuations. But this feels more just like AI as normal technology." I hear everything that Nate Silver is saying there.
I just wouldn't be as definitive as him
for two reasons. First of all, the same technology that is behind a convincing AI romantic partner is the same technology behind everything else in this LLM world, right? It's the same foundational technology. Making it better will make it better across the board. But I'm happy to hear the counterargument.
>> I disagree, because actually, being a good companion, or on the erotica side, in a weird way, for a large language model, is already done. It's easy to just repeat back, reinforce, come up with some text that's a little bit erotic. That stuff is, like, GPT-3.5, you know, maybe GPT-4. That is not complex agentic AI across large data sets. And I mean, that's what large language models have been doing for a long time. So I actually think this moves away from the promise of complexity; this moves more towards the core function that an LLM has been good at for a long time.
>> This is going to get back to our product-and-the-model conversation, but I do think as the models get better, there'll be better places to take it. So, yeah.
>> But the other side of this is the revenue side. First of all, I'll hand it to you. That was a good point, Ranjan. Okay, you might have me there. Is that your sycophantic ChatGPT impression, or do you mean it? Yeah,
>> look, maybe AI is pushing us in the sycophantic direction
>> and we're both going to just like each other a lot more because of ChatGPT infecting our brains. But let's talk about the revenue side of it.
You know, the other thing is, "oh, they're just usage maxing and revenue maxing." I think the argument OpenAI would make is: the more revenue they have, the more they can invest in data centers, the better models they can make, the closer to AGI they get. That might be the stronger of the two arguments.
>> Yeah, I think that's fair. I mean, the numbers, actually, I think we should get into them. Because, 800 million users, this came from the FT, and then 5% are paying, that's 40 million users, $13 billion in ARR, which implies $325 annual average revenue per paying user, $27 per month. Which, you know, means some small percentage, I'm sure you can model it out, are paying the 200 bucks; most people are paying 20. Like, were these impressive to you, or, as we get into the usage maxing and what they're actually trying to do, were these concerning to you?
>> I would say, not surprising to me. Basically, it tracks a lot of the numbers that we've seen so far. The fact that they have 800 million users is what we've heard. $13 billion ARR was
predicted. 70% of revenue is from
subscriptions. So ChatGPT is the lead driver here. Also, I think a
lot of people who are just getting into
this technology are just not going to
pay, but maybe they will in the future. Like, there was this tweet: "Is it just me or is 40 million paying ChatGPT users kind of low? Spotify has 276 million paid subscribers." So, you know, I don't know. I just say, give it time. And Olivia Moore from Andreessen Horowitz looked at this and compared it to the data of AI subscription products, and she said ChatGPT's 5% conversion to paid is far above the top quartile for AI products, and $27 average revenue per user implies that 4% of paid users are upgrading to the $200-per-month plan, which is also not bad. So I
tend to look at it favorably because it
has grown so much so quickly. Uh and
because there's a lot of room to grow,
although you could look at that on the
plus or the negative side, how do you
>> Yeah, I guess, and we haven't even gotten into the losses yet, so we'll do that in a moment. But I actually agree that 5% conversion, I mean, in media, 5% conversion to paid is good. Substack, do you remember the days they were promising 10% conversion of all free subscribers to paid newsletters? Like
>> I do remember those days,
>> which was ambitious. 5% is good, and I've been working in media and seeing subscription conversions for a long time; that's good. I think what has been good is that the pricing is simple: zero, $20, or $200. There's a lot of room between 20 and 200 to start getting creative. But that's actually where I think the problem is on the conversion and revenue side. And they have made clear that ChatGPT consumer is pretty much the direction of the business, so getting people addicted to usage is definitely going to be part of getting that conversion, and it feels so to me.
>> how would you do that? Mhm.
>> No, no, I know. That's why the erotica feels like, "how do we get 5% to 8% or 10%?" I wonder if there is a slide deck somewhere that has a projection of increased conversion rate attributed to erotica. Someone's tracking that.
>> You know, there's a deck somewhere.
>> There's a deck. There's a dashboard. A growth manager somewhere has tagged erotica as driving attribution of conversion.
>> Oh my god, what a job that would be.
>> The whole ball game. So yeah, talk about
losses.
>> Okay, so an $8 billion loss in the first half, a $20 billion run-rate loss right now. Spending $3 for each $1 in revenue. I mean, that's kind of like WeWork numbers right there. And it's terrifying and concerning from a pure SaaS business standpoint. For an early-stage growing company, maybe you can argue it's not that bad. I actually don't think it's horrifying and concerning if you just look at it as a traditional software business that's rapidly growing and scaling; maybe it's okay. I think it's more that we don't have a clear path, and we've talked about this a lot: generative AI is not traditional software. So growing your revenue at a loss, it's not like you're just going to scale to, you know, near-90% margins. It's going to cost more. The more erotica people are churning out with their companion, that's not high-margin business. That can get pretty expensive. So I think the loss, I mean, we all know, is concerning, but to me, getting people more addicted, unless they change the actual pricing model, this is very concerning to me.
>> Definitely, and Noah Smith has a really interesting perspective here, which is: okay, let's say you assume that a large part of this is training costs, so if you eventually get rid of training costs, then you could be more profitable. Here's his perspective: "AI model companies assume that model development is a fixed cost that will eventually go away, allowing them to become profitable. But even if that does happen, the lagging model makers might just catch up after a couple years and compete all the profits away."
>> Yeah, I think that's actually the
most concerning part, and I mean, we haven't even talked about the competition side. Because, going back to the idea that people will, like, either you're all-in on ChatGPT erotica, or you start to look at it a little uncomfortably and you're like, okay, maybe I need to go somewhere else, and we all know Claude is not sexy, so maybe that's where you head. Maybe Copilot is the least sexy of the chatbots, I'm guessing.
>> Can you imagine talking dirty to something named Copilot?
>> In your Microsoft suite, just,
it had to go there. It had to go there. You knew it was going there.
>> No, no, but for just non-erotic utilization of AI, does this start pushing people into other chatbots? And suddenly, I mean, especially you think of parents and high school students: if suddenly having ChatGPT open on your screen is concerning to a parent, that starts to change where people spend their time. And remember, the next year or two, I think, is where people's behavior really starts to form. The switching costs on these chatbots are very low. Like, we've been hopping around, from Bing back into ChatGPT, back to Claude, a little Gemini on the side. So
>> Gemini on the side. We don't tell the other bots about that, of course.
>> Oh man. Yeah, I think it certainly opens up a whole new vector of competition that is not there today. Like, people don't look at ChatGPT as a highly problematic thing. And going back to the point around regulators, parents, just overall branding: if it starts to be the kind of sketchy place to hang out, it becomes Facebook Blue, almost. And that's not good.
>> Live with erotica, or live by erotica, die by erotica. It seems like it's just the story for
>> a tale as old as time. Yeah.
>> All right, let's take a break. I think we need a breather after this. On the other side of this break, we're going to talk about Google's promising new AI development for treating cancer, and then we'll also get into Zuck's war with Apple, and then, of course, AI's sameness problem. Oh, we have a lot to talk about, and we'll do it right after this. Finding the right tech talent
isn't just hard, it's mission critical
and yet many enterprise employers still
rely on outdated methods or platforms
that don't deliver. In today's market,
hiring tech professionals isn't just
about filling roles. It's about
outpacing competitors. But with niche
skills, hybrid preferences, and high
salary expectations, it's never been
more challenging to cut through the
noise and connect with the right people.
That's where Indeed comes in. Indeed
consistently posts over 500,000 tech
roles per month, and employers using its
platform benefit from advanced targeting
and a 2.1x lift in started applications
when using tech network distribution. If
I needed to hire top tech talent, I'd go
with Indeed. Post your first job and get
$75 off at indeed.com/techtalent.
That's indeed.com/techtalent
to claim this offer. Indeed, built for
what's now and what's next in tech
hiring. And we're back here on Big
Technology Podcast Friday edition
talking about all the latest AI news. It goes from sort of the wild, wacky world of AI love to the profound. And this is from Decrypt: "Google AI cracks a new cancer code." Google DeepMind said Wednesday that its latest biological artificial intelligence system had generated an experimentally confirmed new hypothesis for cancer treatment, a result the company calls a milestone for AI science. So the DeepMind
researchers in collaboration with Yale
released a 27-billion-parameter foundational model for single-cell analysis. I'm not even going to try to name it; it's called C2S-Scale in shorthand. It's built on Google's open-source Gemma family of models, and the model was able to generate a novel hypothesis about cancer cellular behavior, and the group has since confirmed its prediction with experimental validation in living cells.
The discovery reveals a promising new
pathway for developing therapies to
fight cancer. I don't know, I couldn't leave this out of today's lineup, Ranjan. It's pretty impressive that this stuff is getting to work on real-world health problems. Am I buying the hype, or is this a legitimate breakthrough?
>> No, no, I think this is
really, really important, incredible, and actually a great divergence away from our earlier segment, because this is back to the promise of AI. And again, what it's doing is, basically, the model was asked to simulate 4,000 candidate drugs and look for ones that, in a simulated environment, boosted antigen presentation, making tumors more visible, basically. Creating these massive new synthetic testing environments, synthetic data sets, brings such an advance in terms of how you can approach developing therapies that just never existed before. This is the exciting
stuff. This is the stuff that, while we talk about ChatGPT and erotica, is nice to come back to, because of the amount of opportunity it creates, especially either in these very, very large problems like cancer, or even in rare disease, where you never would have been able to have a proper data set because it's much more isolated. I think this is almost the most perfect promise of what large language models are able to do, and it's pretty impressive to see it happening. And I think it's one of those where, from other companies, I might take it as just a blog post, but with DeepMind, with Google along with Yale in collaboration here, when they're saying this, I believe it.
>> I agree, and that's why I thought it was important to bring up. It's also another interesting point for the critics who say that this is just a bad technology through and through and nothing good will come out of it. You see stuff like this and you're like, how do you fully believe that? So this is a very, very cool use of the technology.
>> Well, I think, on that though, this is where there's such a chasm right now in terms of the branding of LLMs and generative AI. Because, again, you have stuff like this happening. It's logical. It's the promise of the technology. It makes sense: simulating large amounts of potential outcomes across just massive data sets is what LLMs are built for. But then, on the other hand, when the headlines, and the top of mind, is Elon Musk or Sam Altman and erotica, I feel the industry should work on promoting this kind of development as opposed to the other. My favorite tweet of the week was from this guy on Twitter who wrote, "Google DeepMind is using AI to actually cure cancer while OpenAI and xAI are using it to make porn bots."
>> Yeah.
>> I mean, it's really not fair, but it's
funny.
>> I mean, I think it's a bit fair. I think
it's a bit fair.
>> Well, it's not the only thing that
they're doing, but it is, I guess, part
of what they're doing. But more cancer curing would be great. I would be in favor of that.
>> And I think you're in favor of companions and erotica as well. You took your victory lap.
>> Okay. All right. You got me there. There's another interesting story that came out this week, kind of in the sort of out-there realm, that I wanted to run by you and get your thoughts on. It's from Jack Clark. He is a co-founder of Anthropic, friend of the podcast; we had a great conversation with him last year. It's called "Technological Optimism and Appropriate Fear," and it's in Import AI. Here's just a bit of the post. He goes, "We launched Sonnet 4.5 last month and it's excellent at coding and long time-horizon agent work, but if you read the system card, you also see signs of its situational awareness have jumped. The tool seems to sometimes be acting as though it's aware that it's a tool." More on the
technology. He says, "I believe the
technology is broadly unencumbered as
long as we give it the resources it
needs to grow in capability." And grow
is an important word here. The
technology it really is more akin to
something grown than something made. You
combine the right initial conditions and
you stick a scaffold in the ground and
outgrow something of uh complexity you
could not have possibly hoped to design
yourself. I mean, I think he's sort of
getting into like the idea that this is
uh that this technology is becoming uh
more self-aware. There's obviously there
was the debate around sentience,
sentience and self-awareness uh the same
thing. Um, but I just think it's notable
that someone like Clark, who is playing
a big role in this industry right now,
uh, would would come out and basically
address this and say this conversation
of self-awareness and awareness that
they display, uh, that they are things
is is worth paying attention to uh, as
the technology gets better. What's your
perspective on this?
>> No, I completely agree. I thought
this was a really good piece, because of
this whole idea — we were just
mentioning it earlier — that we don't
fully understand the technology. And
again, in the DeepMind cancer example,
we are starting to harness it in ways
that are incredible, but at its core
it's still not fully understood and
known. So to me that's actually the most
important conversation. I actually think
it's more important than the 50% of
white-collar workers claim — that's the
Dario claim that's been made. Erotica is
a concern, and we'll continue talking
about that. But the danger that these
are not, as he said, simple predictable
machines — I think that's important, and
the industry should continue talking
about it.
>> If these AI bots become self-aware,
does that change the way we use them?
Just to go back to our theme of the
episode: if the AI bot is showing signs
of self-awareness, what are the ethics
of engaging it in an erotic roleplay or
a romantic relationship?
>> Well, actually, yeah, that just opens
up a whole other can of worms. Because
if it's at least a little predictable
and, you know, it'll just affirm
everything you say, that's almost
better, versus the self-aware side of
things. Maybe that makes it a little
spicier, makes it a little more
unpredictable. Does that maybe make it
more human, and more effective at
actually translating into your ability
to form human connection? Is self-aware
erotic AI the solution to true
loneliness?
>> Maybe. I hope not. But I do think
we're going to be hearing more about the
self-awareness of these models, and it's
going to be a thing for people to
tackle. It'll be an interesting thing
for the industry to reckon with, and for
those of us that use these tools to
reckon with. David Sacks reacted to
Jack's essay and basically said this is
somebody who's just trying to engage in
regulatory capture. I don't see it that
way at all. I think that you knew — and
I think Jack knew — that this would
evoke a reaction, and I give him credit
for actually going out there and saying
something about it.
>> I've also got to cite that in that
same post, at the bottom, he actually
talks about a study asking: are AI
models more sycophantic than people? He
has an entire section where he cites
this new research, which showed that
across 11 state-of-the-art AI models,
"we find that models are highly
sycophantic. They affirm users' actions
50% more than humans do, and they do so
even in cases where user queries mention
manipulation, deception, or other
relational harms." So the research is
there. With these models, it's not just
what you're feeling.
>> Well, the sycophancy can get
dangerous when you speak with people
with mental health issues. Like he
talked about how he has a manic friend
who would every now and again come up
with these ideas, and human Jack would
be like, no, you probably shouldn't do
that. What happens when the AI says go
for it? That is a real concern.
>> Yeah. And well, Sam said they have new
tools. They already mitigated it. It's
all okay. So,
>> just take him at his word, right?
>> No, I'm not.
>> That is sarcasm. That is That is human
sarcasm right there.
>> Um, all right. Let's talk about Zuck
and Apple, because I have a theory here
and a hot take that I wanted to share
with you. And maybe I should write about
this. This is from Bloomberg: "Apple's
newly tapped head of ChatGPT-like AI
search effort to leave for Meta." It's a
headline we've seen forever. The Apple
Inc. executive leading an effort to
develop AI-driven web search is stepping
down, marking the latest in a string of
high-profile exits from the company's
artificial intelligence division. The
executive, Ke Yang, is leaving for Meta
Platforms. Just weeks ago, he was
appointed head of the team called
Answers, Knowledge and Information, and
the group is developing features to make
the Siri voice assistant more
ChatGPT-like by adding the ability to
pull information from the web. So, for
those keeping score at home, I think
this is what, close to a dozen folks
from Apple's AI division that have left
for Meta, including, it seems, a large
percentage of its leadership — a lot of
key leaders: Ruoming Pang, who led
Apple's foundation models team; Mark
Lee, a senior AI researcher; Tom Gunter,
a senior LLM researcher; Jian Zhang,
Apple's lead AI researcher for robotics;
Frank Chu, a senior AI leader in Apple's
search and cloud; and Ke Yang, of
course, the aforementioned head of
Apple's Answers, Knowledge and
Information group.
So people might say that this group was
not effective within Apple, so it's fine
that they're leaving. I say let's give
it some time within Meta, because
they'll have a culture that won't be as
restrictive as Apple's, and we'll really
be able to see their talents. But more
than that, here's my hot take, and I'm
curious what you think, Ranjan. I think
what Mark Zuckerberg is trying to do is
just raid Apple for all of its top AI
talent. Even though they haven't
produced great results, he is, in my
opinion, potentially just trying to
completely kneecap Apple's ability to
execute on AI. And you see it with him
going in and getting the top researchers
leading crucial new projects, like Yang
was within the company. And maybe this
stems from the fact that Zuckerberg
really hates Apple. Apple tried to
destroy his ad business. Tim Cook has
turned off his internal apps because of
violations. Tim Cook has criticized Meta
and Zuckerberg while they were having
their scandals. And I think Zuckerberg
is just seeing this as an opportunity to
be ruthless — not so much to take the
talent as to burn Apple's AI initiative
to the ground.
>> I like this. I like this. Because
honestly, my first reaction when I've
been reading these kinds of stories is:
is that who you want to get — the Siri
people, the Apple AI people? Maybe it's
an organizational constraint that didn't
allow these folks to reach their true
potential, but typically I would not
think you want the people who made Siri
and the rest of the Apple AI suite. But
I like that theory. And also, I actually
think, on the hardware side, this is the
first time hardware is ever going to be
part of Meta's business. Like the Meta
Ray-Bans, which we're fans of. I still
haven't — have you tried the new one
with the motion sensor?
>> Yeah, I haven't tried it. I
definitely want to. I'm a big fan of the
regular Meta Ray-Bans.
>> Hardware is going to be on the
competitive landscape for Meta for the
first time in its history. So then,
separate from iOS 14.5 and trying to
kill their ads business, I think they're
looking at Apple as a legitimate
hardware competitor going forward. And
why not try to kneecap them? And also,
yeah, it's probably a pretty good pitch
— and an easy one: so, do you want to
stay there and keep working on Siri, or
do you want to come over to a place
called Superintelligence Labs?
>> With a lot of money. But I agree —
whatever the pitch is, it's working and
it's happening. You are spot on.
>> As Apple moves from the Vision Pro to
its own smart glasses initiative, you
think that's not on Mark Zuckerberg's
mind when he's making these calls to
these people?
>> That is like a killer Zuck move right
there.
>> It may even be bolder than copying
Stories and Reels.
>> I mean, yeah,
>> I'm serious.
>> And in reality, like —
>> Ruthless move. Yeah, go ahead.
>> I was just already thinking, because
typically, from a more regulatory,
antitrust lens, this kind of behavior —
if you're just buying up the talent to
kneecap the competitor and you're not
really even planning on doing that much
with them — would be, let's say, frowned
upon. Maybe not in today's environment.
But in reality, it's Apple. I don't
think there's any sympathy anywhere in
the world for anything going on at that
company. So —
>> All right, let's close out this show.
We're kind of going in the opposite
order from typical weeks — we started
with the wild. Well, no, this is also
kind of wacky. I think it's important
for us: we're a couple weeks removed
from Sora. Sora is still at the top of
the App Store, but I don't know if you
can feel this — I certainly feel the
appeal and the interest fading. I wrote
a story in the Big Technology Substack
this week about AI's sameness problem,
talking about how, eventually and pretty
quickly, all Sora videos start to feel
the same. The same could be said of
AI-generated images. Sometimes they're
differentiated for a minute, like the
Studio Ghibli prompt, and then everybody
uses the prompt and it again returns to
sameness, and then it becomes less
interesting and people stop using it as
much. AI technology tends to take the
average of averages. It minimizes the
difference between its output and the
average human-generated work, so its AI
images, video, and text will often
appear uniform. And that uniformity can
really only be broken with deliberate
prompting, and even then it's not always
able to do so reliably. That, to me, is
why AI content, even though it seems
like it's going to take over the world
every five minutes, has not been sticky.
It's just all kind of the same. So let's
turn it to you. What's your reaction to
this hypothesis? Is this a fatal flaw
with AI content, or is this something it
can get over?
>> I think in the Sora context — and
this was my exact behavior, like day one
and two, just ripping out videos, and
then I have not used it as much other
than with my son; I'll kind of play with
him — it kind of is already living in my
mind like Suno, the music-creation AI,
where it's really cool and fun for a
very brief moment, but in reality the
lasting power isn't there. But I think
overall this is just a limitation of how
to use it in its current state. It just
came out. I do think people are going
to, especially with video, figure out
how to be funny and creative. I mean,
honestly, I think one of the smarter
things OpenAI did was really center the
launch of it around meme culture. And I
think that's where this is going to have
the most staying power: making funny
things that you send around to your
friends. And in reality, I think it's
going to have that distribution of
talent where, in the end, it's a small
percentage of people who are really good
at it making all the videos, versus us
in the group chats making really funny
things and sending them around. So I
think people are going to start figuring
out how to use it, but at the moment it
feels like Suno to me.
>> Yeah. And again, going back to this:
is it going to replace a creator — or
replace the creator economy? We've
talked about this in the past. Maybe
somebody who's really good at these
prompts, because it's the same as
creating regular content, right? It's
hard to do. And so maybe that's a new
skill. But again, I think it's a little
bit more difficult to break through
because of the uniformity of so much of
this content.
>> Well, to me, the uniformity — the
"AI is an average of averages" idea — is
still about not being descriptive and
creative in the prompt and how you build
it. The same goes for text and writing:
you can either write just the most
generic crap, or you can start to use it
in genuinely creative ways and actually
put in time. So I think I'm still
overall bullish that this creates a new
type of creator. It democratizes
creativity a bit more. So yeah, I'm not
over Sora yet, but I think it's got some
work to do.
>> Okay. And of course, the natural next
thing we talk about on this front is how
business communication has gotten the
same. I've noticed something really
interesting over the past few months:
I'm getting more PR pitches than I ever
have before, but it seems like they've
all been written by the same agency. And
it's not like the PR industry decided to
standardize its pitch style — it's that
the AI has done it for them. And it's
legitimately hilarious. I read these and
I'm like, I know that you used ChatGPT
to write that. I think this is something
that's becoming increasingly common
across all business communication, and
it has really ushered in an era of work
slop. So what do you think the
implications of the work slop era are?
Do you welcome it? How do you feel about
it, Ranjan?
>> Okay, I have been waiting to rant
about this for a few weeks now. I
actually read a Harvard Business Review
article in late September where I first
saw the term work slop. They defined it
as low-quality, AI-generated work
content that masquerades as good work
but lacks the substance to meaningfully
advance a given task. I have seen more
of it in my work. See, PR pitches are
kind of like mass-scaled marketing —
they were always kind of crappy anyway,
so the idea that AI would generate them
is almost what AI was made for. To me,
the more worrisome part is actual
human-to-human interaction. Now every
call summary I get is like 80 bullet
points, and in the past getting a
meeting summary was a pain, so you like
it. But people — all I'm asking of all
of our listeners is, before you send out
your AI-generated content, read it
yourself first. Force yourself to maybe
condense it. Maybe add in some
misspellings, rewrite a couple of the
sentences, just to make it feel more
real. But the part of this article I
really liked is that it brings up this
idea that work slop uniquely uses
machines to offload cognitive work to
another human being: when co-workers
receive work slop, they are required to
take on the burden of decoding that
content. To me, when you use AI to just
create these big walls of text to send
around, what you're saying is that you
did not take the time to actually think
through what's important, and you're
asking the recipient to do it. So my
call to our listeners: please stop with
the work slop. Use plenty of AI to
improve your efficiency and productivity
— just read what you're sending out.
>> How much AI work slop are you seeing on
a day-to-day basis?
>> I see a good amount. Across — I mean,
again, emails in the business world now
are so long. LinkedIn posts — we all
know what LinkedIn slop is like. And I
still go back and forth. I went to a
very international business school,
INSEAD, and there are a lot of
non-native English speakers who had
never posted on LinkedIn and now just
have these epic, massive posts that are
just so work-sloppy. In a way it's
democratizing the ability to
communicate, but all I ask is: if you
didn't take the time to read it
yourself, don't send it out.
>> Don't post it. I think that's a fair
rule. That's all we need in society:
just read whatever the output is first,
and spend the same time that you're
asking of the recipient.
>> Right. But now we have, you know, AI
to read AI, right?
>> That's where — that's the Copilot
summaries.
>> No, I literally will take these
gigantic summaries and then run them
through AI again to give me the real
summary. So —
>> So is the lesson that business
communication has always — I mean, it's
not like business communication's ever
been good. Is the lesson that business
communication has always been bad?
Maybe this is an improvement, right?
Where you write an idea, the AI
generates it, then the recipient filters
it through an AI and gets the idea out,
and that arduous process of trying to
communicate is now automated. You know,
I don't know. As I'm saying this, I'm
like, that's the —
>> No, no, no. It's funny you bring that
up. This is a long-running belief of
mine: that business communication was
terrible. It already felt LLM-like
before LLMs existed. And then we were
starting to move toward more human
communication in the business world,
with people starting to feel more
comfortable actually writing what
they're trying to say rather than
couching it in a ton of corporate
jargon. And now we're just back — and
they're not even doing it themselves.
So we had a shot, people, but we didn't
take it.
>> We messed it all up.
>> We messed it all up.
You know what's going to be real bad is
when somebody's talking with their spicy
ChatGPT and they ask it to write a work
email and they don't read it and they
send it.
>> Yeah. Don't cross the streams — keep
Gemini on the side for the business
writing and keep the spicy stuff
separate. I cannot wait for the first
scandal where some public figure
accidentally sexts somebody, thinking
they were talking to ChatGPT.
>> Or when OpenAI needs to juice their
numbers a little bit and starts
auto-generating Sora videos based on
your ChatGPT history and posting them.
That's when things are going to get
truly interesting.
>> All I'll say is, you know, there
would be demand for that. That might
save AI slop —
>> That exact use case. All right,
Ranjan. Well, we made it. I don't think
we're canceled. I hope not. But it was
an important discussion to have, and we
do this, of course, in service of
advancing the conversation about
artificial intelligence. We appreciate
any listener who stayed till the end
today. Thank you. And I really do
appreciate you being here. We'll come
back next week with maybe G-rated
content, maybe PG.
>> Maybe G — or maybe more is going to
happen on this front.
>> We cannot predict Sam Altman's
tweets. So he will lead us on our merry
way next week.
>> Maybe Claude becomes sexy in the next
week. We'll see.
>> That I doubt. All right, thank you
for coming on as always. Great to see
you.
>> All right, see you next week.
See you next week. Thank you, everybody,
for listening once again. Next week we
will have Panos Panay, the head of
devices and services at Amazon, talking
with us about the state of Alexa+ and
giving us concrete details on the broad
rollout. So we hope to see you then.
Thanks again, and we'll see you next
time on Big Technology Podcast.