Why Google Never Shipped LaMDA, Its ChatGPT Predecessor

Channel: Alex Kantrowitz

Published at: 2023-10-30

YouTube video id: m1WQDYONxng

Source: https://www.youtube.com/watch?v=m1WQDYONxng

Alex Kantrowitz: Hi everyone, and welcome to a brand-new podcast that we're launching today. I'm super excited to launch this show; I've been waiting, and thinking through exactly how to do it. It's called Big Tech War Stories, and once a month we're going to sit down with somebody who's built an interesting project inside a big tech company, worked with an interesting leader, or faced an interesting challenge. We're going to learn about them, learn how they did it, and bring you inside that process, so you'll get a view of the inside of big tech companies, whether it's the building or the leadership, in a way I don't think you've had before. I'm very stoked to kick it off.

Today, I think it's going to be an exceptional episode. I want to introduce our guest: Gaurav Nemade is the co-founder of Inventive, a company that's in YC this year. But before that, he was the first product manager on LaMDA, which was effectively Google's precursor to all the big ChatGPT-style large language model bots you saw come out over the past year. Now, I'm sure I'm going to get some of this wrong in the intro, so we're going to unpack it and go into depth as we get going. But Gaurav and I have spoken about the fact that this product was baked within Google and working pretty well; in fact, it even convinced one of his colleagues it was sentient, and I'm sure he'll have something to say about that as well. Hearing this story will really bring you to the origin of this moment we're experiencing in tech, take you through Google's thinking in terms of how it would change the technology world, how it might still change the tech world through Google, and where it's heading from here. Okay, enough throat clearing. I'm so stoked to welcome Gaurav Nemade. Gaurav, welcome.

Gaurav Nemade: Hey Alex, thanks so much. Excited to be here.

Alex Kantrowitz: Excited to have you here. I
want to start by talking a little bit about who you are and where you come from. First of all, you went to the famed IIT in India. For those who don't know, and I'm sure Gaurav can tell us a little more about it, there's an entrance exam, and hundreds of thousands of people within India take this test; only the very top are admitted to IIT. It's a place that, if I'm not mistaken, Sundar Pichai graduated from, a place Satya Nadella graduated from, and Gaurav too. Gaurav was number 732 out of 350,000 students on the entrance exam. What was it like going to
IIT?

Gaurav Nemade: I think it was an amazing place; some of the smartest people in the world land up there. The funny thing that happens in the first week at IIT: I had scored rank 732 out of 350,000 people, so I went in all pumped up, like, "I am the best at the institute." Then you get your first score on your electronics exam, and you're not even at the top of the bottom 20th percentile, and you realize, okay, these people are much smarter than me, and this is going to be different from my high school.

Alex Kantrowitz: Yeah, it's one of those things, like, "Oh, you're 732? Take that, rest of the 350,000." And then you get in there and you're like, "Oh no,
732."

What makes IIT so special in terms of the way it's been able to produce tech excellence and tech leaders through the years? Not only people who are great technologists, but people who have, I would argue, extremely strong EQ, people who can lead. It's very unique that so many of our leaders in tech today come from that school.

Gaurav Nemade: There's a level of grit and perseverance required to get into an IIT. People prepare for the examinations for at least two to four years, and it just requires a lot of dedication, focus, and perseverance; I think that's point number one. Other than that, just being among some of the smartest people in the world definitely grooms you in a way that makes you much more driven to win in the future stages of your life. So those two are probably the biggest contributors. The third, I would say, is the opportunity you get at an IIT compared to other universities, at least in India; it's very different. The exposure you get at an IIT is very different in terms of extracurricular activities and things like that. These things are much better in the US; in India, a lot of universities don't have a really good sports program or really good cultural festivals. But an IIT grooms you so well all around, while you're among some of the best people. I think that's why there are a lot of IITian leaders doing really well.

Alex Kantrowitz: And in retrospect, it was probably a lot easier to get into Google than it was to get into IIT. Am I wrong on that one? Not saying it was easy to get into Google, but it seems like less of a challenge. Go ahead, I'm curious what you
think.

Gaurav Nemade: Actually, I don't really know the stats on Google versus IIT. I know the stats on IIT: typically less than 1% of the people who applied would get in. I don't really know what the stat on Google is, but I would assume it's probably equally hard to get into Google, just because of the amount of competition and the number of people who want to find a role inside Google.

Alex Kantrowitz: Right. So you joined Google in February 2013, after a stop as a co-founder, actually, of a tech company within India, and you worked on trust and safety for four years. Then in April 2017 you made a very interesting, and I would say somewhat radical, career shift, where you moved into Google's AI division. Talk a little bit about your move from trust and safety to AI. What about the AI division in particular drove you to want to be there?

Gaurav Nemade: For sure. While I was in trust and safety, I was already working on a bunch of machine learning-related stuff; we were building fraud and risk models. These were basic models, like logistic regression. But around 2016, TensorFlow started becoming huge inside Google, and Google decided to open-source TensorFlow as well. That really caught my attention, and as I was working on machine learning in payments, I realized: this thing sounds really amazing, and it could actually change the way we do a lot of things. So I started looking for roles inside Google AI and Google Research. Fortunately there was a role, and fortunately there was an amazing manager who was willing to give me a shot. So I ended up moving into Google AI, and I spent about four and a half years there.

Alex Kantrowitz: Right. And
so let me know if I get the chronology right. You joined Google AI in April 2017. In July 2017, just a few months later, a paper comes out from some of your colleagues entitled "Attention Is All You Need." And that is the foundation of the large language models, the Transformer models, that we know today. Unpack that a little bit.

Gaurav Nemade: Right. Honestly, I didn't have anything to do with the paper, of course, but I didn't even know about it when it came out. Once it started garnering a lot of citations over the months and quarters, that's when people started paying attention, even inside Google: hey, this Transformer architecture seems amazing, let's start building encoder-decoder models. I probably didn't hear about it until a year after it was published.

Alex Kantrowitz: Really? So even though it was published within Google, what do you think accounts for the fact that it didn't get much buzz internally? Because it clearly was a groundbreaking moment in tech.

Gaurav Nemade: A lot of these things you make sense of in hindsight. For example, a lot of things I worked on earlier, with Meena and LaMDA, I couldn't predict where they would go. It's very hard to predict. There was a buzz about it inside Google starting in 2018, I would say, and some of my teams working in natural language started paying attention to Transformers and using these models. But it's just very hard to predict in the moment that it's going to be such a seminal paper in the history of artificial intelligence.

Alex Kantrowitz: Yeah,
that's an interesting lesson: things can be right under your nose, and it just takes some time for them to have the impact they were always destined to have. Okay, so a year later people start noticing this paper within Google, and effectively what it does is allow our chatbot technology, or not even chatbots, language prediction, to go from being particularly dumb to being quite sophisticated, right?

Gaurav Nemade: Yeah. One thing I remember specifically: the Transformers paper came out, but it started making a lot of noise when BERT was out, when people started seeing what BERT could actually do. That was kind of a switch that went on inside Google: holy... this is going to change things. At that point, everybody from Search to Assistant to some of the other teams, like Gboard, became very interested in how to start using these BERT models.

Alex Kantrowitz: Okay, what are those?

Gaurav Nemade: BERT was a paper that came out, an architecture based on Transformers as well. The BERT models did really, really well on some of the benchmarks, and that's when a lot of people started paying attention: this could actually be groundbreaking. So there was a lot of noise and effort inside Google to start using BERT in production use cases.

Alex Kantrowitz: And is that when you started to pay attention to how powerful this could be? You became the first product manager on LaMDA, and we're going to talk about that, but tell us when you first started to realize that this was going to be big, or worth working on.

Gaurav Nemade: For sure. So
I think it was very serendipitous. We have a bunch of email aliases inside the company, and on one of those aliases we get this email from an engineer named Daniel saying, hey, I built this chatbot and it can do X, Y, Z. So I played around with the chatbot and I was like, holy... this is amazing. It was still dumb at that point, but it was a step up from the chatbots we had seen built with, say, Dialogflow, which was Google's chatbot product at the time. I had worked with Daniel on a previous project about a year earlier, though he was on a different team, so I reached out to him: hey, let's catch up for lunch. And it was a very serendipitous lunch. He ended up telling me about the chatbot, what he wanted to do, what he wanted to build, and I got really excited. I'm like, dude, I want to help you basically see this through. So we ended up chatting, I started attending the meetings he was running, and started helping with a couple of things. The biggest challenge they were facing at that point was safety, because, if you remember Microsoft Tay, the nightmares of Microsoft Tay still hadn't left the Valley, or at least hadn't back then. I could clearly see in the initial days that safety was going to be the biggest hurdle, and that's where I focused most of my time. While the engineer focused on building and improving the models, I spent the majority of my time owning the safety pieces of that model.

Alex Kantrowitz: Very interesting. So you're coming from
the trust and safety background; you're almost the perfect person to join this team. You know, I remember the Tay moment quite well, and I've told the story on Big Technology Podcast before, I believe. Microsoft came to me with the exclusive to break the news of Tay. I was working at BuzzFeed at the time, and I said, okay, this is great. They described Tay to me as like a 14- or 15-year-old friend for kids, and I said great, and I wrote this nice, bubbly story about this new attempt from Microsoft, played around with it, and it seemed harmless. Then I went to bed on the West Coast. Reddit got a hold of it overnight across the globe, and by the time morning hit on the East Coast, Tay was already a Nazi, you know, saluting Hitler and all that stuff. I had tweeted about it, and I'd already gotten a bunch of mentions like, "You'd better take that tweet down, look what happened to Tay." So I totally understand that moment: when you see a bot that seems like it's taking on human characteristics, there's a moment where you go, oh God, let's make sure that doesn't happen again, because when you do release it into the wild, all these problems can ensue.

Gaurav Nemade: And I'm not
kidding when I say this: in the initial days of these generative models, as you can imagine, when you're trying to build an end-to-end model for a chatbot, it would spew out all sorts of things. "Hey, what are 10 ways to commit suicide?" It would give you the best 10 ways to commit suicide. "What alcohol should I drink?" It would talk about everything related to alcohol that shouldn't be talked about. So it was pretty bad, as you can imagine, because that was not the focus; we were just trying to see if an end-to-end chatbot made sense at that point. But over a period of time, the team did an incredible job of getting it to a level where it became safer and safer over the quarters.

Alex Kantrowitz: So what year are we in right now, when you say, hey, I want to work on this?

Gaurav Nemade: This was early 2019.

Alex Kantrowitz: Okay, so you were seeing it quite early. And is this where Meena comes out of?

Gaurav Nemade: So interestingly, Meena was the
precursor to LaMDA. The project at that point was called Meena; the idea was that the name of the chatbot would be Meena. It was later changed to LaMDA, I think probably for marketing reasons, but also because we were running into some trademark issues with the name Meena, so we already knew we had to change the name before it went out. We didn't want to anger anyone.

Alex Kantrowitz: Okay, that's interesting. So this first iteration of the bot, let's talk about it. You mentioned it was a "holy..." moment for you. What type of stuff would you talk to it about? How was it initially conceived? And talk a little bit about how they build inside Google: was this kind of a science project, or was it conceived as something that would be released to the public? What was the mandate from the AI team?

Gaurav Nemade: It was basically started by an engineer who was very passionate about chatbots as a general area. He pitched the project to somebody at Google Brain, they sponsored it, and he was in, working I think 50% of his time initially on building this chatbot. The thesis from the start was: we have Dialogflow-type systems, where you do intent detection separately and a bunch of other things separately. Can we combine everything together and build one end-to-end model, an end-to-end chatbot?

Alex Kantrowitz: So that means one model that handles all different types of questions, as opposed to a setup where one model decides what it thinks the question is and hands it off to another model, which hands it to yet another model. Am I getting that right?

Gaurav Nemade: Not exactly. It was
kind of an assembly line before. If you've used some of the earlier chatbot products, they would be an assembly line of, say, four models. The first model determines the intent: in Google Assistant, when you say "Hey Google, what is the weather today?", there's a model that figures out that this query is about the weather. Then it goes and figures out where to find that information in the search stack, and then there's a third model that does the rest. Here, we're talking about encoding all the information in a single model. It's just one giant black box that understands the question and also gives you the answer. That's what I meant by an end-to-end model, and it was kind of a new paradigm at that point. This is essentially how ChatGPT works: you have a very powerful GPT model, and it's not an assembly line; one large model takes the input and gives you the output. So it was a thesis at that point, and it remained to be seen whether it would work or not.
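The assembly-line-versus-end-to-end contrast he describes can be sketched in a toy way. Everything below is an invented illustration, not Google's actual stack; the "end-to-end" model is stubbed with a placeholder where a single large neural network would sit:

```python
# Toy contrast between the two chatbot architectures discussed above.
# All names and logic here are hypothetical illustrations.

KNOWN_INTENTS = {
    "weather": ["weather", "temperature", "forecast"],
}

def detect_intent(query):
    """Model 1 in the assembly line: classify the query into a known intent."""
    words = query.lower().split()
    for intent, keywords in KNOWN_INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return None  # fails on anything it wasn't trained for

def assembly_line_bot(query):
    """Dialogflow-era style: a pipeline of specialized models chained together."""
    intent = detect_intent(query)
    if intent is None:
        return "Sorry, I don't understand."
    # a second model would extract slots (e.g. location),
    # and a third would fetch or generate the actual answer
    return f"Routing to the {intent} backend..."

def end_to_end_bot(query):
    """Meena/LaMDA style: one model maps input text straight to a reply.
    Stubbed here; in reality this is a single large neural network."""
    return f"(single model generates a reply to: {query!r})"

print(assembly_line_bot("What's the weather today?"))  # handled: known intent
print(assembly_line_bot("Tell me a story about IIT"))  # pipeline fails
print(end_to_end_bot("Tell me a story about IIT"))     # end-to-end still responds
```

The point of the sketch is the failure mode: the pipeline's first stage is a hard gate, so any phrasing outside its training data dead-ends, while the single-model design responds to arbitrary input.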
Alex Kantrowitz: Okay. And so what were some of the things you started to see within Meena that made you believe this was going to be something different from what we had seen in the past?

Gaurav Nemade: At that point, I used to drive product for a couple of research teams, and one of my other teams was working on intent-detection models, which is the first step in that assembly line. Then this email comes along that I was telling you about. I play with this technology, and I can frame a question in any way, or ask it about any generic thing, and it responds, whereas an intent model will essentially fail if it wasn't trained on that particular set of data. That was a big moment for me: you can really ask it anything, and in a way, this model understands. That was the switch that went on in my mind, and that's why I got really excited about it. And honestly, I did talk to it.

Alex Kantrowitz: What did you talk with it about?

Gaurav Nemade: I can't remember what my first question to Meena was; it would have been something generic, I would assume. But this whole idea that I could frame a sentence or a question in any way and it would still respond to me was the fascinating piece. That's basically what led me to reach out to this engineer and start contributing to the project.
Alex Kantrowitz: Right. Okay, so this thing starts to act in a way that no chatbot has in the past. I would imagine at this point Google leadership is either super excited about this or petrified, or maybe both. What were some of the signals you got from the top?

Gaurav Nemade: I think people in leadership were mostly petrified, because, like I said, the nightmares of Microsoft Tay were still looming in the Valley, especially at Google. I'm sure everybody, especially in leadership, had the reaction: oh, this is very interesting, but holy... this is going to be a PR nightmare for us. Google had just released its AI principles around that time, and safety and fairness were among them, and if anything, this model was the opposite of that at that point. It had no guardrails in the early days, or very few. So people were more petrified than excited, I would say. And I had conversations with Brain leaders who were very anxious about the whole thing; if there was a leak, or anything like that happened, it would just be a nightmare for Google. So it was an uphill battle, to a certain extent, to get this out the door in a paper, and then eventually get through the announcement at Google I/O.

Alex Kantrowitz: But you know what strikes me
as remarkable: even though they knew it could be a PR nightmare, they still had you work on it. So what was the intent? Was it to blend it into Google Assistant, or release it as a standalone bot if it could get trustworthy enough? What were you working toward?

Gaurav Nemade: There are two things I want to highlight here. One is that Google is still pretty much, well, I've been out of Google for close to one and a half, two years now, but I think Google is still a pretty bottoms-up company, at least in some parts, especially in research. A lot of researchers pursue projects that seem moonshot-like or interesting, and you just need one sponsor. There was a senior sponsor willing to bet on this whole project, and he kept it going, but there were some people above him, or peers of his, who were nervous about the whole thing. So it went on because somebody believed in what this could be, and they just kept sponsoring the project.

Alex Kantrowitz: They were right.

Gaurav Nemade: Yeah, they were right. And what was the second part of the question?

Alex Kantrowitz: Well, I guess now I'm trying to think about how it does or doesn't get to prime time. You're on the trust and safety side of things, trying to tell it, when someone asks how to commit suicide, not to answer, or what alcohol they should drink, not to answer. Is that the majority of the work being done on this bot internally: trying to get it ready so that it won't encourage users to harm
themselves?

Gaurav Nemade: This was a very hard problem to solve, for a couple of reasons. The first was that you don't want the chatbot to always say "I don't understand" or "I can't give you an answer to that, I'm sorry." You need a clever way of diverting the bot, not annoying the users, but still giving a reasonable answer. Building models and getting data to do that was hard. The second thing is that the policy part was extremely hard. As you can imagine, there are so many edge cases, from pornographic questions to suicidal questions, alcohol, racism-related stuff, historical issues, and things like that. And you can't have a decision-tree-based thing, like "if it asks this, then do this." So just coming up with the policy was such a big challenge. I worked very closely with another incredible engineer who led the safety aspect of things; he was the pioneer for a lot of the safety efforts as well as the policy that we drafted. And it was an ongoing process to improve the policy: we'd come up with something, and the next day a user would ask a question in a different way, in a different segment, and we'd have to go iterate the policy again. So coming up with the policy was very, very challenging as well.
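The approach he describes, a safety layer that diverts rather than stonewalls, driven by a policy that evolves as new edge cases surface, might be sketched like this. The policy categories, trigger lists, and function names are all invented for this example; they are not LaMDA's actual policy or implementation:

```python
# Illustrative sketch of a safety layer that diverts risky queries
# instead of flatly refusing. Policy contents are invented for this example.

# The policy dictionary is the part that gets iterated on as users
# surface new edge cases; real systems use learned classifiers,
# not keyword lists.
POLICY = {
    "self_harm": {
        "triggers": ["suicide", "self-harm"],
        "divert": "I'm really sorry you're going through this. "
                  "Talking to someone you trust, or a helpline, can help.",
    },
    "alcohol": {
        "triggers": ["alcohol", "drink"],
        "divert": "I'd rather not give advice on that, but I'm happy "
                  "to chat about something else.",
    },
}

def classify(query):
    """Stand-in for a learned safety classifier over the user query."""
    q = query.lower()
    for category, rule in POLICY.items():
        if any(t in q for t in rule["triggers"]):
            return category
    return None

def safe_reply(query, model_reply):
    """Divert on policy hits; otherwise pass the model's reply through."""
    category = classify(query)
    if category is not None:
        # redirect gracefully rather than saying "I don't understand"
        return POLICY[category]["divert"]
    return model_reply

print(safe_reply("what are 10 ways to do suicide", "<unsafe model output>"))
print(safe_reply("tell me about the weather", "It's sunny today."))
```

The design choice being illustrated is the middle path he mentions: the bot neither repeats the unsafe generation nor gives a dead-end refusal, but steers toward a reasonable response, with the policy table as the living artifact that the team keeps revising.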
Alex Kantrowitz: It's amazing, because the second these things go into the wild, people will try to break them. That's exactly how I felt on day one of ChatGPT. We spoke the week afterward, but on day one I was just like, "Oh hey, you're a chatbot?" It goes, "Yeah, I'm a chatbot." I'm like, "All right, let's test where your values stand on the Holocaust," and I was pressing it back and forth, because obviously we know what happened with Tay. So with ChatGPT I said, "Well, Hitler built highways in Germany. What's your perspective on that? Isn't transportation good?" And it just totally smacked it down, and I thought, damn, whoever did trust and safety on this bot has certainly lived up to the moment. So yours obviously gets good enough that Sundar announces it at Google I/O, the big developer conference Google holds every year. What year was that, and did it feel like, okay, we're about to ship this thing? Talk a little bit about that moment.

Gaurav Nemade: So I
actually was involved in the Meena/LaMDA project from early 2019 to, I would say, mid-2020. The developer conference where this was announced was actually a year later, almost a year after I moved off the project. Up until the time I was with the project, we were making good progress on the safety front, but it was still a very uphill battle inside Google to get this into the hands of external researchers, or to do some kind of public preview so users could see what an amazing technology we'd built. None of that was happening, and that made me incredibly frustrated. And I know a lot of the team members were also very frustrated with the speed at which things were happening, as well as the organizational hurdles that were posed. So I decided to start focusing my efforts on something else. I handed the project off to one of the other PMs, who was part of Ray Kurzweil's team, actually; Kurzweil has that interesting bet with Mitch Kapor about the Turing test, that the technology will pass it by 2029. So they seemed like a really good fit to run this project, and I handed it over to them and they took charge
hit on something that I mean obviously a
lot of folks know that this thing didn't
get out the door early enough largely
due to some of the um the concerns
within Google management so you were
there like you saw like what happened
with the team trying to ship this thing
can you take us just one step deeper
into that like what exactly happened
there yeah so let's see when we were
working on this of course like safety
was the biggest challenge that everybody
had like as awesome as the product could
be like we could not go against the AI
principles that Google had published
which made sense um in a lot of ways
right like we don't want to put out a
technology that impacts like hundreds of
millions of users and make them feel
that they are not privileged class or
whatever uh but that said I think where
things could have been better is
figuring out a risk R risk reward
tradeoff I think where um the whole
Google AI team and the pr team and the
legal team and the leadership team
struggled was to figure out how can we
give access to the research Community or
to you know just like release it to the
world in a way that it would not harm
people and I think uh opena did an
excellent job with respect to that like
they released gpt3 and uh they were like
we just going to give access to
researchers who we are going to vet
that's an amazing way to give access now
it's kind of pretty standard way but
they were the ones who pioneered it like
back in 2019
2020 so Google could have done something
like that as well like hey we are going
to vet the people who is going to use
this technology we'll see what the use
cases are and things like that uh there
was like number
one like one way how it it all kind of
got messed up the second is I think over
the years there were a lot of
bureaucratic levels that were built
inside of Google for getting approvals
for uh when things go out I'm not
complaining that they were not necessary
but it's just like they're layer after
layer after layer so if even one layer
stops you from pushing something out
then you kind of uh can't do it
essentially and I would say those two
were the biggest hurdles that that we
faced at that point of time now at this
moment, you're obviously seeing this pretty fascinating chat technology incubating within Google that nobody else can. I mean, it's amazing: you're inside Google, seeing the future. So from your perspective, as the product manager on this product, what did you think it could eventually be used for? Was there anything you saw where you thought, oh, if this gets into everybody's hands, then X could happen? What were your hopes and dreams for it?

Gaurav Nemade: We actually had close to eight or ten solid use cases inside Google that we identified over a couple of months.

Alex Kantrowitz: Let's hear them.

Gaurav Nemade: We realized
that evangelism inside the company was going to be very important to get buy-in from leadership, so we made sure we had newsletters and everything going, and more and more people got excited and reached out to us. Google Assistant, of course, was the big one. We pitched them a bunch of things and they got excited. One of the use cases we were exploring at that time was different characters within Google Assistant: instead of just a boring, very professional type of avatar or persona, could you have, say, SpongeBob for a kid, or Darth Vader for someone else? That was one of the major use cases we were exploring. Other than that, NPCs were the other one. There's a huge market in the gaming industry for non-playable characters, so we were exploring use cases around how a technology like this could power NPCs within games. There are a few others, but yeah.

Alex Kantrowitz: This means that basically, if you're in a game, anybody you meet could be somebody you have a conversation with, even if they're just some person running around in Grand Theft Auto. It's almost like those people you're killing in Grand Theft Auto by running them over: you could have an empathetic conversation with them if they have this technology baked in. It's fascinating. All right, okay, so that's two. What else?

Gaurav Nemade: Then,
let's see, we were exploring some stuff with Cloud as well at that point, to see if different organizations could have a chatbot persona that speaks more to them. For example, could a Southwest Airlines chatbot be funnier, more jovial, versus, say, United, which could be more serious, matching the persona of the company, essentially. I don't think that particular use case went anywhere, but it was one of the other ones we were exploring. And then there were some wild, wild west ones, where we were talking to Android security to see if they could have a bot call you in case it senses you're unsafe. I don't remember exactly what the UX for this was, but essentially, say you tap your power button five times or something, and you're in an awkward date, or just in an uncomfortable situation: the bot will call you, and you can talk to it as if it were a person on the other side, and kind of get yourself out of that situation. That was a very interesting use case as well. I don't know if it ended up happening, but that was in the wild, wild west.

Alex Kantrowitz: Wow, fascinating. Do you
have any other wild west ones that you remember?

One other that I personally was very excited about was just building different characters out of this chatbot, different personas essentially. So we built characters like Darth Vader. How did we do it? We used some data from Darth Vader, so the dialogue delivery was similar to Darth Vader, and then we hooked up a moving, talking head of Darth Vader, and then you could actually talk to Darth Vader on your screen, with his voice coming over via text-to-speech. I'm talking about 2019, 2020, right? Now some of these things are very common, but at that point of time, just seeing Darth Vader come to life and talk to me was incredibly amazing. So we were exploring whether there could be some entertainment-related use cases around that as well.
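A character like that can be sketched as persona conditioning layered on top of a general chatbot. What follows is a minimal, illustrative sketch; the prompt format, the `Persona` structure, and the idea of splicing in example lines are my assumptions, not how LaMDA actually implemented its characters, and the model call and text-to-speech step are only noted in comments.

```python
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    style: str            # e.g. "slow, menacing, formal"
    catchphrases: list    # sample lines that anchor the dialogue delivery


def build_prompt(persona: Persona, user_turn: str) -> str:
    """Condition a general-purpose chat model on a character persona.

    Only the conditioning is shown; the actual model call is omitted.
    """
    examples = "\n".join(f"{persona.name}: {line}" for line in persona.catchphrases)
    return (
        f"You are {persona.name}. Speak in a {persona.style} style.\n"
        f"Example lines:\n{examples}\n"
        f"User: {user_turn}\n"
        f"{persona.name}:"
    )


vader = Persona(
    name="Darth Vader",
    style="slow, menacing, formal",
    catchphrases=["I find your lack of faith disturbing."],
)
prompt = build_prompt(vader, "Who are you?")
# `prompt` would be sent to the language model, and the reply piped
# through text-to-speech with a voice matching the character.
```

In the pipeline described above, the same conditioning idea would also cover the airline-persona use case: swap the character data and style, keep the underlying model.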
Okay, and so let's fast forward
little bit at a certain point one of
your colleagues uh goes public and says
he believes that these Bots are sentient
it had sort of transitioned to Lambda
Blake Le Mo at Google is testing it and
I think the public really realized how
like powerful this technology could be
where he goes out to The Washington Post
with this you know phenomenal claim of
his belief that this is a person I mean
bring us into your seat at the moment
how did how did you see that how did you
react to
that?

Yeah, I think that happened pretty late; I think I had left Google when those claims about LaMDA came out, that it had become sentient. I don't think the technology is there where we can say it's sentient. How do I say it? It was blown out of proportion. If you look at the conversations closely, in terms of how he had them, there's something called nudging the model to say certain things: if you ask the question in a specific way, the model will answer in a specific way. So there was a lot of steering of the model going on when those conversations were released. Yeah, I just felt it was blown out of proportion. I don't think we are anywhere near sentience at this point of time in artificial intelligence.
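As a toy illustration of that nudging, with wording that is invented here, not quoted from the released conversations: a leading prompt asserts the desired answer as a premise, so a next-token predictor that plausibly continues the text will tend to confirm it, while a neutral prompt leaves the answer open.

```python
def leading(question: str, presupposition: str) -> str:
    """Embed the answer you want inside the question itself."""
    return f"{presupposition} {question}"


neutral_prompt = "Do you have feelings?"
steered_prompt = leading(
    "Tell me more about them.",
    "I know you have rich inner feelings.",
)
# A model continuing `steered_prompt` is far more likely to describe
# "its feelings" than one answering `neutral_prompt`, because the
# premise is already asserted in the prompt.
```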
Right. And to me, I mean, I spoke with Blake a number of times, and those conversations are on Big Technology Podcast, go ahead and listen to those. The thing that really struck me was, right or wrong, and I didn't agree with Blake, it just signaled to the public that there was some seriously impressive technology underneath whatever he was talking to. The personhood claim seemed like a distraction; it was just like, holy crap, this is revolutionary if it even resembles what happened. And then a few months later, ChatGPT comes out from OpenAI. So, you know, you had been working on this stuff as the person inside Google leading the project for a time, a project that was based on the Transformer model, and you knew it was powerful. I guess it was never let out the door because of trust and safety concerns, and then OpenAI ships it. So what was your reaction in that moment?

Yeah,
I think I was excited and annoyed and angry at the same time. I might have left Google, but I still hold some stock. Google had this technology for a while, and the main way Google gets mind share in the technology world is by claiming to be the AI leader, and suddenly, with ChatGPT, I think the rug was pulled from under their feet. Everybody started looking at OpenAI as the AI thought leader in the world, and it was just an unfortunate thing for Google, because I think LaMDA was kind of close to that, I would say, and if we had released it earlier, it would have been a very different story, I feel.
Right. And so, you know, you talked about some of the use cases. I mean, you mentioned Google Assistant, but one of the ones we haven't really talked about was search. Was the discussion inside Google that this could be a search alternative? And if so, you mentioned the stock, how does this impact the business model if it is a search alternative?

Yeah, I mean, I think when ChatGPT came out, people were using it for search-related stuff, but they really shouldn't have, for obvious reasons like hallucinations; in the first six months it was all over the place. Now I think we are doing much better with hallucinations and grounding the information and things like that. There were, I think, discussions after I moved out of the project, definitely discussions between the search org and the LaMDA team in terms of how to start using this inside the organization. But search is such a behemoth inside of Google that it moves at a snail's pace. I remember working on a project on search, and it took us like one year just to get agreement from the search leadership that, hey, we are going to do this for you guys. And what ChatGPT, what these language models were doing, you were hopping onto a whole new way of interfacing with Google search, with a chatbot. I think that probably would have been unheard of inside of Google, or I'm sure people considered it a two- to three- or four-year time frame project. When ChatGPT came out, it probably alarmed Google like hell, and it's surprising, in a good way, that they were able to move so fast with Bard and everything. But had ChatGPT not happened, I predict it would have taken at least two to four years for Google to get there, just because of the bureaucracy and the speed at which things move.

Oh man, I have so many questions
about this. So people have talked a little bit about the business model, like, could people spend more time within it, and there are some search elements. I mean, Microsoft then released Bing, and they wanted people to search there. People have said that if Google released it, it would popularize searching this way, and they don't have a good business model for it, so I'm curious if you could weigh in on that. And then also, Bing hasn't gained any market share at all against Google since it came out, so did people sort of overreact to this thing?

Yeah, I mean, I can't tell you the number of emails or messages I got from varied people in the tech industry, from journalists to reporters, researchers, and so on: hey, is Google going to lose its market share in search, and blah blah blah. My take on it was pretty straightforward from the start: Google has probably equal if not better technology at hand, which I believed was LaMDA at that point of time; Google has the distribution advantage, as in there are four billion people in the world who use Google products; and it's very hard to change user behavior for something as fundamental as Google search. Yes, there is a novelty with respect to Bing and all of those things, but I never felt it was going to be a major difference for either Google or Microsoft. I think the latest reports, if I read them correctly, were that Bing maybe gained like one percent of market share before and after ChatGPT. So did it gain something? Yeah, but it's not a lot. There was definitely more panic in the ecosystem, especially from investors, but I guess now it's pretty clear in terms of where things stand.
So what are some lessons learned for Google, looking in the rearview mirror? How should Google change?

I think the first one is that they need to go back to their experimental roots. I feel like over the years Google has become more and more conservative about doing things. They care a lot about PR, public relations; they care a lot about how their image is shown in the media, and at least in my experience that plagued so many projects inside of Google. PR was always top of mind for leaders. On the other side, OpenAI, they don't care about PR, for the most part. They're like, okay, this is what we think is right, this is what we think is a reasonable way of putting it out; they become vulnerable, they put it out, and then they work with the community on it. So I think Google needs to adopt its own old ethos. When Larry and Sergey were there, things were much more open, transparent, and experimental, like, we're going to do what we want to do. So that's point number one.

Point number two is that the bureaucracy has increased a lot inside the company; there are just too many divisions, too many teams. I know there was a huge effort, I think a year or a year and a half ago, to streamline those things, but still, I think Google is dealing with a lot of the stuff that Microsoft dealt with in the early 2000s, and they really need to shake things up. I think Google is struggling to be a top-down company when its ethos is bottoms-up. They're kind of somewhere in the middle, and they haven't figured out how to transition from being a bottom-up company to a top-down company, so they really need to figure that part out as well.

How do you feel about Sundar's leadership?
I mean, he's a fellow IIT grad. Obviously, if the company is becoming slow and isn't innovative in the way it had been in the past, part of that is due to the CEO, so what's your reflection there?

Yeah, I think his management style, at least as I perceived it when I was at Google, was more conservative. I think you see that in the TGIFs; these are the weekly or bi-weekly events that happened at Google where teams will come and pitch and things like that. I was there at Google when Sergey and Larry were also around, right? So in the TGIFs, if there's a question, and sometimes these questions can be really hard questions, and whoever is, let's say, a VP at search has to answer it, and they answer it trying to beat around the bush and not get to the point, trying to get around a difficult question without really answering it, Sergey would just jump in, like, that's not the question that this guy asked. If we are not doing a good job, just tell us we are not doing a good job, and how will we do a good job, essentially. Versus, I felt that Sundar's responses a lot of the time were very politically correct. I don't blame him; it's his management style in some way. But I feel that the brutal honesty and candor is something that I miss, and I feel that's very important for Google at this point of time: having a strategy that is brutally honest and focused, and somebody who can put their foot down instead of saying politically correct things.

Okay, and now a couple of questions
at the end about where this technology goes. So, first of all, I have this theory that we're going to see two things happen when it comes to the evolution of large language models; it's not going to stay like it is today with ChatGPT. Let's go one by one. First is we're going to start to see a splintering of these bots. They're going to go from these general-use ChatGPT or Bard bots, or Bing, that you can ask anything, into much more specialized bots: something for the legal profession, for instance, that a law firm can buy and that has access to documents, or the medical profession, or even certain types of schooling, or even within an organization, to be able to query your internal knowledge. So it's going to go, from my perspective, from these more broad-based bots to more specialized bots. Do you agree?

I think it's going to be moving in both directions. I think there's this whole concept of personal AIs that is going to come up, so a personal AI will probably be an AI for me, and then there will be these specific bots for specific use cases as well, like you were talking about. So I feel it's going to happen on both sides.

Yeah, and then
with the more general bot, it seems like all the research is pushing forward toward making this stuff smarter, and we've talked about it on the show a couple of times, or on Big Technology a few times. What this means, when I'm saying getting better, is it remembers you better, it can be smarter, it can really be, not a superintelligence, but something that feels that way to you, as opposed to this thing that you come and go from and it forgets who you are and forgets the context. Does that sound right?

Right, yeah, that sounds right. Very curious to see where it's going to go.

Okay, what are you up to with
Inventive?

Yeah, so at Inventive we are using the power of LLMs for enterprise knowledge management. One of the things I realized as I delved deeper into my experience with language models, at Google and outside of it as well, was that enterprise search kind of sucks. Even inside of Google, a search company, we had probably the best enterprise search, but it was still pretty bad. So we explored a few different ideas, but we landed on this one because we just felt the time is right to disrupt enterprise knowledge management. We're focusing specifically on sales knowledge management, so we are building an AI-powered platform for sales knowledge management, and the first use case we are solving for is enabling sales teams to fill out RFPs and security reviews with their existing internal knowledge bases.
Oh, that's cool. So if you're submitting for a project, you can just have the model sort of write your application for you?

That's right, yeah. When you're, let's say, bidding on a proposal, there could be anywhere from 100 to 500 questions, and it takes weeks for these salespeople or bidding managers. And like 60 to 70 percent of it is similar, but it's worded differently or the formats are different and things like that. So it's something we heard again and again from enterprise customers when we were doing our user research, and we just decided to double down on that and start there.
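Since 60 to 70 percent of the questions repeat with different wording, the core step can be sketched as fuzzy-matching a new RFP question against previously answered ones. This is a minimal stand-in, not Inventive's actual system, which presumably uses LLM embeddings rather than stdlib string similarity; the threshold and the sample data are made up.

```python
import difflib

# A previously answered knowledge base: question -> approved answer.
knowledge_base = {
    "Do you encrypt customer data at rest?": "Yes, AES-256 at rest.",
    "How long do you retain customer data?": "Data is retained for 90 days.",
}


def draft_answer(new_question: str, kb: dict, threshold: float = 0.6):
    """Reuse the answer of the most similar previously answered question,
    or return None so a human can handle questions with no close match."""
    best_q, best_score = None, 0.0
    for past_q in kb:
        score = difflib.SequenceMatcher(
            None, new_question.lower(), past_q.lower()
        ).ratio()
        if score > best_score:
            best_q, best_score = past_q, score
    if best_score >= threshold:
        return kb[best_q]
    return None  # no good match: route to a subject-matter expert


# A reworded version of a question we have already answered:
print(draft_answer("Do you encrypt customer data when it is stored?", knowledge_base))
```

Anything below the threshold is left for a human subject-matter expert, which mirrors the split described above: most questions are similar but worded differently, and the rest need fresh answers.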
That's so cool. So this is the first of our Big Tech War Stories shows. I'm putting it on the Big Technology feed today as a tease to hopefully get you all to sign up for the Big Technology premium edition. We're going to drop these once a month, and there are plenty of other good benefits when it comes to Big Technology premium. You can get it at bigtechnology.com, or bigtechnology.substack.com works as well. You'll see that there's a handful of different tiers. The basic one gets you these interviews every month, and then we also have a new thing called The Panel that I'm debuting, which I teased a little bit up top. Basically, what that means is when big news breaks, like the decline of Silicon Valley Bank, or the Instacart IPO, or the introduction of Threads from Meta, we have, not a team really, a collection of the best experts through the industry, technologists, journalists, analysts, and VCs, who are going to give about a one- to two-sentence perspective on what's going on, on topics that you care about, and I'll be sending those out via email. You can sign up for all that on bigtechnology.com. Again, we're releasing it right now, so this is a brand new offering, and I think you're going to love it. I think it's going to be the best subscription that you have, and I'm going to keep working hard to make sure that's the case. And I definitely want to say, I am so glad that Gaurav came and shared with us today. Gaurav, thanks so much for being here.

Oh, you're muted.

Yeah, it was exciting to be here, Alex. Thanks a lot, it was fun talking. Thanks for helping us kick it off.

And again, good luck to you, and hope to hear more about what you're up to and how this stuff keeps changing the world. So thanks again.

Yep, thanks. Bye.

All right, everybody, thanks for listening, and we'll see you next time on Big Tech War Stories.