The Next Gen AI Models: Reliable, Consistent, Trustworthy — With Cohere CEO Aidan Gomez

Channel: Alex Kantrowitz

Published at: 2024-10-30

YouTube video id: 2Xra8wLdWFg

Source: https://www.youtube.com/watch?v=2Xra8wLdWFg

Alex Kantrowitz: We're joined by Aidan Gomez, the CEO of Cohere, an AI platform for enterprise. He's also a co-author of the famous "Attention Is All You Need" paper, which invented the Transformer and started this whole AI thing. Aidan, great to see you. Welcome to the show.

Aidan Gomez: Thanks, Alex. Thanks for having me.

Alex Kantrowitz: I want to begin
with some myths and facts about AI. We have debates all the time on the show: where's the technology going, and is it worth the investment? There's no better person to ask than someone who was there at the beginning and is now an entrepreneur in the space. Earlier this month, OpenAI raised $6.6 billion, but the reporting says they might be losing $5 billion per year. In some ways, fine, you need the investment to build the models. But in another way, where does this end? The compute, the data, and the energy needed to train these models keep getting bigger, the requirements keep growing, and so does the money. Does this ever become sustainable? What do you think?

Aidan Gomez: I definitely see a path
towards sustainability. There's definitely a massive upfront cost for building the technology. In order to even bring it into existence, before you can use it at all, you have to spend a lot: you have to build supercomputers, you have to create these models in the first place, and it's really expensive. But the technology we're talking about building, and spending that capital on, is something we know is transformative. It's digital intelligence. It's automating something that is really precious and valuable, which is intelligence. So I certainly understand the urge, when people see the numbers being spent on training, to worry that the value won't be recouped. But I think those numbers are actually small relative to the long-term value the technology will deliver.

That said, now is the time to prove it. Last year was very much the year of the proof of concept: people were getting familiar with the technology, it was their first time working with it, and so there were a lot of small tests and experiments. This year is very much one of going to production and getting these models into the hands of people at scale. Of course, we're already seeing a high degree of ROI in the sense that hundreds of millions of people are now using the technology. It's actually in their hands; it's part of their day-to-day. And with what Cohere focuses on, the enterprise side, we're starting to see this technology get into the hands of employees and into enterprises. It's a much slower process and a bigger lift: you have to integrate with existing systems within enterprises, and you need to train employees on how to use the technology. But that's well underway now, and we're seeing quite dramatic growth in adoption. So I think we will find ROI, and it's coming soon.

Alex Kantrowitz: And we'll talk more
about the specifics of Cohere and ROI in the second half, but let's keep on this line, because it goes to another one of these myths and facts: that the next set of models is going to be this godlike set of models. You talked about how there's a lot of cost at the beginning, and that it's necessary cost to train these massive models. The sense I've gotten from my reporting is that companies have been willing to make those investments because they think the next 12 or 18 months of model development are crucial, and that capabilities will advance significantly as they put more compute, data, and energy into the process. So let's go to myth and fact number two: does the next set of models give us that godlike AI model?

Aidan Gomez: I don't know about
godlike. I don't think I'd ever use that term to describe what's coming. I think we're going to have some really powerful and useful tools emerge; that's what's coming. The idea that we're building AGI, or something that's just going to solve all our problems for us, I think we need to set aside. I don't actually think we'll get to that.
Alex Kantrowitz: Let me ask you more pointedly about this next generation of models. You say we're going to see some new tools, and these models are being trained on much more resources than previous generations. So what tangible steps forward are you expecting to see from the next generation of models?

Aidan Gomez: I expect them to be far more
robust and reliable, and I expect they'll just be more capable. There are lots of things today that break models in surprising ways, and I think that's going to become rarer and rarer. We're going to be able to put these models into more high-stakes situations, trust them more, and really have them be a partner in getting work done. That's the shift I see coming.

Alex Kantrowitz: What type of high-stakes situations?

Aidan Gomez: It could be anything from
these models being your assistant for the work you're doing. If you're a doctor, one might help you summarize patient notes so you can spend more time with the patient, actually speaking to them and learning about their symptoms, rather than just getting up to speed on what's happened in the past. And when you do get up to speed on a patient, you'll have a much more accurate summary. You won't have two decades of patient notes to read through over two hours; instead, you'll have a very contextually relevant summary to bring you up to speed.

Alex Kantrowitz: It could be even more than that,
because I was speaking with a doctor recently who said their hospital system has implemented an AI tool. I don't have its name, but this person was speaking firsthand about the AI being something they consult for treatment, which just about blew my mind. And I think you're right to hit on that trust factor with many of the current models: they get things wrong at a high enough clip that we just don't trust them, so we don't use them. Something that's only 60% right isn't useful. But as accuracy gets to, say, 90%, that might be where we start to see some of the mind-blowing and truly useful applications of the technology. So maybe that's what comes next. What do you think?

Aidan Gomez: Yeah, I think so. I
think the reliability and trust factor is huge, and also just the competency: the accuracy with which it gives you answers. All of those are going to increase. I don't see a step change coming, but I see a steady, continuous course towards very high accuracy, very high reliability.

Alex Kantrowitz: Now, there's been so much hype in the
industry, and this is one of the things that comes up in that discussion. Everybody I speak to who's on the ground says we're not going to see a step change with, say, GPT-5, but exactly what you describe: more reliability, more consistency. Is there a worry that some of the air is going to come out of this AI moment? When I say "godlike," I don't believe that's going to happen; I'm reflecting what a lot of the hype is starting to expect. If what we get is steady improvements in reliability, which are actually pretty big, does that take some of the steam out of this moment for AI, because people will see the lack of a step change as a failure given where the hype is?

Aidan Gomez: Well, listen, I'm not one of the
people saying we're going to be building God.

Alex Kantrowitz: Yeah, I'm not saying you're doing that.

Aidan Gomez: I don't have much to say towards those claims. What I would say is that, just as a hypothetical, even if the technology froze and what we have today is all we get, there's so much good to be done. There is so much work to do to implement this technology across the economy, really boost productivity, drive better outcomes, and build tools. The technology does not have to move in order for incredible value to be realized; we just need to go do it, and it takes a lot of effort, time, and work to realize that value.

Alex Kantrowitz: Okay, and again, more of that is
coming in the second half, where we'll get a bit more tangible, but let's stay with the theoretical, or at least the industry-level stuff. Let's talk about the ingredients required for these models to get better; again, this comes from your paper. They need data, they need compute, they need energy, and the amount of compute being used to train models is starting to go through the roof. For some context: Meta's Llama 3 model, which was state-of-the-art about ten minutes ago, used 16,000 GPUs to train. Now we're hearing that Elon Musk is building a supercluster, I think it's called Project Memphis, with 100,000 GPUs: multiples of what the cutting edge is being trained on. So I'm curious what you think that increase in GPUs is going to get us, first and foremost, and then we'll talk about whether the right way to scale these models is just throwing more compute at them, because I know you have a nuanced take on that. But first and foremost: if you go from 16,000 GPUs at the state of the art to 100,000, what do you think that delivers?

Aidan Gomez: It definitely delivers a
bigger and better model. You have more compute, and we know that scaling up improves things. There are questions around saturation and whether continuing to scale up is justified, whether there will be enough gains from that strategy to justify the increase in cost. My personal perspective is that building a massive model isn't actually useful for the world if it's too big to be consumed, if it's too expensive to actually deploy. So at Cohere we've been very focused on building the right size of model. But if your question is what more compute unlocks, it will be a better model. Objectively, we know that scaling leads to more capability: a smarter model that's more reliable. That's the output.

Alex Kantrowitz: And does that ever end? I
mean, that's one of the big debates here: could you add compute and data basically to infinity and it will continue to improve, or is there a tipping point, a saturation point?

Aidan Gomez: I don't think, within any achievable
scale-up for humanity, we'll reach that tipping point. It just saturates: the gains become much, much smaller, so you're much less willing to pay double the price for a minute difference. But it is pretty consistent that bigger is better, and that continues; it just tapers off over time.

Alex Kantrowitz: Okay. So then let's talk about
this AGI thing that you brought up. OpenAI has talked about how their goal is to build AGI, and a lot of people in the industry talk about AGI as their north star. I know Cohere is more about making this practical for businesses, but I want to get your sense, because since AGI is not your north star, I think you can speak a little more honestly about what it means and whether it's achievable. Let's use this definition: AGI is intelligence that's as capable as humans at the tasks humans do. Do you think that's something we should even be thinking about, or is it a marketing tool? And is it achievable?

Aidan Gomez: With that definition of AGI, I think it's both achievable
and a fairly reasonable target. We can measure how good humans are at any particular task, and it's a reasonable goal to want to create technology that can perform that well at that task. So I think, based on that definition...
Alex Kantrowitz: When we start to think about what I described as godlike models, those are being described as well beyond that. So my definition is probably artificial general intelligence, and this godlike thing is superintelligence. I think a lot of people use AGI as a synonym for superintelligence, which seems wrong to me. But there is this belief that once we hit AGI, we've already reached superintelligence, because if it can do everything humans can do, and it doesn't get tired, doesn't need to sleep, and doesn't necessarily need to get paid, then you're already at superintelligence. But sorry, go ahead.

Aidan Gomez: No, I think that
definition of AGI is a reasonable one, and I think it's exciting. That's definitely the target. What we want to do is create machines that have this unique property that humans have, intelligence, and deploy them in places where they can take work off the shoulders of people and put it onto machines, to make work better and easier. And in order to do that, in order to trust the machine enough to shift that work over, it had better be as good as a human; otherwise you're paying some price, you're reducing accuracy, and things get worse, not better. So that's a very reasonable objective.

Alex Kantrowitz: And when do you think we might reach it?

Aidan Gomez: In many respects, we're already there
in many fields.

Alex Kantrowitz: But we're still working.

Aidan Gomez: I don't think those two things are in conflict. I really don't think that.

Alex Kantrowitz: Why not?

Aidan Gomez: Well, because I think we will never see mass unemployment of humans.
I think this technology is going to unlock more opportunities. It will let us do more, as opposed to scaling back what we do. Humanity is very supply-side constrained, not demand-side: we always want more, we want better, we want to be healthier, we want to do more, we want things to be cheaper. We have all this demand, and we're trying to keep up with our own society's demand, and this technology's true promise is in bringing productivity and letting us do more. Now, you can zoom in and pick a specific field and say this field might be automated by AI, and I think that's true, and we should be thinking about retraining and shifting certain skill sets over to new domains. But in general, at the macro scale, I think this technology will create much more opportunity than it will take away.

Alex Kantrowitz: If we have AI technology
that can basically do work for us, whether it's knowledge work or whatever, and we already have a lot of technology that can automate factory work, why are we continuing to work?

Aidan Gomez: I think it brings purpose and
meaning to a lot of lives, and we enjoy it. I think the right form of work is something that fulfills you, something that's enjoyable, intellectually interesting, and compelling. That's really what I want to spend my time doing, as opposed to number crunching. Maybe someone else enjoys number crunching, but for me, I'd rather outsource that to my Excel spreadsheet. So I think work in its best form is incredibly fulfilling, and that's something humans will never give up; we'll always want to do that. But if we can hand things off, if we can have an assistant that's on 24/7, has access to all the information and tools I have access to, and can be asked to do things for me, that's a very compelling value proposition. It changes work in a way that is extremely positive, I think, for almost everyone.

Alex Kantrowitz: How
far away do you think we are from having reliable assistants? A lot of people looked at OpenAI's o1 reasoning model and said, this is a step toward assistive AI. What do you think?

Aidan Gomez: I think the notion of
using reasoning, letting the model have an inner monologue to work through problems, think them through, make mistakes but then catch and correct them, is a crucial piece in improving not only the accuracy, robustness, and usefulness of the model, but also the trust in the model. Because you're able to inspect how it arrived at its conclusions and how it decided to do what it did, all explicitly written out, you actually trust it much more. I think we've all known these sorts of tools would need to emerge, and I do think it's a big step towards dramatically more reliable assistants, ones you can trust, work with, and give feedback to. It's really exciting.

Alex Kantrowitz: Okay, and then where does that
put you on the fear around AI? This again goes to our myths and facts theme. If AI can go step by step, figure out these processes, realize where it went wrong, go back, and take action... I think that's where people get weirded out, when these things start to take action on their own. What can they do that we're not prepared for? So what do you think about that? Are you worried that AI might cause harm to people?

Aidan Gomez: I think it's really important
to remember that we get to choose where we deploy these models. It's not as if they get to choose where they work or what they have access to. We have to plug them in, and we have the opportunity to implement safeguards, to make sure that before these models are put in any very high-stakes situation, there's oversight: a human has to approve high-stakes actions. It's not carte blanche, where the model is now smart and we just plug it into everything and say, go at it. It's very much intentional, and we're going to need to be thoughtful and careful about that. So I'm not scared of a doomsday, Terminator-style scenario. I think the media has certainly instilled that with lots of sci-fi stories, and it's a very compelling story, which is why, well before AI was remotely competent, we were coming up with stories about how this might happen.

Alex Kantrowitz: It's not
just media, right? It's also AI leaders. How many people signed that statement saying we should be treating AI risk the same way we treat climate change and nuclear war? Why do you think there are so many people in the industry stirring up the fear around this stuff?

Aidan Gomez: That's a great question to
ask them. I did not sign that letter, so it's not me, and I can't speak for anyone else.

Alex Kantrowitz: We have tried, by the way, to get these people on the phone, and some have come on. Go ahead.

Aidan Gomez: I would want to ask them the same question.

Alex Kantrowitz: So it puzzles you as well?

Aidan Gomez: Yeah. I mean, I'm empathetic
to the fears, because this is a very salient story; that's why it's been so popular in sci-fi and all that sort of stuff. So I'm understanding of why people are so attached to those stories. But as more and more evidence emerges that these models are much more controllable than we may have thought, and a little less capable than we may have thought, it gets harder and harder to sustain that narrative. And I think you see the discourse shifting now. It has begun to shift away from doom and existential risk, and it's now much more about practical concerns, which I'm really happy to see. Things like: this technology could be really useful for healthcare, but it could also cause harm if we don't do it the right way, so specifically, how do we set up the safeguards to make sure that harm doesn't happen? Same thing with finance and, say, distributing loans. Or with people using these models maliciously to pretend to be human and trick people; how do we prevent those things? That discourse is super productive and very effective, so things are shifting in that direction now, and I'm excited to see that change.

Alex Kantrowitz: Now, some of this fear comes
from people saying there are emergent behaviors in the models, that they've been found able to come up with things outside of their training set. And there have been some papers saying that, actually, they don't really have any emergent properties or emergent behaviors. As someone who co-wrote the paper that kicked this all off, what do you think? Can LLMs exhibit emergent behavior, or make discoveries they weren't trained on?

Aidan Gomez: I think they can... what's the
right word? They can interpolate between skills. If they've seen how to do A and they've seen how to do B, they can get kind of the average of A and B, but they don't go completely beyond anything they've seen. I've never seen a model behave in a totally unexplainable way. They're really good interpolators: if you show them different domains, they can blend those domains quite well. I've heard the same claims about emergent behaviors, and I think the research there is really inconclusive. There's not a lot of compelling evidence that says we're going to have some total step change or capability takeoff.

Even in the latest state-of-the-art research, a lot of the work is about synthetic data and models teaching themselves. Self-improvement is this notion of whether a model can actually teach itself without human intervention. It's now a huge part of model building; it's a big part of how we create data at Cohere. Before this became mainstream, and actually part of the production process of creating these models, people were saying: with self-improvement, these things are just going to take off, they're going to become superhuman overnight, and we won't be able to control them. Well, it turns out that doesn't actually happen. This intelligence explosion, or intelligence takeoff, is not happening. A model can self-improve for a while, and then it tapers off. You get some good improvement out of it, which is why we use it, but then it plateaus; it doesn't just keep going forever. So I think the evidence points firmly in the direction that a lot of those fears were misled.

Alex Kantrowitz: Right. Now, talk a little bit about
that. It's interesting you bring up the use of synthetic data and having the machine self-improve, because another one of the big questions about whether this plateaus is: does the world run out of data to train the AI? I was watching one of your recent interviews where you talked about how, back in the day, you could run up to anybody and they could add knowledge to a model, but as the models got smarter and smarter, it became less easy for people to add supplemental knowledge to them, which points to sort of running out of available data to make these AI models smarter. So how does AI-generated data actually solve that problem, and where is synthetic data being used to make these models better?

Aidan Gomez: I think the
example you gave is a good one. It's getting harder and harder to get the data that incrementally improves the model, and it's important to note that that's because the model is getting so much better. Before, we could grab anyone off the street and they could teach the model something. Then that signal started to go away, so we had to go to undergrad students in biology to teach the model about biology, and then to master's students, and then to PhDs. We're kind of at that level now: we're currently hiring PhDs to teach the model in their specific domains. But after PhDs, where do you go? Professors, I guess, and what about after that? So I think the models are catching up with the state of knowledge across a bunch of different fields. I would say that synthetic data probably doesn't get us out of that issue. Outside of easily verifiable domains like math, it's hard to use synthetic data to drive outcomes.

Alex Kantrowitz: So how is it being useful for you?

Aidan Gomez: Not for
making our models fantastic philosophers or fantastic social scientists; for that, we rely on humans. What we do use synthetic data for is crafting how the model responds to things, and in domains that are verifiable, like math and coding, it's actually quite effective. And that's still a huge domain of interest for people building and deploying these models: we want them to be good at math and computer science. So more and more, synthetic data is becoming a huge chunk of the data that we train on.

Alex Kantrowitz: Okay,
fascinating. One last part of this discussion: what methods help this AI improve? There's been a question of whether LLMs can take it all the way, or whether you need to combine them with different forms of training, like reinforcement learning, though I guess that's already part of it. The other side of it is, do you have to build world models, with robots going out into the real world and learning things like gravity and what happens when you bump into things, which you just can't convey in text? So I'm curious whether you think the current methods can get this field to the promised land, or whether they need to be combined with others.

Aidan Gomez: There are definitely proof points
out there which suggest that large language models, or the Transformer architecture, are capable of handling a bunch of different modalities. You can merge not just text but video and audio into the model, so you can give it a much more balanced experience of the world: you can show it the world, show it videos that demonstrate physics, and let it see, hear, and speak. So as a platform, it does seem like a pretty good one, as far as they go. There's a more philosophical argument, had among academics, around whether text is enough, or even whether supervised learning is enough. Is it enough for the model just to observe the world, or does it need to take part in the world to really understand it? For instance, would you understand the world if you read all of the internet and watched every video on YouTube? Would you really understand it, or do you need to actually be embodied, a little robot out there kicking a ball or running down the street? I take what I think is the less popular view, which is that the internet is enough: by observation, you can actually learn enough to be extremely compelling. If we're talking about AGI and doing things as well as humans do, I think that's enough.

Alex Kantrowitz: Okay. I want to take a
quick break, hear from our sponsor, and come back to talk about ROI, and then a little bit about your journey, Aidan, from being somebody who wrote that paper to where we are today. I think it'll be interesting for listeners. So we'll be back right after this.

And we're back here on Big Technology Podcast with Aidan Gomez. He's the CEO of Cohere and a co-author of the "Attention Is All You Need" paper that kicked this entire generative AI moment off and invented the Transformer. Before we get deeply into ROI, Aidan, just a personal question for you: what does it feel like seeing your invention being taken in all these wild directions, being this key moment and a true step forward for the tech field?

Aidan Gomez: I mean, it's beyond my
wildest dreams. I don't take full credit for it at all; I assign the overwhelming majority of the credit to my co-authors on the Transformer paper. So for me, it's hard to accept the reality of what the Transformer has accomplished out in the world as my own. But it's so incredible. Even if I step away from being one of the authors of the paper, the impact, and what the architecture has been able to do for the field, has been a huge shock, a colossal shock. Just the technology we have today: I thought we'd be here maybe in half a century, not seven years. So it's really surreal and amazing.

Alex Kantrowitz: Has Google effectively capitalized on it, given that this came out of Google?

Aidan Gomez: I think Google has done super
well. They supported Google Brain in creating this technology, and it's been integrated all over Google.

Alex Kantrowitz: Okay, all
right, let's talk quickly about ROI, or maybe let's go deep into ROI; we'll see how we end up. Again, we talked about all this money being spent on the upfront cost of training the models, and you mentioned that even if the technology stopped today, there would be so much work to do with it, because there's a lot of benefit out there that isn't being realized yet. Talk a little bit about the places you're seeing already get a return on the investment of implementing generative AI technology, because in the common conversation, people don't even think those places exist. But it seems like you're seeing it on the ground.

Aidan Gomez: Yeah,
I think today we're starting to see it integrated into production in the enterprise. It's much slower than in consumer; there's a much higher lift to actually get it integrated, and a higher bar of trust necessary to drive adoption. Like I was mentioning earlier, last year was very much the year of the proof of concept, but this year we've started to see it go to production. There are some good examples of that with our partner Oracle. They have a suite of applications which power enterprise HR, supply chain, and all of these sorts of back-office functions, and we're powering over 50 different applications within those software tools. So it's actually starting to get into the hands of employees and drive efficiencies.

Alex Kantrowitz: Talk about what that looks like for an employee on the ground. How does the software they were working in change when you put generative AI in it?

Aidan Gomez: You're
automating parts of the job. For little tasks within the application, you can now just push a button and the model will do them; you might need to provide some high-level direction. A good example might be writing job descriptions. A hiring manager wants to hire for a specific job, and what they want to do is just put in bullet points: I need someone who has this background, does this, etc. Then they press go, and it generates the full job description, with everything the company needs included, in a way that's actually presentable to the people applying.

Alex Kantrowitz: Yeah, let's hear something else.

Aidan Gomez: Another good example might be
um in supply chain uh when you're
looking for an alternative supplier to
one of your products um doing that
search and retrieval and being able to
iterate with the model and not just do
singl step search where you search over
suppliers but where you give feedback
you say actually no that one that you
just recommended doesn't work for this
reason and you're able to refine
iteratively with this assistant or agent
um and these models basically touch
every vertical and so there's no
particular vertical
specialization um it's totally
horizontal uh so we we're working with a
a legal Tech startup that helps with uh
reviewing contracts and building an
assistant for a lawyer to help them
review contracts more quickly flag uh
you know concerning terms that type of
thing um we're working with a a
healthcare startup that tries to use
news and social media to track uh
pandemics and are people getting sick in
a particular area reporting
specific um symptoms and so using models
to screen for that um it really impacts
every single vertical can I take the
Devil's Advocate uh position on this let
me see if I can Channel an AI critic and
see what you think about this basically
what they would say is
um job descriptions okay it will save
you a tiny bit of time if you have the
AI write the job descriptions if you're
looking for a supplier chances are if
you work in Vendor management you're
going to have a good familiarity with
those suppliers anyway um if you're a
lawyer like yeah it might be a little
bit of time but you can comb through a
contract and find out what's um you know
what might be concerning about it this
is your expertise you're like a you know
almost like a narrow neural net trained
for that one specific purpose and now
we're giving it over to Ai and they'd
look at just the billions being invested
in this technology and say well what am
I really getting for that that if this
is effectively doing some of these
things that humans are quite good at to
begin
with I I would counter and say that um
risk to supply chains is many trillions
of dollars I would say that lawyers are
extremely extremely expensive and you
don't want them combing through your
documents no matter how efficient you
think they are and same thing with
doctors we we really want them spending
time with patients not um combing
through hundreds of notes
uh and filling out filling out forms
afterwards um I would say those are
maybe this stuff feels banal maybe
productivity feels um boring compared to
some of the hype of AI but it is the
value this is what we're trying to build
for um and so I I would push back quite
firmly on
that uh react to this I think this is a
thread from Benedict
Evans the tech analyst there's an interesting
difference between people outside Tech
sneering at generative AI as chatbots
that get things wrong and make crappy
you know quote unquote stolen images and
people inside Tech who are mostly
working on using it to automate a huge
number of boring back office processes
inside Giant corporations for billions
of
dollars uh I think that's a a great
observation I think that there are some
uh very superficial critiques of of
generative AI um that have become very
popular I think
um yeah I I think the substance is in
actually doing the work and getting this
technology to be productive for Humanity
uh and a lot of people are working on
that right now it's going to take time
like I've said um but the opportunity is
immense it's the biggest in a generation
yeah I think that's kind of the the
misconception and that's the interesting
point about what this technology can
do so I was speaking with Flexport again
supply chain management and I
think about writing about how the fact
that like supply chain is actually like
Ground Zero for where this technology is
being applied and useful but they're
basic they're like getting faxed things
they're getting
PDFs um you know to try to log that and
comb through that you know the volume is
crazy and they're using generative AI to
read through the documents and give them
actionable insights on it and they're
like look like it's not going to be like
the most exciting use case but is saving
us a tremendous amount of time yeah I
was about to say I'm like it's so boring
but it is so valuable like people don't
understand the actual scale of impact of
some of these crucial banal things and
if we can scale them up make them more
accurate more reliable um yeah it really
is world changing isn't it kind of crazy
that like the picture of AI again is
this just like you know I guess maybe
it's because ChatGPT was the thing that
started the hype cycle but the picture
popular picture of AI is like this
masterful again like Godlike technology
that you know can do all these things
and you be this friend for you like the
character AI type startups and people
talk about AI girlfriends but then the
value is really being realized in like
the the back office I mean it's pretty
crazy sort of Divergence there never I
don't think I've ever seen a technology
with that type of
Divergence yeah I I mean I think like
the internet is a good or like Computing
in general um like these General
platforms um for supporting new types of
products and and
tools
um yeah sometimes
they have biases in certain ways but
it's all about diffusion like diffusing
into an economy diffusing into our daily
lives um and it takes time for that to
happen and and we should remember that
we're like 18 months into that Journey
uh and so it's still it's really so
early
um but yeah I I think the internet has
had huge impact both on the commercial
side on the Enterprise side um as well
as with us as consumers and and people
and AI will be the same there will be
products that are pure play AI products
targeting consumers uh that bring tons
of joy and value to Consumers and then
there will be uh platforms like Cohere
that enable huge value uh within the
Enterprise world yeah again like talking
a little bit about how impactful this is
in Enterprise I think this is from
Reuters Accenture's generative AI business
which helps companies automate
operations to save costs and boost
productivity recorded about a 50% jump
in new bookings quarter over quarter
this has outpaced growth in Accenture's
other core businesses as a go-to
consultant and Outsourcing service
provider uh for companies migrating
their operations to the cloud analysts
expect slower growth for such services as
Enterprise spending slows so basically
this is like finding ways to automate is
like giving life to the Consulting
industry what do you think about
that I think you know Accenture is a really
good partner and there's just so much
work to be done implementing this
technology um that that makes perfect
sense like there's a huge technological
shift happening um and the technology
has unlocked a whole new set of applications
and so now we need to go out and do the
work to to realize it yeah and what type
of Partnerships are you having with
Accenture is it like going into
companies and again automating back
office or like what is what's going on
there yeah so they're a solution
integrator partner um and so yeah it's
about taking on projects inside of
Enterprises to help them accomplish
something like maybe it's implementing
for their Finance team there's some
function that they're stuck on and that
takes a huge amount of their time but
it's totally non strategic they
shouldn't be spending time on it and so
can we automate that or a big part of it
using these models it's about these
strategic projects to try and unblock
and automate uh parts of usually back
office functions it's amazing how
like this I wrote about this a little
bit in my book but we're like living in
the knowledge economy and even still
like we've gone from industrial economy
which is like literally like pulling
levers uh and pushing buttons to make
stuff to the knowledge economy which is all about
knowledge but even in the knowledge
economy so much of our time is like
legitimately on like straight up you
know repetitive kind of tasks that we
wish we could automate to make room for
us to do more knowledge stuff yeah yeah
no I I I think it is incredible I I hope
that that goes away to a large extent
but I don't think it will like I think
there will always be on the margin these
sorts of not good uses of our time that
we spend time on and we'll continue to
push that margin back and back and back
and try to automate as much of that as
we can um but it's a it's a
huge huge project uh what we're focused
on is kind of building from the
foundation start by automating the
biggest of those the ones that you're
wasting the most time on um and then
gradually get into more Niche targeted
specific uh automations or applications
sorry is anybody using uh your
technology to
replace full-time
employees I am not aware of that I I
don't think I I have any example of that
happening it's very assistive actually
so it's it's less about
replacement um it's more about
augmentation like at the moment what
everyone's building are tools to augment
their Workforce to make them more
productive uh I can't think of a single
example of displacing people okay yeah
that's also one of the things we'd like
to figure out here is are people losing
jobs because of this and by and large
the answers no although there are some
examples okay uh I know we're we're
running out of time one more thing I
want to ask you about is sort of like
the role of cloud providers versus like
the role of like people buying direct
and like how this is helping or what
type of pressure this is putting on
cloud this is again we talked about this
recently so CNBC just broke
down Anthropic's
revenue and third-party APIs like Amazon
and I think Microsoft Azure if they're
available there no maybe they're not
let's say Amazon 60 to 75% of their
revenue so how important are like these
Cloud providers like Amazon like Azure
in driving this
forward uh the cloud provider is a great
partner
um to cohere that's where the majority
of like compute workloads are happening
um but not all of the workloads so Cohere
has had a long-time focus on on-prem as
well because for a lot of regulated
Industries like like finance and
Healthcare a lot of that data doesn't
actually go on the cloud um but
certainly for many industries that are
Cloud first that's the place that their
AI workloads are going to happen and so
I think it makes sense for Revenue to be
coming from those sources um but for
Cohere we support both and so it's
perhaps a little bit more
balanced yep and so so your technology
is basically going to work your your
company will basically work to integrate
your technology into existing systems or
you have your own
software um so we build our own models
from scratch uh and we we build a
platform that lets people plug in their
data sources
the tools that their employees use um
into the models uh via a system called
RAG retrieval augmented
generation um and that's something that
we're specialized in the guy who created
RAG uh when he was at Meta is Patrick
Lewis and he leads our RAG efforts um
but it's basically the dominant
architecture or system that enterprises
are looking for right now they want to
customize these models with their
proprietary data and the best way to do
that is with RAG uh so that's something
that we we provide out of the box in
like a super simple plug-and-play way
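The RAG pattern just described can be sketched minimally: retrieve the most relevant proprietary documents, then prepend them to the prompt so the model answers from the enterprise's own data. This is not Cohere's actual API; the word-overlap scoring is a toy stand-in for the embeddings and vector index a real system would use, and the documents are invented:

```python
def score(query: str, doc: str) -> int:
    # Toy relevance measure: count of shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k docs most relevant to the query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def augment(query: str, docs: list[str]) -> str:
    # Build the final prompt: retrieved context first, then the question.
    context = "\n".join(retrieve(query, docs))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            "Answer using only the context.")

docs = [
    "Vacation policy: employees accrue 1.5 days per month.",
    "Expense policy: receipts required above 50 dollars.",
    "Office hours: badges work 6am to 10pm on weekdays.",
]
prompt = augment("what is the vacation policy", docs)  # then sent to the model
```

The customization the guest describes lives entirely in the retrieval step: swap in your own data sources and the generation step stays unchanged.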
okay
and I'm just looking at some more
examples that I wrote down is this are
you guys doing this that you can um
model a budget for a construction
project so put in parameters put in the
info you need and it spits out the
budget projections the timeline is that
type of the type of stuff that you're
working
on um I have not heard of that
application someone might be using Cohere
to do that um okay
but
uh yeah I haven't heard of that
application if you know the company
that's using us for that I'd be
interested to know okay um let's end
with this can you give us your
prediction for what the AI field looks
like in the next two years and five
years yeah in the next two years I think
we're going to start to see really
compelling assistants um it won't just
be little convenience functions or or
small features it'll
look a lot like um a partner that you do
work with someone that you interact with
every single day and you view as a as a
collaborator over the next 5 years I
think
um it's not a major shift but it's an
increase in competency the scope
of those assistants will expand they'll
be uh trusted with doing much more um
and they'll be integrated into many more
systems so they'll be
dramatically more capable um so I view
it as like a continuous change over time
towards much more compelling independent
agents uh that we can collaborate
with
fascinating uh well Aiden thank you so
much for coming on great to see you and
thank you so much for sharing everything
about the industry in general and where
you know companies are finding the value I
do think that this idea that listen like
it may be quote unquote boring but hey
if it saves billions of dollars then
don't tell me that that's a boring
application of Technology that's kind of
my main takeaway today and I think it's
pretty fascinating stuff that you're
working on yeah thanks for having me on
it was great seeing you Alex you too
all right everybody thanks so much for
listening we'll be back on Friday
breaking down the news and we'll see you
next time on big technology podcast