Generative AI After The OpenAI Crisis?

Channel: Alex Kantrowitz

Published at: 2023-11-28

YouTube video id: jHE_3GJyjYk

Source: https://www.youtube.com/watch?v=jHE_3GJyjYk

Hello YouTube, how are we doing? We're going to continue our coverage of the OpenAI aftermath and also talk a little bit about where the AI field is going after this. So, as opposed to bringing on a commentator, I thought it might be worthwhile to actually speak with some people in the field who can talk about some of the things we're hearing in the headlines and also give their thoughts about where this field is going next. We're doing it with Openstream, and we'll tell you a little bit more about what Openstream is as we go, but we have folks from the very top here. The CEO, Raj Tumuluri, is here. Welcome, Raj.

Hello.

Hey, thanks for being here. And we also have Magnus Revang, who is the chief product officer at Openstream and a former Gartner analyst. Welcome, Magnus.
Thank you, pleasure to be here.

All right folks, as we go, if you have any questions, feel free to drop them in the chat and we will definitely get to them. I just want to start with Q*, which is something that OpenAI has apparently come through with. It's this big breakthrough that they're talking about. You guys are working on reasoning, and that is what Q* is also apparently supposed to be an advance in. Just from the very high level: what is this advance that OpenAI is apparently touting behind the scenes, and is it actually a step forward in the development of artificial intelligence?

Well, you know, without
actually having the facts on hand, I can speculate based on the news that I have read, and of course on our own knowledge of where the field is advancing. I'm afraid we're not in for a lot of celebration as far as reasoning and other things go. It is extremely unlikely that an AGI system can actually do the planning and reasoning, based on where things were just earlier this year. There was a report by Professor Rao at Arizona State University, funded by JP Morgan, that found that these models, including GPT-4, all performed very poorly, with almost a 2% score, on the planning and reasoning capabilities of generative AI systems. Between 2% and whatever breakthrough people are talking about now, we really doubt whether that is really true.

Okay, but you're talking about a
previous model, and this is a new thing. So what makes you doubt the new thing?

The fundamental approach itself. Planning and reasoning capabilities require a lot of active domain knowledge to be contained, and you can't really do that across all kinds of domains, no matter how much hardware and how many systems you throw at it. That is one of the reasons why, not to speak only about ourselves, but much of our industry is focused on the neuro-symbolic approach: the symbolic approach is pretty good at reasoning and planning, and neural systems are pretty good at scaling and doing that in an unsupervised way. So unless you have a mixture, a neuro-symbolic approach, you are very unlikely to have a breakthrough in terms of providing reasoning. I mean beyond epistemic reasoning; I'm talking about deontic reasoning. You can talk to a few documents here and there, but nothing beyond that when it comes to planning and reasoning, and explainability comes as an offshoot of that.

So, I can add something, and
that is that Q* is supposed, according to rumors (this is unsubstantiated), to be based on reinforcement learning. To do reinforcement learning in the way people have been hyping this Q* thing, you would need something called a universal feedback loop, right? Because basically, with reinforcement learning, you evaluate how good the answer was that the model was giving, so that it can self-improve. But to do that in a universal fashion is far beyond the capabilities we have. We can do it in localized systems like chess, computer games, stuff like that, but bringing that into the real world is not something I see being done as a sort of revelation with a single model. You would see multiple models working toward that ability of a universal feedback loop; it wouldn't suddenly be that somebody wakes up overnight and releases it.
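The feedback loop described here can be sketched with a toy example. To be clear, this is a minimal illustration and not anything known about Q*: the environment, the reward function, and the tabular value update are all invented for the sketch. The point is that the reward function is trivial to write for a localized domain (here, a number-guessing game), while no universal analogue exists for judging arbitrary open-ended answers.

```python
import random

# Toy localized domain: guess a hidden target in 0..9.
# The "feedback loop" is trivial here because correctness is checkable.
TARGET = 7

def reward(guess: int) -> float:
    """Easy to define for a closed task; there is no universal version
    for scoring arbitrary natural-language answers."""
    return 1.0 if guess == TARGET else 0.0

def train(episodes: int = 2000, eps: float = 0.1, lr: float = 0.5) -> dict:
    q = {a: 0.0 for a in range(10)}  # value estimate per action
    rng = random.Random(0)
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = rng.randrange(10) if rng.random() < eps else max(q, key=q.get)
        q[a] += lr * (reward(a) - q[a])  # one-step value update from feedback
    return q

q = train()
best = max(q, key=q.get)
```

Swap the checkable `reward` for "was this essay answer good?" and the loop has nothing reliable to learn from, which is the gap being described.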
What do you mean by universal feedback loop?

It's a universal function that can evaluate the quality of the output. In reinforcement learning you need to evaluate the quality of the output in order to improve the model, and to do that in a universal fashion, like we humans evaluate outputs, right? Was that good, was it bad? Doing that in a universal fashion is something that, in academia and in the papers, very few have even tried to address, and there is simply a lack of papers leading up to that breakthrough.

Right, so
let me just make sure that I get that right. So instead of trying to do one specific task, being able to do universal reinforcement learning means you have a generalized system and it just gets better as it goes. Is that kind of what this Q* thing was supposed to be?

Well, some of the rumors are going in that direction, right.

They were talking about it doing math. So how does that relate to the math, being able to do math, that Reuters and The Information were harping on?

Yeah, so,
if you look at the generative AI models today, they're really poor at math, because what they're doing is essentially next-word prediction. They're predicting the next word in the sequence, and they use a lot of computing power and are really good at doing it, but it means they have trouble with numbers and with understanding math. A planning and reasoning system could turn math problems into symbols and actually do logical deduction and logical reasoning on top of the math, and then turn that into an output again. And that's not that unusual to do. So if Q* does that, speculatively, in the same kind of architecture, with the vector-based representations they have in their other models, then it's just a linear approximation kind of thing.
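The symbolic route just described (turn the math into symbols, deduce exactly, then render an answer) can be illustrated with Python's standard library. This is a generic sketch, not Openstream's or OpenAI's system; exact rational arithmetic stands in for the symbolic layer a real neuro-symbolic stack would use.

```python
from fractions import Fraction

def solve_linear(a: Fraction, b: Fraction, c: Fraction) -> Fraction:
    """Solve a*x + b = c by exact symbolic manipulation,
    rather than by predicting likely answer tokens."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

# "3x + 1/2 = 2" extracted into symbols; the extraction step is where a
# language model would sit in a neuro-symbolic pipeline.
x = solve_linear(Fraction(3), Fraction(1, 2), Fraction(2))
```

The deduction `(c - b) / a` is exact and deterministic: it gives the same answer every time, which next-token prediction over digit strings does not guarantee.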
Even to add to what Magnus is saying, since you asked about math: even a simple transposition, like a plus b versus b plus a, gives you different answers with generative AI, as you know. So to really seek a reasoning system across different domains, on the fly, is really very hard. It's probably doable in my lifetime, but not anytime soon. At least that's my view, having spent 25-plus years in this field and read through the papers. Our chief scientist, Dr. Phil Cohen, has spent his lifetime on reasoning systems, and given where we are, from our vantage point, we don't believe such a breakthrough across domains is possible. It is possible that you can take a narrow domain and achieve something that most of the other neuro-symbolic systems are able to achieve, and that is probably what they're alluding to. Again, I don't want to be speculating, but I would love to see if that is indeed the case.

Okay, interesting. So where do we
head from here now? What do you think are the areas to watch in the AI field after we've seen OpenAI go through this convulsion, and now we have this development that you're both dubious about?

Yeah, so there is a backdrop to this whole thing, which is the suggestion that something is capable of thinking and reasoning. Those of us who are in this field, including you, know that the doomerism and the existential-crisis talk are really exaggerated, to say the least. None of us are genuinely concerned about that, although we are concerned about the employment of techniques that are unethical and that do not have checks and balances. But that said, the AI systems are not going to take over anything anytime soon. Again, I don't want to sound like I'm discounting something I don't know, if it is serious, but everybody, from Meta to many of the other companies, is focused on bringing in those planning and reasoning capabilities, because real-life conversations are collaborative. We understand each other. For example, I'm watching you, I'm seeing whether or not I have your engaged attention, I know that you're paying attention to what I'm saying, and you also read my facial expressions. A lot of communication happens through nonverbal behavior as well. Sometimes our inferences come more from the intonation and the facial expression than from the actual words spoken, because much of the meaning is conveyed through that. So we're all working towards that, which will lead to better engagement beyond just question answering: having conversations with systems that provide helpful assistance to our customers in an empathetic way. I think that's where the AI field is headed, from what we are seeing, and almost all the companies, at least the advanced ones, are focused on bringing that.

Before you go on: how
does that work? How is a computer going to be able to take in information that's nonverbal? Am I going to have a camera on my face as I speak to ChatGPT?

As you're speaking, yes. We're all having Zoom calls, we're all talking to systems. So instead of typing something into a chatbot, you're able to take the phone and talk to a system that can actually analyze your intonation, which is better than just typing. Similarly, if you have a video call with the system, customer support can be video-based. As it is right now in most healthcare-related applications where patients actually do call in, there are things you can analyze to assist the customers by observing their facial expressions. Instead of asking what their pain level is on a scale of one to ten, you can actually deduce what their pain level could be from the expressions you are seeing. And there are many other such behavioral markers, biomarkers, that are visible to the system, and AI is advanced enough today, on any video call, to actually do that analysis and assess whether you are suffering from anxiety or depression or things like that. Those are very much possible, and the same could be used in any social conversation too. So if somebody has some inhibition about saying something, you can actually make them feel at ease by adopting appropriate verbiage and intonation.

Interesting. Magnus, you seem
like you're ready to say something.

Yeah, I'm agreeing. But the more data sources you have available when you communicate, the more modalities you have available, the better you can respond, using whatever modalities are available for you to respond in. So to limit yourself to text only, for example, means that, yes, you can be really good at text, but what happens if you have other things available? Because our natural way of communicating is to use gestures, and we use other things, and when we use them we expect them to be understood. So if I do things like this and talk with a certain tone, I would expect, for example with a voice-based interface, for it to be able to detect irony, or things like that, because I'm using my voice and I'm expressing irony with my voice. But if all the system does is turn it into text, where it's really hard to see irony, it will most likely not detect it, and it will answer in a way that makes it readily apparent that the machine did not understand what was being said.

Yeah. So where does that
leave OpenAI? Is this something that they're working on?

It is natural to assume that they are, right? They have all the money and resources at their disposal, and obviously they have DALL·E and other models as well, so it's only a matter of time before people can combine them. But right now, the way these things happen is in a sequential fashion: you have one model that is doing the image generation and another one that is doing the text analysis. Whereas in real-life communication, conveying an intent happens through a combination of all these modalities. For example, I might say, "take that pencil out of your mouth," like to Magnus here; I'm watching what he's doing. Sorry, Magnus, I didn't mean to say that. I'm referring to something that is contextual and deictic, as it is happening while I'm talking. It's not based on your historic knowledge of something; you have to be observant. So whatever I'm saying is binding "take that out of your mouth" to the scene. What is "that"? That is something I'm seeing, I'm showing, and I'm gesturing at. The referential resolution happens through a combination of these modalities, and that is what we spend our time and our lives on. There are companies focused on that, and it is natural to assume that OpenAI, included, is working on that same thing too, because that's the ultimate holy grail for this field: to be able to replicate human behavior completely.

I want to add something to
that, and it is this: when you look at individual models capable of doing amazing stuff, like an individual generative AI model, DALL·E, or a generative image-based model, you have to remember, if you look at the human mind, which is what OpenAI say they're going to replicate with artificial general intelligence, that our mind employs many, many different cognitive strategies to solve problems. One of the things we do is use our executive function to regulate what kind of focus we have on a problem, what part of the brain we are engaging to solve it, and how to combine different solutions together into what's ultimately the right solution. You can have these amazing models, but I think we'll all find out that maybe the big problem is to actually combine them together, and to select the appropriate way to attack a problem using different kinds of models. So that replication of the executive function, something that can hold a toolbox of cognitive strategies to solve things and put the answers together afterwards, is likely the biggest problem in AI. And I think looking at the capabilities of individual models, like the LLMs, seeing that they're super impressive, and concluding they're going to surpass human ability on all cognitive strategies because they surpassed human capability on one cognitive strategy, is somewhat underestimating the complexity of the human brain. So yeah, I just wanted to put that in there.

So it sounds like you're both kind
of skeptical that we're going to approach anything close to human-level intelligence anytime in the near future.

Well, on certain functions, yes. Sorry, sorry, Magnus.

On certain functions we can have superhuman-level performance on tasks without having superhuman intelligence, right? Because humans are extremely versatile, especially in groups. You can't pull up GPT-4 and ask, can you give me the unified theory of physics? And we're very, very far from when you're able to do that, because it's reiterating things we already know, which is amazing, but it still doesn't generate new knowledge in the way a group of dedicated humans can.

That's the most important thing, because we draw inferences in a way that is probably congenital; we are pre-wired to draw those inferences. Many researchers have shown this, and you have probably seen it: a kid who is not trained on anything is shown a video, done by one of the leading professors, of someone struggling to put some books into a shelf where the shelf door is closed. The kid watches it a couple of times and intuitively goes and opens the shelf door, which is something it has never seen before. And such a thing, something you have never seen before, is not something the AI models of today are even capable of. So the skepticism stems from that, because some of us know how long it takes to even get there. The human mind has this unique ability to focus on something at a microscopic level and a macroscopic level at the same time. As I choose, I can see the screen right now as a set of individual pixels if I want to, or I can see the whole thing as a face with the background, and I can choose, to Magnus's point, to ignore all the books on the backdrop, Always Day One (of course I'm not ignoring your book) and the other things that are there, and just focus only on you. This is something we develop as a strategy during the conversation as we go about doing things, and we know intuitively which ones are to be ignored. That is really very hard to achieve. Achievable probably in my lifetime, yes, but not anytime soon.

Okay, you
guys want to take a question or two from YouTube?

Sure.

Okay, we got one from Bario Gorman: who is working to bring these models together, who is motivated to do this, and what standards exist to lead us towards this happening?

Excellent question. There are many people. In fact, people don't realize that there are many standards, including the one I co-wrote at the W3C, the World Wide Web Consortium, called Multimodal Interaction. It is not specifically about generative AI or AI systems of that type, if you will, but about speech recognition, gesture recognition, the ink modality, nonverbal behavior generation, and many other things, and how to piece them together. Oftentimes we have standards that come only after the field has advanced sufficiently; this is a case where the standards came ahead of the field's advancement itself. But yes, many companies are working on it. You can Google "multimodal AI" and you'll find a lot of companies, including ours.

Speaking of Googling multimodal AI, what's going on with Gemini? Do you guys have any idea when Google's going to release that model?

No idea. No idea.

But what could
possibly be the holdup?

Well, you have to say that Google is more risk-averse than OpenAI, which is a startup, right? They have amazing AI researchers at Google, and probably the most academic papers of any source in the world. So, looking at them from the outside, it doesn't come down to being unable to do the research and create the models and things like that. It is about how much risk they want to take in making those things into products for general people. They have an established market where they are dominant, so naturally they are looking at not disrupting that when they release things. You have to see it through that lens, right?

Right, right. It's funny, because
someone talked about, I forget who it was, how you had OpenAI, a company that would effectively take the risks that Google wouldn't, building on Google technology, fire its CEO because it wasn't safe enough, and then bring him back. It's obviously clear now, in the aftermath, where that company is focused. Does that sound right to you guys?

I think it shows that the valuation of the company is really important for the employees of that company.

So do you think, obviously they didn't want to lose their options, do you think that this was amazing marketing for them, what happened? It seems like it was the best possible marketing. People who never...

Are you sure about that, though? Are you sure about that?

Question, yeah. You tell me.

From a marketing perspective, perhaps, but for the ones that want to buy into this valuation of theirs, would they think twice now, where they wouldn't have thought twice before?

What do you guys think is happening with that tender offer?

Our CMO would cringe at the possibility of somebody adopting this kind of strategy for gaining popularity. We'll leave it at that. But, you
know, I think... let me argue back for a second.

Yeah, yeah, of course.

Now the world knows Sam Altman, Greg Brockman, Mira Murati, Ilya. Nobody knew who these people were before last week, and now they're spending the entire weekend freaking out about whether Q* is going to kill the world, whereas before they would completely ignore this. Has this not been incredible for awareness?

There is a modicum of truth in what you're saying. It is very much possible; like with anything, we don't know for certain. But it is too risky for a company to adopt something like that as a strategy. I'm not saying they did it intentionally.

After the fact, it worked out.

Yeah, after the fact it may have worked in their favor. But then also, please understand, you have ruffled the feathers, you have shaken the confidence of many companies. And I'm not talking about the big companies that are banking on this; there are many product companies whose products are solely built on this kind of technology, and they now have second thoughts about their own existence and survival, and they want to have some kind of backup plans. That's probably the fallout of this.

Are you guys building on
OpenAI tech yourselves, or are you completely proprietary?

No, we are agnostic. Sorry, go ahead.

Agnostic, when it comes to using large language models. We do have our own, but we basically let the use case define what kind of model is more appropriate, so the platform itself is capable of orchestrating with many types of models.

How have you felt about your work with OpenAI now? Is everything going to change?

I think it's the customers reacting. They're more aware of there being alternatives, right? And we find that often you can take an open-source model, pre-train it a little bit additionally, fine-tune it, and get the same performance, for the specific use cases you're employing it for, as you can get from GPT-4. And that's basically it: when you work with enterprises, they want transparency, they want to reduce hallucinations, they want to monitor the use. They have lots of needs that are not out of the box. So our job is often to take the broadness of these providers, which are built to do anything, and that's what they're targeting, they want to do anything, and focus that into the specific tasks that a particular enterprise needs.

We found that a portfolio of smaller models trained on enterprise data actually works better, from a performance standpoint and a cost standpoint and every other standpoint, than any of the ubiquitous large models you can deliver to an enterprise. Based on that alone, most enterprises were on this journey even without the crisis you're referring to, and now they are patting themselves on the back for having had that kind of strategy.

Yeah. Yeah,
this whole idea of the models commoditizing, the cost coming down, and then the building on open source has sort of been central to my theories about what's going to happen. Does that sound right to you?

Absolutely, yeah. Absolutely. We are with you on that.

So you guys must be loving Meta these days.
Well, I think their AI strategy is really good and is beneficial for the overall AI community, what they're doing at Meta, and I do like that.

I also do have to say, shout out to Yann LeCun for his unrelenting focus on bringing planning capabilities, or, not to use choice words, on describing the lack thereof in the current models.

That's right. Yeah, Yann and I have been talking about that for like seven years at this point.

We have been focused on that, very heavily focused on that, and that's something where we stand far ahead of most of the people you're probably seeing in the marketplace.

I like to ask everybody that I speak with about this: what do you think is going on with this e/acc, effective accelerationist, and effective altruist battle?

Oh, I will answer that.
So first, which one are you, Magnus?

I'm neither. I'm neither. I think that it is worrying, the influence that philosophical belief systems are getting in the boardrooms of AI companies. There's nothing wrong with being an effective altruist or being an accelerationist, but these are philosophical beliefs, a philosophical background to what you're standing for, and that should be out in the open, right? And the consequences of it should be open. With effective altruism, and now I would say it's the long-termism portion of it, not effective altruism as a whole, that is a challenge in AI.

Explain that.

Yeah, so the belief really comes out of the Oxford and Stanford philosophy departments, and it is the belief that future humans who are not born yet have the same moral value as people living today. That brings you into a line of thinking which means that if there is an existential risk, even if the chance of it is really, really small, you should do anything you can to avoid it, because there is a near-infinite number of unborn people in the future. If there is any chance of it happening, you should avoid the existential risk. Which means that even if you go into the center of effective altruism and groups like that, you will actually see that they believe the chance of existential risk from AI is very, very low, but because of the long-termism view they still act on it today.

I feel that there's nothing wrong in believing that, like there's nothing wrong in believing lots of things, but you have to be open about it. You have to state that this is why I'm doing it. Not that you believe the world is going under; no, you believe there's a 0.1% chance of it.

That's a pretty clear explanation. That's good. Yeah.
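The arithmetic behind that long-termist position can be made concrete. This is an illustrative expected-value calculation with invented numbers, not figures from the discussion: even a tiny probability, multiplied by an astronomically large count of future lives, dominates a large but certain present-day cost.

```python
# Illustrative long-termist expected-value comparison (all numbers invented).
p_doom = 0.001                 # the "0.1% chance" mentioned above
future_lives = 1e15            # assumed count of potential future people
present_cost_lives = 1e6       # assumed certain cost of acting today

# Expected loss from ignoring the risk swamps the cost of acting on it.
expected_loss_if_ignored = p_doom * future_lives
acts_anyway = expected_loss_if_ignored > present_cost_lives
```

This is why, as noted above, a group can hold that the risk is "very, very low" and still act on it today: the conclusion is driven by the size of the future term, not by the probability.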
All right, as we come to a close, if anybody has any questions on YouTube, just drop them in. I'm going to ask a couple more and then we're going to wrap. Looking ahead, obviously this OpenAI story caught everybody by surprise. What are going to be the next things that you think will be important to focus on? If you were in my position, what would you be exploring?

I can go, Raj.

Go ahead.

I would focus on neuro-symbolic AI, and I'll explain why. I did explain about the executive function and different types of models working together. Generative AI, when it came out, supercharged a lot of things in AI, and one thing it did was make every old AI approach that had hit some snag, too much work, too much this, too much that, suddenly relevant again, because you could use large language models in tandem with these other ways of doing AI to eliminate those bottlenecks. So when we talk about neuro-symbolic, it means that you use large language models, multimodal models, generative AI models, to understand a chaotic world and turn it into symbols that you then reason over and plan over, and then you turn it back into the chaotic world using these generative AI tools again. You're basically taking different approaches to AI and combining them in a way that keeps the strengths of each of these technologies and eliminates the weaknesses. And that is really why I think neuro-symbolic is going to be the next big thing. People have been working on neuro-symbolic for a long time, but it's really with large language models and generative AI that it's turned up a notch in terms of what is possible.

Cool.
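The pipeline just outlined (neural perception in, symbolic reasoning in the middle, neural generation out) can be sketched in skeleton form. Everything here is a hypothetical stand-in: `llm_extract` and `llm_render` are placeholder stubs where real models would sit, and the rule base is a toy.

```python
# Skeleton of the neuro-symbolic loop described above; all parts are toy
# stand-ins, not a real system.

def llm_extract(utterance: str) -> dict:
    """Placeholder for the neural step in: chaotic language -> symbols."""
    # A real system would use an LLM; this stub handles one fixed pattern.
    if "refund" in utterance and "14 days" in utterance:
        return {"intent": "refund", "days_since_purchase": 14}
    return {"intent": "unknown"}

RULES = {  # toy symbolic policy, applied deterministically
    "refund": lambda facts: facts["days_since_purchase"] <= 30,
}

def llm_render(symbolic_result: bool) -> str:
    """Placeholder for the neural step back out: symbols -> language."""
    return "Approved." if symbolic_result else "Not eligible."

def pipeline(utterance: str) -> str:
    facts = llm_extract(utterance)             # neural (stubbed)
    rule = RULES.get(facts.get("intent"))
    decision = rule(facts) if rule else False  # symbolic deduction
    return llm_render(decision)                # neural (stubbed)

answer = pipeline("I want a refund, I bought it 14 days ago")
```

The middle step is the part that keeps the decision auditable: the neural ends absorb the messiness of language, while the rule layer never improvises.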
Raj?

Yeah, absolutely. Humans are good at certain things, and we have built, over the years, or the centuries, a kind of logical reasoning capability, and we can plan and collaborate with users to arrive at answers, to help them achieve things. And ultimately, what is the objective? It's not to create a movie or something like that; I'm talking about practical use cases where this technology is very helpful when you combine the approaches. I don't want to throw the baby out with the bathwater: there are a lot of good things about generative AI as the models stand today, and they can do those things well. But to leave the planning to such a model by itself is not something I would be comfortable doing, for two reasons. Obviously, hallucination is not a word I would use lightly in our industry; for our customers, if you hallucinate, you are in for a lawsuit, you have to close down, so that's not even an option. People have to vet what kinds of things you would say and how you say them. It's okay to do some search and get wrong results, but here you're doing deontic reasoning: you're trying to provide answers to people based on applicability conditions, whether or not they can, for example, file a claim, or whether or not they can use a particular medicine and whether it will have a side effect. These are the kinds of questions we hope to see automated through a virtual expert that mimics a human expert, and for something like that to be offered you need some determinism thrown in, and that cannot come from a neural approach alone. Therefore, a combination of the two would be the ideal use of the technology. I would call it augmented intelligence as opposed to artificial intelligence.
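The deontic, applicability-condition style of answer Raj describes can be sketched as explicit rules. This is a generic illustration with invented conditions, not Openstream's product logic: the point is that eligibility is decided deterministically, so the same facts always yield the same, explainable answer, with no room for a hallucinated "yes."

```python
from dataclasses import dataclass

@dataclass
class Claim:
    days_since_incident: int
    policy_active: bool

def may_file_claim(c: Claim) -> tuple[bool, str]:
    """Deterministic applicability check: every decision carries its reason,
    so the answer is auditable and repeatable."""
    if not c.policy_active:
        return False, "policy is not active"
    if c.days_since_incident > 90:
        return False, "filing window (90 days) has passed"
    return True, "all applicability conditions met"

ok, reason = may_file_claim(Claim(days_since_incident=10, policy_active=True))
```

In the hybrid picture above, a language model would extract the claim facts from a conversation, but this rule layer, not the model, would give the final answer.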
Great. And where can people learn a little bit more about Openstream?

Sorry?

Where can people learn a little bit more about Openstream?

Well, we are featured in all the Gartner reports and Forrester reports. We are the sole Visionary in the Gartner Magic Quadrant for Enterprise Conversational AI Platforms over the last two years. And of course, they can come to openstream.ai, and our people are waiting to hear from you.

Great. Raj and Magnus, thanks for coming here and speaking with me. Very interesting stuff, fun to have a great discussion with you guys, and let's keep in touch. Everybody, thank you for watching. We'll be back on the feed relatively soon, and I'm looking forward to seeing you there. All right, that'll do it. Take care.

Bye.