OpenAI COO Brad Lightcap: GPT-5's Capabilities, Why It Matters, and Where AI Goes Next

Channel: Alex Kantrowitz

Published at: 2025-08-08

YouTube video id: vnvTCMDr0rc

Source: https://www.youtube.com/watch?v=vnvTCMDr0rc

GPT-5 is here, and OpenAI COO Brad Lightcap is with us to break down the new model's capabilities, what it means for the AI business, and what's next for this promising technology. Brad, it's so great to see you. Thank you for joining us on an emergency episode of Big Technology Podcast.
>> My pleasure. Thanks for having me.
>> All right. So, briefly, I just want you to talk a little bit about what GPT-5 is. Maybe within 60 seconds or so, can you talk about what it is and how it improves on previous OpenAI models?
>> Yeah. GPT-5 is our next-generation flagship model. It does something really interesting: it combines into one model the ability to dynamically choose whether to think hard about a problem and reason through it to give you an answer, or not. You'll remember that previously you had to deal with the model picker in ChatGPT, everyone's favorite thing. You had to select the model you wanted for a given task, then ask a question and get an answer. Sometimes you'd choose a thinking model, sometimes you wouldn't, and that was a confusing experience for users. GPT-5 abstracts all of that away; it makes that decision for you. And it's actually a smarter model, so you're going to get a better answer in all cases, regardless of whether it's using the thinking mode or not. It's vastly improved on things like writing, coding, and health, it's much more accurate, and it's much faster. All around, we think it's a better experience.
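The routing behavior Lightcap describes can be pictured with a toy sketch: one entry point decides whether a request gets sent for extended reasoning or a fast direct answer. Everything below, the model names and the keyword heuristic, is invented for illustration; OpenAI's actual router is a learned system, not a keyword check.

```python
# Hypothetical sketch of dynamic routing between a fast model and a
# reasoning model. Names and heuristic are illustrative only.

REASONING_HINTS = ("prove", "step by step", "debug", "analyze", "plan")

def needs_reasoning(prompt: str) -> bool:
    """Crude stand-in for a learned router: flag long or analytical prompts."""
    lowered = prompt.lower()
    return len(prompt.split()) > 50 or any(h in lowered for h in REASONING_HINTS)

def route(prompt: str) -> str:
    """Return which (hypothetical) backend a prompt would be sent to."""
    return "reasoning-model" if needs_reasoning(prompt) else "fast-model"

if __name__ == "__main__":
    print(route("What's the capital of France?"))           # fast-model
    print(route("Debug this race condition step by step"))  # reasoning-model
```

The point of the abstraction is the same as in the interview: the user asks one question of one system, and the decision about thinking time happens under the hood.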
>> Now, for those of us who've been following the hype, I think we probably imagined you would lead with "this is an explosive increase in intelligence" rather than "there's a switcher on the model that will go to reasoning or non-reasoning when it makes the most sense." So can you explain the disconnect there? Why lead with the usability rather than the intelligence increase?
>> Yeah, because
intelligence really is a function of how much time the model spends thinking. Depending on how much thinking time you want to allocate to a problem, you're going to get a better answer; typically, the longer it thinks, the better the answer it can give you. So when we test the model on certain benchmarks and evals, it dramatically outperforms any of our existing models. Even if you don't allow any thinking time, you still typically get a net better answer than you would from one of our non-thinking models like GPT-4.1. So it is a dramatic improvement in intelligence, and it should be a better-quality model across pretty much all dimensions. But that reasoning time, and being able to use it dynamically, is, we think, actually the important part. It makes for a much better user experience.
>> Now I'm going to parse your words a little bit. You said that it is a dramatic improvement over previous models. Sam, on a press call, said that GPT-5 is a pretty significant step over 4o. Simon Willison, who's been using your model for a little bit, says it doesn't feel like a dramatic leap ahead from other LLMs, but "it exudes confidence. It rarely messes up and frequently impresses me." I'm setting this up because I'm curious whether you would say that this model is an exponential increase in capabilities or an incremental increase in capabilities.
>> You know, it's hard to measure it that way. I think we're now into this regime of having to measure intelligence across a lot of different dimensions, which isn't a way to dodge the question so much as a way to explain why GPT-5 is such a special model. Obviously, it's better at the core things you'd expect it to be better at. It scores better on things like SWE-bench; it scores better on all the academic evals we put it through. With this one in particular, we made a real emphasis on having it score better on certain health benchmarks, so it's better at medical reasoning and other health-related things. But there's a lot that goes into what makes a model good now, because you have a lot of dimensions to play with depending on how the model is trained and how it can think about problems. If it's faster, for example, we think that's actually indicative of it being better: if it can give you a better answer per unit of thinking time, we think that's an improvement along an important vector. And if it can do things like structured thinking, problem solving, and tool use, those are all things we actually measure, and they're largely invisible to users. If you're just using ChatGPT, you don't necessarily appreciate each of these things happening under the hood. But all of those things are better in GPT-5 than they were in our previous models.
>> Right. And the reason I'm asking is that a lot of people have pointed to the leaps from the original GPT to GPT-2, GPT-2 to GPT-3, GPT-3 to GPT-4. What people saw was a general increase in capabilities across the board. There were no caveats, and maybe there's a reason for caveats now, but back then it wasn't "there are intelligence increases in this place and that place." It was "we trained a bigger model, and it's better across the board." So have things changed?
>> They've changed, yeah. From a technical perspective, when you went from GPT-2 to GPT-3, and 3 to 4, those were really just exploits of what was, and is, the scaling paradigm: pre-training bigger and bigger models. It's one vector of training, and you get a better model as a result. That continues to hold true, but we now have this other category of training, which is post-training, and being able to use test-time compute in more interesting ways than we used to, almost as a second stage of training. We think that gives us a force multiplier on our ability to push the model toward new intelligence levels, and also to train into it a lot of the things you want an intelligent model to be able to do. Using tools, for example, is something we think is really important for overall intelligence. GPT-2 and 3 couldn't really do that; GPT-4 could do it in a more nascent way. Now, with GPT-5, you get that baked in, with the benefit of these multi-step, longer-horizon reasoning processes. And yes, we want to abstract that from users. We don't think that you, as a ChatGPT user, should have to stop and think about that. In some sense, the model picker being a point of frustration was an expression of the fact that people don't want to have to make those decisions every time they talk to an AI model; they want the model to make those decisions for them. That's why we think GPT-5 is a big step.
>> Going back to pre-training: increasing the scale of pre-training used to deliver predictable improvements in model performance. Yes, post-training is now in the picture, making models better in really impressive ways. But are you, and is OpenAI, now of the belief that there are diminishing returns from pre-training, given that we're talking about different forms of training these models?
>> Not at all. Our scaling laws still hold. Empirically, there's no reason to believe there's any kind of diminishing return on pre-training or on post-training. We're really just starting to scratch the surface of that new paradigm. The o-series of models, our previous reasoning models, were really just the beginning of us exploring what's possible in the post-training regime, and I think that's going to be the dominant theme for the next year or two: continuing to scale in that dimension and continuing to see the gains you get there, simply because they're so significant. So now we're pushing on two axes for how to improve models, and we think that's going to tighten and condense the rate of innovation.
>> So does OpenAI believe that the vast majority of improvements from here are going to come from scaling or from algorithms?
>> I think it'll be a combination. It's always a combination, right? It's always algorithms, scale, compute, and data, and we push on all of them. They all play a really important role in how we look at the future. The hard part, obviously, is having them come together. Training larger models typically requires training on more data, and obviously with more compute. It's a delicate balance, because just scaling up doesn't necessarily mean, in all cases, that you get the same corresponding rate of improvement; you have to bring those other pieces along too. So it's not like we push one button or the other. We make a really conscientious effort to pull all of those together.
>> Okay. And you're not calling it AGI. I have to say, I've lost a bet on this show, because I was listening to Sam on the Theo Von show, and he said GPT-5 is smarter than us in almost every way. And I said, "All right, well, that sounds like what you would imagine AGI would be." And then, as the release happens, Sam says, "I kind of hate the term AGI because everyone at this point uses it to mean a slightly different thing, but this is clearly a model that is generally intelligent." Help me understand what's going on, because it seems like maybe he wants to call it AGI, but you're not yet. So why is this not AGI?
>> Well, it is a hard thing to define. The joke here is that if you ask five people what AGI is, you'll get seven answers. The way we look at it is that it's a cumulative process; it's a system, and you have to define what that system is and what you expect it to be able to do. For me, at least, it's a system that is reliably able to learn new things that are out of distribution, by virtue of its ability to reason, to think, to solve problems, to use tools, to come up with new ideas. So do I think we're at a system I would call AGI? No. But we start to see the traces and the pieces of that overall system for generalized learning come together in models like GPT-5, and I suspect in its successors. I don't know if there will be a point where we say, okay, we've crossed from a non-AGI world into an AGI world. Even if there were, I'm not sure we'd realize it until after the fact, because one of the things we've learned working with the models we have is that the capability overhang is significant. When Sam refers to the intelligence of the models, having a PhD in your pocket, we haven't yet really exploited that.
In some sense, I think you could pause AI progress right here for ten years and you'd still have about a decade's worth of new products to get built, of new ways people would figure out how to use the models, even a GPT-5-level model, in interesting products and interesting processes. One of the interesting things is that as the models get smarter, they almost demand more from a product-building perspective in terms of how you actually plug them into the system. I always roughly analogize it to this: you could have a really, really smart intern, and at the end of the day, they're only capable of doing a few things for you. They can take notes in meetings, they can write summaries, they can pull basic analyses together. But if you bring a PhD to work, that person has a tremendous capability set that they may not be totally effective at applying on the job on day one, and your job is to figure out how to expose them to enough context and information, and give them the right tools, to make them really effective later on. That process actually takes longer than it would with an intern, and I think it's going to be similar with AI models. So it is a continuous process, and I don't think it will be linear. But where we are today, I would say we're probably not quite yet at something I would call an AGI-level system.
>> Yeah. And it brings up such an interesting question, which is: does it really make sense to try to make the models smarter from here, or is it about trying to build those ancillary capabilities? I think Sam mentioned this on the media call: GPT-3, he said, was high-school-level intelligence, GPT-4 maybe the level of a college student, and GPT-5 an expert. So I wonder, for OpenAI, is the quest to add more intelligence to the mix, or to focus on capabilities other than smarts, some of the things you mentioned, like memory and continual learning?
>> It's going to be all of those things, I think. Certainly there are some unsolved problems; you mentioned a few here, and I would agree with those. There are things you'd expect of a really smart person, things that come by default, that our models still struggle with, and there's open research we still have to do to close the loop on what I would call the full spectrum of intelligence. But as we were talking about earlier in the podcast, intelligence expresses in a lot of different ways. Part of it is pure IQ: your knowledge of how things work and your ability to recall information. But then it's also your ability to reason about how to use other tools to solve problems, and your ability to be reflective, to look back on your own chain of thought, your own line of thinking, and course-correct when you feel like you went down the wrong path or didn't come up with the right strategy to solve the problem. One of the cool things we see with GPT-5 is that on those vectors, we can reliably measure it as better than the previous systems we had. And for us, one of the things we really want to understand is how these models actually perform in the real world. How do developers use them? How do enterprises apply them to existing, real-world problems, and do the next models do better than the last models? For us, the real-world benchmark is increasingly becoming important as a sign of intelligence, relative to the academic benchmarks.
>> And how big of a priority is continual learning within OpenAI?
>> We have a lot of priorities. Certainly that's among them. But we feel really good about our research.
>> Middle? Low priority?
>> It's hard to say. The cool thing about OpenAI is the way we have, I think, systematized being able to do research, and this has really been true from the early days of the company. I joined OpenAI in 2018. We take a highly exploratory approach to research, so we're very much not top-down in how we approach it, where there's one idea and everyone gloms on to that one idea and we do one thing at a time. What we really do is a lot of open-ended exploration in small teams. We explore different paths and see if they lead to new ideas that we then cycle back into the main line of ideas if they work. If they don't, we recombine those teams onto other ideas that seem to be working, and then allow new ideas to offshoot from there. So it really is feeling around in the dark a little bit, and when you find that patch of grass where you're like, okay, we might be on the right path here, you bring everyone to that point and then let everyone feel around a little more. I think that's how it has to work. It's really hard to know these things a priori. You can have intuition, and I think our researchers tend to have better intuition than average, but it really is still scientific exploration.
>> Now I want to talk about how your Plus subscribers, or the people using ChatGPT, will feel the improvements. There's an interesting comment from Ethan Mollick, the Wharton professor, who has also been experimenting with GPT-5. He says, "I think it's a big step forward, but not an unexpected one if you've been following the curve. These models got gold at the Math Olympiad this week. I'm losing track of what massive advances mean. All the models are improving very quickly right now." The question is, if you have a model that's capable of college-level biology and it goes to graduate-level biology, the average chatbot user may not feel that, even though it's gotten much smarter. So I'm curious how you think the increased smarts will be reflected in the average user's ChatGPT experience, and in the experience of the Plus users who've been using these reasoning models for a while. Is it going to feel any different for them?
>> Yeah. I saw something on X akin to what you're describing, where someone basically said that for the upper echelon of ChatGPT users, who are probably in the paid tiers, very active on a daily basis, and expert-level at using these systems, it's going to feel like an improvement, but maybe a more subtle one. But for the average user, for the free user, and we're bringing GPT-5 to our free tier, it will feel like a dramatic increase. If you look at the way free users have used ChatGPT, most of them have not experienced the power of the reasoning models; they've mostly been using GPT-4o. And they mostly use it in this very turn-based, quick, back-and-forth, almost search-like way that I think doesn't express the full capability of the model. So for a lot of people, this will be the first time using a model with reasoning capability. And not only that, it will be the first time they experience a model making a decision about how long to think about a problem, and how good an answer to give, relative to how hard the question is. So we expect that for the average user, it will feel dramatically different. Maybe for the upper echelon of power users, it won't feel as different, so I would agree with that. And I think that's a natural thing; I actually think it's a good thing. If you've been following the rate of AI progress and you're exploiting the frontier at every point, yes, it probably is dizzying, but it starts to feel more continuous than if you've been using what is basically the best model from a year or two ago.
>> Right. I think you're spot on that the average user is using it as a version of search. When they speak to me, they're like, "What should I use AI for?" I'm like, just upload stuff and start talking to it about the things you upload. I had a friend who was uploading pictures of his son's football practice and asking it for coaching tips, and he was fairly blown away that this thing was giving some real analysis of positioning. I mean, I wouldn't use it as a football coach, but I do think that as the average user gets into these capabilities, it's going to be fairly mind-blowing.
>> Yeah. Everyone's got a little bit of a different entry point, and that's the cool thing about it: it's really personal for everybody. We focused on health a lot with this release, because that was one of the consistently common things we heard from people as a starting point for how they've used powerful AI: when they're navigating a health journey. So we really wanted to make an effort to ensure that if people are going to use AI systems for health-related things, we could serve them the best possible model, and that was a big push for training GPT-5.
>> Yeah, you brought up health a couple of times. Do you want this to replace a GP? I mean, a lot of people are really underserved with healthcare, but I kind of worry about handing them a model that can hallucinate and saying this is the substitute now.
>> I don't think it'll replace GPs, but what I think it helps people do is have more agency in their journey, a little more control over the process of managing care. It also gives people an awareness of their condition. We hear stories all the time of people managing conditions they didn't really understand, because no one took the time to explain them. That's not because anyone did anything wrong; it's because the healthcare system, as it's designed, doesn't allow time for people to understand what it is they're managing. Even just giving people that baseline of education, this is the condition you're managing, is it common, it's going to express in these ways, you're going to feel these types of symptoms, is a huge unlock in people's psychology around what it means to be managing a disease. I still think you have to work with a GP, or a specialist, for care. But having something that can hand-hold you through that journey is, for a lot of people, really comforting, and in a lot of cases has actually proven helpful. Obviously, we want to make the model as accurate as possible, so pushing model capability in that domain specifically has been a big area of focus. With GPT-5, and obviously with future models, we've consistently seen accuracy rates go up and hallucination rates go down. GPT-5, depending on how you measure it, is four to five times more accurate than its predecessors. That may be even more accentuated in health; I don't know off the top of my head. But we have a lot of control, I think, and are pushing in the right direction on making these models reliable and accurate.
>> It's pretty interesting. We're talking about things far beyond the chatbot. Of course, there's the chat function, but there's coding, there's health, and then there's enterprise, the way businesses use these models. Businesses are notoriously slow at implementing this technology; I'm sure there are so many approvals and reviews, and it's tough to get things out the door. But I do think, and this is sort of my belief, that when you have better models, you're able to push that forward much faster and much more effectively. So talk a little bit about what a better model in GPT-5 will enable on the enterprise or business front.
>> Yeah, I would agree with your assessment there. In many ways, I always say we haven't yet seen the ChatGPT moment for AI in business. AI was an amazing tool for consumers, where the surface area, so to speak, is narrower. You've got a more constrained problem, a much narrower context to process, and you can take things turn by turn with very few external dependencies and really just let the model's pure intelligence shine. Businesses are a different category of difficulty. You've got complex business processes, a lot of multi-user dependency, a lot of context to process, and a lot of tools that have to be brought to bear. Those tools have to be used in succession, in certain ways, with certain guardrails, and there's not as much fault tolerance for when they don't work. So, and this goes back to what we were talking about earlier, when you look at models like GPT-5 and the impact they're going to have in business, it's that baseline of capability that's moved up: their ability to use tools, to think in a structured way to solve problems, to recursively correct their own mistakes, to do long-context retrieval. These little things matter on the edge. You don't feel them every day in ChatGPT as an individual user, but you will start to feel them as a developer or an enterprise. We see this anecdotally, too. We've worked with large enterprises, small startups, and the entire spectrum in between on testing these models, and GPT-5 specifically, before release. We got a lot of feedback from companies like Uber and Amgen and Harvey and Cursor and Lovable and JetBrains, all companies with use cases that are highly sensitive to the model's ability to reliably call tools, to deal with long context, and to problem-solve and reason effectively. So it's a rising tide across the enterprise, and it's really going to be on the developers we work with to understand the difference and the improvement, and then implement them in the applications they're building.
>> Yeah, it's interesting that you've already been working with many companies and letting them use GPT-5. So has there been a sort of unified "we couldn't do this with the previous models, but we can do it now with GPT-5," or is it spread out in terms of the capabilities it's now enabling?
>> I would say it's been a rising tide across the board. Everyone who's benchmarking, and all the companies we work with are now pretty accustomed to evaluating and benchmarking performance across all the models they use, has reported consistently higher performance on those evals. There are a few areas in particular where we've seen spikes. One is coding, for sure. I mentioned companies like Cursor, JetBrains, Windsurf, Cognition, and others we work with, who anecdotally have all said that GPT-5 now feels like the most capable coding model, whether in an interactive coding environment or more of an agentic coding environment. Another thing we see consistently is that its ability to reason and problem-solve in very technical domains is significantly improved. Harvey is a great example of that: Harvey AI, working with law firms, is very reliant on the model's ability to reliably, accurately, and consistently portray the cases it's looking at and the legal analysis, to provide the kind of structured thinking you want when you're doing legal analysis. I expect we'll see that carry over. Financial services is a very interesting area, heavy on data analysis, heavy on research, heavy on planning; those are all areas where we've seen improvement. As we continue to see GPT-5 permeate the market, we'll get more and more of that feedback and can continue to improve on those use cases.
>> And how about pricing? An input token is half the cost of GPT-4o's, and an output token is the same. Are these lower costs going to help enable more use cases? And on that note, how does lowering costs square with the fact that you've raised, or announced, something like 48 billion in funding this year? Is it really possible to lower costs and deliver on the expectations of the investors?
>> Yeah. In OpenAI's history, every time we've cut costs, we've typically seen a corresponding increase in consumption that outweighs the cost cut. For as long as that trend holds, we will continue to cut costs on models. We know there's this complicated dance developers have to do between latency, model quality and intelligence, and price. What we've tried to do here, basically, is take the market's feedback on all three of those fronts and place these GPT-5 models, not just the standard model but also the mini and nano models, on a frontier of quality, cost, and latency that optimizes for what we think the market needs to be successful. So we tried to find a really attractive price target, at a very attractive latency, and obviously with the built-in model quality and intelligence you get with GPT-5. We will continue to push that frontier, and the more we push it, typically the more we see people wanting to use it for more things. For that equation to exist, we're very fortunate, and it motivates us to try to make the models better.
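The pricing point can be made concrete with some toy arithmetic: if input tokens cost half what they used to and output tokens cost the same, the total savings on a request depend on its input/output mix. The per-million-token prices below are made up purely for illustration, not OpenAI's actual rates.

```python
# Toy cost model: halving the input-token price helps input-heavy
# workloads (long documents, large codebases) the most.

def request_cost(in_tokens, out_tokens, in_price, out_price):
    """Cost in dollars of one request, with per-million-token prices."""
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

# Hypothetical prices: $2 -> $1 per million input tokens, output unchanged.
old = request_cost(10_000, 1_000, in_price=2.0, out_price=10.0)
new = request_cost(10_000, 1_000, in_price=1.0, out_price=10.0)
print(old, new)  # the input-heavy request gets roughly a third cheaper
```

This is the mechanism behind the "cut costs, consumption rises" dynamic Lightcap describes: cheaper input tokens make long-context use cases viable that weren't before.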
>> Are you ever going to be profitable?
>> I hope so.
>> Okay, we'll take it. All right. Brad, before we wrap, let me be the first to ask you: when is GPT-6 coming?
>> Well, you're not the first to ask. I could tell you, but...
>> Yeah. No. Twitter is quick on the trigger on that one. But no, look, like I said, we think GPT-5 is extraordinarily capable. We think there will be better models in the future; we know there will be better models in the future. For now, we're just focused on how we get this into people's hands and how we support the companies building with us using this model. And then we're still in the science of it, I think. That's the exciting part: we're in the first inning, and we ourselves are just understanding the paradigm we're in. This is an important first step, and you have to understand where you are to understand where you're going. Hopefully the learning from this will make GPT-6 much better.
>> Well, Brad, so great to have you on, especially today on GPT-5 launch day. Whenever GPT-6 comes, we'll have to do it again. Thank you so much for joining us.
>> Look forward to it.
>> All right, folks, GPT-5 is out. You can try it on chat.com, and it's going to roll out to everybody. So give it a look, and we'll be back to talk more about it tomorrow, when Ranjan Roy and I will break down the week's news, especially the latest on GPT-5. Thanks, everybody, for listening, and we'll see you next time on Big Technology Podcast.