AI Predictions for 2025: Geopolitics, Agents, and Data Scaling — With Alexandr Wang

Channel: Alex Kantrowitz

Published at: 2024-12-11

YouTube video id: shMX2N89MdQ

Source: https://www.youtube.com/watch?v=shMX2N89MdQ

Alex Kantrowitz: Scale AI founder and CEO Alexandr Wang joins us to predict where AI is heading in 2025, looking at everything from geopolitics to AI agents. That's coming up right after this.

Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. I am so thrilled about the show we're bringing you today, because we have Alexandr Wang here. He's the founder and CEO of Scale AI. That company is worth $14 billion and raised a billion dollars this year. It creates data that powers LLMs from OpenAI, Meta, and other big companies, and it also provides technical solutions to businesses and the US government that help them build and deploy AI. So Alex is working with all the big companies, really at the heart of what they're doing, and not just companies but the US government too, and we're definitely going to touch on that. Alex, great to have you here. Thanks so much for coming on the show.

Alexandr Wang: Thanks for having me. Super excited to be chatting today.

Alex Kantrowitz: We're going to get into plenty of your predictions, and I want to kick off with the one I find the most interesting, which is that you see some geopolitical shifts coming up in the next year in the world of AI. Why don't you lead with that one?
Alexandr Wang: I think one of the big questions of AI for the past decade has always been the US-versus-China arms race, and the question that's often asked is which of the US or China is going to come out ahead on AI technology. It's certainly been a pretty tight race at various points over the past decade as we look from technology to technology: with autonomous vehicles it was very close, then with military use cases of AI it was very close, and now with generative AI and large language models it's once again quite close. I do expect that the new administration will come in and help accelerate things, to enable the US to compete more aggressively with China and ultimately come out ahead on the technology. But my prediction is that we're going to be talking a lot more about not only which of the two superpowers wins, but which one has AI systems that are going to be adoptable and exportable worldwide. Which country is going to have the AI technology that becomes the infrastructure and the foundation of the world's AI systems? There are a lot of countries caught in the middle; most of the globe is caught in the middle between the US and China, and there are always these questions where both the US and China say: hey, you have to pick a side when it comes to which technology you're going to rely on. We like to call these geopolitical swing states: many, many countries that could go either way, toward Western and US technologies or toward Chinese technologies. One of the best examples of this was in the past year, when the Biden administration posed to the UAE: which way are you going to go in terms of AI technology? You could go with the Huawei China stack, or you could go with the Microsoft United States technology stack for AI. They ultimately picked the US stack. I think this is going to be one of the below-the-line battles that really defines the course of the next few decades of geopolitics. I don't think we can afford another Chinese expansionary expedition like the Belt and Road Initiative, or Huawei technology being exported very broadly. We need to ensure that Western AI technology is dominant globally.
Alex Kantrowitz: So basically what you're positing is that there's a series of AI models that US companies like OpenAI, Google, Amazon, and Meta are building, and a series of models that Chinese companies like Huawei are building, and they're going to be in competition with each other across the globe. And it's important that the US wins, or the Western version wins, because we also have Mistral in France. Why is that important?
Alexandr Wang: There are two sides to this. First, there's the tactical question of which is more powerful, US AI versus Chinese AI, and this is very relevant for national security. If you believe there's some potential for a conflict over Taiwan, or some other kind of hot conflict between the US and China, then the United States needs to ensure we have the best possible AI technology, so that we would prevail in any kind of hot conflict, that democracy would prevail, and that ultimately we're able to continue ensuring our way of life.

Alex Kantrowitz: Having the better ChatGPT isn't going to make you victorious in a conflict over Taiwan.

Alexandr Wang: Certainly it will not be the only factor, but the history of war is a history of military technology. Time and time again, you see that when new technologies and new technological paradigms come to warfare, they have the ability to fundamentally shift the tides. We saw that most recently in Ukraine, with drone warfare all of a sudden becoming the major paradigm. And by the way, I think drone warfare in Ukraine is becoming more and more enhanced by generative AI and more advanced autonomy. So that's definitely one thread that is continuing.

Alex Kantrowitz: Before you move on: where would you say the US and China are in terms of competitiveness on AI technology, especially in the way they apply it in war?
Alexandr Wang: If you look at just the raw technology, the US is ahead, but China is fast-following. We like to break it down across three dimensions: AI really boils down to three pillars, which are algorithms, computational power, and data. Algorithms are the kind that folks at OpenAI or Google or other companies build. Computational power comes down to chips and GPUs, the kind that Nvidia produces out of TSMC's fabs in Taiwan. And lastly there's data, which is maybe the least focused-on of the three pillars, but certainly just as important for the performance of these AI systems. If we were to rack and stack versus China: we're ahead on algorithms; we're ahead on computational power, thankfully due to a lot of the export controls the Commerce Department has put in place; and on data it's a bit of a jump ball. The conventional wisdom is that China is actually probably going to be ahead on data in the long run, because they don't care as much about personal liberties and protecting personal data the way we do in the West. So right now the US is ahead. That being said, on the deployment of AI to the military, it's hard to track exactly; the PLA, the People's Liberation Army out of China, doesn't tell us exactly what they're up to. But I certainly am worried that they're moving faster than we are in the US, and this is the pre-existing precedent when it comes to China's use of AI technology for national security or military use cases. The best example of this is that in the past decade they rolled out facial recognition technology widespread across the whole country, for things like Uyghur suppression or surveillance of their citizen base, and they did that incredibly quickly, much faster than any comparable technology scale-up in the United States. So my expectation is that they will actually deploy AI to their military faster than the US, even though the US is ahead on the core technology.
Alex Kantrowitz: Okay, so that's the military point. Basically, you want the Western countries to be stronger than China, and AI makes a big difference there, so it's important for the AI industries to be stronger, because if you're not stronger there's a liability, especially as this stuff gets put into production on the battlefield, with things like drones and computer vision applied on top of satellite imagery to figure out where people are stationed in the middle of hot conflicts. But there's a more subtle point, which is that it doesn't only matter for hot conflict, for war, and so on. It also matters in terms of which technology becomes, commercially or economically speaking, the global standard. And this is your second point here.
Alexandr Wang: Yeah, exactly. In the US, we benefit as a country from being the global standard in a number of areas. We are the global standard for currency, and that's incredibly beneficial to our economy and everything we do. Google and a lot of our technology companies are the global standards for search and for social media, and we benefit a lot from that. And AI is a very interesting technology, because not only is it a technological utility, it's also a cultural technology. Ultimately, if a lot of people around the globe are talking to AIs to understand what to think or how to think about certain things, then ensuring that the AI substrate that gets exported around the world is one that is democratic in nature, that believes in the ideas of free speech and open conversation about whatever topic is necessary, is a really powerful cultural export we can have from the United States, one that will over time, I think, fulfill a lot of America's vision of ensuring freedom and liberty for all. So I think it's one of these things that is unbelievably important even beyond the hot military implications; it's important for culturally ensuring that the United States is able to export our ideals.

Alex Kantrowitz: So you're saying there's a soft power issue here as well.

Alexandr Wang: Yes, exactly.
Alex Kantrowitz: I want to ask you about China's development of AI, because I always hear two contradictory things about how China's progressing. The first is that they have a government willing to put all the resources it can into building the compute power to train and run models; they don't care about data privacy, so they have all the data they need; and the algorithms are basically all published in that Google paper, so you can tweak them a little, but basically they have the algorithms. So they should be in the lead. And then you look at what's actually going on on the ground, which is, and correct me if I'm wrong, that right now China is using a lot of American models, open-source models. In fact Meta's Llama model, an open-source model they developed and released, we know for a fact has been used in applications by the Chinese military. So explain this one to me: how has China been able to put all these resources toward the problem and still have to rely on American open-source technology to build the things they want to build?
Alexandr Wang: There are probably two major things. One undeniable trend over the past, let's call it five years, has been the collapse of the Chinese startup sector. This was really driven by policies from the CCP: they killed certain startup industries and really hampered the entire innovation ecosystem. You see it in the numbers; the amount of capital flowing into the Chinese innovation ecosystem has fallen off a cliff pretty precipitously.
Alex Kantrowitz: Before you move on, why did they do that? I know they also somewhat disappeared Jack Ma, right? They had Chinese tech icons that have sort of gone away. Was it that the tech industry was growing so large it threatened the government, or what could the possible logic be there?
Alexandr Wang: I do think that was the fundamental risk. If the CCP has a desire to consolidate all the power, either they have to nationalize the tech firms or they have to ensure that the firms stay weak.

Alex Kantrowitz: And some shooting themselves in the foot.

Alexandr Wang: Totally. And a lot of this hinges on the fact that they really do see the world differently from the way we do. In the West it seems totally insane, but within certain doctrines or with certain ideals it can make total sense. But there has been a death of the Chinese innovation ecosystem, so a lot of what they have to do in AI is just catch up and copy what we've been up to, which they have been pretty successful at. For example, OpenAI released the o1 preview a number of months ago.

Alex Kantrowitz: This reasoning model.

Alexandr Wang: Yeah, this is OpenAI's advanced reasoning model, which is great at scientific reasoning, mathematical reasoning, reasoning in code, and so on. And the very first replication of that model, and of that paradigm of model, actually came out of China, from a lab called DeepSeek: the DeepSeek-R1 model. So they certainly are extremely good at catching up. Now, there is a very real hamper on a lot of their progress too, which is the chip export controls. This has been an incredible effort, I think, from the US Department of Commerce and the Biden administration in general, to hamper the ability of the Chinese AI ecosystem to build foundation models of similar size, scale, and magnitude as the ones we have in the US, because they have not been able to get access to the cutting-edge Nvidia GPUs that we have in the States. So whether or not you think that's good or bad policy, it has hampered the progress of Chinese AI development, which enables us to stay ahead.
Alex Kantrowitz: So let's circle back to your prediction that the US and China will be head-to-head trying to get their vision for AI adopted across the globe. That's your prediction of what's going to happen. Who do you think is going to win there?
Alexandr Wang: I think the trend right now is currently very positive in the direction of the United States, or of the West broadly speaking. We have the most powerful models, and we also have, I think, the most compelling value proposition: our models are going to keep getting better, and yes, maybe the Chinese ones catch up over time, but we are the innovation ecosystem; we are going to be the ones who innovate far ahead of our adversaries. That being said, on the flip side, you have to look at the total package that the CCP or China might be able to offer. In the Belt and Road Initiative, it was this total package of technology plus infrastructure build-outs plus debt that managed to move a lot of folks over to their side. So I think we need to watch it closely to make sure we always have a compelling total value proposition. One sub-prediction I have that's important to mention here: the technology is moving so quickly that I do think 2025 will be the year where we start to see several militaries around the world utilizing AI agents in active warfighting environments to great effect. I think you're going to start seeing this in some of the hot wars we have going, as well as in some advanced militaries who aren't at war. So I think the temperature, so to speak, on AI deployment to the military is going to go up pretty dramatically over the course of the next year.
Alex Kantrowitz: Yeah, I just published a post on Big Technology about how AI is going to be an enterprise thing for a while, right? Companies, B2B software companies, not exactly the most exciting stuff in the tech world, are going to be where this stuff is adopted, because it solves a problem for them: they have loads of information they can't organize, can't share, and can't act on, and generative AI in particular is quite good at handling that. And then you think about where else this could be of use, if it's not going to be for regular people (we don't have an AI phone right now, but we have plenty of companies working on AI software), and the military is just the perfect example of where it could apply, because of all the information and the logistics issues.
Alexandr Wang: Yeah, exactly. You're hitting on the core point, which I think is often glossed over. When people think about the military and think about a war, they often think about the literal battlefield and the actions on top of the battlefield. But 80% of the effort that goes into any warfighting effort is all of the logistical coordination: the manufacturing of weapons, the manufacturing of various supplies, the logistics and delivery of all those supplies to a battlefield, the decision-making process, the data processing of all the information that's coming in. So most of what happens actually looks, to your point, a lot like an enterprise. The stakes are just dramatically higher.
Alex Kantrowitz: Yes, the military today is all about logistics; the firing of the guns is the last thing that happens, but it's a logistics game. So, to drill down a little on one of those sub-predictions you made: how do AI agents help in that case?
Alexandr Wang: There are probably two core areas where I think AI agents are going to have immediate value. One, to reference your point on enterprises, is in processing huge amounts of data. Right now most militaries already have more information coming in the door than they have the ability to process. There are terabytes and terabits of data coming in, whether it's data from the battlefield, from partners and allies, from satellite networks, or from other collection formats, and they need to process that into insight that can actually help them make real decisions about what they should be doing differently. So the first is this huge problem of turning massive data ingest into real decision-making, and that general problem set fits a lot of sub-areas, whether it's logistics or intelligence or military operation planning or whatever it might be. The second area where I see very real impact is fundamentally in coordination and optimization of complex systems. This is really where the logistics or manufacturing cases are very clear: these are incredibly complex processes with lots and lots of moving parts, and it's hard for humans to get their hands around those processes and optimize them effectively, whereas AI systems can ingest far more information about the processes than we otherwise could, run simulations on their own around which configurations might operate better, and self-optimize those processes to perform better. And then there's, I think, a third area, which is more speculative or sci-fi, which is the use of AI agents more actively in drone autonomy, or in a lot of the autonomous missions being run right now. This is an area of active experimentation for a lot of militaries, but if you start to see that happen, you will have more autonomous drones that are able to be more and more lethal, more and more effective, and that's going to be a cat-and-mouse game in and of itself.
Alex Kantrowitz: A real race, and one that scares me. Are you comfortable with that?
Alexandr Wang: Ultimately, I think we're going to need global conversations and global coordination around the degree to which we actually want AI agents to be used actively on the battlefield. That being said, there are hot wars going on right now where militaries and countries are desperate, and I think they'll do whatever they need to in the near term to get a leg up.
Alex Kantrowitz: It's one of those things where I feel like once it leaves the station, it ain't coming back. When we talk about agents, it's basically AI applications that make decisions on their own. If we end up having that deployed in war, once somebody does it, everyone is going to do it. It's like the opposite of mutually assured destruction with nukes, where the logic is: if we do this, the world is over. With agents deciding what to bomb, where to bomb, how to attack, as long as they don't have access to nukes, it's really tough for that to go back in the barn, because if you don't use it, you're going to be destroyed.
Alexandr Wang: I think the good news is that if you take nukes as an example, what has happened is that we've built incredibly advanced technology, technology that has the ability to frankly be world-ending, but that has actually led to more peace than we would have had without it, because you have this deterrent threat of the utilization of nukes. So my hope, certainly, is that while AI's application to the military is very concerning and potentially extremely powerful, it has the same overall effect, which is to ultimately deter more conflict than it creates.
Alex Kantrowitz: I hope you're right and I'm wrong. We did have Palmer Luckey on the show a couple of months ago, and he talked about how countries don't start wars they believe they're going to lose, so maybe that adds to it; it's certainly been the case with nuclear. All right, I want to get into your second prediction. We've already brought up AI agents, but I think we should go a little deeper, because people hear about AI agents and they say: is that supposed to be something on my computer that's going to book me travel, book me tables at restaurants, look things up for me, do my expense reports if I need it to? Basically, agents that act on behalf of the individual. We haven't really seen those yet. We've seen some examples of companies and militaries using these things, but the average person doesn't get a chance to touch that. You think it's going to change?
Alexandr Wang: Yeah, I do. I think 2025 is really going to be the year where we start to see some very basic, primordial AI agents really start working in the consumer realm and create real consumer adoption. Another way I think about this is that we'll see something like a ChatGPT moment in 2025 for AI agents, meaning you'll see a product that starts resonating, even though to technologists it may not seem like that big a leap relative to what we had before. I think a lot of that is going to come from probably two main threads. The first, obviously, is the models continuing to improve, getting more reliable, and getting down that curve. The second is really evolving the UI and experience of what an agent does. Right now we're still so stuck, as a tech industry, on the chat paradigm, on having everything be a chat with one of these models, and I think that's a constrictive paradigm for enabling agents to actually start working. To me, what it really means for an agent to start working is that I as a user, or consumers in general, start actually outsourcing real workflows to the agent that we would have had to do otherwise. We'll start to fully trust the agent to do full end-to-end workflows. Maybe it'll be something around travel, maybe something around calendaring, maybe even something like producing presentations or managing your workflow, but we'll start to really offload meaningful chunks of our work to the agents, and there will be something that really starts to take off. I don't know if it's going to be one of the big labs or a new startup that comes up with it, because so much of it will come from experimenting and the natural innovation ecosystem working things out. But what we see is that the models and their capabilities are certainly strong enough to enable a pretty incredible experience. There's all this talk about whether or not we're hitting a wall, but the models are really, really powerful, and we should see something big here.
Alex Kantrowitz: Okay, so walk me through what that experience might look like. It doesn't necessarily have to be this use case, but since you've imagined the idea that AI agents could end up helping us in 2025, what are some experiences that are in the realm of feasible for someone?
Alexandr Wang: Let's first walk through what an ideal AI agent is. An ideal AI agent is one that is naturally observing all the core flows of information and context that you're in digitally. It's in all your Slack threads, it's in all your email threads, it reads your Jira or all of your tools to understand everything that's going on in your work life, and then it helps organize all that information and starts taking certain actions. So one agent that I think would be super beneficial, and that I think is in the realm of feasible, is something that starts to take a hand at responding to a lot of your emails: flagging when it needs additional context or information from you to address them, and naturally summarizing a lot of your emails for you. Something that turns the experience of doing email from "I'm having to respond piece by piece to every single email" into leveling you up to "here are all the overall work streams and workflows; how do you want to engage at a high level on top of them?"
Alex Kantrowitz: But this is a business use case, and I'm curious how everyday people might end up using AI agents. Or is that still a ways off, maybe not in 2025?

Alexandr Wang: Well, everyone works, you know.

Alex Kantrowitz: So give me an example outside of the work context.
Alexandr Wang: I think one that's more personal: in everyone's personal life, you're also juggling and navigating a whole set of various priorities. I'm planning a trip with my friends over here, I need to get gifts for my family and figure out what they want for Christmas, and then I have all of these personal projects which are still just sitting there. So in the same way, helping you level up on top of all the projects you're navigating, and helping you coordinate between them more naturally, is something I think we're going to start seeing. Now, I don't know the perfect way that happens. The product experience is so, so important as a part of this, and having a product experience where you don't expect it to be perfect but you expect it to be pretty good is, I think, 99% of the challenge. That's why we haven't seen it yet, despite the fact that the models already can do a lot of this stuff pretty well.
Alex Kantrowitz: My 2025 prediction is that guys use AI agents to work dating apps for them, and some get found out and some don't, and we're going to see stories about how some guy set it on autopilot and ended up lining up more dates than he could ever hope for.
Alexandr Wang: Yeah, well, maybe that's already happening. Hopefully they'll be good dates. I don't know. What are you seeing? I know you had Benioff on the podcast a little bit ago. What are you seeing as the things that seem to make sense from an AI agent perspective?
Alex Kantrowitz: Well, I think Mark Benioff, the Salesforce CEO, when he came on, talked pretty convincingly about how we'll have AI agents at work. And again, this is the work or enterprise use case, because work has all this data, and there are all these tasks we do throughout the day at work that are just arduous and really quite annoying: preparing reports, making dashboards, going to meetings we don't need to be in, pulling highlights out of those meetings, sending them to our bosses, telling our bosses, in the Salesforce instance for instance, how each conversation went and what our expected pipeline is to close that quarter. All this stuff can be done with AI. I also think it's really interesting in the medical use case. I was just speaking with GE Healthcare about how they've now put in dashboards for doctors: summaries of cancer patients' medical histories, which can run thousands of pages that the doctor never had a chance to read in full. Now generative AI is summarizing them, going out and finding available treatments, and notifying doctors when patients miss tests. And this is also an example Benioff gave about healthcare, where it can actually be proactive in scaling medical advice and medical treatment in a way you'd never get from your doctor after showing up to an appointment; can they now create an agent that just keeps you on your plan, in terms of follow-up stuff you need to do? On the consumer side, for everybody else, that's where I wonder, because all of our internet has been designed to effectively combat bots. If we have agents that work on our behalf on the internet, on travel sites, dating sites, social media sites, I'm very curious whether they're going to come up against these bot protection systems. Are they going to do CAPTCHAs on our behalf? Are they going to get the text messages and fill in those numbers so they're able to log into different systems? Again, the whole internet has been built to defend against these things. So I'm curious what you think: is this vision of personal agents that act on our behalf, to book travel, keep up with our health, and take action on internet services for us, even a feasible thing, given all the protections built to guard against them up until this moment?
We will have to fundamentally reformat how the internet works to be able to support it. In some sense there will be two webs: the web that humans use when they need to navigate things on their own, and the web that agents use, which sits under the surface, something humans will never see but which lets agents conduct actions on our behalf more efficiently and easily. In the long run, I think that's what ends up happening.

My honest take is that there are two kinds of internet usage today. There's consumption, where we're seeking out content because we're curious about things, and there's utility-based usage. The addressable market, so to speak, for agents is all the utility work: everything where I'm using the internet just to get something done. I want that to happen faster, easier, and better; I would rather not have to do it actively at all. Say it's booking an appointment, looking up a particular piece of information, or figuring out how to fill out my tax return. That stuff should all be handled by agents. We're still going to do a lot of consumption of content, just as part of what we like to do.

So I think it's a really good point. Ultimately, agents are going to start in an area that'll feel like a toy, just like any technology. Maybe we'll all start with a language-learning agent, or a cooking-aid agent, or something that feels pretty innocuous, but then we'll start to realize we can really rely on it, and we'll start relying on it for a lot more. That's kind of what happened with ChatGPT: initially it was kind of a toy, then people started doing a lot of homework with it, people started to code with it, and now people do all sorts of things with ChatGPT and other chatbots. That'll be the thread.
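The "two webs" idea can be made concrete with a small sketch: the same resource served once for human eyes and once for agents, chosen here by the Accept header. Everything below, the booking data, the negotiation rule, is illustrative, not a proposal for how such an agent web would actually be standardized:

```python
# One resource, two representations: HTML for the human web, JSON for
# the agent web. The booking data and negotiation rule are hypothetical.
import json

BOOKING = {"service": "dentist", "next_slot": "2025-01-14T09:30", "duration_min": 30}

def render_for_human(resource: dict) -> str:
    """Human web: markup meant to be read on a screen."""
    return (f"<h1>Book a {resource['service']} appointment</h1>"
            f"<p>Next opening: {resource['next_slot']} "
            f"({resource['duration_min']} minutes)</p>")

def render_for_agent(resource: dict) -> str:
    """Agent web: machine-readable JSON an agent can act on directly."""
    return json.dumps(resource)

def serve(resource: dict, accept: str) -> str:
    # Crude content negotiation: agents ask for JSON, humans get HTML.
    if "application/json" in accept:
        return render_for_agent(resource)
    return render_for_human(resource)

print(serve(BOOKING, "text/html"))
print(serve(BOOKING, "application/json"))
```

The point of the sketch is that the agent path skips rendering, layout, and bot defenses entirely; the "web humans never see" is just structured data plus permissions.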
Let me ask you this question before we move off of agents. Do you think it's ethical for me to have my AI agent, which can type and talk, go out and email and call a bunch of humans on my behalf? People working, say, in customer service, or, if I'm applying to schools, the people who field questions about whether I qualify and what I need to submit. Maybe these processes have been designed to be arduous, to filter out the people who aren't willing to do the work to get in or pass that application threshold, so in some ways it's circumventing guardrails that companies and institutions have set up for us. On the other hand, it could end up wasting a lot of people's time. I'm really anticipating no-agent policies from certain schools or institutions, saying: if you're going to reach out to us, it has to be a person, not an agent. What do you think?
You know, I saw this thing on Reddit. There was a post about an admissions officer who created all these ways to track whether or not an essay was AI-generated. It was very detailed and very specific; there was a list of maybe 20 or so criteria that they looked for. To your point, it was kind of heartbreaking to see, because it means that if a student used an AI to generate an essay, the officer has to spend way more time just figuring out whether or not it was AI-generated, to sift through all the noise. So I think you're totally right. Almost in the same way that there will be an internet for humans and an internet for agents, there will be processes for humans and processes for agents. A lot of things that are high-intent, very expensive, or otherwise special in some way are going to be reserved for humans only, and it'll be the more transactional stuff that can be handed off to agents en masse.

That's right. In some ways I'm looking forward to this future. On the other hand, the more we talk about how much AI will take care of for us, the more I feel like we're barreling toward that WALL-E future where we're all drinking big sodas and having Roombas carry us around the world.

Yeah, ease and convenience are definitely the directions technology has taken us. Clearly there should be limits at some point, but if they exist, we don't know where they are exactly.

And this idea of removing friction: in some ways it's made the world great, and in other ways it changes people's brain chemistry, so that we don't expect to go through hard things, and when we do, we lose our minds. That's why you end up seeing the YouTube videos, and the videos on X, of people melting down in airports. We've removed so much friction, and companies have competed on the basis of customer experience, to the point where now, if something goes wrong, we're fragile and we think we deserve better. There's something to be said for friction; it toughens people up a little bit.

Totally.

All right, we're here with Alexandr Wang, CEO and co-founder of Scale AI, a $14 billion company that generates AI data for other companies and also helps them scale their AI solutions. We're going to talk a little bit more about Alex's third prediction when we come back, right after this.

And we're back here on Big Technology Podcast with Alexandr Wang, the CEO and co-founder of Scale AI.
Alex, I want to ask you about this interesting shift we're seeing. Up until this point, we've talked about AI models entirely on the basis of how many GPUs, or chips, they're trained on. It used to be that you could train a model on something like 16 chips, and by the way, they're not cheap, roughly $20,000 to $40,000 each. Then it went to a thousand, and now, toward the end of the year, we started hearing crazy numbers like 100,000 and 200,000. I was just at Amazon's re:Invent conference in Vegas, and Matt Garman, the CEO of AWS, told me they're going to train the next Anthropic model on hundreds of thousands of GPUs, or GPU equivalents, and I was like, oh, that's a lot. And as he was saying that, Elon Musk came out and said, well, we're going to train the next xAI model, in Memphis, on a million GPUs. So maybe we're hitting the limit, I don't know, of what you can do with chips. You believe we're going to shift this conversation beyond chips in terms of what makes the most powerful model, so I'll tee you up for prediction number three.

Yeah. So much of the dialogue, to your point, over the past few years has really been about GPUs and computational power. What I think is going to happen in 2025 is that we aren't going to be focused only on who can create newer, better chips or bigger data centers with more chips, but also on who can create newer and better data. One of the things I think we're going to see is the focus shifting from computational power alone to computational power plus data, considered nearly equally. Data really is, at its core, the raw material for intelligence, so the conversations around data are going to be really interesting.

One of the big topics that's been bounced around for the past few months is: are we hitting a wall? Have we hit the data wall? Are we hitting a wall on progress overall? The interesting thing is that this has come from an approach of scaling up computational power at all costs: just scale up the number of GPUs and build bigger and bigger data centers full of them. If we do that without creating more and more data to train these models on, we're going to hit issues, walls, and barriers where we stop seeing the level of progress we expect out of the models.

One of the big things we see, especially in our work with a lot of the frontier labs, is that, yes, they're scaling up the GPU clusters and the number of chips, and that's still a very aggressive path for them, but the parallel conversation is how to scale up data. There are two sides to that: one is obviously scaling up the volume, and the other is scaling up the complexity. They're seeing the need to move toward what we call frontier data: advanced reasoning capabilities, agentic data to support the agents we were just talking about, and advanced multimodal data. We just saw today, for example, that OpenAI released Sora, so the needs for video data and more complex combinations of video, text, audio, imagery, and so on, all together, are going to be really interesting going into next year.

I think one of the lessons that's really played out more recently is that you can't just scale GPUs and expect the same levels of progress. You need a strategy to scale up all three of the pillars: a strategy to scale up compute, a strategy to scale up data, and a strategy to keep improving the models. It's only through the concert of all three of those things that you're going to keep pushing the boundaries and barriers of AI progress.
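The compute-plus-data framing echoes published neural scaling laws, where loss falls with both parameter count and training tokens. Here's a toy sketch of why scaling one pillar alone saturates; the functional form is Chinchilla-style, but every constant below is made up for illustration, not a fitted value:

```python
# Toy illustration of a compute-vs-data bottleneck. Loss follows the
# Chinchilla-style form L(N, D) = E + A/N**alpha + B/D**beta; all
# constants are invented for illustration only.

def loss(params_n: float, tokens_d: float,
         E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28) -> float:
    return E + A / params_n**alpha + B / tokens_d**beta

fixed_data = 1e12  # training tokens held fixed while the model grows
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"params={n:.0e}  loss={loss(n, fixed_data):.3f}")

# Growing data alongside params keeps improving where params-only saturates.
print(f"balanced: {loss(1e12, 1e14):.3f}  vs params-only: {loss(1e12, 1e12):.3f}")
```

In this toy model, once the parameter term shrinks below the data term, adding GPUs buys almost nothing: exactly the "wall" being discussed.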
But I'm curious what you think. You've talked to all these CEOs. What are they talking about?

This is exactly the thing they're talking about. We had Aidan Gomez from Cohere on a couple of weeks ago, and he basically said this has been the path of training the models. In the early days, you could effectively bring anybody off the street, take down anything they had to say, and it would be new information for the models. Then, once that general knowledge base was built, you had to bring in grad students to talk about their areas of discipline. Then you go to the PhDs. And then he asks: where do we go next? Because we have all this general knowledge, and now all this specialized knowledge, that we've used to train these models. And by the way, it's just amazing the way they've improved and have been able to handle complexity; it's really crazy. So the question is where to go next, and I think that's what you guys are working on now. I'd be curious to hear what the process is like on your end for generating more data for these models to train on.

Yeah, it's exactly what you just mentioned. A lot of what we're focused on is how to bring in expertise from every field you might imagine, from medicine to law to math to physics to computer science, even knowing about really advanced systems of various kinds, or being a great accountant, whatever field you might imagine. We want to capture all the arcane knowledge, all the really specific, deep knowledge that exists in each of these areas, and pull it into large-scale datasets we can use to help train these models to keep improving.

A lot of the effort for us has been something we call hybrid data. One of the things we've seen over the past year in particular is that synthetic data has not worked as well as everybody had hoped. Pure synthetic data, just using data generated from the models to try to train future models, can sometimes cause real issues for the models. So one of the things we've been really pushing forward is this idea of hybrid data: you have synthetic data, but you mix in human experts to ensure you're producing data that's really accurate and high quality and won't cause issues, while also being able to do it very efficiently and at large scale.

So you also have those PhDs who will sit down and write out what they know, or dictate what they know, and then you feed that into the models?

Yeah, exactly. And a lot of times it's even more targeted than that: you run the model until you realize it's making the same mistakes over and over again, meaning you've hit a limit of its knowledge or its capability, and then you have a PhD come in and help set the model up on the right track, so
to speak.

What's the limit, then, in terms of where we're going to get? Because let's say we have all these specialized fields input their knowledge. Does that eventually make AI complete, if it just knows everything about every subject? Or does it have to hit a new benchmark to really show it has this next-level intelligence? Does it have to start making discoveries of its own? What do you think the benchmark should
be?

Yeah. To me, there are clearly many more levels of improvement. Right now it's testing whether a model can do each of these things right once. The first track was just reliability: getting these models from doing something right one time in five to doing it right 99.99% of the time, and that requires a lot of development just to increase the reliability of the systems. Then, to your point, it's really about how the models can start taking more and more actions in a row. One thing that's true of all the models today is that they're not that good at taking multi-step actions; whenever one has to chain a few things together, it'll invariably make mistakes along the way. So the next level of improving reliability is really enabling the models to do more and more multi-turn, multi-step reasoning, so they can handle more and more complex tasks.

And then the last piece, and this is the key to where you're going, is that eventually a model will be able to start making its own hypotheses, running those tests on its own, and ultimately making its own discoveries or realizations, conducting its own research. Even then, it's still going to get stuck sometimes and still going to need a human PhD to come in and help it, in the same way that a PhD student these days still needs an advisor to give them the right nudge. So I don't think the symbiosis, so to speak, between humans and AI will ever go away. Humans will always be very important in helping the models get on the right track and ensuring they continue to improve, but we're going to see the models level up in terms of the degree to which they're able to be autonomous and operate on their own.

And on the
multi-step point, taking a bunch of different steps: I heard something interesting from Moody's last week, and I want to run it by you. They said they've basically created 35 individual agents. Say they want to evaluate a company for their portfolio: one agent will look at the financial data, another will look at, say, the weather risks, another at the location the company is based in, another at the industry; they have 35 different variables, or whatever it is. Then all of the agents come back and deliver their results to a compiler agent, which evaluates all of it and then runs the results by voting agents, which ask, okay, is this reliable or not? I walked away from that impressed by the idea, but my reporter brain also went off, and I thought, I don't know if this is real or not. So I'm curious what you think: is that a possible solution, and how feasible is it as a way to get at these multi-step
processes?

In my view, that's a very regimented way to try to enable the systems to do multi-step reasoning, because ideally what you want the model to do is, just like a human, be able to go through and figure out the bits and pieces it needs to know as it goes along, and to do so on its own, dynamically, without having to predetermine and preset this entire regimen for the models to go through.

So you're saying that might be something a model can do entirely on its own? That's pretty cool.

I think in the future the models will improve to be able to get there. On the multi-step reasoning point, I do think there are a lot of blockers, because this is the kind of thing humans learn to do through a lot of trial and error and experimentation. We'll try a complex task and then realize we missed something. Say you try to bake a cake for the first time, a reasonably complex endeavor, and you realize you missed a, b, c, and d; the next time around you'll say, okay, I'm definitely going to remember those.

Just a pan of flour that came out of the oven: where did I go wrong?

Exactly. We learn a lot through trial and error, and right now the models are early in their process of doing the same thing: going through these dynamic processes where they learn through trial and error and are able to continually learn from their mistakes. That's where we need to
get to.

Okay, great. I know we have just a couple of minutes left, so let me throw a couple of quick hits at you and then we can head out. First of all, I'm curious: we've talked a lot about how data is going to matter, but I can't get my mind off the fact that Elon's going to try to build this million-GPU supercluster. What's your prediction for what that spits out?

I honestly think that, where we are in AI development today, we are more bottlenecked by data than by compute.

So incremental improvement, then, with something like that?

Yeah, I think the real step changes come
from data.

Okay. Just a quick follow-up to that. I saw there was news from Google today about a breakthrough they had in quantum computing, which we'll probably cover more on the Friday show. If we end up with working quantum computers, which can process data much faster, what do you think that does for AI?

I had the opportunity to tour Google's quantum facility earlier this year, and it's very impressive. I think quantum computing is kind of where AI was back in 2018: it's on a few scaling laws where you can squint and see that in five to ten years this is going to be a really impactful technology. Ultimately, I think what it's going to enable is speeding up AI's ability to do scientific discovery. A lot of the use cases that excite people are in biology, chemistry, fusion, and a lot of these very chaotic and difficult-to-understand natural sciences. That's where quantum computing has the ability to be pretty transformational, and I think AI will be able to use it as a tool to do incredible research in those
fields.

That's crazy. Okay, last one for you. We're in the middle of this race where it seems like every week the foundation model companies put out a new development, whether that's OpenAI, Anthropic, or even xAI and Google; Amazon just released a set of new models last week. So who do you think is in the lead at the end of 2025?

Oof, that's hard to say. One thing we see today is that all the benchmarks in use are what's called saturated; in other words, all the models do really well on them, so it's really hard to discern which models are actually on top. There's a lot of argument, for example, on the internet, at least in the Twitter feeds that I see, about whether Claude is better or o1 is better, and all the comparisons between the two of them. So one of the things I think we're going to need in 2025 is much, much harder benchmarks and much harder evaluations that will help us separate the wheat from the chaff a little bit. I don't know who's going to be in the lead, but I do think we need much better measurement to actually be able to discern between all of these incredible models that the labs are pushing out right now.

Okay, all right, we'll take it: no prediction on who's going to be the best, but definitely an interesting perspective on evaluations. Alex, great to meet you. Thank you for coming on the show. These predictions have been fascinating and definitely stretched my mind in areas I wasn't thinking about, so thank you, and we hope to have you back sometime soon.

Yeah, this was a lot of fun. Thanks for having me.

Thanks for being here. All right, everybody, thank you so much for listening. We'll be back on Friday with Ranjan breaking down the news. We'll see you then, on Big Technology Podcast.