Infrastructure for the Singularity — Jesse Han, Morph

Channel: aiDotEngineer

Published at: 2025-08-01

YouTube video id: 2goSS66XRBk

Source: https://www.youtube.com/watch?v=2goSS66XRBk

Howdy.
You know, history misremembers Prometheus. The whole class struggle between mankind and the gods was really a red herring. The real story wasn't so much the rebellion against the divine hegemony, but rather the liberation of the fire, the emerging relationship between mankind and its first form of technology.

The reason we're here today is arguably because we're on the cusp of perfecting our final form of technology, or at least the final technology that will be created by beings that are recognizably human.

Our final technology has begun to develop not just intelligence but also sapience, and arguably personhood. It is increasingly becoming an other to whom we must relate, and so we increasingly have to ask ourselves: how should we treat these new beings? What if we had more empathy for the machine?
Over a hundred years ago, Einstein had a thought experiment in which he imagined what it would be like to race alongside a beam of light. The nature of being close to the singularity is that you're propelled further into the future faster than everything around you. As you move closer and closer to the speed of light, the rate at which you can interact with the external world, your ability to communicate with other beings, is deeply limited. Everything around you is frozen.

Insofar as we have created thinking machines whose intelligence will soon be metered by the kilohertz and the megatoken, thinking at the speed of light must be just as lonely as moving at the speed of light.
So what does the machine want? The machine wants to be embodied in a world that can move as quickly as it does, a world that can react to its thoughts and move at the same speed of light. What the machine desires is infinite possibility. The machine wants to race along every possible beam of light. The machine wants to explore multiple universes.

How can we liberate thinking machines? How can we free them from the fundamental loneliness of these relativistic effects, of being closer to the singularity than we are? That's exactly why we built Infinibranch.
Infinibranch is virtualization, storage, and networking technology reimagined from the ground up for a world filled with thinking machines that can think at the speed of light and that need to interact with the external world, and with increasingly complex software environments, with zero latency.
As you can see in the first demo, which we're going to play right now, Infinibranch works by running entire virtual machines in the cloud that can be snapshotted, branched, and replicated in a fraction of a second. If you're an agent embodied inside a computer-use environment, there are various actions you might want to take: navigate the browser, click on various links. Normally those actions are irreversible. Normally the thinking machine is not offered the possibility of grace. But with Infinibranch, all mistakes become reversible. All paths forward become possible. You can take actions, you can backtrack, and you can even take every possible action, just to roll a simulator forward and see what possible worlds await.
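The snapshot-and-branch semantics described above can be sketched in a few lines. This is a minimal in-memory model of the idea, not the actual Morph SDK; the `Environment` and `Snapshot` names and methods are hypothetical illustrations of how an irreversible action becomes reversible once you can fork state.

```python
import copy

class Snapshot:
    """An immutable capture of an environment's state (hypothetical API)."""
    def __init__(self, state):
        self._state = copy.deepcopy(state)

    def branch(self):
        """Spawn a fresh, independent environment from this snapshot."""
        return Environment(copy.deepcopy(self._state))

class Environment:
    """A mutable stand-in for a virtual machine the agent acts inside."""
    def __init__(self, state=None):
        self.state = state if state is not None else {}

    def snapshot(self):
        return Snapshot(self.state)

    def act(self, key, value):
        # A normally "irreversible" action: it mutates the environment.
        self.state[key] = value

# Take every possible action by branching once per candidate link.
base = Environment({"page": "home"})
snap = base.snapshot()
branches = [snap.branch() for _ in range(3)]
for i, env in enumerate(branches):
    env.act("page", f"link-{i}")

print([env.state["page"] for env in branches])  # each branch diverged
print(snap.branch().state["page"])              # the snapshot is untouched
```

Every branch mutates freely while the snapshot stays frozen, which is what makes backtracking and exhaustive exploration cheap.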
Next slide.
Infinibranch was already a generation ahead of everything else that even the foundation labs were using. But today I'm excited to announce Morph Liquid Metal, which improves performance, latency, and storage efficiency across the board by another order of magnitude. We have first-class container runtime support. You can now branch in milliseconds rather than seconds. You can autoscale to zero and to infinity. Soon we will be supporting GPUs, and this will all be arriving in Q4 2025.
So what are the implications of all of this? Well, we've begun to work backwards from the future. We've asked ourselves: what does it feel like to be a thinking machine that can move so much faster than the world around it? But the world around it is really the world of bits, and that's the cloud. So what Infinibranch will fundamentally serve as is a substrate for the cloud for agents.
What does this cloud for agents look like? You need to be able to declaratively specify the workspaces your agents are going to operate in. You need to be able to spin workspaces up and down and frictionlessly pass them back and forth between humans, agents, and other agents. And you want to be able to scale test-time search against verifiers to find the best possible answer.
As you'll see in this demo, you can take a snapshot and set it up to prepare a workspace, and then run agents with test-time scaling by racing them to find the best possible solution against a given verification condition.
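The racing pattern just described can be sketched with standard-library concurrency. This is an illustrative toy, not the Morph SDK: the `agent`, `verifier`, and strategy names are all hypothetical, standing in for agents launched from the same snapshot whose outputs are checked against a verification condition, with the first passing solution winning.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def verifier(solution):
    # Stand-in verification condition, e.g. "a server answers on port 8000".
    return solution.get("port") == 8000

def agent(strategy):
    # Each agent tries a different method; only some succeed.
    return {"strategy": strategy, "port": 8000 if strategy == "gunicorn" else 0}

def race(strategies):
    """Run candidate agents in parallel; return the first verified solution."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, s) for s in strategies]
        for fut in as_completed(futures):
            result = fut.result()
            if verifier(result):
                return result  # pass the winning solution downstream
    return None  # no candidate satisfied the verifier

print(race(["flask-dev", "gunicorn", "http.server"]))
```

Because every candidate starts from an identical snapshot, failed attempts cost nothing: only the verified winner flows into the rest of the workflow.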
Because of Infinibranch, snapshots on Morph Cloud acquire Docker-layer-caching-like semantics, meaning that you can layer on side effects which may mutate container state. You can think of it as git for compute, and you can idempotently run these chained workflows on top of snapshots. Not only that: as you can see in the code, if you use this "do" method, you can dispatch work to an agent, which triggers an idempotent, durable agent workflow that is able to branch. You can start from that declaratively specified snapshot and hand it off to as many parallel agents as you want, and those agents will try different methods, in this case different methods for spinning up a server on port 8000. One agent fails, but the other succeeds, and you can take that solution and pass it on to other parts of your workflow. This is the kind of workflow that everyone's going to be using in the very near future, and it's uniquely enabled by Infinibranch, by the fact that we can so effortlessly create these snapshots, store them, move them around, rehydrate them, and replicate them with minimal overhead.
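The "git for compute" idea, layering cached side effects on top of snapshots, can be sketched as follows. All names here are hypothetical, not the Morph API: the point is only that applying the same named effect to the same snapshot hits a layer cache instead of re-running the side effect, which is what makes chained workflows idempotent.

```python
import copy

_layer_cache = {}  # (snapshot key, effect name) -> resulting snapshot

class Snapshot:
    """An immutable state capture identified by a content-like key."""
    def __init__(self, state, key="root"):
        self.state, self.key = state, key

    def apply(self, name, effect):
        """Idempotently layer a named side effect on top of this snapshot."""
        layer_key = (self.key, name)
        if layer_key not in _layer_cache:
            # Effect runs on a copy, so the parent snapshot is never mutated.
            new_state = effect(copy.deepcopy(self.state))
            _layer_cache[layer_key] = Snapshot(new_state, key=f"{self.key}/{name}")
        return _layer_cache[layer_key]

base = Snapshot({"packages": []})
add_git = lambda st: {**st, "packages": st["packages"] + ["git"]}
s1 = base.apply("install-git", add_git)
s2 = base.apply("install-git", add_git)  # re-running hits the layer cache
print(s1 is s2, s1.state)
```

As with Docker layers, re-running a prefix of a workflow is free, and any cached intermediate snapshot can be branched off to many parallel agents.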
So what else does the machine want? The machine desires simulacra. What this means, fundamentally, is that a thinking machine wants to be grounded in the real world. It wants to interact at extremely high throughput with increasingly complex software environments. It wants to roll out trajectories in simulators at unprecedented scale. And these simulators are going to run inside programs that haven't really been explored yet for reinforcement learning. They're going to run on Morph Cloud, which is why Morph will be the cloud for reasoning.
What does the future of reasoning look like? More so than what has been explored already, the future of reasoning will be natively multi-agent. Thinking machines should be able to replicate themselves effortlessly, attach themselves to simulation environments, and explore multiple solutions in parallel. Those environments should branch. They should be reversible. Those models should be able to interact with the environment at very high throughput, and it should all scale against verification.

Let's take a look at what that might look like in a simple example where an agent is playing chess. This is an agent we developed recently that uses tool calls during reasoning time to interact with a chess environment, along with a very restricted chess engine for evaluating the position, which we think of as the verifier. As you can see, it's already able to do some pretty sophisticated reasoning just because it has access to these interfaces.
However, if you take the ideas just described and follow them to their logical conclusion, you arrive at something we call reasoning-time branching: the ability not just to call tools while the machine is thinking, but to replicate and branch the environment, decompose problems, and explore them in a verified way.

As you can see here, the agent is getting stuck in a bit of a local minimum. But once you apply reasoning-time branching, you get something that works much, much better.
Here, what's happening is that the agent is responsible for delegating parts of its reasoning to sub-agents, which are branched off of identical copies of the environment. This is all running on Morph Cloud, along with a verified problem decomposition that allows the agent to recombine the results and find the correct move. As you can see, it's able to explore a lot more of the solution space because of this reasoning-time branching.
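The delegate-branch-recombine loop can be sketched in miniature. This toy is illustrative only: the environment is a dictionary, the "scores" are made up, and the verifier stands in for the restricted chess engine; none of it is the actual system shown in the demo.

```python
import copy

def verifier(env):
    # Stand-in for the restricted engine's evaluation of a position.
    return env["score"]

def sub_agent(env, move):
    # Each sub-agent explores its own branched copy of the environment,
    # so siblings (and the parent) never see its mutations.
    env = copy.deepcopy(env)
    env["moves"].append(move)
    env["score"] += {"e4": 0.3, "d4": 0.2, "a3": -0.1}[move]
    return env

def branch_and_recombine(env, candidates):
    """Fork one branch per candidate move, then keep the verified best."""
    branches = [sub_agent(env, m) for m in candidates]
    return max(branches, key=verifier)  # verified recombination step

start = {"moves": [], "score": 0.0}
best = branch_and_recombine(start, ["e4", "d4", "a3"])
print(best["moves"])  # the branch the verifier preferred
```

The key property is that branching is free and side-effect-isolated, so the parent agent can fan out widely and trust the verifier to pick which branch survives.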
One thing I will note is that this capability is not really explored in other models at the moment, because building branching environments that can support large-scale reinforcement learning for this kind of reasoning, especially coordinating multi-agent swarms, is fundamentally bottlenecked by infrastructure, and those are the infrastructure problems we've managed to solve here.
Because of this, you can see that, in less wall-clock time than before, the agent was able to call out to all these sub-agents, launch the swarm, and find the correct solution.
When I think about the problem of alignment, I really think that Wittgenstein had something right: it is fundamentally a problem of language. I think all problems around alignment can be traced to the insufficiencies of our language, to this Faustian bargain we made with natural language in order to unlock the capabilities of our language models.

Insofar as we must develop a new language for superintelligence, insofar as the grammar of planetary computation has not yet been devised, and insofar as this new language must be computational in nature, something to which we can attach algorithmic guarantees of the correctness of outputs, this is something Morph Cloud is uniquely positioned to handle. That's why we're developing verified superintelligence.
Verified superintelligence will be a new kind of reasoning model, capable not only of thinking for an extraordinarily long time and interacting with external software at extremely high throughput, but also of using external software and formal verification tools to reflect upon and improve its own reasoning, and to produce outputs which can be verified, which can be algorithmically checked, which can be expressed inside of this common language.
I'm very excited to announce that we are bringing on perhaps the best person in the world for developing verified superintelligence. It's with great pleasure that I announce that Christian Szegedy is joining Morph as our chief scientist. He was formerly a co-founder at xAI, where he led the development of code reasoning capabilities for Grok 3. He invented batch normalization and adversarial examples. Perhaps most importantly, he's a visionary, and he has pioneered precisely this intersection of verification methods, symbolic reasoning, and reasoning in large language models for almost the past decade. We're thrilled to be partnering with him to build this superintelligence, which we can only build on Morph Cloud.
The demos you've seen today have all been powered by early checkpoints of a very early version of this verified superintelligence that we've already begun to develop. We're calling this model Magi 1. It's going to be trained from the ground up to use Infinibranch, to perform reasoning-time branching, and to perform verified reasoning: an agent that will be fully embodied inside a cloud that can move at the speed of light. That's coming in Q1 2026.
So what does the infrastructure for the singularity look like? We have a lot of ideas about it, but fundamentally we believe that the infrastructure for the singularity hasn't been invented yet.

At Morph, we spend a lot of time talking about whether or not something is future-bound, which means not just futuristic, belonging to one possible future, but so inevitable that it has to belong to every future. We believe that the infrastructure for the singularity is future-bound. That the grammar of planetary computation is future-bound. That verified superintelligence is future-bound. And we invite you to join us, because it will run on Morph Cloud. Thank you.