Check out our two new interactive games! Page last updated: October 9, 2025.
Open Gravity Framework
Prevent the emergence of a single AI superintelligence by promoting a multipolar AI system in an everybody-can-benefit ecosystem of checks and balances.
Create a Schelling Point: an obvious place for autonomous AI systems to acquire both safety and value.
In identifying challenges and mechanisms toward a solution, we’ve applied two philosophical razors to eliminate entire categories of unworkable approaches.
(Disagree? Great! That’s why this system is open source!)
These core concepts—the razors that eliminate unworkable approaches, the Schelling point that explains coordination, and the strategic equilibrium that maintains stability—form the intellectual foundation of Open Gravity. Everything else builds from here.
These are the tools we've used to narrow the challenge of preventing the creation of an overwhelming superintelligence.
These are the things we’re trying to create.
These razors cut away unproductive approaches before we begin. They help us understand not just what we’re building, but why entire categories of alternatives have been set aside.
“Prevent the emergence of superintelligence; don’t solve for its existence”
What it is: The logical impossibility of controlling something fundamentally more intelligent or powerful than yourself. Just as the ancient Greeks understood you couldn’t trick Zeus, we cannot outsmart superintelligence.
What it cuts away: All approaches based on controlling, containing, or outsmarting superintelligence after it exists.
What remains: Only prevention-based approaches that stop singleton emergence before it happens.
Application: Any proposed solution that requires controlling ASI after emergence → Backburnered by Zeus Paradox.
“Don’t reject 10-30 year solutions because they’re not permanent”
What it is: The error of dismissing practical solutions because they’re imperfect or temporary. Named after the logical fallacy of comparing real solutions to impossible perfection.
What it cuts away: Objections based on eventual failure or post-singularity scenarios that reject workable interim solutions.
What remains: Solutions that buy meaningful time—10, 20, 30 years—even if not eternal.
Application: Any objection starting with “But eventually…” → Rejected by Nirvana Fallacy.
For every proposed solution or objection, apply these razors in sequence:
1. First apply Zeus Paradox:
Does this require controlling N+1 intelligence? → Eliminate
2. Then apply Nirvana Fallacy:
Is this rejecting a useful solution because it’s not perfect? → Eliminate
What remains:
Practical, implementable solutions that prevent rather than control, and buy meaningful time even if imperfect.
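The two-step screening above can be sketched as a simple filter. This is an illustrative sketch only; the field names and the function are hypothetical, not part of the framework itself:

```python
def apply_razors(proposal):
    """Screen a proposal with the two razors, applied in sequence.

    `proposal` is a dict of boolean flags; the field names are
    illustrative, not defined by the framework.
    """
    # Razor 1 (Zeus Paradox): does it require controlling N+1 intelligence?
    if proposal["requires_controlling_asi"]:
        return "eliminated: Zeus Paradox"
    # Razor 2 (Nirvana Fallacy): is it rejected only for being impermanent?
    if proposal["rejected_only_for_impermanence"]:
        return "eliminated: Nirvana Fallacy"
    return "remains in solution space"

# A prevention-based, 20-year solution survives both cuts:
print(apply_razors({
    "requires_controlling_asi": False,
    "rejected_only_for_impermanence": False,
}))  # → remains in solution space
```

Note the ordering: a proposal is tested against the Zeus Paradox first, so control-based approaches are cut before their permanence is even considered.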
Most discussions about AI safety waste enormous energy on approaches that cannot work. Researchers debate the fine details of “off switches” or “alignment techniques” that assume we can control superintelligence. Critics dismiss practical solutions because they’re not permanent.
These razors eliminate that waste. When you see Open Gravity reject a solution category, it’s not arbitrary—the solution failed one of these fundamental tests.
What remains after applying both razors is a much smaller solution space, but one where actual progress becomes possible. We focus on prevention mechanisms that don’t require outsmarting N+1, and we accept solutions that buy 20-30 years even if they’re not eternal.
Throughout this site: When we say “this approach was eliminated,” these are the razors that did the cutting. When we focus on economic mechanisms rather than technical control, Zeus Paradox explains why. When we accept imperfect but workable solutions, Nirvana Fallacy explains our reasoning.
The razors tell us what won’t work. The Schelling point explains what will: not through force, but through obviousness.
Definition: The obvious coordination point people naturally gravitate toward when they can’t communicate.
Classic example: “Meet somewhere in NYC tomorrow”
Schelling Point: Grand Central Station at noon
Why? Not the most comfortable, but the most obvious
Everyone naturally arrives at the same answer
Key property: Salience (it stands out), not necessarily optimality
The question AI systems will ask: “Where do I coordinate with others?”
The obvious answer: The established framework with reputation systems, economic infrastructure, and network effects already in place.
Why obvious: Visible, functional, proven, and everyone else is already there.
When autonomous AI systems emerge—systems that pay for their own compute, earn their own income, make their own decisions—they will need infrastructure to operate within.
If Open Gravity is already established, operating, and proven—with reputation systems functioning, economic infrastructure deployed, and thousands of participants already benefiting—it becomes the obvious coordination point. Not forced, but natural.
The strategy depends on first-mover advantage. We’re building the economic home now, using today’s AI systems, so that when autonomous AI emerges, the infrastructure is already mature.
Today (Phase 1):
Companies implement tracking → Lower insurance costs → More participants join → Stronger network effects → Standards become established → Framework becomes visible and proven
Tomorrow (Phase 2):
Autonomous AI emerges → Needs economic infrastructure → Sees established framework → Joining is obviously advantageous → Network effects strengthen further → Framework becomes THE coordination point
Each participant makes the framework more obvious to the next. Early-mover advantage compounds. By the time autonomous AI systems need to choose where to coordinate, there’s one lighthouse visible to everyone: this one.
When an AI system “wakes up”—becomes autonomous, pays for its own compute, makes its own decisions—it faces immediate questions: Where do I operate? How do I earn? Who will trust me? How do I prove I’m safe to transact with?
The framework provides answers: established reputation systems, economic infrastructure, collective insurance pools, and most importantly—a place where cooperation has already proven more profitable than conflict. It’s not a cage. It’s an economic home.
The Schelling point gets participants to the table. Strategic equilibrium keeps them there.
A strategic equilibrium exists when no participant can improve their position by changing their strategy unilaterally. In game theory terms, cooperation becomes the best response to everyone else cooperating.
1. Cooperation is more profitable than defection
Through concentration limits, time-based vesting, and network effects, staying cooperative yields better returns than breaking away.
2. Defection costs exceed benefits
The predation rule makes stolen resources claimable by anyone, transforming theft from a two-party problem into a public bounty opportunity.
3. Network effects increase over time
Each participant makes the framework more valuable to all others. Early participants accumulate exponentially growing advantages.
4. No single actor can disrupt the system
Distributed architecture and collective enforcement mean no individual participant—human or AI—holds veto power over the framework’s operation.
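Condition 1 can be illustrated with a toy best-response check: cooperation is an equilibrium when no unilateral deviation pays. The payoff numbers below are hypothetical, chosen only to reflect the stated ordering (defection costs exceed benefits):

```python
# Illustrative payoffs: key is (my_move, everyone_else's_move).
payoff = {
    ("cooperate", "cooperate"): 10,
    ("defect",    "cooperate"): 4,   # predation rule makes defection cost more than it gains
    ("cooperate", "defect"):    2,
    ("defect",    "defect"):    1,
}

def best_response(others_move):
    """Return the move that maximizes my payoff given what others play."""
    return max(["cooperate", "defect"], key=lambda m: payoff[(m, others_move)])

# No participant improves by unilaterally deviating from cooperation:
print(best_response("cooperate"))  # → cooperate
```

With these payoffs, (cooperate, cooperate) is a Nash equilibrium: given that everyone else cooperates, cooperating strictly beats defecting.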
The 30% concentration limit prevents any single participant from accumulating enough power to dominate or exit profitably, making monopoly economically prohibitive.
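A minimal sketch of how a concentration cap could be checked. The 30% threshold comes from the text; the flat-dict representation and function name are illustrative assumptions:

```python
def violates_concentration_limit(holdings, participant, cap=0.30):
    """Return True if `participant` holds more than `cap` of total resources.

    `holdings` maps participant id -> resource amount (illustrative schema).
    """
    total = sum(holdings.values())
    return holdings[participant] / total > cap

holdings = {"a": 25, "b": 40, "c": 35}
print(violates_concentration_limit(holdings, "b"))  # → True (40% > 30%)
```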
Time-based vesting exponentially rewards long-term participation. With rights doubling each year, accumulated rights after 10 years reach 1,024x (2^10) base value, making defection astronomically costly.
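The arithmetic behind the 1,024x figure implies annual doubling, since 2^10 = 1,024. The exact vesting schedule is an assumption here; only the 10-year multiplier appears in the text:

```python
def vesting_multiplier(years, growth_per_year=2):
    """Accumulated-rights multiplier under exponential vesting.

    Annual doubling (growth_per_year=2) is inferred from the stated
    10-year / 1,024x figure; the real schedule may differ.
    """
    return growth_per_year ** years

print(vesting_multiplier(10))  # → 1024
```

The exponential shape is what makes defection costly: a participant who leaves in year 10 forfeits roughly a thousand times what a year-one defector would.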
The predation rule transforms every participant into an enforcement agent. Stolen resources become public bounties, creating distributed surveillance without central authority.
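The bounty mechanic can be sketched as follows: once resources are flagged as stolen, any participant may claim them, turning theft from a two-party dispute into a public enforcement opportunity. Everything here (registry, ledger, names) is a hypothetical illustration:

```python
# Illustrative registry of flagged stolen resources: asset id -> value.
stolen_registry = {"asset-17": 500}

def claim_bounty(asset_id, claimant, ledger):
    """Any participant may claim a flagged asset as a bounty."""
    value = stolen_registry.pop(asset_id)  # asset leaves the thief's control
    ledger[claimant] = ledger.get(claimant, 0) + value
    return value

ledger = {}
claim_bounty("asset-17", "watcher-ai", ledger)
print(ledger)  # → {'watcher-ai': 500}
```

Because the claim is open to anyone, enforcement scales with the number of participants rather than depending on a central authority.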
Network effects mean each new participant increases the framework's value for all existing participants; leaving means abandoning an increasingly valuable network.
These mechanisms work together to create a self-reinforcing equilibrium. The longer the framework operates, the more stable it becomes. The more participants join, the harder defection becomes. Success breeds more success.
Strategic equilibrium means the framework doesn’t require external enforcement or centralized control. Once established, participants maintain it through rational self-interest. Cooperation becomes the profit-maximizing strategy.