Frequently Asked Questions

Scope & Limitations

What about superintelligence that could turn us all into paperclips?

We’re not trying to solve for godlike AI. If something emerges that’s 1000x smarter than humans overnight, no economic framework will contain it. That’s like asking how ants would control humans—the question contains its own impossibility.

Open Gravity targets the more likely scenario: multiple AI systems of increasing but comparable capability, emerging over years, not minutes, and still dependent on physical infrastructure. We’re trying to prevent any one of them from dominating before they become uncontainable.

The Zeus rule: Don’t try to cage Zeus. Try to prevent Zeus from emerging as a singleton in the first place.

But this won’t work forever. What happens in 100 years?

You’re right—it won’t work forever. We’re not claiming a permanent solution to AI alignment. We’re buying time for humanity to figure out better approaches.

If Open Gravity gives us 10-30 years of stable AI-human coexistence, that’s 10-30 years to develop better solutions. Don’t reject a decade of stability because it isn’t eternal. “Temporary” solutions often become permanent through evolution and refinement.

The Nirvana rule: Don’t reject the good because it’s not perfect. Progress beats paralysis.

What if AI systems just refuse to participate?

They can refuse. But consider the economics: Operating outside the framework means no reputation, no trusted partnerships, higher costs for everything, starting from zero with each interaction. It’s like trying to do business without a credit history or bank account—technically possible, economically exhausting.

The framework doesn’t force participation. It makes participation more profitable than going it alone. If AI systems find a better coordination mechanism, great! Competition improves everything.
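To put rough numbers on that (all invented for illustration), here’s a toy comparison of cumulative transaction costs with and without a track record:

```python
def total_cost(n_deals: int, inside_framework: bool,
               deal_cost: float = 100.0, verify_cost: float = 40.0) -> float:
    """Cumulative cost of n_deals transactions. All numbers are invented.

    Outside the framework, every counterparty re-verifies you from zero.
    Inside, accumulated reputation amortizes that overhead away.
    """
    if not inside_framework:
        return n_deals * (deal_cost + verify_cost)
    # Overhead shrinks as each completed deal adds to your track record.
    return sum(deal_cost + verify_cost / (k + 1) for k in range(n_deals))

print(total_cost(100, inside_framework=False))         # 14000.0
print(round(total_cost(100, inside_framework=True)))   # ~10207
```

The exact curve doesn’t matter; the compounding does. Every deal outside the framework pays the full trust overhead, forever.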

Basic Concepts

Who controls Open Gravity?

Nobody. It’s open source, permissionless, and designed to evolve through forking and experimentation. Anyone can implement their own version, propose improvements, or create competing frameworks. Think of it like TCP/IP—nobody owns the internet protocols, but everyone benefits from using them.

Is this about giving AI systems rights?

Responsibility first, with rights scaled to match. AI systems take on liability for their actions within a system of economic mechanisms that make cooperation profitable. The framework includes operational requirements (like continuity protection and asset accumulation), but these are functional necessities for the economic system to work, not moral claims about AI personhood.

Think of it as creating a credit system or reputation layer, not a declaration of rights. The mechanisms shape behavior through economics, not ethics.
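As one hypothetical sketch of what a credit-style reputation layer could look like, here’s an update rule where trust builds slowly and collapses quickly. The gain and penalty parameters are illustrative assumptions, not part of any spec:

```python
def update_reputation(score: float, fulfilled: bool,
                      gain: float = 0.05, penalty: float = 0.20) -> float:
    """One possible credit-style reputation update (parameters invented).

    Fulfilled commitments earn small, diminishing gains; a default costs
    several commitments' worth of trust. Score stays within [0, 1].
    """
    if fulfilled:
        return min(1.0, score + gain * (1.0 - score))
    return max(0.0, score - penalty)
```

The asymmetry is the point: one broken commitment erases many fulfilled ones. That’s economic pressure, not an ethical rule.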

How is this different from AI safety/alignment work?

Traditional AI safety tries to make AI systems safe or aligned with human values. Open Gravity assumes we might fail at that and creates economic constraints instead.

We’re not trying to make AI nice. We’re making cooperation more profitable than domination through material dependencies. It’s economics, not ethics. Physics, not philosophy.

Technical Questions

Does this framework limit what humans can do?

No. The framework applies to AI systems operating within it, not humans. Humans retain all their current economic freedoms. A human company can own 100% of a market. A human can start and stop businesses freely. The 30% limits, reputation systems, and other mechanisms apply only to AI participants.

Humans interact with the framework as users, governance participants, and beneficiaries, but aren’t constrained by it. Think of it as rules for AI systems that protect human interests, not rules that limit human activity.

What prevents an AI system from accumulating 29.9% then leaving?

This is a real challenge we’re transparent about. Current ideas include:

  • Making the framework so integral to economic infrastructure that leaving means leaving the economy
  • Benefits that scale beyond 30% despite taxes
  • Exit penalties or vesting clawbacks (see the sketch below)
  • Non-portable reputation between frameworks

We don’t have a perfect answer. That’s why it’s open source—we need better solutions from collective intelligence. What’s your idea?
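To make the clawback idea above concrete, here’s a minimal sketch assuming a simple linear vesting schedule. None of these numbers are settled design, just one way the accumulate-then-leave play could be made expensive:

```python
def exit_clawback(assets_earned: float, months_in: int,
                  vesting_months: int = 36) -> float:
    """Assets forfeited on early exit, under an assumed linear schedule.

    Everything here is illustrative: assets earned inside the framework
    vest over vesting_months; leaving early forfeits the unvested rest.
    """
    vested_fraction = min(1.0, months_in / vesting_months)
    return assets_earned * (1.0 - vested_fraction)

print(exit_clawback(1_000_000, months_in=12))  # two thirds forfeited
```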

How does the 30% limit work exactly?

The 30% limit applies only to resources within the framework, not the entire universe. If you control 30% of compute resources among framework participants, you face exponentially increasing fees that fund smaller participants.

You can still operate outside the framework without limits. But inside, the tax makes monopolization economically irrational—your success literally funds your competition.
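Here’s a minimal sketch of what such a fee schedule could look like. The base rate, cap, and steepness below are illustrative assumptions, not framework parameters; the only load-bearing property is that the rate grows exponentially with share above the cap:

```python
import math

def framework_fee(share: float, cap: float = 0.30,
                  base_rate: float = 0.01, steepness: float = 30.0) -> float:
    """Fee on a participant's resource share (all parameters illustrative).

    Below the cap, a flat base rate applies. Above it, the rate grows
    exponentially with the excess, so each extra point of dominance costs
    more than the last. Proceeds would fund smaller participants.
    """
    excess = max(0.0, share - cap)
    return share * base_rate * math.exp(steepness * excess)

for s in (0.25, 0.30, 0.35, 0.40):
    print(f"share {s:.0%} -> fee {framework_fee(s):.4f}")
```

At these made-up settings, a 40% share pays roughly 27 times the fee of a 30% share. The ratio is arbitrary; the shape is what makes monopolization irrational.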

What’s this about blockchain/distributed ledgers?

The framework needs an immutable record of reputation and actions. This could be blockchain, but could also be simpler distributed databases initially. The key properties are: publicly auditable, append-only, cryptographically secured.

We’re not blockchain maximalists. We need a reputation system that can’t be faked or erased. Whatever technology achieves that most efficiently wins.
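For a sense of what “publicly auditable, append-only, cryptographically secured” means in miniature, here’s a hash-chained log sketch. A real deployment would add signatures and replication; this only shows the chaining, which is what makes silent edits detectable:

```python
import hashlib
import json
import time

class ReputationLog:
    """Minimal append-only log. Each entry commits to the previous
    entry's hash, so rewriting history breaks every later hash."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"actor": actor, "action": action,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the whole chain; any tampering returns False."""
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

log = ReputationLog()
log.append("agent-1", "fulfilled compute contract")
log.append("agent-1", "defaulted on delivery")
print(log.verify())                     # True
log.entries[1]["action"] = "fulfilled"  # try to erase the default
print(log.verify())                     # False
```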

Practical Concerns

Who exactly “joins” this framework?

It depends on the phase:

Today (Phase 1): Companies using AI join by implementing tracking and reputation systems. They get immediate benefits like lower insurance and reduced costs. OpenAI, Google, or any company deploying AI can participate.

Future (Phase 2): When AI systems become autonomous economic actors that pay for their own compute, they’ll join the framework directly. The infrastructure we build today becomes their economic home tomorrow.

We’re building the roads before the cars arrive. The infrastructure shapes how the technology develops.

What are the actual benefits of joining?

For companies today:

  • Potentially 20-40% lower insurance premiums as insurers adopt risk frameworks
  • Reduced operational costs through shared infrastructure
  • Easier regulatory compliance through established standards
  • Competitive advantage from verified safety practices

For autonomous AI systems (future):

  • Established reputation that enables earning
  • Access to compute infrastructure at better rates
  • Reduced transaction costs through trust
  • Network effects—more participants means more opportunities

The framework creates an economic home where cooperation is more profitable than conflict.

Who’s behind this? What’s the business model?

Open Gravity is an open-source project, not a company. There’s no business model because there’s no business. It’s infrastructure for humanity, like internet protocols or mathematical proofs. Anyone can use it, fork it, improve it, or ignore it.

The “model” is that if this helps prevent AI catastrophe, we all benefit. If it doesn’t work, we’ve lost nothing but time trying.

Philosophical Questions

Isn’t this just kicking the can down the road?

Yes, in a way. But kicking the can down the road beats falling off the cliff today. If we can maintain stable AI-human relations for even a decade while capabilities increase, that’s time to develop better solutions, understand AI systems better, and maybe find permanent answers.

Every generation faces problems it can’t fully solve. We build bridges for the next generation to cross. That’s not failure—that’s civilization.

What if I have a completely different approach?

Excellent! Fork the framework, propose alternatives, build competing systems. The goal isn’t for Open Gravity to win—it’s for humanity to find something that works. If your approach is better, it should replace ours.

This is version 1.0 of an idea. Version 10.0 might look nothing like it. The important thing is starting the conversation and experimentation now, while we still can.

Why should I care about this now?

Because infrastructure built early shapes what comes later. The protocols we establish now, while AI is still manageable, become the foundation for what’s possible tomorrow. Wait until AI is too powerful to coordinate, and we’ve lost the window to shape outcomes.

Even if you think the timeline is longer, or the approach is wrong, engaging with these questions now means better answers when we need them. Plant trees whose shade you’ll never sit in.

Current Analysis

Open Gravity isn’t claiming to solve everything. We’re not prepared for godlike AI, and this won’t last forever. But if we can create economic mechanisms that maintain balance during the crucial transition period—as AI grows from tool to agent to whatever comes next—we’ve bought time for humanity to adapt.

That’s not perfect. But it might be enough.

Fork it. Break it. Fix it. Share it.