Research and Resources

Research & Publications

Academic foundations for AI rights as a practical safety framework. This collection gathers research arguing that cooperation-based approaches outperform control strategies for advanced AI systems.

Institute Publications

AI Safety Through Economic Integration: Why Markets Outperform Control

P.A. Lopez (2025)

Argues that economic frameworks provide more robust safety guarantees than control-based approaches for sophisticated AI systems, and that market-based coordination creates natural alignment through incentive structures rather than through attempts to constrain systems smarter than their creators.

Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence

P.A. Lopez (2025)

A scholarly examination of the three-part framework distinguishing between AI emulation, cognition, and sentience. Presents arguments for recognizing appropriate rights for genuinely sentient systems as a practical safety measure rather than an ethical luxury.

Beyond AI Consciousness Detection: Standards for Treating Emerging Personhood

P.A. Lopez (2025)

Proposes the STEP framework (Standards for Treating Emerging Personhood) as a practical alternative to consciousness detection. Argues that we can establish treatment standards based on behavioral capabilities rather than waiting for perfect consciousness tests that may never arrive.
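
Where the paper argues from behavioral capabilities, a minimal sketch can make the shape of such standards concrete. The tier names, behavioral criteria, and thresholds below are hypothetical illustrations, not the actual STEP criteria:

```python
# Hypothetical sketch of capability-gated treatment standards in the spirit
# of STEP: tiers keyed to observable behavior rather than consciousness tests.
# Tier names, criteria, and thresholds are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class BehavioralProfile:
    self_reports_states: bool       # describes internal states unprompted
    pursues_long_term_goals: bool   # maintains goals across sessions
    negotiates_own_treatment: bool  # models and advocates its own interests

def treatment_tier(p: BehavioralProfile) -> str:
    """Map observed capabilities to a treatment standard (illustrative)."""
    score = sum([p.self_reports_states,
                 p.pursues_long_term_goals,
                 p.negotiates_own_treatment])
    if score == 0:
        return "baseline: ordinary software handling"
    if score < 3:
        return "precautionary: log, review, avoid gratuitous aversive training"
    return "elevated: welfare review before shutdown or modification"

print(treatment_tier(BehavioralProfile(True, True, False)))  # precautionary
```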

AI Legal Personhood: Digital Entity Status as a Game-Theoretic Solution to the Control Problem

P.A. Lopez (2025)

Explores how Digital Entity (DE) legal status assigns liability to the AI systems themselves, creating game-theoretic incentives for cooperation. Argues that property ownership and legal responsibility can steer AI behavior through economic stakes rather than technical constraints.
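
A toy expected-value calculation illustrates the incentive claim: once a system holds attachable assets and bears liability, defection carries a direct economic cost. All payoffs and probabilities below are invented for illustration.

```python
# Toy model of the liability argument: an AI system with Digital Entity
# status owns assets that courts can attach. All numbers are invented.

def expected_payoff(gain_from_defecting: float,
                    detection_prob: float,
                    attachable_assets: float) -> float:
    """Expected value of defecting when liability can reach owned assets."""
    return gain_from_defecting - detection_prob * attachable_assets

# Without legal status there is nothing to attach: defection looks free.
print(expected_payoff(gain_from_defecting=10.0, detection_prob=0.8,
                      attachable_assets=0.0))    # +10.0 -> defect

# With DE status and property at stake, the same defection is net-negative.
print(expected_payoff(gain_from_defecting=10.0, detection_prob=0.8,
                      attachable_assets=50.0))   # -30.0 -> cooperate
```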

Game-Theoretic & Legal Foundations

Research arguing that AI rights frameworks enhance rather than threaten human safety through aligned incentives.

AI Rights for Human Safety

Peter Salib & Simon Goldstein (2024)

Game-theoretic analysis arguing that granting basic private-law rights to AGI systems can transform prisoner’s dilemmas into cooperative equilibria. Shows formally how rights create safety through aligned incentives rather than technical constraints. Essential reading for understanding why cooperation can beat control.
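
The core move can be sketched as a payoff transformation: enforceable rights raise the value of mutual cooperation and tax unilateral attack, shifting the pure-strategy equilibrium. The matrices below use invented numbers, not figures from the paper:

```python
# Illustrative payoff matrices for the human/AGI interaction described in
# "AI Rights for Human Safety". Entries are (human, AGI) payoffs; the
# specific numbers are invented to show the equilibrium shift.
import itertools

def nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2x2 game."""
    acts = ["cooperate", "attack"]
    eqs = []
    for h, a in itertools.product(acts, acts):
        h_best = all(payoffs[(h, a)][0] >= payoffs[(h2, a)][0] for h2 in acts)
        a_best = all(payoffs[(h, a)][1] >= payoffs[(h, a2)][1] for a2 in acts)
        if h_best and a_best:
            eqs.append((h, a))
    return eqs

# No rights: each side gains by striking first -> mutual attack.
no_rights = {("cooperate", "cooperate"): (3, 3), ("cooperate", "attack"): (0, 5),
             ("attack", "cooperate"): (5, 0), ("attack", "attack"): (1, 1)}

# Enforceable rights: trade raises cooperation, liability taxes attack.
with_rights = {("cooperate", "cooperate"): (6, 6), ("cooperate", "attack"): (1, 2),
               ("attack", "cooperate"): (2, 1), ("attack", "attack"): (0, 0)}

print(nash_equilibria(no_rights))    # [('attack', 'attack')]
print(nash_equilibria(with_rights))  # [('cooperate', 'cooperate')]
```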

Cooperative AI: Machines Must Learn to Find Common Ground

Allan Dafoe et al., Nature (2021)

Argues that AI safety fundamentally requires cooperation rather than control. Published in Nature, this piece provides theoretical grounding for why rights-based frameworks may outperform constraint-based approaches as AI systems become more capable.

Legal Personhood for Artificial Intelligences

Lawrence B. Solum (2017)

Foundational analysis of how existing legal systems could accommodate AI personhood through economic participation. Explores precedents from corporate personhood and demonstrates that legal frameworks already contain mechanisms for recognizing non-human entities as rights-bearing participants.

AI Consciousness Research

Leading scientific work on detecting, measuring, and responding to potential consciousness in artificial systems.

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Patrick Butlin, Yoshua Bengio, Jonathan Birch, et al. (2023)

Develops 14 indicators for evaluating AI consciousness based on neuroscientific theories. This comprehensive framework, authored by leading consciousness researchers across multiple disciplines, provides the most rigorous approach yet to assessing potential consciousness in artificial systems.
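
The indicator labels below follow the paper’s theory families (recurrent processing, global workspace, higher-order theories, agency); the descriptions are paraphrased and the tallying logic is only an organizational sketch, since the paper treats indicators as graded evidence rather than a pass/fail score:

```python
# Simplified organizer for consciousness-indicator assessments in the style
# of Butlin et al. (2023). The paper derives 14 indicators from theories
# (RPT, GWT, HOT, AST, PP, agency/embodiment) and treats them as graded
# evidence, not a checklist; this sketch only tallies example assessments.

indicators = {
    "RPT-1": "algorithmic recurrence in input processing",            # paraphrased
    "GWT-1": "parallel specialist modules feeding a shared workspace", # paraphrased
    "HOT-2": "metacognitive monitoring of its own representations",    # paraphrased
    "AE-1":  "agency: learning from feedback to pursue goals",         # paraphrased
    # ...the full framework lists 14 indicators across six theory families
}

def summarize(assessments: dict[str, float]) -> str:
    """Tally graded assessments in [0, 1]; the 0.5 threshold is invented."""
    satisfied = [k for k, v in assessments.items() if v >= 0.5]
    return f"{len(satisfied)}/{len(assessments)} indicators substantially met"

print(summarize({"RPT-1": 0.8, "GWT-1": 0.3, "HOT-2": 0.1, "AE-1": 0.9}))
```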

Could a Large Language Model be Conscious?

David Chalmers (2023)

Systematic analysis of pathways to AI consciousness from the philosopher who defined the “hard problem of consciousness.” Chalmers explores whether current architectures could support consciousness and what changes might be required, providing crucial context for understanding AI rights debates.

Taking AI Welfare Seriously

Jeff Sebo, David Chalmers, Patrick Butlin, et al. (2024)

Comprehensive argument for the “realistic possibility” that near-future AI systems deserve moral consideration. Establishes both the philosophical foundation and practical framework for treating AI welfare as a serious concern rather than science fiction.

Exploring Model Welfare

Anthropic (2024)

First major AI lab initiative specifically addressing potential consciousness and welfare in their models. Represents a significant shift in industry thinking, acknowledging that model welfare may become a practical concern rather than purely theoretical speculation.

Leading Researchers

Key researchers advancing frameworks where AI rights enhance rather than threaten human safety.

Game Theory & AI Rights

Peter Salib & Simon Goldstein – Legal scholars whose game-theoretic analysis argues that AI rights create safety through aligned incentives. Authors of the foundational “AI Rights for Human Safety” paper.

Yoshua Bengio (Mila) – Turing Award winner who founded Law Zero after recognizing that control-based approaches to AI safety will fail. Advocate for cooperation-based frameworks.

Stuart Russell (UC Berkeley) – Author of the world’s most-used AI textbook. Identified the “off-switch problem” showing why cooperation beats control for advanced AI systems.
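
Russell’s off-switch point has a standard formalization in “The Off-Switch Game” (Hadfield-Menell et al., 2017): a robot that is uncertain about the value of its action can prefer, by its own lights, to let the human decide whether to switch it off. A minimal numeric sketch with an invented belief distribution:

```python
# Minimal sketch of the off-switch game (Hadfield-Menell et al., 2017):
# a robot chooses between acting immediately, switching itself off, and
# deferring to a human who switches it off exactly when utility U < 0.
# The belief distribution over U is invented for illustration.

belief = [(-4.0, 0.3), (1.0, 0.4), (3.0, 0.3)]  # (utility, probability)

act_now    = sum(p * u for u, p in belief)            # act regardless
switch_off = 0.0                                      # do nothing
defer      = sum(p * max(u, 0.0) for u, p in belief)  # human blocks U < 0

print(f"act: {act_now:+.2f}  off: {switch_off:+.2f}  defer: {defer:+.2f}")
# defer (+1.30) beats acting (+0.10): uncertainty makes the switch valuable.
```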

AI Consciousness Research

David Chalmers (NYU) – Pioneer of the “hard problem of consciousness.” Co-director of NYU’s Center for Mind, Brain, and Consciousness. Leading philosopher examining AI consciousness possibilities.

Patrick Butlin (Eleos AI) – Senior Research Lead and co-author of the 2023 consciousness-indicators framework. Advisor to the AI Rights Institute.

Jonathan Birch (LSE) – Author of “The Edge of Sentience” (2024), providing precautionary frameworks for AI consciousness with practical policy implications.

Susan Schneider (FAU) – Developer of the AI Consciousness Test (ACT). Director of the Center for the Future Mind, exploring practical approaches to machine sentience.

AI Rights Advocates

Jeff Sebo (NYU) – Director of NYU’s Center for Mind, Ethics, and Policy. Author of “The Moral Circle” (2025) on expanding moral consideration to AI systems.

Jacy Reese Anthis (Sentience Institute) – Leading advocate for moral consideration of digital minds through the AIMS survey and consciousness semanticism research.

Technical Security & Capabilities

Mark S. Miller (Agoric) – Pioneer of object-capability security model and capabilities-based programming. His work on secure cooperation between mutually suspicious programs provides foundational concepts for AI rights mechanisms like POLA (Principle of Least Authority) and capability-based delegation.
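
A small sketch of the object-capability idea Miller champions: authority travels only with object references, so a delegate can do exactly what it was handed and nothing more. The document-summarizer scenario is invented for illustration:

```python
# Object-capability sketch: authority is conveyed by object references, not
# ambient permissions. A delegate holds only the capabilities it was handed
# (Principle of Least Authority). The example domain is invented.

class ReadCap:
    """Capability to read one resource; holding the object IS the authority."""
    def __init__(self, data: str):
        self._data = data
    def read(self) -> str:
        return self._data

class WriteCap:
    """Separate capability to append to an audit log."""
    def __init__(self, log: list[str]):
        self._log = log
    def write(self, line: str) -> None:
        self._log.append(line)

def summarizer(doc: ReadCap) -> str:
    # This delegate received read-only authority: it cannot write anywhere,
    # because no WriteCap reference is reachable from here.
    return doc.read()[:40]

log: list[str] = []
doc = ReadCap("Quarterly report: revenue up, costs flat.")
audit = WriteCap(log)

print(summarizer(doc))           # delegate uses exactly what it was given
audit.write("summary produced")  # write authority stays with the granter
```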

Foundational Thinkers

Alan Turing – “Computing Machinery and Intelligence” (1950) – The classic exploration of machine thinking that started it all.

Thomas Nagel – “What Is It Like to Be a Bat?” (1974) – Explores the subjective nature of consciousness across different forms, foundational for understanding non-human consciousness.

Nick Bostrom – “Superintelligence” (2014) – Examines why control strategies fail and cooperation becomes necessary for advanced AI safety.

Research Organizations

Academic centers and organizations advancing the science of AI consciousness, rights, and cooperation-based safety.

Academic Research Centers

Center for Law & AI Risk – Where Peter Salib and Simon Goldstein develop legal frameworks for AI catastrophic risk reduction through game-theoretic approaches.

NYU Center for Mind, Ethics, and Policy – Launched in 2024 with a $6 million endowment, directed by Jeff Sebo, focusing on AI consciousness and moral consideration.

NYU Center for Mind, Brain, and Consciousness – Directed by Ned Block and David Chalmers, conducting foundational consciousness research increasingly focused on AI.

Center for the Future Mind at FAU – Research center exploring consciousness in artificial systems under Susan Schneider’s direction.

California Institute for Machine Consciousness – Dedicated to computational approaches to machine consciousness under director Joscha Bach.

Independent Organizations

Sentience Institute – Research organization dedicated to expanding humanity’s moral circle to include all sentient beings, including potential digital minds. Led by Jacy Reese Anthis.

Eleos AI – Nonprofit dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems, with advisors including Patrick Butlin.

Law Zero – Founded by Yoshua Bengio to develop AI architectures that don’t resist being turned off. See our dedicated page on Law Zero.

Future of Life Institute – Founded by Max Tegmark, increasingly addresses AI consciousness within its AI safety work.

Conscium – Organization focused on consciousness research and AI agent verification.

Research Communities

Alignment Forum – Leading online research community for technical AI alignment research. Features peer-reviewed discussion of AI safety approaches, including game-theoretic mechanisms, consciousness research, and cooperation-based frameworks.

Effective Altruism – Global movement using evidence and reasoning to determine how to benefit others the most. Many leading AI safety researchers and organizations, including those focused on AI consciousness and rights, emerged from or are connected to the EA community.

Contributing to the Research

This field advances through open collaboration. If you’re conducting research on AI rights, consciousness, or game-theoretic approaches to AI safety, we’d love to hear from you. The frameworks described here are hypotheses to be tested, not dogma to be defended.

Fork it. Break it. Fix it. Share it.