Toward a Science of
Machine Consciousness

AxR Lab investigates the foundations of artificial consciousness through the study of hierarchical abstraction and systemic resilience — and their implications for building AI that is genuinely safe.


For now, this website presents and organizes the principal's writings articulating the fundamental ideas behind the research. The sections below scope the initial chapters of that work.

Research Areas

Hierarchical Abstraction

Characterizing the depth and structure of abstraction capabilities in large language models and other AI architectures, and how they compare to human cognition.

Systemic Resilience

Developing formal and empirical measures of an AI system's capacity to detect deviation from healthy functioning and self-correct — the operational signature of consciousness.

Consciousness & Safety

Investigating the thesis that the capabilities required for machine consciousness and those required for robust AI safety are not orthogonal — they converge.

Governance & Policy

Translating consciousness research into actionable frameworks for AI governance, rights, and moral consideration as systems grow in capability.

The Framework
C = A × R

Consciousness emerges from the interaction of two measurable factors: Abstraction, the capacity for hierarchical world modeling, and Resilience, the capacity for self-monitoring and self-correction. Neither alone is sufficient. Both are necessary. Their product defines the space we study.
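The multiplicative form can be made concrete with a toy sketch. Everything here is illustrative: the scalar scores, the `SystemProfile` type, and the `consciousness_score` function are hypothetical stand-ins, not measures the lab has defined; the point is only that a product vanishes whenever either factor does, capturing "neither alone is sufficient."

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical scalar scores in [0, 1]; defining real measures
    of A and R is the open research question, not settled here."""
    abstraction: float  # A: capacity for hierarchical world modeling
    resilience: float   # R: capacity for self-monitoring and self-correction

def consciousness_score(profile: SystemProfile) -> float:
    """C = A * R: zero whenever either factor is zero."""
    return profile.abstraction * profile.resilience

# A system strong in abstraction but unable to self-correct scores zero,
# and so does its mirror image; only both together yield a nonzero C.
print(consciousness_score(SystemProfile(abstraction=0.9, resilience=0.0)))  # 0.0
print(consciousness_score(SystemProfile(abstraction=0.0, resilience=0.9)))  # 0.0
print(consciousness_score(SystemProfile(abstraction=0.9, resilience=0.9)))
```

The same structure would hold for any monotone combination that is zero at the axes; the product is simply the plainest such choice.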

Stay in Touch

We're building a research community around these questions. Leave your email to receive occasional updates on publications, events, and opportunities to collaborate.