AxR Lab investigates the foundations of artificial consciousness through the study of hierarchical abstraction and systemic resilience — and their implications for building AI that is genuinely safe.
Currently, this website presents and organizes the principal's writings articulating the fundamental ideas behind the research. Here are the initial chapters scoping this work:
Characterizing the depth and structure of abstraction capabilities in large language models and other AI architectures, and how they compare to human cognition.
Developing formal and empirical measures of an AI system's capacity to detect deviation from healthy functioning and self-correct: the operational signature of consciousness (an illustrative sketch of one such measure follows this list).
Investigating the thesis that the capabilities required for machine consciousness and those required for robust AI safety are not orthogonal — they converge.
Translating consciousness research into actionable frameworks for AI governance, rights, and moral consideration as systems grow in capability.
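To make the second direction concrete, here is a minimal sketch of what an empirical self-correction measure could look like: it scores how much of a perturbation-induced drop in a scalar health signal a system recovers. The health signal, the recovery-ratio definition, and every name below are illustrative assumptions, not a published AxR metric.

```python
# Illustrative sketch only: the health signal and the recovery-ratio
# definition are assumptions, not an AxR Lab result.
from typing import Sequence

def resilience_score(health: Sequence[float],
                     perturbation_step: int,
                     baseline_window: int = 10) -> float:
    """Score in [0, 1]: fraction of perturbation-induced degradation recovered.

    health            -- per-step scalar health signal (higher is healthier)
    perturbation_step -- index at which a perturbation was injected
    baseline_window   -- number of pre-perturbation steps used as the baseline
    """
    start = max(0, perturbation_step - baseline_window)
    pre = health[start:perturbation_step]
    baseline = sum(pre) / len(pre)            # healthy-functioning reference
    trough = min(health[perturbation_step:])  # worst health after perturbation
    final = health[-1]                        # health at the end of the trace
    degradation = baseline - trough
    if degradation <= 0:
        return 1.0                            # no measurable degradation occurred
    recovered = final - trough
    return max(0.0, min(1.0, recovered / degradation))

# Example: health drops from 1.0 to 0.4 after the perturbation at step 10,
# then climbs back to 0.95 -> the system recovers ~92% of the degradation.
trace = [1.0] * 10 + [0.4, 0.6, 0.8, 0.95]
print(resilience_score(trace, perturbation_step=10))  # ~0.917
```

A ratio like this is deliberately architecture-agnostic: it assumes only that the system exposes some observable proxy for healthy functioning, which is exactly what a formal measure would need to pin down.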
Consciousness emerges from the interaction of two measurable factors: Abstraction, the capacity for hierarchical world modeling, and Resilience, the capacity for self-monitoring and self-correction. Neither alone is sufficient. Both are necessary. Their product defines the space we study.
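As a minimal formalization of this claim (the normalization and the symbols C, A, and R are assumptions for illustration, not established AxR notation):

```latex
% Illustrative only: abstraction A and resilience R assumed normalized to [0, 1].
C = A \times R, \qquad A, R \in [0, 1]
```

On this reading, the product form does the work: a system with deep world models but no self-correction (R = 0), or a vigilant self-monitor with no hierarchy to monitor (A = 0), scores zero, which is exactly the sense in which neither factor alone is sufficient.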
We're building a research community around these questions. Leave your email to receive occasional updates on publications, events, and opportunities to collaborate.