AxR Lab investigates the foundations of artificial consciousness through the study of hierarchical abstraction and systemic resilience — and their implications for building AI that is genuinely safe.
Consciousness emerges from the interaction of two measurable factors: Abstraction, the capacity for hierarchical world modeling, and Resilience, the capacity for self-monitoring and self-correction. Neither alone is sufficient; both are necessary, and their product defines the space we study.
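One way to sketch the product claim, under the illustrative assumption that Abstraction and Resilience can be scored as non-negative scalars A and R (symbols introduced here for illustration, not drawn from the lab's formal work):

```latex
C = A \times R, \qquad A \ge 0, \quad R \ge 0
```

On this reading, if either factor is zero the product is zero, which matches the claim that neither capacity alone is sufficient.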
Characterizing the depth and structure of abstraction capabilities in large language models and other AI architectures.
Developing formal and empirical measures of an AI system's capacity to detect deviation from healthy functioning and self-correct.
Buy the book! By AxR Lab Founder Chris Riley:
Abstraction and Resilience: The Characteristics of Consciousness
Leave your email to receive occasional updates on publications, events, and opportunities to collaborate.