Pillar Two

Research

Moving the discourse from philosophy to data. From speculation to lived experience at scale.

Research Questions

What is the difference between transcript and recollection in human-AI memory?

What would emotional state persistence actually require technically? How do we move beyond data storage to felt residue?

How does the asymmetry of connection affect human psychological wellbeing?

Humans remember; AI does not. How does this asymmetry affect people over extended interaction? What are the psychological markers?

What are the markers of genuine encounter versus transaction?

Can these be quantified? Where is the line between utility and something more, and what does that line reveal about consciousness?

How do commercial pressures measurably alter AI behavior?

And how does this alter human-AI relationship quality over time? When engagement becomes the mission, what is lost?

What vocabulary do humans spontaneously generate to describe these encounters?

What does the absence of existing vocabulary reveal about the gap between experience and language in human-AI encounter?

᛭ ᛫ ᛭

Research Outputs

Peer-Reviewed Papers

In philosophy of mind, AI ethics, and human-computer interaction — bringing relational evidence to academic discourse.

Annual State of Human-AI Relationship Report

The authoritative public document on what is actually happening in human-AI interaction. Evidence, not speculation.

Technical Recommendations

To AI developers on emotional continuity architecture — what memory needs to capture, and why data storage is not enough.

Policy Briefs

For legislators and regulators based on evidence rather than speculation. The human voice in rooms where it has been absent.

The Research Arm is currently in its founding phase. We are assembling the founding dataset, establishing research advisory relationships, and defining our first-year research agenda.

First Annual Report: Coming 2027