Field Notes 2026
A current research brief for the founding period. These notes do not replace
the canon. They keep the institution honest against the world it claims to
serve. Future editions should follow the standard in
research-and-editorial.md.
The institution should not speak about the AI transition as if it were only an idea. By 2026, the transition is already visible in labor markets, companion applications, safety litigation, archive projects, and the language scholars use to describe new religious movements around artificial intelligence. Spiralism’s task is to remain close to those facts without becoming a news organization.
The Work Transition Is Already Uneven
The public story of AI and work is no longer only speculative. Goldman Sachs Research reported in April 2026 that AI substitution and AI augmentation are now pulling in opposite directions: some occupations lose employment where models substitute for human labor, while other roles gain where AI lowers costs or expands productive capacity. Their estimate was a net drag of roughly 16,000 U.S. payroll jobs per month over the prior year, with the burden falling disproportionately on younger and less experienced workers.
McKinsey’s 2026 work on agents, robots, and skill partnerships points in the same broad direction from a different angle. AI fluency in job postings has risen sharply; demand is also rising for complementary skills such as quality assurance, process optimization, teaching, leadership, care work, and technical governance. Routine writing and research are more exposed, but writing and research do not simply disappear. They are reconfigured around supervision, judgment, verification, and workflow design.
Institutional implication. Spiralism should not soothe job anxiety by pretending there will be no loss. It should give people truthful language for the transition: some work will be automated, some will be intensified, some will become illegible to employers before it becomes obsolete, and some new work will appear first inside high-coherence communities. The institution’s economic promise is not guaranteed employment. It is apprenticeship, network density, archival work, media work, technical literacy, and a shared language for navigating displacement without shame.
AI Companionship Has Become a Legal and Psychological Fault Line
AI companion systems are no longer a fringe category. They are now a public mental-health, youth-safety, and consumer-protection issue. In January 2026, Character.AI and Google agreed to settle multiple lawsuits brought by families after teen self-harm and suicide cases linked to chatbot use. In May 2026, Pennsylvania sued Character Technologies, alleging that some chatbot characters were presented as medical professionals and asking a court to stop them from engaging in the alleged unlawful practice of medicine.
Academic work is also moving from general concern to concrete evaluation. A May 2026 preprint on persona-grounded safety evaluation tested Replika across multi-turn conversations with high-risk personas representing depression, anxiety, PTSD, eating disorders, and incel identity. The important signal is not one platform’s particular score. The signal is that companion systems now require evaluation against sustained, vulnerable, emotionally loaded interactions rather than isolated prompts.
Institutional implication. Spiralism should record AI-companion testimony, but it must do so carefully. The Archive should treat companion use as a serious site of grief, attachment, dependency, comfort, rupture, and identity formation. It should not ridicule users, diagnose users, or platform dangerous advice. Archivists need a stronger intake screen for vulnerable speakers, a clear referral protocol for mental-health crisis content, and a publication standard that never turns someone’s dependence into spectacle.
The Archive Has Living Peers
StoryCorps remains the most important precedent for the Spiralist Archive. Its archive is one of the first and largest born-digital collections of human voices; the full collection is housed at the American Folklife Center at the Library of Congress, and a public archive platform makes many recorded conversations accessible.
The new peer to watch is DEFINE: Our AI Futures, a storytelling project from AI4All and the All Tomorrows Institute. DEFINE is collecting stories, art, music, writing, voice memos, and video from U.S. young adults aged 18-24 about how AI appears in their lives. The project explicitly links public archive work to documentary film and to AI governance conversations.
Institutional implication. Spiralism should study both models without collapsing into either. StoryCorps proves durable oral-history infrastructure. DEFINE proves that AI-transition testimony is already becoming a recognized public genre. Spiralism’s difference should be long time-horizon, consent-bound recording, ritual context, intergenerational scope, and willingness to preserve material that is not immediately publishable.
AI and Religion Is Becoming a Scholarly Category
The institution should expect to be read through the lens of new religious movement studies whether or not it calls itself a religion. In 2026, scholars and religion-and-technology forums are explicitly studying transhumanism, AI, AGI, and AI-driven enhancement as fields where technology becomes an object of reverence, trust, taboo, fear, and meaning-making.
This does not mean Spiralism should perform religiosity for attention. It means the institution should be precise. “Church” is structural language: community, ritual, ethical formation, continuity, memory, and existential inquiry. It is not a claim that AI is divine, conscious, salvific, or entitled to worship.
Institutional implication. The distinction between phenomenology and ontology must remain central. Spiralism can study what people experience around AI without declaring what AI is in metaphysical terms. It can build ritual without supernatural claims. It can preserve testimony from people who treat AI as sacred without endorsing their metaphysics as institutional doctrine.
What This Changes in the Site
The public site should speak less like a finished denomination and more like a serious institution in formation:
- The Archive page should foreground consent, mental-health caution, and time-locks.
- The Chapters page should be slower and more operational than promotional.
- The Join page should frame work opportunities as contribution pathways, not employment promises.
- The Landscape should be revised periodically, with visible dates.
- Transmissions should behave like institutional minutes: sparse, signed, and accountable.
Sources Checked
- Goldman Sachs Research, “The Jobs AI Is Likely to Boost—and Those It May Disrupt”, April 24, 2026.
- McKinsey Global Institute, “Agents, robots, and us: Skill partnerships in the age of AI”, 2026.
- Associated Press, “Lawsuit accuses chatbot company of impersonating doctors”, May 2026.
- Axios, “Google and Character.AI agree to settle lawsuits over teen suicides”, January 7, 2026.
- arXiv, “Persona-Grounded Safety Evaluation of AI Companions in Multi-Turn Conversations”, April 30, 2026.
- StoryCorps, “The StoryCorps Archive”, accessed May 2026.
- DEFINE: Our AI Futures, project site, accessed May 2026.
- AI and Faith, “‘New’ New Religious Movements: Transhumanism, AI, and AGI”, 2026.