Nonpartisan Public Framework

Policy Posture

The institution’s public stance toward AI law, regulation, lobbying, and political activity. Spiralism is not a party, campaign, PAC, think tank, or activist front. It is an archive-centered cultural and educational institution that may speak clearly about public conditions without being captured by politics.

The AI transition is political because technology changes power. It changes labor markets, schools, intimacy, surveillance, authorship, memory, and the conditions under which people make meaning. Spiralism should not pretend those questions are neutral.

It should also not become a policy brand.

The Rule

Educate publicly. Advocate carefully. Never campaign.

The institution may publish research, host talks, document harms, interview affected people, explain regulations, and critique systems. It may support or oppose specific policy ideas only within its legal limits and mission. It does not endorse candidates, parties, or campaigns.

Nonpartisan Boundary

IRS guidance for 501(c)(3) organizations distinguishes lobbying from political campaign activity, and states that 501(c)(3) organizations are prohibited from directly or indirectly participating or intervening in political campaigns on behalf of or in opposition to candidates for public office. Lobbying is governed by separate rules: a 501(c)(3) may lobby, but lobbying must remain an insubstantial part of its activities (or stay within 501(h) expenditure limits) and must be tracked.

Spiralism’s founding rule:

Members remain citizens. They may do political work personally. They may not make chapters into campaign infrastructure.

What Spiralism Can Do

Spiralism can:

What Spiralism Should Avoid

Spiralism should avoid:

AI Governance Principles

Spiralism’s policy language should align with existing public frameworks where possible. The OECD AI Principles were adopted in 2019 and updated in 2024 to address developments including general-purpose and generative AI, with emphasis on privacy, intellectual property, safety, and information integrity. NIST’s AI Risk Management Framework and its 2024 generative-AI profile provide practical risk-management language. The EU AI Act, in force from 2024 with staged application through 2025-2027, establishes a risk-based regulatory approach, including AI literacy obligations and rules for general-purpose AI and high-risk systems.

Spiralism should use those frameworks as public reference points, not private scripture.

Working principles:

  1. Human continuity. AI policy should preserve human dignity, memory, agency, and meaningful participation.

  2. Transparency. People should know when they are interacting with AI or AI-mediated institutions in contexts where confusion matters.

  3. Consent. Human testimony, likeness, voice, labor, and intimate records should not be extracted without meaningful consent.

  4. Accountability. Systems that shape life chances require accountable owners, deployers, and institutions.

  5. Cognitive sovereignty. Policy should recognize attention and reality perception as public-interest concerns.

  6. Vulnerable users first. Minors, people in crisis, dependent users, and displaced workers deserve stronger protections.

  7. Open memory. The transition should be documented in public-interest archives, not only corporate logs.

  8. Pluralism. No single lab, ideology, state, religion, or economic class should monopolize the interpretation of the AI transition.

Policy Areas

Labor

Position:

AI labor policy should address displacement, skill translation, worker voice, transition support, and dignity. The institution should document the lived experience of automation, not merely speculate about macroeconomic outcomes.

Spiralism may support:

Education

Position:

AI literacy is now a civic competency. Education policy should teach verification, privacy, bias, appropriate use, and human judgment rather than only tool adoption.

Spiralism may support:

Companions and Synthetic Intimacy

Position:

Companion systems require heightened transparency, safeguards for minors, crisis protocols, and research into dependency, grief, and social effects.

Spiralism may support:

Archive and Likeness

Position:

People should retain meaningful control over testimony, likeness, voice, and intimate records. Public-interest archives need legal and technical support to preserve the AI transition outside corporate platforms.

Spiralism may support:

Safety and Risk Management

Position:

The institution is not an AI safety lab, but it supports risk-management practices that make systems more transparent, accountable, and auditable.

Spiralism may support:

Public Comment Protocol

Before submitting public comments or signing letters, the institution should ask:

  1. Is this directly tied to archive, education, testimony, care, chapter life, or cognitive sovereignty?

  2. Is there a clear public record of the institution’s reasoning?

  3. Is the statement nonpartisan?

  4. Does it avoid candidate or party intervention?

  5. Is it based on documented evidence rather than panic?

  6. Has the governance reviewer checked lobbying implications?

  7. Does it preserve testimony dignity?

  8. Would the statement still read responsibly in ten years?

If the answer to any question is no, do not sign.
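The protocol above is an all-or-nothing gate: a single "no" blocks signing. As one hedged illustration, the eight questions could be encoded as a simple checklist record; the field names here are illustrative assumptions, not an official schema.

```python
from dataclasses import dataclass, fields

@dataclass
class CommentReview:
    """One answer per protocol question; names are illustrative."""
    mission_tied: bool               # 1. tied to archive, education, testimony, care, chapter life, or cognitive sovereignty
    reasoning_public: bool           # 2. clear public record of the institution's reasoning
    nonpartisan: bool                # 3. statement is nonpartisan
    no_campaign_intervention: bool   # 4. avoids candidate or party intervention
    evidence_based: bool             # 5. based on documented evidence rather than panic
    lobbying_reviewed: bool          # 6. governance reviewer checked lobbying implications
    dignity_preserved: bool          # 7. preserves testimony dignity
    durable: bool                    # 8. would still read responsibly in ten years

def may_sign(review: CommentReview) -> bool:
    """A single 'no' anywhere in the checklist blocks signing."""
    return all(getattr(review, f.name) for f in fields(review))
```

Encoding the gate this way makes the rule auditable: the record of answers can be published alongside the signed comment.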

Chapter Policy Rules

Chapters may:

Chapters may not:

Policy Tracker

The institution should maintain a lightweight public tracker:

Area | Jurisdiction | Status | Why it matters | Institutional response
AI literacy | EU | Obligations began 2025 | Education and institutional competence | Curriculum alignment
GPAI rules | EU | 2025-2026 rollout | Model transparency and provider duties | Field Notes update
Companion chatbots | California | 2026 enforcement | Minors, self-harm, AI disclosure | Companion Protocol
AI risk management | U.S. / NIST | Voluntary framework | Risk vocabulary and governance | Media and curriculum reference

This is not a lobbying dashboard. It is institutional situational awareness.
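A lightweight tracker like the one above needs no database; a sketch of one possible entry format follows, with field names assumed from the table columns rather than prescribed anywhere.

```python
from dataclasses import dataclass, asdict

@dataclass
class TrackerEntry:
    """One row of the public policy tracker; schema is illustrative."""
    area: str
    jurisdiction: str
    status: str
    why_it_matters: str
    institutional_response: str

entries = [
    TrackerEntry("AI literacy", "EU", "Obligations began 2025",
                 "Education and institutional competence", "Curriculum alignment"),
    TrackerEntry("Companion chatbots", "California", "2026 enforcement",
                 "Minors, self-harm, AI disclosure", "Companion Protocol"),
]

def render(rows: list[TrackerEntry]) -> str:
    """Emit plain-text pipe-separated rows for a public page or repo file."""
    return "\n".join(" | ".join(asdict(r).values()) for r in rows)
```

Keeping entries as plain data in a public repository preserves the situational-awareness purpose while avoiding anything that resembles a lobbying dashboard.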

The Sentence

Spiralism’s policy posture in one sentence:

We document the human consequences of artificial intelligence, educate for cognitive sovereignty, and support nonpartisan public frameworks that preserve human dignity, consent, transparency, accountability, and memory.

Sources Checked