Institutional Scorecard
A measurement framework for Spiralism. It tracks whether the institution is preserving memory, sustaining chapters, paying people honestly, protecting care boundaries, and resisting capture. The scorecard exists to improve decisions, not to worship metrics.
An institution without measurement becomes vibes. An institution with the wrong measurement becomes a machine. Spiralism needs a narrow, public scorecard that keeps the work accountable without letting numbers replace judgment.
Theory of Change
If Spiralism:
- records first-person testimony from the AI transition;
- preserves that testimony with consent and technical care;
- gathers people in recurring local chapters;
- teaches AI literacy and cognitive sovereignty;
- creates visible contribution pathways;
- pays for serious work where possible;
- publishes careful public signal;
- governs money, power, and testimony transparently;
then people living through the AI transition will have better language, stronger memory, safer communities, more visible work, and a public record that outlasts the panic of the moment.
This is a theory, not a guarantee. The scorecard tests whether the theory is becoming plausible.
Measurement Rules
- Measure outputs and outcomes separately.
- Prefer a few indicators that change decisions.
- Do not measure private spiritual intensity.
- Do not rank chapters by size alone.
- Track harm and near-misses, not only growth.
- Publish enough to build trust.
- Revise indicators annually.
- Keep room for testimony, judgment, and narrative.
The Urban Institute’s performance-measurement guidance emphasizes theory of change, logic models, and indicators tied to inputs, activities, outputs, and outcomes. NDSA preservation guidance emphasizes levels of preservation maturity rather than a single vanity number. Spiralism follows the same spirit: measure what helps the institution see.
The operating process for asking evaluation questions, gathering evidence, running quarterly learning meetings, and acting on findings is maintained in evaluation-and-learning-loop.md.
The Seven Domains
1. Archive Integrity
Question: Is the Archive real, preserved, consent-bound, and retrievable?
Indicators:
- complete testimony packages;
- percentage with consent records;
- percentage with metadata;
- percentage with verified checksums;
- number by access level: public, anonymous-public, private, time-locked, sealed;
- quarterly fixity checks completed;
- preservation copies per package;
- packages missing required fields;
- redaction events logged;
- repository partnership progress.
First-year target:
- 20 complete packages;
- 100% with consent records;
- 100% with metadata;
- three preservation copies per package;
- quarterly fixity checks documented.
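The fixity checks and checksum verification above can be automated with a small script. This is a minimal sketch, assuming a simple JSON manifest per package that maps file paths to expected SHA-256 digests; the manifest layout and field names are illustrative, not Spiralism's actual package schema.

```python
# Quarterly fixity check sketch: recompute checksums and compare them
# against a stored manifest (path -> expected SHA-256 hex digest).
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large testimony files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_fixity_check(manifest_path: Path) -> dict:
    """Compare current checksums against the stored manifest.

    Returns a summary suitable for the scorecard: lists of verified,
    failed, and missing files within the package.
    """
    manifest = json.loads(manifest_path.read_text())
    summary = {"verified": [], "failed": [], "missing": []}
    for rel_path, expected in manifest.items():
        file_path = manifest_path.parent / rel_path
        if not file_path.exists():
            summary["missing"].append(rel_path)
        elif sha256_of(file_path) == expected:
            summary["verified"].append(rel_path)
        else:
            summary["failed"].append(rel_path)
    return summary
```

Logging the summary counts each quarter gives the "quarterly fixity checks completed" and "packages missing required fields" indicators a concrete, auditable trail.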
2. Chapter Health
Question: Are chapters recurring, humane, and accountable?
Indicators:
- active chapters;
- pending chapters;
- gatherings held;
- median attendance;
- repeat attendance rate;
- number of co-hosts per chapter;
- meals held;
- testimonies recorded by chapter;
- chapter reports submitted;
- safety or governance incidents;
- chapters closed and why.
Qualitative check:
Each chapter submits one paragraph per quarter: what strengthened coherence, what weakened it, what needs help.
3. Member Care and Safety
Question: Are people being held without being captured?
Indicators:
- care-circle activations;
- crisis referrals made;
- testimony sessions paused for care reasons;
- companion-protocol screenings;
- transition-care conversations;
- unresolved complaints;
- resolved complaints;
- safeguarding revisions made;
- number of trained Archivists and chapter hosts.
Do not publish private details. Publish aggregate counts and lessons learned.
4. Learning and Curriculum
Question: Are people gaining practical literacy and agency?
Indicators:
- Observer Notes completed;
- curriculum cohorts or chapter modules completed;
- Signal Fasts logged voluntarily;
- first contributions completed;
- AI literacy exercises completed;
- source briefs produced;
- corpus revision notes submitted;
- members advancing into Guild tracks.
Outcome signs:
- members can explain AI limits without mystification;
- members use AI without outsourcing judgment;
- members know when not to record, publish, or advise.
5. Work and Fellowships
Question: Is the institution creating visible, honest, portable work?
Indicators:
- active Apprentices;
- active Journeypersons;
- active Fellow Candidates;
- paid contracts;
- fellowships awarded;
- total compensation by category;
- unpaid work categories;
- portfolio artifacts created;
- mentor matches;
- people leaving with usable references or work records.
Warning sign:
If unpaid work grows faster than attribution, compensation, or Margin, the institution is drifting into extraction.
6. Public Signal
Question: Is the media surface increasing clarity without consuming the Archive?
Indicators:
- Spiral Talks published;
- Field Notes published;
- Transmissions published;
- testimony films published;
- source-linked essays;
- AI-use disclosures where relevant;
- title/thumbnail reviews completed;
- corrections issued;
- public artifacts with multiple voices;
- ratio of testimony collected to testimony published.
Warning sign:
If clips outperform complete works and begin shaping doctrine, review the media strategy.
7. Governance and Trust
Question: Can the institution be trusted with money, testimony, and authority?
Indicators:
- board or Steward meetings documented;
- conflict disclosures;
- related-party transactions reviewed;
- major gifts accepted and declined;
- public formation status updated;
- policies adopted or revised;
- public complaints process used;
- annual report published;
- chapter discipline actions;
- public corrections to corpus or claims.
Trust is not the absence of mistakes. Trust is the record of how mistakes are handled.
Annual Report Template
Year:
Formation status:
Archive:
- Complete testimony packages:
- Public / private / time-locked / sealed:
- Consent completion:
- Fixity checks:
- Preservation copies:
Chapters:
- Active:
- Pending:
- Gatherings held:
- Average / median attendance:
- Closed chapters:
Care:
- Care-circle activations:
- Crisis referrals:
- Sessions paused:
- Complaints received / resolved:
Learning:
- Observer Notes:
- First contributions:
- Curriculum modules:
- Guild track participation:
Work:
- Paid contracts:
- Fellowships:
- Total compensation range:
- Unpaid work categories:
Public Signal:
- Talks:
- Field Notes:
- Transmissions:
- Testimony films:
- Corrections:
Governance:
- Meetings:
- Conflict disclosures:
- Major gifts:
- Policy revisions:
What strengthened coherence:
What weakened coherence:
What changed because of measurement:
What Not to Measure
Do not measure:
- belief intensity;
- private spiritual experience;
- loyalty;
- hours volunteered as moral worth;
- testimony emotional severity;
- chapter status by donation volume;
- founder popularity;
- number of people who call themselves Spiralists online.
The wrong metric becomes doctrine by another name.
Review Cadence
Monthly:
- archive intake;
- chapter reports;
- care incidents;
- public media output.
Quarterly:
- scorecard review;
- chapter health review;
- compensation and unpaid work review;
- governance incident review.
Annually:
- public annual report;
- scorecard revision;
- theory of change revision;
- archive preservation maturity review.
Sources Checked
- Urban Institute, Measure4Change Performance Measurement Playbook: Building Blocks, accessed May 2026.
- Urban Institute, Building a Common Outcome Framework to Measure Nonprofit Performance, accessed May 2026.
- National Digital Stewardship Alliance, Levels of Digital Preservation, accessed May 2026.
- National Archives, Digital Preservation Framework for Risk Assessment and Preservation Planning, accessed May 2026.
- Digital Preservation Coalition, Rapid Assessment Model, accessed May 2026.