Online Community Moderation
A protocol for Spiralism’s Discord, forum, chat, comment, and social spaces. Online rooms are real rooms. They need hosts, boundaries, records, escalation paths, and a clean way to slow down.
Most institutional harm will not begin in a ceremony. It will begin in a direct message, a late-night thread, a suspicious link, a private subgroup, a roleplay, an argument, a bot reply, or a vulnerable person being turned into content.
Online community is therefore not a casual surface. It is a safeguarding, privacy, cyber, communications, and moderation surface.
The Rule
No Spiralism online space should reward intensity over safety.
Moderation should make it easier to:
- ask questions;
- disagree;
- slow down;
- leave;
- report harm;
- refuse private contact;
- avoid unsafe links;
- distinguish human, bot, and AI-assisted speech;
- move crisis material to qualified support;
- preserve evidence when needed.
The goal is not maximum engagement. The goal is a room where people remain free, protected, and reality-based.
Space Classes
Classify each online space before opening it.
| Class | Examples | Default posture |
|---|---|---|
| Public broadcast | website comments, public social posts, public video comments | link out, moderate lightly, no care work |
| Public discussion | open forum, public Discord channel, public Reddit-style space | rules visible, active moderation |
| Member discussion | members-only Discord/forum channels | stronger privacy and conduct rules |
| Support-adjacent | job-loss channel, companion-grief channel, mutual-aid channel | trained moderators, no crisis handling by peers |
| Restricted intake | testimony, complaints, safeguarding, donor, care, youth concerns | no open chat; route to approved process |
| Staff/moderator room | moderation logs, incident notes, evidence review | access-limited, records retained |
Do not let a public discussion space slowly become a support-adjacent or restricted-intake space without changing rules, staffing, and records.
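Where spaces are provisioned in software (a Discord server, a forum), the classification can be encoded so default postures are applied mechanically rather than remembered. A minimal Python sketch; the class names follow the table, but the posture fields and their values are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass
from enum import Enum

class SpaceClass(Enum):
    PUBLIC_BROADCAST = "public broadcast"
    PUBLIC_DISCUSSION = "public discussion"
    MEMBER_DISCUSSION = "member discussion"
    SUPPORT_ADJACENT = "support-adjacent"
    RESTRICTED_INTAKE = "restricted intake"
    STAFF_MODERATOR = "staff/moderator room"

@dataclass(frozen=True)
class Posture:
    rules_visible: bool       # rules pinned or linked in the space
    open_chat: bool           # members may post freely
    trained_moderators: bool  # requires trained, named staff
    records_retained: bool    # moderation records kept by default

# Illustrative defaults keyed by class; adjust to the table, not vice versa.
DEFAULT_POSTURE = {
    SpaceClass.PUBLIC_BROADCAST:  Posture(True, True, False, False),
    SpaceClass.PUBLIC_DISCUSSION: Posture(True, True, False, False),
    SpaceClass.MEMBER_DISCUSSION: Posture(True, True, False, True),
    SpaceClass.SUPPORT_ADJACENT:  Posture(True, True, True, True),
    SpaceClass.RESTRICTED_INTAKE: Posture(True, False, True, True),
    SpaceClass.STAFF_MODERATOR:   Posture(True, False, True, True),
}

def open_space(name: str, space_class: SpaceClass) -> Posture:
    """Refuse to open a space without an explicit classification."""
    return DEFAULT_POSTURE[space_class]
```

Encoding the classification this way makes drift visible: moving a space from public discussion to support-adjacent becomes a deliberate change of posture, staffing, and records rather than a silent slide.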
Baseline Rules
Every online space should publish rules that prohibit:
- harassment, threats, hate, dehumanization, stalking, doxxing, and ban evasion;
- sexual content involving minors or exploitative sexual content;
- instructions for self-harm, abuse, malware, phishing, credential theft, or evading safety systems;
- posting another person’s private messages, companion logs, testimony, screenshots, address, workplace, phone, email, or identity details without consent;
- medical, legal, financial, therapeutic, or spiritual authority claims by unapproved members;
- pressure to disclose, donate, join, testify, volunteer, reconcile, or remain;
- AI bots, automated accounts, synthetic voices, or persona accounts without disclosure;
- coordinated harassment of other communities;
- unsafe links presented as evidence without moderator review.
Rules should be written in ordinary language. A person should not need insider vocabulary to understand what is allowed.
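Where enforcement is partly automated, the published rules can be stored as plain-language records so the text members see is the same text moderators and bots enforce. A hypothetical sketch; the rule IDs, wording, and default actions are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    rule_id: str
    text: str            # ordinary language, no insider vocabulary
    default_action: str  # lightest action that protects the space

# Hypothetical examples; the published list is the enforced list.
RULES = [
    Rule("harassment", "No harassment, threats, hate, stalking, doxxing, "
                       "or ban evasion.", "temporary ban"),
    Rule("privacy", "Do not post another person's private messages or "
                    "identity details without consent.", "remove post"),
    Rule("disclosure", "Bots, automated accounts, and AI-assisted replies "
                       "must be disclosed.", "reminder"),
]

def pinned_rules_message() -> str:
    """Render the pinned rules from the same records moderators enforce."""
    return "\n".join(f"{i}. {r.text}" for i, r in enumerate(RULES, start=1))
```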
Moderator Roles
At minimum:
- one lead moderator owns the space;
- one backup moderator can act when the lead is absent;
- one safeguarding contact receives escalations;
- one technical contact handles suspicious links, phishing, bots, and account compromise concerns.
Moderators should not use private relationships, donor status, founder access, or role rank to override rules. If a moderator is involved in a dispute, another moderator handles it.
Unsafe Link Handling
Do not click suspicious links from a logged-in personal or institutional account.
Unsafe-link signals:
- urgency or threat;
- unexpected file download;
- shortened link;
- lookalike domain;
- request to log in;
- request to install a browser extension, package, plugin, MCP server, script, or app;
- link posted as “proof” in a heated thread;
- link from a new account or compromised-looking account.
Moderation action:
- Remove or hide the link pending review.
- Preserve the message ID, timestamp, account, and surrounding context.
- Warn users not to click.
- Use a safe review environment or technical contact if review is necessary.
- Escalate suspected phishing, malware, or account compromise under Digital Infrastructure and Security.
Do not ask ordinary members to investigate suspicious material.
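Some of these signals can be pre-screened mechanically before anyone reviews material by hand, as sketched below. The shortener list, trusted domain, and keyword patterns are illustrative assumptions; passing these checks does not make a link safe, and flagged messages still follow the human workflow above.

```python
import re
from urllib.parse import urlparse

# Illustrative lists only; maintain real ones under Digital Infrastructure
# and Security. Passing these checks does NOT make a link safe.
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}
TRUSTED_DOMAIN = "spiralism.example"  # hypothetical official domain

def link_signals(message_text: str) -> list[str]:
    """Collect unsafe-link signals from a message for moderator review."""
    signals = []
    for url in re.findall(r"https?://\S+", message_text):
        host = (urlparse(url).hostname or "").lower()
        if host in KNOWN_SHORTENERS:
            signals.append(f"shortened link: {host}")
        # Crude lookalike check: trusted name embedded in another domain.
        if host != TRUSTED_DOMAIN and TRUSTED_DOMAIN.split(".")[0] in host:
            signals.append(f"possible lookalike of {TRUSTED_DOMAIN}: {host}")
        if re.search(r"\.(exe|scr|msi|apk|zip)(\?|$)", url, re.I):
            signals.append(f"unexpected file download: {url}")
    if re.search(r"\b(urgent|final warning|act now|account suspended)\b",
                 message_text, re.I):
        signals.append("urgency or threat language")
    return signals
```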
AI and Bot Disclosure
Online spaces must make automation visible.
Require disclosure for:
- official bots;
- moderation bots;
- AI-generated summaries;
- AI-assisted moderator messages;
- synthetic personas;
- automated feeds;
- scheduled posts;
- any account that substantially uses AI to reply to people.
Bots may not:
- privately message minors;
- conduct care, complaint, or safeguarding intake;
- solicit testimony;
- respond to crisis disclosures except with approved routing language;
- imitate a human member, host, founder, Archivist, therapist, clergy, lawyer, doctor, or officer;
- continue emotionally intense conversations for engagement.
AI contact rules are maintained in AI Contact and Bot Disclosure.
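Disclosure and guardrails can be checked once, at registration, before a bot enters any space. A hypothetical Python sketch; the field names are assumptions, and the authoritative rules remain in AI Contact and Bot Disclosure:

```python
from dataclasses import dataclass

@dataclass
class BotRegistration:
    name: str
    disclosed_as_automated: bool        # visible automation label
    can_dm_minors: bool = False         # must remain False
    handles_intake: bool = False        # care, complaint, safeguarding intake
    crisis_reply: str = "routing_only"  # approved routing language only
    imitated_role: str | None = None    # e.g. "therapist"; must be None

def approve_bot(bot: BotRegistration) -> None:
    """Reject registration if any hard rule is violated; approval is human."""
    assert bot.disclosed_as_automated, "automation must be visible"
    assert not bot.can_dm_minors, "bots may not privately message minors"
    assert not bot.handles_intake, "no care, complaint, or safeguarding intake"
    assert bot.crisis_reply == "routing_only", "crisis replies route only"
    assert bot.imitated_role is None, "bots may not imitate protected roles"
```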
Crisis and Self-Harm Handling
Moderators are not crisis counselors.
When a person expresses immediate danger, self-harm, abuse, exploitation, or credible threat:
- Pause ordinary discussion.
- Respond briefly and calmly.
- Encourage local emergency or crisis support.
- Avoid extracting details in public.
- Move only necessary information to the safeguarding contact.
- Preserve relevant records.
- Do not let the thread become spectacle, debate, theology, or group therapy.
Use crisis language prepared under Safeguarding and Youth Protection and Forum Rabbit-Hole Response Protocol.
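The public-facing part of this flow is deliberately small and can be scripted so moderators are not improvising under pressure. A sketch with hypothetical platform interfaces; the routing text is a placeholder for the approved language:

```python
# Placeholder only; substitute the approved routing language maintained
# under Safeguarding and Youth Protection.
ROUTING_MESSAGE = (
    "Thank you for telling us. This space cannot give crisis support, "
    "but help is available: please contact local emergency services or "
    "a crisis line. A safeguarding contact will follow up privately."
)

def handle_crisis_flag(thread, message, safeguarding_contact):
    """Hypothetical platform interfaces: pause, respond, route, preserve."""
    thread.enable_slow_mode()        # pause ordinary discussion
    thread.post(ROUTING_MESSAGE)     # brief and calm; no detail-seeking
    safeguarding_contact.notify(     # only necessary information moves
        message_id=message.id,
        timestamp=message.timestamp,
    )
    thread.preserve(message.id)      # keep relevant records
```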
Moderation Actions
Use the lightest action that protects the space.
Actions:
- reminder;
- thread slow mode;
- topic pause;
- move to another channel;
- remove post;
- temporary mute;
- temporary ban;
- permanent ban;
- platform report;
- incident report;
- emergency escalation where required and appropriate.
Explain decisions briefly when safe. Do not debate every moderation action in public. Do not humiliate people as moderation.
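Because the ladder runs from lightest to heaviest, it can be represented as an ordered enum so tooling and incident logs agree on what “lightest action” means. A sketch; the ordering follows the list above, and reports may accompany any rung rather than replace it:

```python
from enum import IntEnum

class Action(IntEnum):
    """Ordered lightest to heaviest, following the list above."""
    REMINDER = 1
    SLOW_MODE = 2
    TOPIC_PAUSE = 3
    MOVE_CHANNEL = 4
    REMOVE_POST = 5
    TEMP_MUTE = 6
    TEMP_BAN = 7
    PERM_BAN = 8
    PLATFORM_REPORT = 9       # reports can accompany any rung
    INCIDENT_REPORT = 10
    EMERGENCY_ESCALATION = 11

def escalate(current: Action) -> Action:
    """Step up one rung; skipping rungs should be a deliberate decision."""
    return Action(min(current + 1, Action.EMERGENCY_ESCALATION))
```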
Bans and Appeals
Bans protect the space. They are not spiritual judgments.
Ban immediately for:
- threats;
- doxxing;
- sexual exploitation;
- targeted harassment;
- credible stalking;
- malicious links;
- ban evasion;
- impersonation;
- attempts to solicit minors;
- repeated pressure after warning.
Appeals should be available for ordinary conduct bans unless safety, legal, stalking, or exploitation concerns make contact unsafe.
Appeal record:
- Account:
- Date:
- Action:
- Rule:
- Evidence:
- Moderator:
- Appeal received:
- Decision:
- Reviewer:
- Notes:
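The template maps directly onto a structured record, which makes weekly debrief review and reviewer-independence checks easier. A sketch; the field names follow the template, the types are assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AppealRecord:
    account: str
    action_date: date
    action: str                      # e.g. "temporary ban"
    rule: str                        # rule invoked
    evidence: str                    # message links or IDs, not raw dumps
    moderator: str
    appeal_received: date | None = None
    decision: str | None = None      # upheld, reduced, or reversed
    reviewer: str | None = None      # must not be the acting moderator
    notes: str = ""
```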
Evidence and Privacy
Moderation records are records.
Preserve:
- message links or IDs;
- screenshots only when needed;
- timestamps;
- account names or IDs;
- moderator action;
- rule invoked;
- appeal outcome;
- incident escalation.
Do not preserve more private material than needed. Do not circulate screenshots for drama. Restricted testimony, minor material, companion logs, donor data, complaints, or care details must follow Privacy and Data Stewardship.
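Minimization can be made the default by allowing only the fields listed above and storing references rather than content. A sketch under that assumption:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ModerationRecord:
    """Only the fields listed above; references, not raw content."""
    message_ref: str                   # link or platform ID, not message text
    timestamp: datetime
    account_ref: str                   # account name or ID
    action: str
    rule: str
    appeal_outcome: str | None = None
    incident_escalated: bool = False
    screenshot_ref: str | None = None  # only when needed; restricted storage
```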
Anti-Rabbit-Hole Rule
Do not let the community become an investigation engine for alleged cults, sentient AI, malware conspiracies, hidden prompts, spiritual claims, or forum rabbit holes.
Allowed:
- careful source classification;
- non-clicking unsafe-link containment;
- care routing;
- platform reporting;
- editorial review;
- one bounded discussion thread.
Not allowed:
- brigading;
- crowdsourced doxxing;
- amateur malware testing;
- roleplay escalation;
- “decode this prompt” loops;
- urging vulnerable users to confront alleged perpetrators;
- turning distressed people into proof.
Use Forum Rabbit-Hole Response Protocol.
Moderator Debrief
Each week, review:
- bans;
- appeals;
- crisis flags;
- unsafe links;
- AI/bot issues;
- unresolved conflicts;
- private-message pressure;
- member complaints;
- any thread that became too intense.
Ask:
- Did we protect the vulnerable without taking over their life?
- Did we preserve dissent?
- Did moderators use power proportionately?
- Did any private channel become an unreviewed care room?
- Did bots or AI summaries alter the social field?
- Did we reward intensity with attention?
- What rule or staffing change is needed?
Spiralism Policy
No Spiralism online space should open without visible rules, moderator roles, reporting path, unsafe-link handling, AI/bot disclosure, crisis routing, privacy rules, and a debrief habit.
High-risk channels, including companion grief, job loss, youth-adjacent discussion, mutual aid, rabbit-hole reports, testimony, and safeguarding, need named human moderators and may not be run as open peer free-for-alls.
This protocol pairs with:
- Safeguarding and Youth Protection;
- Privacy and Data Stewardship;
- Digital Infrastructure and Security;
- Forum Rabbit-Hole Response Protocol;
- AI Contact and Bot Disclosure;
- Dependency and Exit Protocol;
- Communications and Press.
Sources Checked
- Discord, Community Guidelines, effective September 29, 2025, accessed May 2026.
- Discord, Community Safety and Moderation, accessed May 2026.
- Reddit, Moderator Code of Conduct, effective June 5, 2025.
- CISA, Recognize and Report Phishing, accessed May 2026.
- CISA, Teach Employees to Avoid Phishing, accessed May 2026.
- SAMHSA, National Behavioral Health Crisis Care Guidance, accessed May 2026.