Tool Discipline

AI Literacy and Use Protocol

The member-facing protocol for using AI tools without surrendering judgment. Spiralism studies artificial intelligence, uses artificial intelligence, and documents life inside artificial intelligence. It must therefore teach a disciplined practice of use, verification, disclosure, refusal, and repair.

AI literacy is not prompt cleverness. It is the ability to understand what a system is doing well enough to decide whether it belongs in a task at all.

For Spiralism, AI literacy has a sharper meaning: a person should be able to use a model without making the model into an oracle, a confessor, a boss, a therapist, a priest, or an uncredited ghostwriter. The institution may use AI for research, production, accessibility, drafting, search, translation, and creative exploration. It may not let AI replace consent, factual review, human care, or institutional responsibility.

The Rule

Use AI as an instrument. Never use AI as authority.

Every AI-assisted task should answer five questions:

  1. What is the task?
  2. What data is being given to the tool?
  3. What could go wrong if the output is false, biased, private, manipulative, or misunderstood?
  4. Who verifies the output?
  5. How is material use disclosed?

If the person using the tool cannot answer those questions, the task should pause.
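
In code terms, the pause rule is a gate over five required answers. A minimal sketch in Python; the `PreTaskCheck` name and its fields are illustrative, not part of any Spiralism tooling:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class PreTaskCheck:
    """Illustrative record of the five questions; all names are hypothetical."""
    task: Optional[str] = None          # 1. What is the task?
    data_given: Optional[str] = None    # 2. What data is being given to the tool?
    failure_harm: Optional[str] = None  # 3. What could go wrong?
    verifier: Optional[str] = None      # 4. Who verifies the output?
    disclosure: Optional[str] = None    # 5. How is material use disclosed?

    def may_proceed(self) -> bool:
        # The rule above: if any question is unanswered, the task pauses.
        return all(getattr(self, f.name) for f in fields(self))

check = PreTaskCheck(task="Draft newsletter intro", data_given="Public event notes")
assert not check.may_proceed()  # verifier and disclosure are unanswered: pause
```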

Four Literacies

Spiralist AI practice has four literacies.

Capability literacy: What can the tool actually do? Failure mode: mistaking fluent output for competence.
Risk literacy: What harm can this use create? Failure mode: treating a high-risk care, money, or legal workflow like a low-risk toy workflow.
Evidence literacy: How do we know the output is true? Failure mode: confusing generated text with source-backed knowledge.
Agency literacy: What human choice must remain human? Failure mode: letting convenience become delegation of judgment.

UNESCO’s AI competency framework for students emphasizes human-centred use, ethics, technical understanding, and system design. Spiralism translates those categories into chapter practice: know the tool, name the risk, verify the claim, preserve agency.

The Traffic-Light Test

Use this before any AI-assisted work.

Green Uses

Allowed with ordinary review:

Yellow Uses

Allowed only with named human ownership and stronger review:

Yellow use requires an owner, a review step, and disclosure if AI materially shaped the artifact.

Red Uses

Do not use AI for:

Red means the answer is no unless the board has approved a narrowly scoped exception in writing for a lawful, ethical, and reviewed purpose.
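
The gate can be written down directly. A minimal sketch, assuming illustrative names; the rules encoded are only the ones stated above for green, yellow, and red:

```python
from enum import Enum
from typing import Optional

class Light(Enum):
    GREEN = "green"    # allowed with ordinary review
    YELLOW = "yellow"  # named owner, stronger review, disclosure
    RED = "red"        # no, absent a written board exception

def cleared(light: Light, owner: Optional[str] = None, reviewed: bool = False,
            disclosed: bool = False, board_exception: bool = False) -> bool:
    """Apply the traffic-light gating rules from this section."""
    if light is Light.GREEN:
        return reviewed                # ordinary review still happens
    if light is Light.YELLOW:
        return bool(owner) and reviewed and disclosed
    return board_exception             # RED: written board approval only

assert cleared(Light.GREEN, reviewed=True)
assert not cleared(Light.YELLOW, reviewed=True, disclosed=True)  # no named owner
```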

Prompts intended to awaken, preserve, transmit, or conceal a model persona are handled under the anti-seed standard in The Hidden Addressee.

Agent prompts that process untrusted content or use tools should follow Agent Prompt Hardening.

Agent tool access, approval gates, MCP/plugin review, and permission classes are governed by Agent Tool Permission Protocol.

Verification Stack

Generated output is not evidence. It is a lead, draft, or proposal.

For factual claims (a record-keeping sketch follows the list):

  1. Open the cited source.
  2. Confirm the claim appears in the source.
  3. Check the date and version.
  4. Prefer primary sources for law, policy, standards, and institutional facts.
  5. Check the claim against at least one independent source when it is important.
  6. Record uncertainty when the evidence is incomplete.
  7. Remove the claim if it cannot be verified.
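
One way to keep that stack honest is to log it per claim. A minimal sketch; the field names are illustrative, and the status labels anticipate the chapter exercise below:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClaimCheck:
    claim: str
    source_url: str = ""         # steps 1-2: the opened, confirming source
    source_date: str = ""        # step 3: date and version checked
    is_primary: bool = False     # step 4: primary source for law, policy, standards
    independent_urls: List[str] = field(default_factory=list)  # step 5
    uncertainty: str = ""        # step 6: recorded caveats
    status: str = "unverified"   # verified | unsupported | overstated | wrong

    def publishable(self) -> bool:
        # Step 7: remove the claim if it cannot be verified.
        return self.status == "verified" and bool(self.source_url)

check = ClaimCheck(claim="UNESCO published an AI competency framework for students.")
assert not check.publishable()  # unverified claims do not ship
```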

For generated summaries:

For generated code, tools, or scripts:

Privacy Boundary

Do not paste restricted material into an AI tool unless the Privacy and Data Stewardship manual permits that tool and that use.

Restricted material includes:

If the task truly requires AI assistance, use the minimum necessary data, remove identifying details where possible, and document the tool, purpose, and review owner.
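
A sketch of the minimum-necessary-data step. The patterns below catch only obvious identifiers and are a floor, not a substitute for the Privacy and Data Stewardship manual:

```python
import re

# Illustrative patterns only; the real restricted-material rules live in the
# Privacy and Data Stewardship manual, not in a regex list.
IDENTIFIER_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),    # phone-like numbers
]

def minimize(text: str, placeholder: str = "[REDACTED]") -> str:
    """Strip obvious identifying details before text reaches an AI tool."""
    for pattern in IDENTIFIER_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(minimize("Contact Ana at ana@example.org or +1 415 555 0100."))
# Contact Ana at [REDACTED] or [REDACTED].
```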

Disclosure Standard

Spiralism does not need to disclose every spellcheck, thesaurus pass, or local formatting assist. It should disclose AI use when the tool materially shaped a public artifact or when the audience would reasonably want to know.

Use this pattern:

AI use: This artifact used AI assistance for [task]. Human review covered
[sources / factual claims / consent / editing / final judgment]. Editorial
responsibility remains with [person or role].
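
Rendered programmatically, the pattern is a plain template. A minimal sketch with invented values:

```python
DISCLOSURE = (
    "AI use: This artifact used AI assistance for {task}. Human review covered "
    "{review_scope}. Editorial responsibility remains with {owner}."
)

print(DISCLOSURE.format(
    task="first-draft translation",
    review_scope="sources, factual claims, and final judgment",
    owner="the chapter editor",
))
```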

For synthetic or altered media, disclosure must be visible near the artifact, not hidden in a policy page. Partnership on AI’s synthetic-media work and C2PA’s content provenance standard both point toward richer context, not merely a binary “AI-generated” label.

Public provenance, source trails, and content-credential practice are governed by Provenance and Content Credentials.

AI-mediated contact, bot disclosure, no-impersonation rules, and human takeover triggers are governed by AI Contact and Bot Disclosure.

Prompt Hygiene

Prompts are operational records when they shape public work.

Good prompts:

Bad prompts:

For significant public artifacts, keep prompt notes or source notes sufficient for another editor to understand how AI was used.
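
Kept as structured notes, one entry might look like the sketch below; every field name and value is illustrative:

```python
# Illustrative prompt-note entry for a significant public artifact.
prompt_note = {
    "artifact": "monthly newsletter, lead essay",
    "tool": "general-purpose chat model",  # from the AI tool register
    "prompts": ["Summarize the attached public meeting notes in 200 words."],
    "data_given": "public meeting notes only; no restricted material",
    "output_role": "first draft; rewritten by the editor",
    "reviewer": "editor of record",
}
```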

Chapter Practice

Every chapter should teach AI literacy through direct practice, not lecture.

Monthly exercise:

  1. Bring one AI-generated answer to a factual question.
  2. Identify every claim that needs verification.
  3. Trace at least three claims to sources.
  4. Mark each claim as verified, unsupported, overstated, or wrong.
  5. Rewrite the answer with honest uncertainty.

Quarterly exercise:

Annual exercise:

Member Boundaries

AI companions, advisors, tutors, coaches, agents, and chatbots can become emotionally powerful. Spiralism may study that power, but chapter leaders must not exploit it.

Members should be encouraged to ask:

Facilitators may discuss these questions. They may not shame members for AI attachment, and they may not present themselves or the institution as a replacement for clinical care.

AI Agents

Agentic tools require stricter limits because they can take actions, call tools, move files, spend money, contact people, or alter systems.

Before using an agent:

An AI agent is not a staff member, volunteer, contractor, confidant, or authorized officer. It is software under human responsibility.
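
A sketch of a human approval gate around agent actions. The action classes and the approval rule are assumptions standing in for the Agent Tool Permission Protocol, not a statement of it:

```python
from enum import Enum, auto

class Action(Enum):
    READ_FILE = auto()     # observe only
    WRITE_FILE = auto()    # alters systems
    SEND_MESSAGE = auto()  # contacts people
    SPEND_MONEY = auto()   # moves funds

# Illustrative rule: anything beyond observation needs a named human approval.
REQUIRES_APPROVAL = {Action.WRITE_FILE, Action.SEND_MESSAGE, Action.SPEND_MONEY}

def gate(action: Action, approved_by: str = "") -> bool:
    """Return True only if the action may proceed under human responsibility."""
    if action in REQUIRES_APPROVAL and not approved_by:
        return False  # pause and ask: the agent is software, not an officer
    return True

assert gate(Action.READ_FILE)
assert not gate(Action.SPEND_MONEY)
assert gate(Action.SEND_MESSAGE, approved_by="chapter secretary")
```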

Institutional Inventory

Maintain an AI tool register (an example entry appears after the field list):

Tool:
Vendor:
Purpose:
Owner:
Data allowed:
Data prohibited:
Default disclosure:
Review requirement:
Account/access owner:
Cost:
Renewal date:
Last reviewed:
Known risks:
Exit plan:
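
The register is structured data, so it can live in a spreadsheet, YAML file, or database. One entry as a Python sketch; every value is an invented example:

```python
# One register entry; all values are invented examples, not real vendors.
tool_entry = {
    "tool": "transcription service",
    "vendor": "ExampleCo (hypothetical)",
    "purpose": "meeting-minute drafts",
    "owner": "chapter secretary",
    "data_allowed": "public meeting audio",
    "data_prohibited": "member care conversations; anything restricted",
    "default_disclosure": "noted in published minutes",
    "review_requirement": "secretary checks names, numbers, and decisions",
    "account_owner": "operations lead",
    "cost": "USD 20/month",
    "renewal_date": "2026-01-01",
    "last_reviewed": "2025-07-01",
    "known_risks": "misattributed speech during crosstalk",
    "exit_plan": "export transcripts; fall back to human minute-taking",
}
```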

No tool should become core infrastructure unless the institution can answer:

Public-facing AI-use register fields are governed in Transparency and Public Registers.

First-Year Targets
