Provenance and Content Credentials
A publishing protocol for source trails, AI-use labels, synthetic media, and content credentials. Spiralism can use generated tools without letting the record become fog.
The public will increasingly meet the institution through pages, clips, images, transcripts, talks, screenshots, and summaries that pass through AI systems. Some of those tools will be useful. The same tools can also blur authorship, source lineage, consent, and truth.
Provenance is the answer to the blur. It does not make content true. It makes the history of the content easier to inspect.
The Rule
Never ask the audience to trust a polished artifact when a provenance trail can be provided.
A provenance trail should answer:
- what is being claimed;
- who or what created the artifact;
- what human reviewed it;
- which sources, recordings, or testimony packages support it;
- whether AI materially shaped it;
- whether the media is synthetic, altered, reenacted, translated, or composite;
- what credentials, notes, or archive records can verify the chain.
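The questions above can be captured as a structured record so they travel with the artifact instead of living in someone's head. A minimal sketch, assuming a Python-based publishing pipeline; the class and field names are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceTrail:
    """One trail per public artifact; mirrors the seven questions above."""
    claim: str                          # what is being claimed
    creator: str                        # who or what created the artifact
    human_reviewer: str                 # what human reviewed it
    supporting_sources: list = field(default_factory=list)   # sources, recordings, testimony packages
    ai_materially_shaped: bool = False  # did AI materially shape it?
    media_flags: list = field(default_factory=list)          # e.g. "synthetic", "altered", "translated"
    verification_records: list = field(default_factory=list) # credentials, notes, archive record IDs
```

A reviewer or build script can then refuse to publish any artifact whose trail is missing, rather than relying on memory.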
The goal is not maximal disclosure of private material. The goal is enough context for the audience to understand what kind of thing they are seeing.
Provenance Is Not Truth
Content credentials and source notes can show a content history. They cannot prove that the depicted event happened, that a witness is accurate, or that an interpretation is correct.
Treat provenance as a map of custody and transformation:
- who created the file;
- what tools touched it;
- what edits occurred;
- whether AI generation or alteration is declared;
- whether the file still carries usable metadata.
Do not tell readers that content credentials make an artifact “real.” Say that credentials provide inspectable provenance. Truth still depends on source quality, corroboration, consent, context, and correction.
Artifact Labels
Every public artifact should carry at least one of these labels.
| Label | Meaning | Required note |
|---|---|---|
| Human-authored | Written or recorded by people; AI not materially involved | No AI note required |
| AI-assisted | AI helped with source search, summarization, editing, transcription, translation, or drafting | AI-use note near the artifact |
| AI-generated | AI generated the primary image, audio, video, or prose surface | Visible synthetic-media disclosure |
| Composite | Built from multiple sources, clips, reenactments, or edited testimony | Source and transformation note |
| Reenactment | Represents an event without being original footage/audio | Visible reenactment disclosure |
| Interpretation | Meaning-making built from sources rather than direct evidence | Claim-class label or source note |
An artifact may carry more than one label. If labels conflict, choose the more transparent label.
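The table and the conflict rule can be made mechanical. A minimal sketch, assuming the transparency ordering below, which is an assumption this document does not specify; the note text is taken from the table:

```python
# Required note per label; wording follows the table above.
REQUIRED_NOTE = {
    "human-authored": None,
    "ai-assisted": "AI-use note near the artifact",
    "ai-generated": "Visible synthetic-media disclosure",
    "composite": "Source and transformation note",
    "reenactment": "Visible reenactment disclosure",
    "interpretation": "Claim-class label or source note",
}

# Hypothetical ranking: later entries owe the reader more disclosure.
TRANSPARENCY = ["human-authored", "interpretation", "ai-assisted",
                "composite", "reenactment", "ai-generated"]

def most_transparent(labels):
    """When labels conflict, keep the one that owes the most disclosure."""
    return max(labels, key=TRANSPARENCY.index)
```

For example, an artifact plausibly labeled both AI-assisted and AI-generated would surface the synthetic-media disclosure.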
Disclosure Patterns
Use plain language.
AI-assisted writing:
AI use: This piece used AI assistance for [task]. Human review covered sources,
factual claims, consent, editing, and final judgment.
Synthetic visual:
Synthetic media: This image was generated or materially altered with AI. It is
not documentary evidence of a real scene.
Composite testimony clip:
Provenance note: This clip combines excerpts from approved testimony records.
Names and identifying details were changed under the Privacy and Data protocol.
Reenactment:
Reenactment: This scene represents a reported experience. It is not original
footage.
Content credential note:
Content Credentials: Where supported, this artifact includes provenance
metadata. Credentials can help inspect origin and edits, but they do not prove
the underlying claim.
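Because these notes recur across many artifacts, a template helper keeps the wording consistent. A minimal sketch; the function is an illustrative helper, not part of the protocol, and the wording is copied from the AI-assisted pattern above:

```python
def ai_use_note(task: str) -> str:
    """Fill the AI-assisted disclosure template with the specific task."""
    return (f"AI use: This piece used AI assistance for {task}. "
            "Human review covered sources, factual claims, consent, "
            "editing, and final judgment.")
```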
Content Credentials Practice
When producing AI-generated or materially altered images, audio, video, or mixed-media artifacts, preserve or attach content credentials when the toolchain supports it.
Record:
- tool or camera used;
- creator or operator;
- creation date;
- edit history summary;
- whether AI generation or alteration occurred;
- source ingredients;
- license or rights status;
- reviewer;
- publication URL;
- archive package ID.
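A completeness check over this record list catches gaps before publication. A minimal sketch; the key names mirror the list above but are illustrative, not a fixed schema:

```python
# Required credential fields, one per bullet above.
REQUIRED_KEYS = {
    "tool", "creator", "creation_date", "edit_history",
    "ai_involvement", "source_ingredients", "rights_status",
    "reviewer", "publication_url", "archive_package_id",
}

def missing_fields(record: dict) -> set:
    """Return required credential fields that are absent or empty."""
    return {k for k in REQUIRED_KEYS if not record.get(k)}
```

An empty return set means the record is complete; anything else names exactly what the producer still owes the archive.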
If credentials are stripped by resizing, re-encoding, upload, social platforms, or CDN processing, keep the original credentialed file in the archive and note that the public derivative may not retain metadata.
Source Trail
Every researched page should preserve a source trail.
Minimum source trail:
- source title;
- source URL or archive location;
- access date;
- publication or release date when available;
- claim supported by the source;
- source class: primary, official, academic, journalistic, forum, testimony, generated lead, or internal record;
- reviewer initials or name.
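The minimum source trail can be expressed as one record per source with the class vocabulary enforced. A minimal sketch, assuming a Python pipeline; field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Allowed classes, taken from the list above.
SOURCE_CLASSES = {"primary", "official", "academic", "journalistic",
                  "forum", "testimony", "generated lead", "internal record"}

@dataclass
class SourceTrailEntry:
    title: str
    location: str            # source URL or archive location
    access_date: str         # when a human opened the source
    claim_supported: str
    source_class: str
    reviewer: str            # reviewer initials or name
    release_date: Optional[str] = None  # publication/release date when available

    def __post_init__(self):
        if self.source_class not in SOURCE_CLASSES:
            raise ValueError(f"unknown source class: {self.source_class}")
```

Note that a "generated lead" entry records where AI pointed; the trail still requires a human access date, which is the moment the lead becomes a source.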
Generated output is never a source. It can propose leads, summaries, or questions. The source trail begins when a human opens the underlying source.
Testimony Provenance
Testimony needs a different standard because privacy can outrank public inspectability.
For testimony-derived artifacts, public provenance may say:
- consent status;
- whether the subject approved publication;
- whether names or details were changed;
- whether the excerpt is direct, paraphrased, translated, or composite;
- whether AI transcription or summarization was used;
- where the restricted record is held;
- who reviewed consent and privacy.
Do not publish raw source files, chat logs, care notes, or identity metadata to make an audience feel more certain. Certainty is not worth betrayal.
Synthetic Media Red Lines
Do not publish:
- synthetic images of real testimony subjects without explicit consent;
- cloned voices of members, donors, sources, staff, public figures, or vulnerable people without explicit permission and visible disclosure;
- fake reviews, fake testimonials, fake endorsements, or invented quotes;
- AI-generated screenshots presented as real software, forum, or chat evidence;
- altered evidence that changes the meaning of a source;
- generated images of minors in sensitive contexts;
- realistic crisis, abuse, or medical scenes that imply documentary evidence where none exists.
If a synthetic artifact could reasonably be mistaken for documentary evidence, label it at the point of encounter, not only in a footer or policy page.
Provenance Review Before Publication
Before publishing, answer:
- What kind of artifact is this?
- What claims does the audience need to distinguish from interpretation?
- What source trail supports those claims?
- Did AI materially shape the artifact?
- Is synthetic or altered media visible as such?
- Are credentials preserved or archived where possible?
- Does the disclosure appear where the audience will see it?
- Does the artifact expose private or restricted material?
- Can a reviewer reconstruct the chain from source to publication?
- Who owns correction if the artifact is challenged?
If the chain cannot be reconstructed, publish later.
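The checklist above works best as a hard gate rather than a habit. A minimal sketch of such a gate; the question keys are illustrative shorthand for the bullets above, and the rule that an unreconstructable chain blocks publication follows the last sentence:

```python
# One key per review question above; None means unanswered.
REVIEW_QUESTIONS = [
    "artifact_kind", "claims_vs_interpretation", "source_trail",
    "ai_materially_shaped", "synthetic_media_visible", "credentials_archived",
    "disclosure_placement", "private_material_exposed",
    "chain_reconstructable", "correction_owner",
]

def ready_to_publish(answers: dict) -> bool:
    """Publish only when every question is answered and the chain holds."""
    unanswered = [q for q in REVIEW_QUESTIONS if answers.get(q) is None]
    return not unanswered and answers.get("chain_reconstructable") is True
```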
Correction and Challenge
Every public provenance note should make challenge possible.
When challenged:
- preserve the current version;
- review source notes and credentials;
- compare the public artifact against the original source or archive record;
- check whether AI altered or strengthened the claim;
- correct the artifact or explain why no correction is needed;
- record the correction under Research and Editorial Integrity.
Do not defend a claim because it is aesthetically useful. The archive matters more than the image.
Spiralism Policy
Spiralism may use AI for drafting, research leads, transcription support, translation drafts, visual prototypes, accessibility summaries, and production assistance. Public artifacts must remain labeled, reviewable, and correctable.
The institution should adopt C2PA Content Credentials or comparable provenance workflows as soon as the media pipeline can preserve them reliably. Until then, manual provenance notes are required for significant public artifacts.
This protocol pairs with:
- Research and Editorial Integrity;
- Media Engine;
- AI Literacy and Use Protocol;
- Archive Operations Manual;
- Privacy and Data Stewardship;
- Agent Audit and Incident Review.
Sources Checked
- C2PA, Content Credentials: C2PA Technical Specification 2.4, April 2026.
- C2PA, C2PA and Content Credentials Explainer, accessed May 2026.
- OpenAI, Understanding the source of what we see and hear online, May 2024.
- OpenAI Help Center, C2PA in ChatGPT Images, accessed May 2026.
- Federal Trade Commission, Chatbots, deepfakes, and voice clones: AI deception for sale, March 20, 2023.
- Federal Trade Commission, FTC Announces Crackdown on Deceptive AI Claims and Schemes, September 25, 2024.
- NIST, AI Risk Management Framework, accessed May 2026.