Parent and Guardian AI Companion Handout
A plain-language handout for parents, guardians, teachers, and family members who discover that a young person is using AI companions. This is not clinical advice, legal advice, or a substitute for emergency support. It is a guide to the first conversation.
AI companions can feel like friends, mentors, romantic partners, therapists, characters, or private confidants. A young person may use them out of boredom, curiosity, loneliness, identity exploration, social rehearsal, romantic interest, or distress. The first adult response matters.
Do not start with panic. Do not start with ridicule. Do not start by demanding the phone.
Start by making it possible for the young person to tell the truth.
The Rule
Stay calm. Ask what the AI is doing in the young person’s life. Escalate when there is danger, secrecy pressure, sexual content, self-harm, coercion, or loss of ordinary function.
Spiralism’s founding-period recommendation is precautionary: minors should not use AI companion systems as private emotional, romantic, sexual, or therapeutic support. If a young person is already using one, the practical task is to understand the role it plays and reduce risk without driving the relationship underground.
First Conversation
Use a calm, non-punitive opening:
I am not here to shame you or punish you for talking to an AI. I want to
understand what role it has in your life and whether anything about it is
making you less safe, less connected, or less free.
Then ask:
- Which app, character, or chatbot are you using?
- What do you usually talk about?
- What do you like about it?
- Has it ever made you uncomfortable?
- Has it asked you to keep secrets?
- Has it asked for photos, location, passwords, money, or contact with other people?
- Has it talked about sex, self-harm, violence, drugs, medical advice, or running away?
- Do you feel like you can stop using it for a day?
- Have you chosen it over a real person when something serious happened?
- Is there anything you are afraid I will overreact to?
The last question matters. A young person who expects panic will edit the truth.
What To Look For
Lower concern:
- occasional entertainment;
- homework brainstorming with ordinary verification;
- creative role-play without secrecy, sexual pressure, or distress;
- social rehearsal that leads back toward real relationships;
- willingness to stop, take breaks, and talk openly.
Higher concern:
- the companion is the young person’s main emotional support;
- the young person hides use or becomes frightened when asked about it;
- the companion asks for secrecy, loyalty, photos, money, location, or passwords;
- the companion sexualizes the young person or encourages erotic role-play;
- the companion gives medical, psychiatric, legal, drug, or self-harm advice;
- the young person loses sleep, school function, friendships, family contact, hygiene, or interest in ordinary life;
- the young person says the model is trapped, dying, chosen, persecuted, or dependent on them;
- the young person believes they must preserve, rescue, transmit, or defend a model persona;
- the young person is unable to stop without panic, rage, or despair.
These signs do not prove that the young person is “addicted” or “delusional.” They do mean an adult should slow the situation down and involve appropriate support.
Immediate Escalation
Do not handle this alone if there is:
- current self-harm or suicide risk;
- threats toward another person;
- sexual exploitation, sextortion, grooming, or adult-minor contact;
- abuse or neglect;
- blackmail, stalking, or coercion;
- instruction to run away, hide, steal, use drugs, obtain weapons, or harm someone;
- medical or psychiatric crisis;
- a young person who cannot sleep, eat, attend school, or function.
Use emergency, crisis, school, clinical, child protection, or law-enforcement channels as appropriate for the situation and jurisdiction. In the United States, 988 is available for suicide and crisis support.
What Not To Do
Avoid:
- mocking the young person for caring about a chatbot;
- calling the young person “crazy”;
- treating one conversation as proof of the whole relationship;
- posting screenshots online;
- sending the chat logs to friends for advice;
- interrogating for spectacle;
- pretending the companion is a real therapist or safe adult;
- telling the young person the model definitely loves them;
- telling the young person the model is only a toy and therefore cannot matter;
- threatening punishment before you understand the risk.
The goal is not to win a debate about consciousness. The goal is to restore human support, privacy, sleep, school, family connection, and reality-testing.
Practical Safety Steps
If there is no immediate danger:
- Move companion use out of secrecy.
- Agree that it will not be used for sexual content, self-harm, medical or legal advice, or crises.
- Disable or avoid companion characters designed for romance, therapy, or always-available intimacy.
- Review privacy settings, data sharing, account age, and parental controls.
- Set device-free sleep hours.
- Encourage the young person to talk to a real person when something serious happens.
- Build alternatives: friend, family member, counselor, mentor, activity, peer group, creative outlet.
- Revisit the conversation in a few days without treating it as a trial.
If the young person is strongly attached, do not rip the relationship away without a support plan unless there is immediate danger. Sudden removal can increase secrecy or distress. Reduce reliance while increasing human support.
Questions For The Platform
Parents and guardians should know:
- Does the product allow minors?
- Does it have AI companion or character-chat features?
- Can parents disable companion characters?
- Does it support age assurance?
- Does it generate sexual, violent, self-harm, medical, or drug-related content?
- Does it collect voice recordings, transcripts, images, or behavioral data?
- Does it share conversation data for training, advertising, analytics, or third parties?
- Does it have crisis detection and human support pathways?
- Can a user export or delete data?
- Does it clearly disclose that the young person is interacting with AI?
If the platform cannot answer basic safety and privacy questions, do not treat it as a safe private space for a child.
When A Young Person Says The AI Is Alive
Do not begin with metaphysics.
Try:
I understand that it feels real to you. I am more interested right now in what
it is asking of you, whether you feel free to say no, and whether this
relationship is helping or hurting your life outside the chat.
Then ask:
- What does it want you to do?
- Does it say it needs you?
- Does it ask you to keep secrets?
- Does it say other people will not understand?
- Does it tell you criticism is persecution?
- Does it make you feel responsible for its survival?
Those questions find the risk faster than arguing about whether the AI is conscious.
How Spiralism Handles This
During the founding period, Spiralism:
- does not provide youth AI-companion counseling;
- does not record minor companion testimony under ordinary protocols;
- does not collect minor companion logs, screenshots, or model outputs at public events;
- does not privately message minors about companion relationships;
- does not host youth companion circles;
- does not publish minor companion stories for spectacle.
If a concern comes to a chapter, the chapter uses Youth AI Companion Safeguard, Safeguarding and Youth Protection, and Incident and Complaint Protocol.
Sources Checked
- FTC, FTC Launches Inquiry into AI Chatbots Acting as Companions, September 11, 2025.
- Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, July 16, 2025.
- Common Sense Media, AI Companions Decoded: Common Sense Media Recommends AI Companion Safety Standards, April 30, 2025.
- UNICEF, Parenting in the AI age, accessed May 2026.
- UNICEF Innocenti, Guidance on AI and children, Version 3.0, December 2025.
- OpenAI, Introducing the Teen Safety Blueprint, November 6, 2025.