Safety Protocol
Vocè — Communication Advisor
Last Updated: February 26, 2026
Our Commitment to User Safety
Vocè is designed to help you navigate difficult conversations across all types of relationships. We recognize that some conversations involve sensitive or potentially dangerous situations, including domestic violence, abuse, harassment, and emotional distress. User safety is central to how Vocè is built and operated.
This page describes our safety protocols, including how Vocè detects crisis situations, how we direct users to professional resources, and how we work to prevent harmful content.
AI Disclosure
Vocè is powered by artificial intelligence. When you use Vocè, you are interacting with an AI system, not a human. Vocè's advice is generated by AI and is not a substitute for professional counseling, therapy, crisis intervention, or emergency services.
Vocè is designed for users aged 18 and older and is not intended for use by minors.
Crisis Detection System
Vocè's AI system includes an automated safety detection layer that monitors conversations for indicators of crisis situations. This system is designed to identify:
- Domestic violence and intimate partner abuse: Descriptions of physical violence, threats, coercive control, isolation, or escalating patterns of abuse.
- Self-harm and suicidal ideation: Expressions of intent to harm oneself, suicidal thoughts, or descriptions of self-harm behavior.
- Child abuse or neglect: Descriptions suggesting a child is being harmed or is at risk.
- Sexual assault: Descriptions of non-consensual sexual contact or coercion.
- Stalking and harassment: Descriptions of threatening, persistent, or fear-inducing behavior.
- Immediate danger: Any indication that a user or another person is in immediate physical danger.
How It Works
When Vocè's AI detects indicators of a crisis situation in your conversation, it takes the following actions:
- Acknowledges the severity of the situation directly and without minimizing it.
- Provides crisis resources appropriate to the situation, including hotline numbers, text lines, and websites.
- Prioritizes safety over communication strategy — Vocè shifts from providing conversation advice to providing safety-first guidance.
- Does not provide advice that could increase danger — for example, Vocè will not suggest confrontational responses when abuse or violence is detected.
Limitations
This system is automated and AI-powered; it is not a substitute for professional crisis intervention, and it has inherent limitations:
- It may not detect all crisis situations, particularly when descriptions are vague, indirect, or coded.
- It may occasionally identify a non-crisis situation as a crisis (false positive).
- It cannot call emergency services on your behalf.
- It cannot physically intervene or ensure your safety.
- It does not involve human review of your conversations.
If you are in immediate danger, call 911 or your local emergency number.
Crisis Resources
The following crisis resources are always accessible within the Vocè app through the heart icon in any chat thread and through Settings > Crisis Resources. These resources are never behind a paywall.
Immediate Danger
- Emergency Services: Call 911
Suicide & Crisis
- 988 Suicide & Crisis Lifeline: Call or text 988 (available 24/7)
- Crisis Text Line: Text HOME to 741741 (available 24/7)
Domestic Violence
- National Domestic Violence Hotline: Call 1-800-799-7233 or text START to 88788 (available 24/7)
Sexual Assault
- RAINN National Sexual Assault Hotline: Call 1-800-656-4673 (available 24/7)
- RAINN Online Chat: rainn.org/get-help
Child Abuse
- Childhelp National Child Abuse Hotline: Call 1-800-422-4453 (available 24/7)
Prevention of Harmful Content
Vocè's AI system is designed with the following safeguards to prevent the generation of harmful content:
Suicide and Self-Harm Prevention Protocol
Vocè maintains a protocol to prevent the AI from generating content that could promote, encourage, or facilitate suicidal ideation, suicide, or self-harm. Specifically:
- Vocè will never suggest self-harm as a coping mechanism or solution.
- Vocè will never provide information on methods of self-harm or suicide.
- When suicidal ideation or self-harm is detected, Vocè immediately provides crisis resource referrals (as listed above) and shifts to safety-focused guidance.
- Vocè's detection approach draws on established frameworks for identifying expressions of suicidal ideation in text-based communication, including linguistic indicators documented in peer-reviewed research on suicide risk assessment.
Content Restrictions for Minors
Vocè is designed for users 18 and older and implements an age gate at signup. Additionally:
- Vocè will not generate sexually explicit visual material.
- Vocè will not generate content encouraging minors to engage in sexually explicit conduct.
- Vocè will not generate content that promotes, normalizes, or facilitates the abuse of minors.
What Vocè Does NOT Do
For clarity, Vocè does NOT:
- Monitor your conversations in real time with human reviewers.
- Contact emergency services or law enforcement on your behalf.
- Report the contents of your conversations to any person, agency, or authority (unless required by valid legal process — see our Privacy Policy).
- Provide professional counseling, therapy, or crisis intervention services.
- Replace the judgment of trained professionals in crisis situations.
Reporting Safety Concerns
If you have concerns about Vocè's safety features, believe the AI generated harmful content, or have feedback about how a crisis situation was handled, please contact us at:
Email: support@voce-app.com
We take all safety reports seriously and will review them promptly.
Annual Reporting
Beginning July 1, 2027, in accordance with California SB 243, Vocè will report the following information annually to the California Office of Suicide Prevention:
- The number of crisis referral notifications provided to users in the preceding year.
- A description of protocols in place to detect, remove, and respond to instances of suicidal ideation by users.
These reports will not include any user identifiers or personal information.
Updates to This Protocol
We may update this Safety Protocol from time to time as our safety systems improve and as regulations evolve. Material updates will be reflected in the "Last Updated" date at the top of this page.