Monica User Safety & Content Policy

Last updated: 1 December 2025

1. Who we are and how Monica works

Monica is an all-in-one AI assistant powered by cutting-edge AI models. Monica enhances your chat, search, writing, image generation, video generation, and coding experiences by leveraging multiple third-party AI models and tools; we do not train our own foundation models for this service.

We design our orchestration to apply safety controls before, during, and after model calls (e.g., input screening, safe routing, output checks) so that what you see adheres to our rules and the law.

2. Our safety commitments

We balance access to useful information with the need to reduce online harms. Our commitments are to:

Prevent obviously unlawful or high‑risk outputs from being generated or displayed.

Detect and act quickly when problematic content appears.

Protect children with heightened safeguards.

Be transparent about our processes, provide a clear reporting path, and offer appeals.

Respect privacy and apply data‑minimisation when engaging third‑party vendors.

Comply with the EU AI Act and AI ethics principles, including the prohibition of certain high-risk AI practices.

3. Roles and Responsibilities

Monica Responsibilities. We maintain layered safeguards to prevent, detect, and respond to illegal or harmful content, including pre‑ and post‑generation checks, risk‑aware model/tool routing, and human review for escalations. We continuously tune safeguards to reflect abuse signals and legal requirements.

User Responsibilities. You must be 18 or older and comply with the Usage Policy, this Policy, and applicable law. Do not attempt to bypass or disable safeguards. If we obtain actual knowledge of under‑18 use, we may suspend/terminate access and delete associated data consistent with our policies.

Vendors and Tools. We leverage third‑party models and tools; functionality may be re‑routed, blocked, or degraded where risks are identified.

4. Illegal content

We prohibit content that violates applicable law. Examples include (not exhaustive):

Terrorism promotion or instruction;

Child sexual exploitation and abuse (CSEA);

Serious criminal facilitation (e.g., instructions to commit crimes, intrusions, or fraud);

Material that is illegal to host or distribute (e.g., certain extremely graphic content, specific unlawful hate materials, or wilful large‑scale copyright infringement).

How we respond

Proactive safeguards: pre‑/post‑generation filters, policy‑aware routing, hash‑matching and blocklists where appropriate.

Rapid removal: once we become aware of illegal content, we aim to block access and remove it swiftly.

Enforcement: account or feature restrictions, cooperation with platform partners/hosts, and lawful referrals to authorities when required.
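As an illustration only, the hash‑matching safeguard mentioned above works by comparing a digest of incoming media against a set of digests of known prohibited material. The sketch below uses exact SHA‑256 hashing for simplicity; the blocklist contents and match action are hypothetical, not Monica's actual implementation (production systems typically use perceptual hashing so near‑duplicates also match).

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited media.
# (This entry is the well-known digest of the empty byte string, used
# purely so the example is self-contained.)
BLOCKED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocked(content: bytes) -> bool:
    """Return True if the content's digest appears on the blocklist."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in BLOCKED_HASHES

assert is_blocked(b"") is True          # matches the listed digest
assert is_blocked(b"benign text") is False
```

Because matching is done on digests rather than the media itself, the blocklist can be shared and updated without storing or redistributing the prohibited content.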

5. Harmful content

We seek to provide a safe experience for all users. We may refuse, redact, or transform content that is likely to cause harm, such as:

Pornography or sexualised content;

Encouragement or instructions for suicide, self‑harm, or eating disorders;

Hate speech, bullying, and harassment;

Realistic graphic violence;

Promotion of serious violence or dangerous challenges likely to cause injury;

Drug misuse encouragement;

Content that promotes body shaming or glorifies depression.

Remedies we apply

Refusal or safe completion (e.g., provide supportive resources rather than instructions for self‑harm);

Age‑appropriate gating;

Down‑ranking or removal in shared/public areas of the product (Monica's space) in response to reports.

6. Product‑level safeguards

Agent workflows: For actions that could have real‑world consequences (e.g., placing orders), we use rate limits and human‑in‑the‑loop confirmations for sensitive steps.

AI interaction notice: Where interactions with AI are not obvious, we disclose that the user is interacting with AI.

Deepfakes: Media generated or significantly manipulated by the Service will be clearly disclosed as such (labels in UI and/or export metadata/watermarks), subject to lawful exceptions.

Image/audio/video generation or transformation: We enforce rules against sexual content involving minors, graphic violence, and other prohibited categories. Where identity, consent, or IP rights are implicated, we require proof of rights or refuse.

Third‑party tools and models: We prefer vendors and tools with published safety controls and configure them in a conservative safety posture wherever possible. (For example, when supported by a vendor API, we enable stricter filters by default.) We incorporate vendor‑specific safeguards as they are made available.

7. Proactive technology and human review

We use layered defences, including:

Policy‑aware orchestration and guardrails at prompt, planning, and tool‑execution stages;

Automated content classifiers and reasoning‑time checks;

Hash‑matching, blocklists, and rule engines;

Anomaly detection and safety logging (with data minimisation);

Human review for escalations, complex edge cases, and appeals.
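The layered approach above can be sketched as a chain of independent checks, where any layer may block content and escalate it for human review. This is an illustrative sketch only; the layer names, rules, and escalation policy are hypothetical, and real classifiers are ML models rather than the keyword rules standing in for them here.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allowed: bool
    reasons: list[str] = field(default_factory=list)
    escalate: bool = False  # route to human review

# Each layer maps text -> (ok, reason). Layers run in order and any
# failure short-circuits to a block plus escalation.
def blocklist_layer(text: str):
    return ("prohibited-term" not in text, "blocklist hit")

def classifier_layer(text: str):
    return ("violence" not in text, "classifier: violent content")

LAYERS = [blocklist_layer, classifier_layer]

def evaluate(text: str) -> Verdict:
    for layer in LAYERS:
        ok, reason = layer(text)
        if not ok:
            return Verdict(allowed=False, reasons=[reason], escalate=True)
    return Verdict(allowed=True)
```

Running layers in sequence keeps each defence simple and independently tunable: a new check can be added to the chain without modifying the others.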

8. Reporting concerns

If you encounter illegal or harmful content in Monica:

Use the in‑product Report option or our designated email channel (contact@monica.im).

Include context (what you asked, what you saw, timestamps, and any links).

Our target timelines (indicative, not an SLA):

Acknowledge receipt and triage within 48 hours;

Aim to complete assessment within 10 business days (complex cases may take longer).

We may remove content, restrict features or accounts, or notify hosting partners or authorities, consistent with law.

9. Appeals

If you think we made a mistake (for example, a refusal or content removal), you may appeal. We will re‑evaluate the context, consult additional reviewers where appropriate, and explain our final decision. When feasible, we will suggest paths to resubmit content in a compliant form.

10. Regional Safety Notes

UK Online Safety Act compliance (UK users)

Monica applies this Policy to address all categories of illegal content recognised by the UK Online Safety Act. UK users who believe that (i) we have not met our OSA duties, or (ii) our use of proactive technology conflicts with our Terms, may submit a complaint via the reporting mechanism described in Section 8. We will review, respond, and, where appropriate, adjust our processes.

DSA transparency and platform compliance (EU users)

We comply with the EU Digital Services Act. This includes: (i) clear "deepfake" disclosures (e.g., watermarks, metadata, or on-screen labels); (ii) a notice-and-action process for EU-facing hosting or sharing surfaces; (iii) priority handling of notices from designated trusted-flaggers.

COPPA and US law compliance (US users)

Monica is offered only to adults. We block sign-ups from users under 18, avoid collecting their data, and delete data and terminate accounts upon obtaining actual knowledge of under-18 use.

Other regions

Emerging online-safety frameworks are tracked in our internal Regulatory Watchlist and will be reflected in this section as our exposure grows.

11. Effective date & updates

This policy takes effect on the date shown above and may be updated as features, vendors, or laws change. Material updates will be highlighted in‑product or in our help centre.