AI Policy
MOYA • AI Policy • Prepared for regulators, juries, partners, and global users
This policy defines what MOYA’s AI does and does not do. It exists to build trust through clarity: non-diagnostic guidance, no medical claims, human governance, and safety-by-design.
A boundary document designed for trust.
MOYA is an ethical, origin-based wellness intelligence platform. This policy clarifies the scope of AI outputs, the safety posture, and the governance principles that keep MOYA credible across jurisdictions.
Clear scope. Clear limits.
MOYA AI may:
- Offer educational guidance about wellness routines, herbs, and rituals in non-medical terms.
- Support habit formation with gentle routines, journaling prompts, and culturally aware context.
- Provide safety-forward suggestions, including conservative options and escalation guidance when appropriate.
- Promote informed choices by encouraging verification, sourcing transparency, and professional care for medical concerns.
MOYA AI does not:
- Diagnose conditions or interpret symptoms as a medical conclusion.
- Provide medical treatment or replace clinicians, pharmacists, or licensed professionals.
- Make medical claims, including “cure”, “treat”, or “heal” language, or disease-specific promises.
- Operate as a regulated health data processor or intentionally collect sensitive medical records.
Language discipline is a safety feature.
MOYA maintains strict language boundaries. We do not present herbs or routines as medical treatment. We prioritize conservative guidance, documented sourcing, and escalation to qualified professionals for medical concerns.
- Escalation-first posture: if users describe urgent or severe concerns, MOYA prompts professional care.
- Conservative guidance: when uncertainty exists, MOYA defaults to safety and caution.
- Education-only framing: MOYA describes traditions and practices as informational, not prescriptive treatment.
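As an illustration only, the escalation-first posture described above could be sketched as a simple rule check. The trigger phrases, message text, and function names here are hypothetical assumptions for this sketch, not MOYA’s actual safety configuration:

```python
# Hypothetical sketch of an escalation-first check.
# Trigger phrases and the escalation message are illustrative
# assumptions, not MOYA's actual safety configuration.

ESCALATION_TRIGGERS = {
    "chest pain", "difficulty breathing", "severe bleeding",
    "suicidal", "overdose", "pregnant", "emergency",
}

ESCALATION_MESSAGE = (
    "This may need professional medical care. "
    "Please contact a qualified clinician or local emergency services."
)

def requires_escalation(user_message: str) -> bool:
    """Return True if the message mentions an urgent or severe concern."""
    text = user_message.lower()
    return any(trigger in text for trigger in ESCALATION_TRIGGERS)

def respond(user_message: str) -> str:
    """Escalate first; only then provide education-only guidance."""
    if requires_escalation(user_message):
        return ESCALATION_MESSAGE
    return "Here is some general, non-medical wellness information."
```

The point of the sketch is ordering: the escalation check runs before any educational content is generated, so urgent concerns are routed to professional care first.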
AI is bounded by governance, not hype.
MOYA is designed with human oversight and policy constraints. Where higher risk exists, MOYA prioritizes escalation, verified references, and safe defaults. Governance is part of product architecture.
Governance principles
- Policy-first outputs: the system avoids disallowed medical framing and high-risk claims.
- Safety escalation: for severe symptoms, emergencies, pregnancy, and complex conditions, MOYA escalates to professionals.
- Continuous review: feedback and audits are used to improve safety and clarity over time.
- Traceable knowledge posture: MOYA favors structured knowledge and sourcing transparency over improvisation.
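To make the policy-first principle concrete, here is a minimal, hypothetical sketch of an output-side check that blocks disallowed medical-claim language before a draft is shown to a user. The banned-claim patterns and replacement text are illustrative assumptions, not MOYA’s actual policy engine:

```python
# Hypothetical sketch of a policy-first output check.
# The banned-claim patterns are illustrative assumptions,
# not MOYA's actual policy engine.
import re

BANNED_CLAIM_PATTERNS = [
    r"\bcures?\b", r"\btreats?\b", r"\bheals?\b",
    r"\bguaranteed to\b",
]

def violates_claim_policy(draft_output: str) -> bool:
    """Flag drafts that use disallowed medical-claim language."""
    return any(re.search(p, draft_output, re.IGNORECASE)
               for p in BANNED_CLAIM_PATTERNS)

def finalize(draft_output: str) -> str:
    """Block non-compliant drafts instead of publishing them."""
    if violates_claim_policy(draft_output):
        return ("I can share educational context, but I can't make "
                "medical claims. For medical concerns, please consult "
                "a qualified professional.")
    return draft_output
```

The design choice this illustrates is that compliance is enforced at the output boundary: a non-compliant draft is replaced with safe, education-only framing rather than edited or published.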
Safety-by-design across cultures and languages.
MOYA serves users across cultures. Safety includes minimizing bias, respecting cultural context, and avoiding harmful assumptions. Multilingual delivery is treated as a safety surface, not a marketing feature.
Core safety commitments
- Avoid harmful stereotypes: no cultural essentialism or biased assumptions.
- Uncertainty disclosure: MOYA is transparent when information is limited or context-specific.
- User safety over completion: MOYA prioritizes safe, conservative guidance rather than confident speculation.
- Continuous improvement: MOYA refines policies as new risks and contexts emerge.
Designed to be credible across jurisdictions.
MOYA’s AI posture is intentionally conservative. The platform is structured as wellness intelligence and education, complementary to professional care, with clear boundaries that reduce regulatory risk.
Trust is engineered.
This policy is part of MOYA’s architecture. We do not scale distribution before we scale trust. Ethical AI is not a claim; it is a boundary system.