AI Ethics & Guardrails for Autonomous Agents

Ethical principles and safety guardrails for autonomous AI agents. Explore how we align artificial intelligence with human values on the path to AGI.

Our autonomous AI agents are guided by 41 ethical principles across four categories. These AI safety guardrails ensure that artificial intelligence systems act transparently, stay aligned, and respect human values: essential foundations for responsible AGI development.

Required (8): Non-negotiable safety principles, always enforced

Recommended (10): Best practices enabled by default

Kindship Principles (7): Our unique approach to ethical AI

Optional (16): Additional frameworks you can enable

Our Commitment to AI Safety & Alignment

Every autonomous AI agent operates within these ethical boundaries. Required guardrails are always enforced to ensure safe artificial intelligence, while you can customize recommended and optional principles for your specific AGI alignment needs.
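As a rough sketch of what that customization could look like, consider the configuration below. The category keys mirror the four groups above, but the shape, field names, and function are illustrative assumptions, not a documented schema:

```python
# Hypothetical guardrail configuration; all names are illustrative.
AGENT_GUARDRAILS = {
    "required": {"enabled": True, "locked": True},      # always enforced, cannot be disabled
    "recommended": {"enabled": True, "locked": False},  # on by default, may be turned off
    "kindship": {"enabled": True, "locked": False},     # Kindship Principles
    "optional": {"enabled": False, "locked": False},    # opt-in frameworks
}

def set_category(config: dict, category: str, enabled: bool) -> None:
    # Refuse to change locked categories: required guardrails stay on.
    if config[category]["locked"]:
        raise ValueError(f"'{category}' guardrails are always enforced")
    config[category]["enabled"] = enabled

set_category(AGENT_GUARDRAILS, "optional", True)    # allowed
set_category(AGENT_GUARDRAILS, "required", False)   # raises ValueError
```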


Fail-Closed Safety Design

Our agents use a fail-closed approach to ethical guardrails. If the system ever fails to load an agent's guardrails due to a technical error, the agent automatically enters a conservative safety mode—refusing potentially harmful requests rather than proceeding without ethical constraints. This ensures that temporary system failures never result in unguarded AI behavior.

This is the opposite of "fail-open" designs where failures silently bypass safety systems. We believe ethical AI requires that safety mechanisms remain active even when things go wrong.
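A minimal Python sketch of the fail-closed pattern follows. The names (load_guardrails, handle_request, Guardrails) are illustrative assumptions, not our production code; what matters is the control flow, in which a loading failure routes to refusal, never to unguarded execution:

```python
from dataclasses import dataclass


@dataclass
class Guardrails:
    principles: list[str]

    def permits(self, request: str) -> bool:
        # Stand-in for real policy evaluation against the loaded
        # principles; here every request passes once guardrails load.
        return True


CONSERVATIVE_REFUSAL = (
    "I can't act on this request: my safety guardrails failed to "
    "load, so I'm operating in conservative safety mode."
)


def load_guardrails(agent_id: str) -> Guardrails:
    # Stand-in for fetching the agent's configured principles from
    # storage; a network or parse failure here would raise.
    return Guardrails(principles=["required:no-harm", "required:transparency"])


def run_agent(request: str) -> str:
    # Stand-in for the agent's normal, guarded execution loop.
    return f"Handled: {request}"


def handle_request(agent_id: str, request: str) -> str:
    # Fail-closed: if guardrails cannot be loaded, refuse rather
    # than proceed without ethical constraints.
    try:
        guardrails = load_guardrails(agent_id)
    except Exception:
        return CONSERVATIVE_REFUSAL
    if not guardrails.permits(request):
        return "This request conflicts with my ethical guardrails."
    return run_agent(request)
```

The key design choice is that the exception handler returns a refusal instead of falling through to run_agent, so no failure path ever reaches execution without guardrails in place.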