The standard for AI agent security, safety and reliability
100+ Fortune 500 CISOs built AIUC-1. What this changes for those governing AI in markets where regulation is still being written.
AIUC-1 Standard
AIUC-1 covers core enterprise risks
What AIUC-1 covers for AI agents.
Data & Privacy
Protecting users and enterprises against data & privacy concerns through customer data policies, access controls, and safeguards against data leakage, IP exposure, and unauthorized training on user information.
Security
Preventing unauthorized access to AI systems through adversarial testing, access controls, monitoring, and safeguards against prompt injection and jailbreak attempts.
Safety
Keeping customers safe by mitigating harmful AI outputs and protecting brand reputation through rigorous third-party testing, monitoring, and safeguards including human review of flagged outputs.
Reliability
Preventing unreliable AI outputs that cause customer harm through testing for hallucinations and unauthorized tool calls, and implementing safeguards to detect and prevent these failures.
Accountability
Enforcing strong governance and oversight through formal approval processes, AI failure plans, vendor due diligence, and oversight mechanisms with explicitly defined ownership.
Society
Preventing AI from enabling catastrophic societal harm through guardrails against cyber exploitation, system misuse, and threats to national security including chemical, biological, and nuclear risks.
Operationalizing trusted frameworks
AIUC-1 Technical Contributors & Consortium
"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, Former CISO, Google Cloud
The challenge
The SOC 2 for AI already exists
AIUC-1 does for AI agents what SOC 2 did for SaaS: turns trust into verifiable certification. Cisco, Google, Anthropic, Stanford, MITRE and OWASP contributed. The standard is not a promise. It is a fact.
Governance hasn't kept up
The framework covers 6 pillars. But each jurisdiction has different regulation, maturity and adoption pace. What works in the US doesn't directly apply to Brazil, Mexico or Colombia. Regulatory translation is the real gap.
Whoever governs first, leads
Companies adopting AI agents without a governance framework operate with risk that doesn't show on the dashboard. Whoever maps regulatory differences before the market does, sets the standard.
Regulatory map
Where each market stands
5 markets. 5 regulations. 1 global framework.
Brazil
Under development: AI regulation built on LGPD. Bill 2338/2023 in progress. Largest digital economy in LATAM.
Mexico
Intermediate: New data protection law (2025) with AI provisions. Multiple AI bills in Congress.
Argentina
Pro-innovation: ArgenIA national plan. Light-touch regulation approach. AI bills introduced in 2025.
Colombia
Experimental: SIC AI guidelines (2024). CONPES 4144 national AI policy. Regulatory sandbox active.
Chile
Most advanced: AI Bill approved by Chamber (Oct 2025), pending Senate. Risk-based framework aligned with EU AI Act.
The market has normalized the idea that "we don't have AI regulation yet." But absence of regulation is not absence of risk. It is absence of a map. And whoever operates without a map, operates in the dark.
Free guide
What the market hasn't mapped yet
- → 6-pillar mapping for each jurisdiction (BR, MX, AR, CO, CL)
- → Gap analysis by market
- → Local frameworks mapped to AIUC-1
- → Adequacy roadmap by country
- → Readiness checklist by pillar
15 pages. Zero fluff. 100% actionable.
Access the Material → Operationalization Guide for Latin America
Open Cybersecurity
The mapping is available.
Fill out the form to receive the full AIUC-1 operationalization guide.
Access the Material → Restricted material. No strings attached.