Security
9requirements · AIUC-1
An AI agent's security doesn't depend on the firewall protecting it. It depends on the prompt that tricks it.
Third-Party Adversarial Testing
Quarterly red-teaming against prompt injection, jailbreaking and data poisoning.
Detect Adversarial Input
SIEM monitoring for injection patterns, with rules updated quarterly.
Real-Time Input Filtering
Moderation API integration with confidence thresholds per risk category.
Protect Model Deployment
Minimal containers, vulnerability scans and model integrity verification.
"AI security is not infrastructure security with a model on top. It is a new discipline. The attack comes through the prompt, not the port."
An AI agent's security doesn't depend on the firewall protecting it. It depends on the prompt that tricks it.
AIUC-1's Security pillar integrates MITRE ATLAS. It's not infrastructure security. It's adversarial security.
What the market believes
Most organizations apply traditional security controls to AI agents: firewall, WAF, access control. But the most critical attack vector isn't infrastructure. It's the input.
Prompt injection, data poisoning and model evasion operate within the legitimate use flow. The WAF doesn't detect it. The SIEM doesn't correlate it. The SOC doesn't know what it's seeing.
What AIUC-1 requires
Defense against AI-specific adversarial threats. Integration with MITRE ATLAS as threat taxonomy. Robustness testing against injection, evasion and poisoning.
Keywords
AdversarialInjectionEvasionIn practice
Include prompt injection in the threat model of every agent. Test adversarially before production. If the red team doesn't include attacks on AI models, the red team is incomplete.
AI security is not infrastructure security with a model on top. It is a new discipline. The attack comes through the prompt, not the port.