Policies & Enforcement
Overview
Policies define how AI systems are allowed to operate in your environment. They control which models can be used, what data can be accessed, and how requests are handled.
In Peridot, policies are enforced in real time and applied across all AI interactions.
What Policies Control
Policies can control:
Model access (e.g. restrict GPT-4 to specific teams)
Data access (e.g. block sensitive data from being sent to external models)
Integration usage (e.g. restrict certain APIs or systems)
Request routing (e.g. route requests to approved providers)
Output handling (e.g. require structured responses or citations)
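As a concrete illustration of the first bullet, a model-access restriction can be thought of as an allowlist keyed by model. The sketch below is hypothetical (the `MODEL_ACCESS` table and function names are not Peridot's actual API); it only shows the shape of the check:

```python
# Hypothetical model-access check: restrict GPT-4 to specific teams.
MODEL_ACCESS = {
    "gpt-4": {"ml-platform", "research"},  # only these teams may use GPT-4
    "gpt-3.5-turbo": None,                 # None means no team restriction
}

def can_use_model(model: str, team: str) -> bool:
    """Return True if the given team is allowed to call the model."""
    if model not in MODEL_ACCESS:
        return False  # unknown models are denied by default
    allowed_teams = MODEL_ACCESS[model]
    return allowed_teams is None or team in allowed_teams
```

Denying unknown models by default is a deliberate choice here: a fail-closed default keeps a newly released model out of use until someone explicitly adds it to the policy.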
Policy Structure
Each policy consists of three components: conditions, rules, and actions.
Conditions
Define when the policy applies.
Examples:
User role or group
Application or system
Data classification
Model or provider
Rules
Define what is allowed or restricted.
Examples:
Allow / deny model usage
Restrict data types
Require specific model providers
Actions
Define what happens when a rule is triggered.
Examples:
Block request
Reroute to approved model
Trigger approval workflow
Log event or create incident
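The condition / rule / action flow above can be sketched as a single data structure plus one evaluation step. All names here (the `Policy` class, its fields, the `evaluate` function) are illustrative assumptions, not Peridot's schema:

```python
from dataclasses import dataclass, field
from typing import Callable

# In this sketch a request is just a bag of attributes,
# e.g. {"role": "analyst", "model": "gpt-4", "data_class": "sensitive"}.
Request = dict

@dataclass
class Policy:
    name: str
    condition: Callable[[Request], bool]  # when the policy applies
    rule: Callable[[Request], bool]       # True if the request violates the rule
    actions: list = field(default_factory=list)  # taken when the rule triggers

def evaluate(policy: Policy, request: Request) -> list:
    """Return the actions to take for this request; an empty list means allowed."""
    if policy.condition(request) and policy.rule(request):
        return policy.actions
    return []
```

Note that actions only fire when both the condition matches and the rule is violated; a request outside the policy's conditions passes through untouched.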
Example Policy
Name: Restrict External Model Usage
Condition: Sensitive data detected
Rule: External models not allowed
Action: Block request and create incident
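The example policy above could also be written as declarative configuration and checked in code. This is a sketch under assumed field names (`condition`, `rule`, `actions`, and the provider list are all hypothetical, not Peridot's real schema):

```python
# The example policy expressed as declarative configuration.
POLICY = {
    "name": "Restrict External Model Usage",
    "condition": {"data_classification": "sensitive"},
    "rule": {"deny_providers": {"openai", "anthropic"}},  # hypothetical external providers
    "actions": ["block_request", "create_incident"],
}

def enforce(policy: dict, request: dict) -> list:
    """Return the policy's actions if the request matches the condition and violates the rule."""
    applies = all(request.get(k) == v for k, v in policy["condition"].items())
    violates = request.get("provider") in policy["rule"]["deny_providers"]
    return policy["actions"] if applies and violates else []
```

A sensitive request routed to a denied provider returns both actions; any other request returns an empty list and proceeds.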
Best Practices
Start with visibility before enforcement
Use scoped policies (workspace, role, system)
Avoid overly broad restrictions initially
Pair policies with incident workflows
Next Steps
Learn about [Policy-Based Routing]
Understand [Enforcement Actions]
Configure your first policy in the dashboard