Core Concepts
Overview
Peridot is built around a small set of core concepts that define how AI systems are discovered, governed, and controlled.
Understanding these concepts is critical to operating Peridot effectively. They map directly to how AI is actually used in enterprise environments: across users, systems, data, and models.
The Core Model
Peridot operates as a control layer across your AI ecosystem. At a high level:
AI systems generate requests
Data flows through those systems
Policies govern how those requests are handled
Incidents capture violations or risks
Each concept below represents a part of that lifecycle.
Workspaces
A workspace is the top-level container for your organization in Peridot.
It defines:
Users and access
Connected systems and integrations
Policies and governance rules
Audit boundaries
Large organizations may use multiple workspaces to separate business units or environments.
Environments
Environments allow you to separate development, staging, and production systems.
Each environment can have:
Independent policies
Different model access
Separate integrations
Distinct access controls
This ensures that experimentation does not impact production systems.
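One way to picture per-environment separation is as a configuration map keyed by environment name. The sketch below is purely illustrative: the `ENVIRONMENTS` dict, its keys, and the `model_allowed` helper are hypothetical and do not reflect Peridot's actual configuration schema.

```python
# Hypothetical per-environment configuration. Each environment carries
# its own model allowlist and data-sensitivity ceiling, so development
# experiments never loosen production controls.
ENVIRONMENTS = {
    "development": {
        "allowed_models": ["gpt-4", "experimental-llm"],
        "max_data_class": "internal",
    },
    "production": {
        "allowed_models": ["gpt-4"],
        "max_data_class": "sensitive",
    },
}

def model_allowed(env: str, model: str) -> bool:
    """Check a model against the allowlist for one environment."""
    return model in ENVIRONMENTS[env]["allowed_models"]

print(model_allowed("development", "experimental-llm"))  # True
print(model_allowed("production", "experimental-llm"))   # False
```

Because each environment is a self-contained block, adding a staging tier is just another entry in the map rather than a change to any shared rule.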
AI Inventory
The AI Inventory provides a complete view of all AI systems used across your organization.
This includes:
Sanctioned tools (approved systems)
Shadow AI (unsanctioned or unknown usage)
AI-generated applications and workflows
Inventory is the foundation of visibility—without it, governance is not possible.
Data Flows
Data Flows represent how information moves into and out of AI systems.
This includes:
Inputs sent to models
Outputs generated by AI systems
Data retrieved from internal systems
Peridot tracks these flows to detect risk, enforce policies, and maintain auditability.
Data Classification
Data classification assigns sensitivity levels to data processed by AI systems.
Examples include:
Public
Internal
Sensitive
Restricted
Classification lets policies apply different rules depending on the risk of the data involved.
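The four levels above form an ordered scale, which is what makes them useful to policies: a rule can compare a datum's sensitivity against a ceiling. A minimal sketch, assuming an ordered-enum representation (the `Sensitivity` enum and `is_allowed` helper are illustrative, not Peridot's API):

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    # Ordered scale: a higher value means higher risk.
    PUBLIC = 0
    INTERNAL = 1
    SENSITIVE = 2
    RESTRICTED = 3

def is_allowed(data_level: Sensitivity, max_level: Sensitivity) -> bool:
    """Return True if data at `data_level` may be sent to a system
    whose policy permits at most `max_level`."""
    return data_level <= max_level

# A sanctioned external model might be capped at INTERNAL:
print(is_allowed(Sensitivity.PUBLIC, Sensitivity.INTERNAL))      # True
print(is_allowed(Sensitivity.RESTRICTED, Sensitivity.INTERNAL))  # False
```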
Policies
Policies define what is allowed and what is restricted across AI systems.
They control:
Which models can be used
What data can be processed
How requests are routed
What actions are taken when rules are triggered
Policies are evaluated in real time for every AI interaction.
Enforcement Actions
Enforcement actions define how policies are applied.
These include:
Blocking requests
Rerouting to approved systems
Triggering approvals
Logging events
Creating incidents
They ensure that governance is not theoretical—it is enforced.
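A common way to wire policy outcomes to concrete behavior is a dispatch table from action names to handlers. This is a hypothetical sketch: the handler names mirror the actions listed above, but the functions themselves are placeholders, not Peridot internals.

```python
def block(req):
    # Reject the request outright.
    return {"status": "blocked", "request": req}

def log_event(req):
    # Record the event without interfering.
    return {"status": "logged", "request": req}

def create_incident(req):
    # Open an incident for investigation.
    return {"status": "incident_created", "request": req}

# Dispatch table: policy outcome -> enforcement handler.
ACTIONS = {
    "block": block,
    "log": log_event,
    "incident": create_incident,
}

def enforce(action: str, req: dict) -> dict:
    # Unrecognized actions fall back to logging so nothing is dropped silently.
    handler = ACTIONS.get(action, log_event)
    return handler(req)

print(enforce("block", {"model": "gpt-4"})["status"])  # blocked
```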
Incidents
Incidents represent events where risk or policy violations occur.
They provide a structured way to:
Detect issues
Investigate activity
Respond to risk
Maintain audit records
Incidents turn raw signals into actionable workflows.
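The detect/investigate/respond/record steps above suggest a small state machine with a built-in audit trail. The lifecycle states and field names below are illustrative assumptions, not Peridot's incident schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical lifecycle, following the steps listed above.
STATES = ("detected", "investigating", "responding", "closed")

@dataclass
class Incident:
    summary: str
    status: str = "detected"
    history: list = field(default_factory=list)

    def advance(self) -> None:
        """Move to the next lifecycle state, keeping a timestamped trail."""
        i = STATES.index(self.status)
        if i + 1 < len(STATES):
            self.status = STATES[i + 1]
            self.history.append(
                (datetime.now(timezone.utc).isoformat(), self.status)
            )

inc = Incident("Restricted data sent to an unsanctioned model")
inc.advance()
print(inc.status)  # investigating
```

Keeping the transition history on the incident itself is what turns a raw signal into an auditable workflow: the record of who acted, and when, travels with the incident.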
Integrations
Integrations connect Peridot to your systems:
Cloud platforms (AWS, Azure, GCP)
SaaS tools (Slack, Jira, ServiceNow)
Identity providers
Model providers
They enable discovery, monitoring, and enforcement across your environment.
Audit Logs
Audit logs provide a complete record of all activity in Peridot.
They capture:
Requests and responses
Policy evaluations
Enforcement actions
User activity
This ensures full traceability and compliance.
How These Concepts Work Together
These concepts form a continuous loop:
Inventory discovers AI systems
Data Flows track information movement
Classification identifies risk
Policies define rules
Enforcement applies control
Incidents capture violations
Audit logs record everything
This loop is what enables Peridot to function as a true control layer.
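The loop above can be sketched end to end as a single request handler, with each stage stubbed out. Everything here is a simplified illustration under stated assumptions: the keyword-based classifier, the one-rule policy, and the in-memory logs stand in for far richer machinery.

```python
AUDIT_LOG = []   # Audit: every evaluation is recorded
INCIDENTS = []   # Incidents: violations become structured records

def classify(text: str) -> str:
    # Stand-in classifier; real systems combine patterns and ML.
    return "restricted" if "ssn" in text.lower() else "internal"

def evaluate_policy(data_class: str) -> str:
    # Stand-in policy: restricted data may not leave.
    return "block" if data_class == "restricted" else "allow"

def handle(request: dict) -> str:
    data_class = classify(request["prompt"])   # Classification identifies risk
    action = evaluate_policy(data_class)       # Policies define rules
    if action == "block":                      # Enforcement applies control
        INCIDENTS.append({"request": request, "reason": data_class})
    AUDIT_LOG.append(                          # Audit logs record everything
        {"request": request, "class": data_class, "action": action}
    )
    return action

print(handle({"model": "gpt-4", "prompt": "Summarize this memo"}))  # allow
print(handle({"model": "gpt-4", "prompt": "Look up this SSN"}))     # block
```

Note that the audit log grows on every request, allowed or blocked, while incidents are created only on violations; that asymmetry is what keeps the loop both complete and actionable.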
In Practice
In a production environment:
All AI activity is visible
Data movement is tracked
Policies are enforced consistently
Incidents are handled systematically
Every action is auditable
Next Steps
Learn how this system operates in How Peridot Works
Begin setup with the Quickstart Guide