Not all agents deserve the same permissions. A graduated trust model where design system access scales with demonstrated competence.
flowchart LR
L0[Level 0: Sandbox]
L1[Level 1: Trusted]
L2[Level 2: Full Access]
L0 -->|earn trust| L1
L1 -->|prove competence| L2

The Problem
Today's design systems treat all consumers equally, whether a human expert, a junior developer, or an autonomous agent. An AI with no track record gets the same access as a seasoned engineer. The result: violations at scale, broken patterns propagated faster than humans can catch them.
The alternative, blocking AI entirely, surrenders competitive advantage. Organizations need a middle path: trust that is earned, not assumed.
The Proposal
Tiered permissions based on demonstrated competence. New agents start in a sandbox: full read access, but writes require human approval. As they accumulate successful interactions without violations, they graduate to higher trust levels.
Level 0: Read-only, all outputs reviewed
Level 1: Safe patterns pre-approved, edge cases flagged
Level 2: Full access, violations trigger demotion
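One way to make these tiers concrete is a static policy map. A minimal TypeScript sketch; every name here (TrustLevel, Policy, POLICIES) is an illustrative assumption, not an existing API:

// Hypothetical encoding of the three tiers as data.
type TrustLevel = 0 | 1 | 2;

interface Policy {
  read: boolean;
  write: "human-approved" | "safe-patterns-only" | "full";
  onViolation: "review" | "flag" | "demote";
}

const POLICIES: Record<TrustLevel, Policy> = {
  0: { read: true, write: "human-approved",     onViolation: "review" }, // sandbox: all outputs reviewed
  1: { read: true, write: "safe-patterns-only", onViolation: "flag"   }, // edge cases flagged
  2: { read: true, write: "full",               onViolation: "demote" }, // violations trigger demotion
};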
The system maintains an audit trail that records not just what an agent did, but whether the result held up: did the design pass review? Did it ship? Did users complain? Trust flows from outcomes.
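A sketch of what such an outcome-aware audit entry might look like; the field names and the success criterion are assumptions, not a defined schema:

// Hypothetical audit record: captures the action and its downstream fate.
interface AuditEntry {
  agentId: string;
  action: string;          // e.g. "modify Button.variant"
  timestamp: Date;
  passedReview?: boolean;  // did a human approve it?
  shipped?: boolean;       // did it reach production?
  userComplaints?: number; // post-ship signal
}

// A change only counts toward trust once it survives review and
// ships without complaints. The exact criterion is illustrative.
function succeeded(entry: AuditEntry): boolean {
  return entry.passedReview === true
    && entry.shipped === true
    && (entry.userComplaints ?? 0) === 0;
}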
An agent requests to modify a Button to the destructive variant:

agent.request({
  action: "modify",
  target: "Button.variant",
  value: "destructive"
})

The system checks whether the agent has the required trust level for this action:

system.evaluateTrust({
  agentId: "agent-7x9k",
  level: 1,
  action: "modify",
  pattern: "destructive-variant"
})
// Result: REQUIRES_REVIEW (Level 2 needed)

Insufficient trust triggers escalation to human review:

system.escalate({
  reason: "Agent below required trust level",
  requiredLevel: 2,
  currentLevel: 1,
  reviewQueue: "design-system-changes"
})

Consistent success leads to trust-level promotion:

system.promote({
  agentId: "agent-7x9k",
  from: 1,
  to: 2,
  reason: "50 successful modifications, 0 violations"
})
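The gate at the center of this walkthrough can be a small pure function: compare the level an action requires against the agent's current level and route the request accordingly. A TypeScript sketch; evaluateTrust and escalate here are hypothetical reconstructions, not the system's actual internals:

// Route a request to ALLOW or REQUIRES_REVIEW based on trust level.
type Decision = "ALLOW" | "REQUIRES_REVIEW";

interface Escalation {
  reason: string;
  requiredLevel: number;
  currentLevel: number;
  reviewQueue: string;
}

function evaluateTrust(currentLevel: number, requiredLevel: number): Decision {
  return currentLevel >= requiredLevel ? "ALLOW" : "REQUIRES_REVIEW";
}

function escalate(currentLevel: number, requiredLevel: number): Escalation {
  return {
    reason: "Agent below required trust level",
    requiredLevel,
    currentLevel,
    reviewQueue: "design-system-changes",
  };
}

// A Level 1 agent attempts an action that requires Level 2.
if (evaluateTrust(1, 2) === "REQUIRES_REVIEW") {
  console.log(escalate(1, 2)); // queued for human review
}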
Why Now
Enterprises can't decide whether to embrace or block AI tools; 53% remain paralyzed by integration uncertainty. Credentialing provides a structured answer: embrace with guardrails. Start conservative, expand as confidence grows.
This mirrors how organizations already handle human permissions—RBAC, graduated onboarding, probationary periods. The pattern exists; it just needs to be applied to machine consumers.
Prior Art
OAuth scopes and API rate limiting by tier. Certificate authorities and chains of trust. Code signing and notarization. GitHub's graduated permissions for Actions. The infrastructure for machine trust exists; design systems just haven't adopted it yet.
flowchart TD
Start[Start at Level 0] --> Interact[Successful interactions]
Interact --> Threshold{Threshold met?}
Threshold -->|yes| Promote[Promote one level]
Threshold -->|no| Continue[Continue observing]
Continue --> Interact
Promote --> Monitor[Monitor behavior]
Monitor --> Violation{Violation detected?}
Violation -->|yes| Demote[Demote one level]
Violation -->|no| Monitor
Demote --> Monitor
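The loop above reduces to a few lines of state transition. A sketch, assuming a per-level threshold of 50 clean interactions (the number comes from the promotion example earlier; the remaining names are hypothetical):

// Promotion/demotion lifecycle mirroring the flowchart.
interface AgentState {
  level: number;       // 0 (sandbox) through 2 (full access)
  cleanStreak: number; // successes since last promotion or violation
}

const PROMOTION_THRESHOLD = 50;
const MAX_LEVEL = 2;

function recordOutcome(state: AgentState, violation: boolean): AgentState {
  if (violation) {
    // Violation detected: demote one level and reset the streak.
    return { level: Math.max(0, state.level - 1), cleanStreak: 0 };
  }
  const streak = state.cleanStreak + 1;
  if (streak >= PROMOTION_THRESHOLD && state.level < MAX_LEVEL) {
    // Threshold met: promote one level and start observing again.
    return { level: state.level + 1, cleanStreak: 0 };
  }
  return { ...state, cleanStreak: streak };
}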