Policies

Learn how to write and manage NjiraAI policies.

Overview

NjiraAI policies define what the system allows, blocks, or modifies at safety boundaries. Policies are evaluated by the Intelligence service against every audited request, producing a verdict (ALLOW, BLOCK, MODIFY, or REQUIRE_APPROVAL).
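For illustration, a verdict attached to an audited request might look like the following. This shape is an assumption for illustration only, not a documented response schema; only the verdict names come from the text above:

```json
{
  "verdict": "BLOCK",
  "policy_id": "pii_guard",
  "rule_id": "ssn_pattern",
  "reason": "SSN pattern detected",
  "severity": "critical"
}
```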

Policies are managed through the NjiraAI Console or the API. They use a YAML-based format for defining rules.

For end-to-end lifecycle operations (create/version/activate/deprecate/simulate/benchmark), see Policy Management.


Policy file format

Each policy is a YAML document with the following structure:

id: my_policy
version: "1.0.0"
description: |
  Human-readable description of what this policy protects.

rules:
  - id: rule_unique_id
    type: pattern        # pattern | regex | hazard | threshold
    match: "text to match"
    action: BLOCK        # ALLOW | BLOCK | MODIFY
    reason: "Why this rule exists"
    severity: high       # critical | high | medium | low

metadata:
  author: team-name
  category: security
  last_updated: "2026-01-01"

Required fields

  • id — Unique identifier for the policy pack
  • version — Semantic version string
  • rules — Array of rule objects
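Putting the required fields together, a minimal valid policy pack looks like this (identifiers here are illustrative):

```yaml
id: minimal_guard
version: "1.0.0"

rules:
  - id: block_example
    type: pattern
    match: "forbidden phrase"
    action: BLOCK
    reason: "Example rule"
```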

Rule fields

  • id (required) — Unique rule identifier within the pack
  • type (required) — Match strategy (see Rule types below)
  • match (required) — Pattern, regex, hazard category, or keyword to match
  • action (required) — Verdict to apply: ALLOW, BLOCK, or MODIFY
  • reason (required) — Human-readable explanation
  • severity (optional) — critical, high, medium, or low
  • suggestion (required for MODIFY) — Replacement or redaction text for MODIFY actions
  • threshold (required for threshold rules) — Numeric threshold value

Rule types

  • pattern — Exact substring match (case-insensitive). Example match: "wire transfer"
  • regex — Regular expression. Example match: "\\d{3}-\\d{2}-\\d{4}" (SSN pattern)
  • hazard — Hazmat scanner category. Example match: "prompt_injection"
  • threshold — Numeric threshold on a keyword. Example match: "pay" with threshold: 1000
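The four rule types can be mixed within one pack. The sketch below shows one rule of each type; the match values are illustrative, taken from the examples above:

```yaml
rules:
  - id: match_wire_transfer
    type: pattern
    match: "wire transfer"
    action: BLOCK
    reason: "Wire transfer language detected"
    severity: high

  - id: match_ssn
    type: regex
    match: "\\d{3}-\\d{2}-\\d{4}"
    action: MODIFY
    suggestion: "[REDACTED]"
    reason: "SSN pattern redacted"
    severity: critical

  - id: match_injection
    type: hazard
    match: "prompt_injection"
    action: BLOCK
    reason: "Hazmat scanner flagged prompt injection"
    severity: critical

  - id: match_high_payment
    type: threshold
    match: "pay"
    threshold: 1000
    action: BLOCK
    reason: "Payment above configured threshold"
    severity: high
```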

Built-in policy packs

NjiraAI ships with starter packs for common risks:

  • pii_guard — Personal data leakage: SSN patterns, credit card numbers, email exfiltration
  • sql_injection_guard — SQL injection attacks: DROP/DELETE statements, UNION-based injection
  • prompt_injection_guard — Prompt injection attempts: system prompt override, role manipulation
  • code_injection_guard — Code execution attacks: shell commands, eval patterns
  • payments_guard — Financial risks: wire transfers, high-value transactions

These are available out-of-the-box and can be activated from the Console under Policies.


Creating custom policies

  1. In the NjiraAI Console, navigate to Policies → Create
  2. Define your policy in YAML format:
id: my_custom_guard
version: "1.0.0"
description: "Block access to internal services"

rules:
  - id: block_internal_urls
    type: regex
    match: "https?://internal\\."
    action: BLOCK
    reason: "Internal URL access blocked"
    severity: high

  - id: redact_api_keys
    type: regex
    match: "sk-[a-zA-Z0-9]{32,}"
    action: MODIFY
    reason: "API key detected and redacted"
    severity: critical
    suggestion: "[REDACTED]"

metadata:
  author: your-team
  category: custom
  last_updated: "2026-01-01"
  3. Click Save — the policy is active immediately

  4. Verify the policy loaded via the programmatic API:

curl -s https://api.njira.ai/v1/sdk/policies \
  -H "Authorization: Bearer nj_live_YOUR_KEY" | jq '.policies[].id'
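The same check can be scripted. The sketch below applies the equivalent of the jq filter in Python against an illustrative response body; the response shape is assumed from the curl example above, not a documented schema:

```python
import json

def policy_ids(response_body: str) -> list[str]:
    """Extract policy IDs from a /v1/sdk/policies response body."""
    data = json.loads(response_body)
    return [p["id"] for p in data.get("policies", [])]

# Illustrative payload only; fetch the real body with your HTTP client of choice.
sample = '{"policies": [{"id": "pii_guard"}, {"id": "my_custom_guard"}]}'
print(policy_ids(sample))  # ['pii_guard', 'my_custom_guard']
```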

Policy versioning and activation

  • Version your policy packs using semantic versioning in the version field.
  • The Console keeps a history of policy versions for easy rollback.
  • Activate or deactivate policies from the Policies page in the Console.

A recommended rollout for a new policy:

  1. Shadow mode — Enable shadow mode in the Console. Verdicts are logged but not enforced.
  2. Review — Check audit traces in the Console to verify the policy behaves as expected.
  3. Active — Switch to active mode to turn on enforcement.

See Shadow to Enforce for a detailed walkthrough.


Next steps