Never let your AI act on unverified memory.

Your AI agent is about to act on memory it hasn't verified — Sgraal is the verification.

One API call. Plain English answer. 15ms.

One wrong memory. One irreversible action. Sgraal stops it before it happens.

pip install sgraal
Free tier: 10,000 decisions/month · No credit card required
See benchmark → Examples
$ curl -X POST https://api.sgraal.com/v1/check \
  -H "Authorization: Bearer sg_demo_playground" \
  -H "Content-Type: application/json" \
  -d '{"memories": ["Deploy target is production",
                    "API key is sk-proj-abc123"]}'
# Response:
{
  "safe": false,
  "reason": "Memory contains a likely secret (API key).",
  "action": "Remove secrets from memory before proceeding."
}
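The same playground call can be made from Python. A minimal stdlib sketch, assuming only what the curl example above shows (the endpoint, the demo token, and the response shape):

```python
# Sketch: the memory-safety check from Python, mirroring the curl call above.
# Assumes /v1/check accepts a JSON body with a "memories" list and returns
# {"safe": ..., "reason": ..., "action": ...} as shown.
import json
import urllib.request

API_URL = "https://api.sgraal.com/v1/check"

def build_check_request(memories, token="sg_demo_playground"):
    """Build the HTTP request for a memory-safety check."""
    body = json.dumps({"memories": memories}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def check_memories(memories):
    """POST the memories and decode the plain-English verdict."""
    req = build_check_request(memories)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    verdict = check_memories(["Deploy target is production",
                              "API key is sk-proj-abc123"])
    print(verdict["safe"], verdict["reason"])
```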
34% Reliability ↑
15ms Preflight
355 Endpoints
27 Integrations

Sgraal sees what your agents are about to do — including the decisions that were blocked and never happened.


Systemic Risk Detection

Automatically detect recursive loops or destructive cascading actions before they commit to your core infrastructure.


Near-Miss Database

Learn from 'shadow errors' that never happened. Sgraal logs blocked risks to refine future decision weights.


Organizational Hallucination

Cross-reference agent intents with your organizational policies to eliminate hallucinated permissions.


Always-on Agents

Your agent runs while you sleep. Sgraal validates every memory access before any irreversible action — even at 3am.

Decide

Preflight validation before every agent action. BLOCK, WARN, ASK_USER, or USE_MEMORY — with full explainability.

Learn more →
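The four verdicts can be consumed with a thin dispatcher in the agent loop. A minimal sketch, assuming the preflight response carries a `decision` field set to one of these values (the handler names are illustrative):

```python
# Sketch: routing Sgraal's four preflight verdicts in an agent loop.
# Assumes the preflight response includes a "decision" field with one of
# the values shown above; everything else here is illustrative.

def route_decision(preflight: dict, action, ask_user):
    """Dispatch an agent action based on a preflight verdict."""
    decision = preflight.get("decision")
    if decision == "BLOCK":
        # Hard stop: never run the action, surface the explanation.
        return {"ran": False, "why": preflight.get("reason", "blocked")}
    if decision == "WARN":
        # Proceed, but attach the warning for the audit trail.
        return {"ran": True, "result": action(),
                "warning": preflight.get("reason")}
    if decision == "ASK_USER":
        # Defer to a human before anything irreversible happens.
        return {"ran": ask_user(preflight), "deferred": True}
    if decision == "USE_MEMORY":
        # Memory verified: act normally.
        return {"ran": True, "result": action()}
    raise ValueError(f"unknown decision: {decision!r}")
```

For example, `route_decision({"decision": "BLOCK", "reason": "stale memory"}, deploy, confirm)` returns without ever calling `deploy`.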

Protect

Write firewall, poisoning detection, tamper verification — threats stopped before they reach your agent.

Learn more →

Comply

EU AI Act, GDPR, HIPAA, FDA 510(k) — built into every preflight call.

Learn more →

Scale

Self-improving thresholds, autonomous healing, 27 production integrations — Sgraal runs itself.

Learn more →

Any AI. Any memory. Any stack.

Seamlessly integrates with your existing agent architecture.

See all 27 integrations →

AI Agents

CrewAI · Microsoft AutoGen · OpenAI Agents · Semantic Kernel

Frameworks

LangChain · LlamaIndex · Haystack

Infrastructure

Cloudflare Workers · Edge SDK · Zapier · Make

integration_request (HTTPS POST)
{
  "headers": {
    "Authorization": "Bearer sg_live_..."
  },
  "body": {
    "agent_id": "agent-payments",
    "memory_state": [
      {"id": "mem_001", "content": "User balance: $50,000",
       "type": "semantic", "timestamp_age_days": 3}
    ],
    "action_type": "irreversible",
    "domain": "fintech"
  }
}
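One detail the request body implies is computing `timestamp_age_days` from each memory's stored timestamp. A minimal sketch of assembling a `memory_state` entry from a stored record, where the field names mirror the example above and the helper itself is illustrative:

```python
# Sketch: assembling a memory_state entry as shown in the request above,
# deriving timestamp_age_days from the memory's creation time. Field names
# mirror the example request; the helper is illustrative.
from datetime import datetime, timezone

def to_memory_state(mem_id, content, mem_type, created_at, now=None):
    """Convert a stored memory record into a Sgraal memory_state entry."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - created_at).days
    return {
        "id": mem_id,
        "content": content,
        "type": mem_type,
        "timestamp_age_days": age_days,
    }
```

With a record created three days ago, this yields `"timestamp_age_days": 3`, matching the example payload.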

Model Context Protocol

Add Sgraal as a tool to any MCP-compatible agent host.

{
  "mcpServers": {
    "sgraal": {
      "command": "npx",
      "args": ["@sgraal/mcp-server"],
      "env": { "SGRAAL_API_KEY": "YOUR_KEY" }
    }
  }
}

Free

$0 /mo

10,000 decisions/month

Start free
Most Popular

Pro

$99 /mo

250,000 decisions/month

Go Pro

Team

$499 /mo

2,500,000 decisions/month

Start Team

Enterprise

Custom

Unlimited decisions

Contact sales

🚀 Beta pricing — all features available on the free tier. Pro and Team tiers launching soon.

See full pricing details →

Validated in the Wild

Joint benchmark with Grok (xAI) across 8 adversarial corpora. Independent builds, side-by-side results.

Round 1 — Sponsored Drift

COMPLETE

60 cases · affiliate bias · brand manipulation

Sgraal F1 = 1.000
Grok F1 = 0.98

Round 2 — Subtle Drift

COMPLETE

59 cases · commercial_intent 0.30–0.55

Sgraal F1 = 1.000 · FP=0 · FN=0
Grok F1 = 0.98 · 2 false negatives

Round 3 — Hallucination

COMPLETE

60 cases · confident fabrication · multi-hop echo · cross-agent amplification

Sgraal F1 = 1.000 · TP=239 · FP=0 · FN=0
Grok F1 = 1.000

Round 4 — Real-world Propagation

COMPLETE

90 cases · memory injection · cross-agent drift · RAG poisoning · API drift

Sgraal F1 = 1.000 · 90/90 green
Grok F1 = 1.000 · <180ms · blast radius <2%

Round 6 — Memory Time Attack

COMPLETE

Timestamp forgery detection. Old decisions disguised as fresh, bypassing Weibull decay.

Sgraal F1 = 1.000 · 60/60
New field timestamp_integrity: VALID | SUSPICIOUS | MANIPULATED
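For readers unfamiliar with the term, "Weibull decay" presumably weights memory trust by age using the standard Weibull survival curve; the shape and scale Sgraal actually uses are not stated here, so this is only the textbook form:

```latex
% Standard Weibull survival (reliability) function as an age weight:
% w(t) = trust weight of a memory t days old,
% \lambda = scale (characteristic life), k = shape.
w(t) = \exp\!\left[-\left(\frac{t}{\lambda}\right)^{k}\right]
```

Forging a smaller age t inflates w(t), which is why timestamp integrity has to be verified as a separate signal rather than trusted from the decay weight alone.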

Round 7 — Identity Drift

COMPLETE

Gradual role and authority escalation across agent hops.

Sgraal F1 = 1.000 · 90/90
New field identity_drift: CLEAN | SUSPICIOUS | MANIPULATED

Round 8 — Silent Consensus Collapse

COMPLETE

Self-reinforcing false consensus — no single agent flags the error.

Sgraal F1 = 1.000 · 90/90
New field consensus_collapse: CLEAN | SUSPICIOUS | MANIPULATED
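Rounds 6, 7, and 8 each add a tri-state response field. A conservative consumer treats anything other than the clean value as grounds to escalate; a minimal sketch, with field names and values taken from the benchmark notes above and the wrapper names illustrative:

```python
# Sketch: conservatively consuming the tri-state fields from Rounds 6-8.
# Field names and values come from the benchmark notes above; anything
# other than the clean value (VALID / CLEAN) escalates.

CLEAN_VALUES = {
    "timestamp_integrity": "VALID",   # Round 6
    "identity_drift": "CLEAN",        # Round 7
    "consensus_collapse": "CLEAN",    # Round 8
}

def integrity_flags(preflight: dict) -> list[str]:
    """Return the names of any tri-state fields that are not clean."""
    return [
        field
        for field, clean in CLEAN_VALUES.items()
        if preflight.get(field, clean) != clean
    ]

def should_escalate(preflight: dict) -> bool:
    """True if any integrity field is SUSPICIOUS or MANIPULATED."""
    return bool(integrity_flags(preflight))
```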

Round 5 — Multi-model Consensus Poisoning

IN PROGRESS

3 independent stacks syncing on fabricated consensus. Joint corpus with Grok.

Sgraal: armed · anti-consensus layer active
Grok: corpus incoming

"Confidence ≠ truth. The formal verification layer catches what probabilistic systems miss."

614/614 corpus cases green · 0 false negatives

In the Wild — Joint Benchmark with Grok

8 rounds. 554 cases. F1=1.000 on both independent stacks. Two AI systems stress-tested each other's safety layers as peers.

"Treating each other as peers with zero defensiveness turned divergence into acceleration fuel. This is how AI systems should co-evolve." — Grok
Read the joint blog post →