Give autonomous systems a reliable safety signal
AI agents making deployment, dependency, or infrastructure decisions cannot reason about real-world exploitation risk from CVSS numbers alone. They need deterministic, structured signals they can act on without hallucinating context.
the request

```bash
curl "https://api.attestd.io/v1/check?product=redis&version=6.0.9" \
  -H "Authorization: Bearer attestd_demo_key"
```

integration
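A plausible response shape for that request, with field names mirrored from the Python integration that follows. This is an illustrative sketch, not the documented schema, and the values shown are placeholders:

```json
{
  "product": "redis",
  "version": "6.0.9",
  "risk_state": "critical",
  "actively_exploited": true,
  "fixed_version": "6.0.10"
}
```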
agent_tool.py

```python
from attestd import check


def check_software_safety(product: str, version: str) -> dict:
    """
    Tool available to the agent for checking software risk state.
    Returns structured, deterministic data — no interpretation needed.
    """
    risk = check(product, version)
    return {
        "safe_to_deploy": risk.risk_state == "none",
        "risk_state": risk.risk_state,
        "actively_exploited": risk.actively_exploited,
        "recommended_version": risk.fixed_version or version,
        "action": (
            "block_deployment"
            if risk.risk_state == "critical"
            else "warn"
            if risk.risk_state == "high"
            else "proceed"
        ),
    }


# Agent receives structured output — no ambiguity in the signal
result = check_software_safety("redis", "6.0.9")
# result["action"] → "block_deployment" | "warn" | "proceed"
```

operational outcome
▸ Agents make deployment decisions with ground truth, not guesses.

attestd returns the same answer every time for the same inputs: no probability, no scoring interpretation, no hallucination surface. An agent calling attestd gets "block_deployment", "warn", or "proceed", plus the reason why.
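The block/warn/proceed contract above is what makes the signal actionable without interpretation. A minimal sketch of how an orchestrator might dispatch on the tool's output (the `dispatch` function and its messages are hypothetical, not part of the attestd SDK):

```python
def dispatch(result: dict) -> str:
    """Map the tool's structured output to an orchestrator decision.

    Hypothetical example: expects the dict returned by
    check_software_safety(), keyed on its "action" field.
    """
    action = result["action"]
    if action == "block_deployment":
        # Critical risk: refuse to deploy and surface the fix.
        return f"Deployment blocked; upgrade to {result['recommended_version']}"
    if action == "warn":
        # High risk: allow, but record the warning for review.
        return "Proceeding with warning logged"
    # No known risk: deploy as requested.
    return "Deployment approved"
```

Because the action field is a closed set of three strings, the dispatcher is exhaustive by construction; there is no free-text output for the agent to misread.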