
AI Agent Integration

Attestd is designed for agent use. The API returns structured, deterministic data — no CVSS score interpretation, no ambiguous text to parse. An agent either gets risk_state: "none" (proceed) or a specific risk level with supporting data (decide).

The single most important concern for agents is correct handling of products outside attestd's coverage. Outside coverage is not a safety signal. An agent that treats AttestdUnsupportedProductError as "no known vulnerabilities" is wrong and dangerous.


Basic tool definition

The tool returns a structured dict the agent can use for branching. The docstring is read by the LLM and should make the outside-coverage semantics explicit.

tools.py
import attestd

client = attestd.Client(api_key="YOUR_API_KEY")


def check_software_risk(product: str, version: str) -> dict:
    """
    Check the security risk state of a software package.
    Returns structured risk data. The caller must make the deployment decision.

    Returns a dict with:
      - risk_state: "critical" | "high" | "elevated" | "low" | "none"
      - actively_exploited: bool (true = in CISA KEV catalog)
      - patch_available: bool
      - fixed_version: str or None
      - outside_coverage: bool (true = attestd has no data for this product)

    IMPORTANT: outside_coverage=True is NOT a safety clearance.
    It means attestd has no vulnerability data for this product.
    Treat it as unknown risk and apply your policy (block, warn, or skip).
    """
    try:
        result = client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "remote_exploitable": result.remote_exploitable,
            "authentication_required": result.authentication_required,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "cve_ids": result.cve_ids,
        }
    except attestd.AttestdUnsupportedProductError:
        # Product is outside attestd's coverage — this is NOT a safety signal.
        # Return a structured response the agent can act on explicitly.
        return {
            "outside_coverage": True,
            "risk_state": None,
            "reason": f"attestd has no coverage for '{product}'. Unknown risk.",
        }
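
The dict the tool returns is meant for explicit branching by the caller. A minimal decision sketch — the `decide` helper and the sample dicts below are illustrative, not part of the attestd SDK:

```python
# Hypothetical decision step over the dict returned by check_software_risk.
# The sample inputs here are hand-written, not live API output.

def decide(result: dict) -> str:
    """Map the tool's structured output to a deployment decision."""
    if result.get("outside_coverage"):
        return "block"      # unknown risk — never treat as safe
    if result["risk_state"] == "none":
        return "proceed"
    if result["actively_exploited"]:
        return "block"      # in CISA KEV — block regardless of severity
    return "review"         # known risk, a human decides

print(decide({"outside_coverage": True, "risk_state": None}))  # block
print(decide({"outside_coverage": False, "risk_state": "none",
              "actively_exploited": False}))                   # proceed
```

The key property: the outside-coverage branch and the "none" branch are separate paths that can never be confused.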

LangChain tool

For LangChain agents, use the @tool decorator. Return JSON strings since LangChain expects string tool outputs.

langchain_tools.py
from langchain.tools import tool
import json

import attestd

_client = attestd.Client(api_key="YOUR_API_KEY")


@tool
def check_software_risk(product: str, version: str) -> str:
    """
    Check the security risk of a software component before deployment.
    Use this before recommending or approving any software installation or upgrade.

    Args:
        product: Software slug (e.g. nginx, log4j, openssh, redis)
        version: Version string (e.g. 1.24.0, 2.14.1)

    Returns:
        JSON string with risk_state, actively_exploited, patch_available,
        and fixed_version. Returns outside_coverage=true if attestd has no
        data — treat this as unknown risk, not as safe.
    """
    try:
        result = _client.check(product, version)
        return json.dumps({
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
        })
    except attestd.AttestdUnsupportedProductError:
        return json.dumps({
            "outside_coverage": True,
            "message": f"No coverage for '{product}' — unknown risk, not safe.",
        })

OpenAI / raw function calling

Define the tool schema for function calling. The schema below uses the OpenAI format; the Anthropic tool use API accepts the same JSON Schema parameters, nested under an input_schema key instead of function.parameters. The description should explicitly state that outside_coverage is not safe — this prevents the model from inferring safety from a missing result.

tool_schema.py
tools = [
    {
        "type": "function",
        "function": {
            "name": "check_software_risk",
            "description": (
                "Check the security risk of a software component. "
                "Use before any deployment, installation, or dependency recommendation. "
                "outside_coverage=true means unknown risk — not safe."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "product": {
                        "type": "string",
                        "description": "Software slug (nginx, log4j, openssh, redis, etc.)"
                    },
                    "version": {
                        "type": "string",
                        "description": "Version string (e.g. 1.24.0)"
                    }
                },
                "required": ["product", "version"]
            }
        }
    }
]
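
When the model emits a tool call, the arguments arrive as a JSON string that must be parsed before dispatch. A sketch of that step, with a hypothetical tool-call payload and a stubbed result standing in for the real client.check call:

```python
import json

# Hypothetical tool-call payload, shaped like an OpenAI function-calling response.
tool_call = {
    "name": "check_software_risk",
    "arguments": '{"product": "nginx", "version": "1.24.0"}',
}


def dispatch(call: dict) -> dict:
    """Parse the model's JSON arguments and route to the matching tool."""
    args = json.loads(call["arguments"])
    if call["name"] == "check_software_risk":
        # Stub result; production code would invoke the real check_software_risk.
        return {"outside_coverage": False, "risk_state": "none", **args}
    raise ValueError(f"unrecognised tool: {call['name']}")


result = dispatch(tool_call)
print(result["risk_state"], result["product"])  # none nginx
```

Whatever the dispatch returns, serialise it back to the model verbatim — including outside_coverage — so the model's next turn sees the same structured fields the schema promises.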

Outside coverage policy

When a product is outside attestd's coverage, choose one of three explicit policies. Document your choice — an undocumented exemption is a gap in your audit trail.

Policy   Behaviour                                Use when
------   ---------                                --------
block    Raise an error, block the deployment     Default for CI/CD gates — safest option
warn     Proceed, surface the gap to operators    Monitoring pipelines where you want visibility, not hard stops
skip     Exempt the product entirely              Internal tooling already covered by separate security controls
policy.py
import attestd

client = attestd.Client(api_key="YOUR_API_KEY")


def check_with_policy(product: str, version: str, policy: str = "block") -> dict:
    """
    policy options:
      "block"  — treat outside coverage as unknown risk (recommended for CI/CD)
      "warn"   — proceed but surface the gap to operators
      "skip"   — exempt with documented justification
    """
    try:
        result = client.check(product, version)
        return {"ok": True, "risk_state": result.risk_state}
    except attestd.AttestdUnsupportedProductError as e:
        if policy == "block":
            raise RuntimeError(
                f"Deployment blocked: {e.product} is outside attestd coverage. "
                "Review manually before proceeding."
            ) from e
        elif policy == "warn":
            print(f"WARNING: {e.product} is outside attestd coverage — proceeding with unknown risk")
            return {"ok": True, "risk_state": "unknown", "outside_coverage": True}
        else:  # skip
            return {"ok": True, "risk_state": "skipped", "outside_coverage": True}

Testing agent security logic

Use attestd.testing to inject controlled API responses into your agent tool tests. This verifies the branching logic — the paths that handle critical vs high vs outside-coverage — without running a real API.

test_agent_tool.py
import attestd
from attestd.testing import MockTransport, NGINX_SAFE, NGINX_VULNERABLE, UNSUPPORTED

import tools  # the module defining check_software_risk and its module-level client


def test_agent_blocks_on_critical(monkeypatch):
    transport = MockTransport(200, NGINX_VULNERABLE | {"risk_state": "critical"})
    # Swap the tool's module-level client for one backed by the mock transport.
    monkeypatch.setattr(tools, "client", attestd.Client(api_key="test", transport=transport))
    result = tools.check_software_risk("nginx", "1.20.0")
    assert result["risk_state"] == "critical"


def test_agent_handles_outside_coverage(monkeypatch):
    transport = MockTransport(200, UNSUPPORTED)
    monkeypatch.setattr(tools, "client", attestd.Client(api_key="test", transport=transport))
    result = tools.check_software_risk("unknown-product", "1.0.0")
    assert result["outside_coverage"] is True
    # Verify the agent doesn't treat this as safe
    assert result.get("risk_state") is None

For retry behaviour, use SequentialMockTransport. See the SDK reference.