# How to Give Your LangChain Agent a Security Sensor

LangChain agents are increasingly making real deployment decisions. They resolve dependencies, recommend package versions, build pipelines, and approve infrastructure changes. Most of them do this with no access to vulnerability or supply chain data.
A CVE database check is not something an LLM can do reliably from training data. A pin like `requests==2.28.0` might be safe, or it might carry a known exploit. `litellm==1.82.7` might look current, but it was a malicious publish on PyPI in March 2026 with no CVE attached. The LLM cannot know either of these things without calling out to a structured data source.
This tutorial builds a LangChain StructuredTool that wraps the Attestd API and gives any agent deterministic answers to those questions.
## What you will need
```bash
pip install attestd "langchain-core>=0.3" langchain langchain-openai
```
An Attestd API key from api.attestd.io/portal/login. Free tier, 1,000 calls a month.
## What the tool returns
The Attestd API returns a deterministic `risk_state` — one of `critical`, `high`, `elevated`, `low`, or `none` — derived from NVD CVE data and the CISA KEV catalog. For PyPI packages it also returns a `supply_chain` object indicating whether the version was a known malicious publish.
Two signals, independent of each other. A package can have `risk_state: none` and `supply_chain.compromised: true` at the same time. litellm 1.82.7 is exactly that case:
```json
{
  "product": "litellm",
  "version": "1.82.7",
  "risk_state": "none",
  "supply_chain": {
    "compromised": true,
    "sources": ["osv", "registry"],
    "malware_type": "backdoor",
    "description": "TeamPCP supply chain attack. Credential stealer in proxy_server.py",
    "compromised_at": "2026-03-24T10:39:00Z",
    "removed_at": "2026-03-24T16:00:00Z"
  }
}
```
No CVEs. Still malicious. A risk_state check alone would have passed it.
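To make the dual-signal gate concrete, here is a minimal sketch written against the parsed JSON payload above. The helper name `is_deployable` is hypothetical, and treating `elevated` as a failure is an illustrative policy choice, not something the Attestd SDK prescribes:
```python
# Minimal sketch: a version passes only when BOTH signals are clean.
# `is_deployable` is a hypothetical helper, not part of the Attestd SDK.
def is_deployable(payload: dict) -> bool:
    cve_clean = payload.get("risk_state") in ("none", "low")  # illustrative threshold
    supply_chain = payload.get("supply_chain") or {}          # None for non-PyPI products
    return cve_clean and not supply_chain.get("compromised", False)
```
litellm 1.82.7 fails this gate on the supply chain branch even though its `risk_state` is `none`.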
## Pattern 1 — Building the tool
Instantiate `Client` once at module level. Do not create a new client inside the function on every call.
```python
from langchain_core.tools import StructuredTool
from pydantic import BaseModel

import attestd

# One client per process, created at import time and reused across calls.
_client = attestd.Client(api_key="YOUR_API_KEY")


class AttestdInput(BaseModel):
    product: str
    version: str


def check_vulnerability(product: str, version: str) -> dict:
    try:
        result = _client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "supply_chain_compromised": (
                result.supply_chain.compromised
                if result.supply_chain is not None
                else False
            ),
        }
    except attestd.AttestdUnsupportedProductError:
        return {
            "outside_coverage": True,
            "risk_state": None,
            "message": f"No Attestd coverage for '{product}'. Treat as unknown risk.",
        }


attestd_tool = StructuredTool.from_function(
    func=check_vulnerability,
    name="check_package_vulnerability",
    description=(
        "Check whether a software package version has known CVE vulnerabilities "
        "or supply chain compromise. Use before deploying or recommending any "
        "software dependency. outside_coverage=true means Attestd has no data "
        "for that product — treat as unknown risk, not safe. "
        "Input: product slug (e.g. 'nginx', 'runc', 'litellm') and exact version string."
    ),
    args_schema=AttestdInput,
)
```
A few things worth noting here.
`AttestdUnsupportedProductError` must be caught. A tool that raises an unhandled exception on an unsupported product is not production-ready. The `outside_coverage: True` return gives the agent a signal it can branch on: "I asked, Attestd has no data, this is unknown risk, not a clean bill of health."
`supply_chain` is `None` for infrastructure products like nginx and runc. Those products are not distributed via PyPI, so there is no supply chain signal. The tool returns `supply_chain_compromised: False` in that case, which is correct: the story for those products is CVE `risk_state`, not supply chain.
The tool description matters more than it might seem. It is what the LLM reads when deciding whether to call the tool. The phrase "treat as unknown risk, not safe" is there deliberately, to prevent the model from treating an unsupported product as clean by default.
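Before wiring the tool into an agent, it is worth a direct smoke test. A `StructuredTool` is a Runnable, so `.invoke` accepts the tool arguments as a dict:
```python
# Call the tool directly, outside any agent loop.
result = attestd_tool.invoke({"product": "litellm", "version": "1.82.7"})
print(result)
# Per the example above, this should report risk_state "none"
# but supply_chain_compromised True.
```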
## Pattern 2 — Running an agent
Here is a complete agent that takes a set of dependencies and returns a go/no-go decision with reasoning.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a security-aware deployment assistant. "
        "Before approving any software dependency, check its vulnerability status "
        "using the check_package_vulnerability tool. "
        "Block deployment if risk_state is 'critical' or 'high', or if "
        "supply_chain_compromised is true. "
        "If outside_coverage is true, state that explicitly and do not treat it as safe.",
    ),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [attestd_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[attestd_tool], verbose=True)

response = executor.invoke({
    "input": "Is it safe to deploy with runc 1.0.0 and litellm 1.82.7?"
})
print(response["output"])
```
With the two packages in this example, here is what the agent encounters:
runc 1.0.0 returns `risk_state: "high"`, `remote_exploitable: true`, `patch_available: true`, `fixed_version: "1.0.0-rc95 or later"`. Three CVEs including CVE-2024-21626, a container escape that is on the CISA KEV list.
litellm 1.82.7 returns `risk_state: "none"` and `supply_chain_compromised: true`. The CVE check passes. The supply chain check blocks it.
A well-prompted agent will block on both and explain why:
```text
runc 1.0.0: BLOCKED — risk_state high, remote exploitable, patch available at 1.0.0-rc95.
litellm 1.82.7: BLOCKED — supply chain compromised (backdoor, TeamPCP attack March 2026).
Deployment not approved.
```
Note: use `create_tool_calling_agent`, not the legacy `create_react_agent` with a hub prompt. The tool-calling pattern gives more reliable structured output.
## Pattern 3 — Async
If your agent runs in an async context, use `AsyncClient` at module level and pass `coroutine=` to `StructuredTool.from_function`. Do not create a new `AsyncClient` inside the function on every call.
```python
import attestd
from langchain_core.tools import StructuredTool
from pydantic import BaseModel

# One async client per process, reused across calls.
_async_client = attestd.AsyncClient(api_key="YOUR_API_KEY")


class AttestdInput(BaseModel):
    product: str
    version: str


async def acheck_vulnerability(product: str, version: str) -> dict:
    try:
        result = await _async_client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "supply_chain_compromised": (
                result.supply_chain.compromised
                if result.supply_chain is not None
                else False
            ),
        }
    except attestd.AttestdUnsupportedProductError:
        return {
            "outside_coverage": True,
            "risk_state": None,
            "message": f"No Attestd coverage for '{product}'. Treat as unknown risk.",
        }


async_attestd_tool = StructuredTool.from_function(
    coroutine=acheck_vulnerability,
    name="check_package_vulnerability",
    description=(
        "Check whether a software package version has known CVE vulnerabilities "
        "or supply chain compromise. outside_coverage=true means unknown risk, not safe."
    ),
    args_schema=AttestdInput,
)
```
For async execution, use `await executor.ainvoke({...})`.
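A minimal runner, assuming the `llm` and `prompt` objects from Pattern 2 are already in scope:
```python
import asyncio

from langchain.agents import AgentExecutor, create_tool_calling_agent

async def main() -> None:
    # Same wiring as Pattern 2, but with the coroutine-backed tool and
    # ainvoke so tool calls do not block the event loop.
    agent = create_tool_calling_agent(llm, [async_attestd_tool], prompt)
    executor = AgentExecutor(agent=agent, tools=[async_attestd_tool], verbose=True)
    response = await executor.ainvoke({
        "input": "Is it safe to deploy with runc 1.0.0 and litellm 1.82.7?"
    })
    print(response["output"])

asyncio.run(main())
```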
## Full runnable example
```python
import os

import attestd
from langchain_core.tools import StructuredTool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel

_client = attestd.Client(api_key=os.environ["ATTESTD_API_KEY"])


class AttestdInput(BaseModel):
    product: str
    version: str


def check_vulnerability(product: str, version: str) -> dict:
    try:
        result = _client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "supply_chain_compromised": (
                result.supply_chain.compromised
                if result.supply_chain is not None
                else False
            ),
        }
    except attestd.AttestdUnsupportedProductError:
        return {
            "outside_coverage": True,
            "risk_state": None,
            "message": f"No Attestd coverage for '{product}'. Treat as unknown risk.",
        }


attestd_tool = StructuredTool.from_function(
    func=check_vulnerability,
    name="check_package_vulnerability",
    description=(
        "Check whether a software package version has known CVE vulnerabilities "
        "or supply chain compromise. Use before deploying or recommending any "
        "software dependency. outside_coverage=true means unknown risk, not safe. "
        "Input: product slug and exact version string."
    ),
    args_schema=AttestdInput,
)

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a security-aware deployment assistant. "
        "Check each dependency with check_package_vulnerability before approving. "
        "Block if risk_state is critical or high, or if supply_chain_compromised is true. "
        "State outside_coverage results explicitly. Do not treat them as safe.",
    ),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [attestd_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[attestd_tool], verbose=True)

dependencies = [
    ("runc", "1.0.0"),
    ("litellm", "1.82.7"),
    ("nginx", "1.27.4"),
]

query = "Check these dependencies: " + ", ".join(
    f"{p} {v}" for p, v in dependencies
)

response = executor.invoke({"input": query})
print(response["output"])
```
Expected output for these three packages:
```text
runc 1.0.0: BLOCKED — risk_state high, remote exploitable without authentication,
CVEs include CVE-2024-21626 (CISA KEV). Upgrade to 1.0.0-rc95 or later.

litellm 1.82.7: BLOCKED — supply chain compromised. TeamPCP backdoor,
active March 24 2026. risk_state is none but supply chain check fails.

nginx 1.27.4: APPROVED — risk_state none, no active exploitation, no supply chain signal.
```
## Return field reference
| Field | What it means for your agent |
|---|---|
| `outside_coverage` | `true` means no data. Not safe, unknown. |
| `risk_state` | CVE severity. `critical` or `high` should block by default. |
| `actively_exploited` | In the CISA KEV catalog. Treat as highest urgency. |
| `patch_available` / `fixed_version` | Upgrade path when a fix exists. |
| `supply_chain_compromised` | Known malicious publish. Block if `true`, independent of `risk_state`. `false` for infrastructure products (not PyPI). |
Full reference at attestd.io/docs/integrations/langchain.
Get an API key at api.attestd.io/portal/login. Free tier, 1,000 calls a month, no credit card required.