# LangChain.js Integration
Use Attestd as a LangChain.js tool to give agents real-time CVE risk and supply chain integrity data. The tool returns structured, deterministic output — no CVSS score interpretation, no ambiguous text for the agent to parse.
The most important correctness concern: `outsideCoverage` is not a safety signal. An agent that treats `AttestdUnsupportedProductError` as "no known vulnerabilities" is wrong. Catch it and return an explicit unknown-risk response.
## Install
```bash
npm install @attestd/sdk @langchain/core @langchain/openai langchain zod
```

## `tool()` definition
Use tool() from @langchain/core/tools with an explicit Zod schema. This gives the LLM a validated schema and gives you typed inputs. Instantiate Client once at module level and capture it in the closure — never instantiate inside the tool function.
```ts
import { tool } from '@langchain/core/tools';
import { z } from 'zod';
import { Client, AttestdUnsupportedProductError } from '@attestd/sdk';

// Instantiate once — captured in closure. Never instantiate inside the tool function.
const _client = new Client({ apiKey: process.env.ATTESTD_API_KEY! });

export const checkVulnerability = tool(
  async ({ product, version }) => {
    try {
      const result = await _client.check(product, version);
      return {
        outsideCoverage: false,
        riskState: result.riskState,
        activelyExploited: result.activelyExploited,
        patchAvailable: result.patchAvailable,
        fixedVersion: result.fixedVersion,
        supplyChainCompromised: result.supplyChain?.compromised ?? false,
      };
    } catch (err) {
      if (err instanceof AttestdUnsupportedProductError) {
        // Product is outside Attestd coverage. This is NOT a safety signal.
        return {
          outsideCoverage: true,
          riskState: null,
          message: `No Attestd coverage for '${product}'. Treat as unknown risk.`,
        };
      }
      throw err;
    }
  },
  {
    name: 'check_package_vulnerability',
    description:
      'Check whether a software package version has known CVE vulnerabilities ' +
      'or supply chain compromise. Use before deploying or recommending any ' +
      'software dependency. outsideCoverage=true means Attestd has no data — ' +
      'treat as unknown risk, not safe. ' +
      'Input: product slug (e.g. "nginx", "runc", "@bitwarden/cli") and exact version string.',
    schema: z.object({
      product: z.string().describe('Package slug, e.g. "nginx", "runc", "@bitwarden/cli"'),
      version: z.string().describe('Exact version string, e.g. "1.0.0"'),
    }),
  },
);
```

## Agent executor pattern
Use createToolCallingAgent with any function-calling LLM. The system prompt must state the deployment-blocking policy explicitly — the model cannot infer the correct behaviour from the tool description alone.
```ts
import { ChatOpenAI } from '@langchain/openai';
import { createToolCallingAgent, AgentExecutor } from 'langchain/agents';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { checkVulnerability } from './attestd-tool.js';

const llm = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  [
    'system',
    'You are a security-aware deployment assistant. ' +
      'Before approving any software dependency, check its vulnerability status ' +
      'using the check_package_vulnerability tool. ' +
      'Block deployment if riskState is "critical" or "high", or if ' +
      'supplyChainCompromised is true. ' +
      'outsideCoverage=true means unknown risk — state this explicitly, ' +
      'do not treat it as safe.',
  ],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const agent = createToolCallingAgent({ llm, tools: [checkVulnerability], prompt });
const executor = new AgentExecutor({ agent, tools: [checkVulnerability] });

// Example: check two dependencies before a deploy
const response = await executor.invoke({
  input: 'Is it safe to deploy with runc 1.0.0 and @bitwarden/cli 2026.4.0?',
});
console.log(response.output);
```

## Async support
The JS `Client` is natively async. Every `client.check()` call returns a Promise. There is no separate async client class — the tool function above already works correctly in any async context. Use `await executor.invoke({...})` as shown.
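Because every check returns a Promise, independent dependency checks can also run concurrently with `Promise.all` rather than being awaited one at a time. A minimal sketch (the `check` stub below stands in for `_client.check` so the snippet is self-contained; it is not part of the SDK):

```typescript
// Stand-in for _client.check — illustrative stub only.
// The real call hits the Attestd API and returns the full field set.
type CheckResult = { product: string; riskState: string };

async function check(product: string, version: string): Promise<CheckResult> {
  return { product, riskState: product === 'runc' ? 'critical' : 'low' };
}

const deps: Array<[string, string]> = [
  ['runc', '1.0.0'],
  ['nginx', '1.27.4'],
];

// All checks start immediately and are awaited together.
const results = await Promise.all(deps.map(([p, v]) => check(p, v)));
console.log(results.map((r) => `${r.product}: ${r.riskState}`));
// → [ 'runc: critical', 'nginx: low' ]
```

With the real client, the same pattern cuts total latency to that of the slowest single check instead of the sum of all of them.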
## Full runnable example
Checks four dependencies. Two are blocked by CVE risk, one by supply chain compromise, one is approved.
```ts
import { tool } from '@langchain/core/tools';
import { z } from 'zod';
import { ChatOpenAI } from '@langchain/openai';
import { createToolCallingAgent, AgentExecutor } from 'langchain/agents';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { Client, AttestdUnsupportedProductError } from '@attestd/sdk';

const _client = new Client({ apiKey: process.env.ATTESTD_API_KEY! });

const checkVulnerability = tool(
  async ({ product, version }) => {
    try {
      const result = await _client.check(product, version);
      return {
        outsideCoverage: false,
        riskState: result.riskState,
        activelyExploited: result.activelyExploited,
        patchAvailable: result.patchAvailable,
        fixedVersion: result.fixedVersion,
        supplyChainCompromised: result.supplyChain?.compromised ?? false,
      };
    } catch (err) {
      if (err instanceof AttestdUnsupportedProductError) {
        return {
          outsideCoverage: true,
          riskState: null,
          message: `No Attestd coverage for '${product}'. Treat as unknown risk.`,
        };
      }
      throw err;
    }
  },
  {
    name: 'check_package_vulnerability',
    description:
      'Check whether a software package version has known CVE vulnerabilities ' +
      'or supply chain compromise. Use before deploying or recommending any ' +
      'software dependency. outsideCoverage=true means unknown risk, not safe. ' +
      'Input: product slug and exact version string.',
    schema: z.object({
      product: z.string().describe('Package slug, e.g. "nginx", "runc", "@bitwarden/cli"'),
      version: z.string().describe('Exact version string'),
    }),
  },
);

const llm = new ChatOpenAI({
  model: 'gpt-4o-mini',
  temperature: 0,
  apiKey: process.env.OPENAI_API_KEY,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    'system',
    'You are a security-aware deployment assistant. ' +
      'Check each dependency with check_package_vulnerability before approving. ' +
      'Block if riskState is critical or high, or if supplyChainCompromised is true. ' +
      'State outsideCoverage results explicitly. Do not treat them as safe.',
  ],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const agent = createToolCallingAgent({ llm, tools: [checkVulnerability], prompt });
const executor = new AgentExecutor({ agent, tools: [checkVulnerability] });

const dependencies = [
  ['runc', '1.0.0'],
  ['litellm', '1.82.7'],
  ['@bitwarden/cli', '2026.4.0'],
  ['nginx', '1.27.4'],
];

const query =
  'Check these dependencies: ' +
  dependencies.map(([p, v]) => `${p}@${v}`).join(', ');

const response = await executor.invoke({ input: query });
console.log(response.output);
```

## Return fields
The tool returns these fields. Design agent branching logic around them.
| Field | Semantics |
|---|---|
| `outsideCoverage` | `true` if Attestd has no data for this product. Not a safety signal — treat as unknown risk. |
| `riskState` | One of `"critical"`, `"high"`, `"elevated"`, `"low"`, `"none"`, or `null` (when `outsideCoverage` is `true`). Block on `critical` or `high`. |
| `activelyExploited` | `true` if in the CISA KEV catalog. Block regardless of `riskState` if `true`. |
| `patchAvailable` | `true` if a fixed version is known. Use with `fixedVersion` to tell the agent what to recommend. |
| `fixedVersion` | The earliest clean version, or `null` if no patch exists yet. |
| `supplyChainCompromised` | `true` if a malicious publish or security signal was detected on PyPI or npm. Block immediately — independent of `riskState`. |
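Because the output is structured, the blocking policy does not have to live only in the system prompt: it can also be enforced deterministically in application code before any agent answer is trusted. A sketch over the field semantics above (the `decide` helper is illustrative, not part of the SDK):

```typescript
// Shape mirrors the tool's return fields documented in the table above.
type ToolResult = {
  outsideCoverage: boolean;
  riskState: 'critical' | 'high' | 'elevated' | 'low' | 'none' | null;
  activelyExploited?: boolean;
  supplyChainCompromised?: boolean;
};

// Deterministic deploy decision, checked in order of severity.
function decide(r: ToolResult): 'block' | 'approve' | 'unknown' {
  if (r.outsideCoverage) return 'unknown';       // no coverage is not "safe"
  if (r.supplyChainCompromised) return 'block';  // independent of riskState
  if (r.activelyExploited) return 'block';       // KEV entries block regardless
  if (r.riskState === 'critical' || r.riskState === 'high') return 'block';
  return 'approve';
}

console.log(decide({ outsideCoverage: true, riskState: null }));
// → unknown
console.log(decide({ outsideCoverage: false, riskState: 'none', activelyExploited: true }));
// → block
console.log(decide({ outsideCoverage: false, riskState: 'low' }));
// → approve
```

Running the same policy in code gives a hard backstop if the model ever misreads a tool result.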
- → LangChain Integration (Python) (StructuredTool, AsyncClient, same agent pattern for Python)
- → AI Agent Integration (generic function tool, OpenAI function calling, outside-coverage policy)
- → JavaScript SDK Reference (Client options, error types, MockFetch testing utilities)
- → Response Field Reference (full semantics for every field returned by /v1/check)