Prerequisites
- A Handlebar account
- Handlebar API key (created on the platform)
Python
Package: `handlebar-langchain`
Supports: langchain >= 1.0.0
Supports Python: >= 3.11
Codebase: https://github.com/gethandlebar/handlebar-python
Bugs, issues, feature requests: https://github.com/gethandlebar/handlebar-python/issues
Install
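The adapter can be installed from PyPI (assuming the distribution name matches the package name above):

```shell
pip install handlebar-langchain
```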
Quick setup
Set your API key in the environment and pass the middleware when creating your agent:

You can view your agent runs and configure rules on the Handlebar platform.
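A minimal sketch of the setup described above. The import path and the middleware class name (`HandlebarMiddleware`) are assumptions for illustration, not the package's confirmed API; `create_agent` and its `middleware` parameter are from langchain >= 1.0.0.

```python
# Sketch only: HandlebarMiddleware and its import path are assumptions.
import os

os.environ["HANDLEBAR_API_KEY"] = "hb_live_..."  # or export it in your shell

from langchain.agents import create_agent
from handlebar_langchain import HandlebarMiddleware  # hypothetical name

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[my_tool],                      # your LangChain tools
    middleware=[HandlebarMiddleware()],   # governance applied per tool call
)
```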
Async usage
The middleware supports async agents with no extra configuration.

Additional config
| `enforce_mode` | Behaviour |
|---|---|
| `"enforce"` | Governance decisions are applied; blocked tools are stopped |
| `"shadow"` | Decisions are evaluated and logged but never enforced |
| `"off"` | No API calls; pass-through only |
End-user information
Pass the end user’s identity so Handlebar can enforce per-user budgets and attribute audit events correctly. `external_id` should be whatever identifier your application uses for the user (database ID, email, etc.); `session_id` groups multiple agent invocations that belong to the same conversation.
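For illustration, the two fields might be supplied as plain values like these (the field names come from the text above; how they are passed to the middleware depends on the package's API):

```python
# Illustrative values only; how these reach the middleware is API-specific.
end_user = {
    "external_id": "user-8d2f",  # your app's identifier (database ID, email, ...)
    "session_id": "conv-42",     # groups invocations of one conversation
}
```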
Tool tags
Tag your tools so governance rules can match on them. This unlocks policies such as:

- Rate-limiting expensive or high-risk tool categories
- Blocking data exfiltration, e.g. preventing a `pii_read` result from flowing into an `external` tool
- Requiring human review before `write` actions

Tags are set via the `metadata` argument on the `@tool` decorator:
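For example, tagging a tool might look like this. The `metadata` key name (`"tags"`) and the tag values are illustrative assumptions; check the platform docs for the exact schema:

```python
from langchain_core.tools import tool

@tool(metadata={"tags": ["pii_read"]})  # tag key and values are illustrative
def lookup_customer(customer_id: str) -> str:
    """Fetch a customer record."""
    ...
```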
With a pre-initialised client
You can share one pre-initialised client across multiple agents (one connection, one audit stream).

What happens on a block
When a governance rule blocks a tool call:

- The tool does not execute.
- The agent receives a `ToolMessage` with content `"Blocked by Handlebar governance: <reason>"` as the tool result.
- If the rule carries a `TERMINATE` control signal, the next model call is also intercepted: a synthetic `AIMessage` is returned instead and the agent loop ends cleanly.
- The run is ended with status `"interrupted"` and all events are flushed to the audit log.
Javascript
Package: `@handlebar/langchain`
Supports: @langchain/core@^1.1.27
The @handlebar/langchain adapter wraps any LangChain Runnable with full Handlebar governance - run lifecycle, LLM event logging, and tool-call enforcement.
HandlebarAgentExecutor extends LangChain’s Runnable, so it can be composed in chains via .pipe() and passed anywhere a Runnable is expected.
Installation
Quick start
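A sketch of what a quick start might look like. The client import and the executor's constructor options are assumptions for illustration, not the package's confirmed API; `wrapTools` and `HandlebarAgentExecutor` are the names documented below.

```typescript
// Sketch only: the client import and constructor options are assumptions.
import { HandlebarAgentExecutor, wrapTools } from "@handlebar/langchain";

// searchTool, dbWriteTool, and agent are your own LangChain objects.
const tools = wrapTools([searchTool, dbWriteTool]); // wrapped in place

const executor = new HandlebarAgentExecutor({
  apiKey: process.env.HANDLEBAR_API_KEY, // hypothetical option name
  agent,
  tools,
});

const result = await executor.invoke({ input: "..." });
```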
How it works
Run lifecycle
HandlebarAgentExecutor.invoke() creates a new Run for each call:
- `run.started` - emitted immediately on `startRun()`.
- LLM and tool hooks fire during the agent loop (see below).
- `run.ended` - emitted on completion, error, or governance termination; the event bus is flushed before returning.
Tool governance (wrapTools)
`wrapTools()` intercepts each tool’s `invoke()` method in place. On each tool call:

- `run.beforeTool(name, args, tags)` is called first - evaluates governance rules.
- ALLOW → proceeds with normal execution; `run.afterTool(...)` is called after.
- BLOCK + CONTINUE → skips execution; returns a JSON-encoded blocked message so the LLM can respond gracefully.
- BLOCK + TERMINATE → throws `HandlebarTerminationError`; `HandlebarAgentExecutor` catches it and ends the run with status `"interrupted"`.
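The in-place interception can be sketched in isolation (plain objects, no LangChain; the `Decision` shape, the evaluator callback, and the local `HandlebarTerminationError` stand-in are illustrative, not the package's source):

```typescript
// Stand-alone sketch of in-place invoke() interception, as described above.
type Decision = {
  verdict: "ALLOW" | "BLOCK";
  control?: "CONTINUE" | "TERMINATE";
  reason?: string;
};

class HandlebarTerminationError extends Error {} // local stand-in

interface Tool { name: string; invoke(args: unknown): Promise<string>; }

function wrapTool(tool: Tool, evaluate: (name: string) => Decision): Tool {
  const original = tool.invoke.bind(tool);
  tool.invoke = async (args: unknown) => {
    const d = evaluate(tool.name);               // run.beforeTool equivalent
    if (d.verdict === "ALLOW") return original(args);
    if (d.control === "TERMINATE") throw new HandlebarTerminationError(d.reason);
    return JSON.stringify({ blocked: true, reason: d.reason }); // BLOCK + CONTINUE
  };
  return tool;                                    // same instance, mutated in place
}
```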
LLM event logging (HandlebarCallbackHandler)
HandlebarAgentExecutor automatically attaches a HandlebarCallbackHandler to each executor.invoke() call. It bridges LangChain’s callback system to Handlebar’s hooks:
| LangChain callback | Handlebar hook | Notes |
|---|---|---|
| `handleChatModelStart` | `run.beforeLlm` | Delta-tracked - only new messages emitted per step |
| `handleLLMEnd` | `run.afterLlm` | Extracts text, tool calls, and token usage from `LLMResult` |

Only messages not yet seen in the run are forwarded to `run.beforeLlm`. This prevents duplicate `message.raw.created` events.
API reference
wrapTools(tools, opts?)
Wraps an array of LangChain tools with Handlebar governance hooks. Mutates tool instances in place; returns the same array.
wrapTool(tool, tags?)
Wraps a single tool. Use when you need to wrap tools individually.
HandlebarAgentExecutor
Extends LangChain’s Runnable - composable in chains and usable anywhere a Runnable is expected.
Handlebar-specific options are passed via RunnableConfig.configurable, which LangChain propagates automatically through .pipe() chains.
HandlebarCallbackHandler
If you need the callback handler standalone (e.g. to attach to a chain rather than an executor):
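A sketch of standalone usage. The constructor argument is an assumption (it presumably needs the active run), and `chain`, `run`, and `input` are your own objects:

```typescript
// Sketch only: the constructor signature is an assumption.
import { HandlebarCallbackHandler } from "@handlebar/langchain";

const handler = new HandlebarCallbackHandler(run); // run from your own lifecycle
const result = await chain.invoke(input, { callbacks: [handler] });
```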
HandlebarTerminationError
Thrown by a wrapped tool when a BLOCK + TERMINATE governance decision is made. HandlebarAgentExecutor catches this automatically and ends the run with "interrupted". If you are managing the run lifecycle manually, catch it yourself:
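A sketch of manual handling; the `run.end("interrupted")` call is a hypothetical stand-in for however your code ends the run:

```typescript
import { HandlebarTerminationError } from "@handlebar/langchain";

try {
  const result = await chain.invoke(input);
} catch (err) {
  if (err instanceof HandlebarTerminationError) {
    await run.end("interrupted"); // hypothetical manual-lifecycle call
  } else {
    throw err; // unrelated errors propagate as usual
  }
}
```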
Actor schema
You can optionally tell Handlebar which end user the agent is acting on behalf of. This allows you to enforce user-level rules (e.g. a user cost cap) and run analytics on end users. In addition to providing the `externalId` (your ID for the end user), you can define a group the end user belongs to and attach metadata.

End-user metadata allows you to enforce rules on groups of users. For example, you might want to enforce stricter data controls on users tagged "eu". The Handlebar platform registers end-user metadata once it is provided, so it is not necessary to provide it on every run. Alternatively, you can configure end-user metadata on the platform itself.

The full actor schema is:
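A sketch of the shape, based only on the fields mentioned above; any field beyond `externalId`, a group, and `metadata` is not confirmed here, and the exact field names may differ:

```typescript
// Field names and set are assumptions beyond externalId/group/metadata.
const actor = {
  externalId: "user-8d2f",     // your ID for the end user
  group: "eu-customers",       // optional group membership
  metadata: { region: "eu" },  // used to match group-level rules
};
```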
Limitations
- `handleToolStart` cannot block: LangChain’s callback system is observational - callbacks fire around tool execution but cannot intercept it. Tool wrapping via `wrapTools()`/`wrapTool()` is required to enforce `BLOCK` decisions.
- Chat models only: `HandlebarCallbackHandler` uses `handleChatModelStart`, which fires for chat models (ChatOpenAI, etc.). Plain completion LLMs use `handleLLMStart` (prompts as strings); these are not currently converted to `message.raw.created` events.
- Single batch assumed: for batched LLM calls (`messages: BaseMessage[][]`), only the first batch (`messages[0]`) is forwarded to `run.beforeLlm`. Batched inference is uncommon in agent loops.
Please email contact@gethandlebar.com to report security issues relating to Handlebar and client packages.