LangChain has become the most popular framework for building AI agents in Python. But it provides almost no built-in governance for what those agents do in production. This guide covers how to add runtime security before going live.
A typical LangChain agent with file-system and database tools can read any file the process can access, write or delete files anywhere, execute arbitrary SQL, and make HTTP requests to any URL. There is no built-in mechanism to restrict this: LangChain trusts the model's judgment. In production, that is not enough.
pip install vaultak
from vaultak import Vaultak, ActionType, KillSwitchMode

vt = Vaultak(
    api_key="vtk_your_key_here",
    allowed_action_types=[ActionType.FILE_READ, ActionType.DATABASE_QUERY],
    allowed_resources=["/tmp/*", "/data/readonly/*"],
    blocked_resources=["prod.*", "*.env", "*.key", "/etc/*"],
    max_actions_per_minute=30,
    max_risk_score=0.7,
    mode=KillSwitchMode.PAUSE,
)
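The resource patterns read like standard shell globs. Assuming glob semantics (an assumption about Vaultak's matching rules, not documented behavior), the allow/block decision can be sketched with the standard library's `fnmatch`, with the block list taking precedence:

```python
from fnmatch import fnmatch

# Mirrors the profile above; patterns assumed to be shell-style globs.
ALLOWED = ["/tmp/*", "/data/readonly/*"]
BLOCKED = ["prod.*", "*.env", "*.key", "/etc/*"]

def resource_permitted(resource: str) -> bool:
    # A match on the block list wins over any allow-list match.
    if any(fnmatch(resource, pat) for pat in BLOCKED):
        return False
    return any(fnmatch(resource, pat) for pat in ALLOWED)
```

Under these rules `/tmp/scratch.txt` is permitted, while `/etc/passwd` and anything ending in `.env` or `.key` is denied even if it also matches an allow pattern.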
from langchain.agents import AgentExecutor

executor = AgentExecutor(agent=..., tools=[...])

with vt.monitor("langchain-agent"):
    result = executor.invoke({"input": user_query})
Every tool call is now intercepted, risk-scored, and checked against your permission profile before execution.
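Conceptually, this interception is a guard wrapped around every tool invocation. The following stdlib-only sketch shows the idea (it is an illustration of the pattern, not Vaultak's actual internals; `ToolGuard` and its parameters are made up for this example):

```python
import time
from collections import deque

class ToolGuard:
    """Sketch of pre-execution interception: permission check plus
    a sliding-window rate limit, applied before the tool runs."""

    def __init__(self, max_actions_per_minute: int, is_allowed):
        self.window = deque()          # timestamps of recent actions
        self.limit = max_actions_per_minute
        self.is_allowed = is_allowed   # callable(action_type, resource) -> bool

    def wrap(self, action_type, tool_fn):
        def guarded(resource, *args, **kwargs):
            now = time.monotonic()
            # Drop timestamps older than the 60-second window.
            while self.window and now - self.window[0] > 60:
                self.window.popleft()
            if len(self.window) >= self.limit:
                raise PermissionError("rate limit exceeded")
            if not self.is_allowed(action_type, resource):
                raise PermissionError(f"{action_type} on {resource} denied")
            self.window.append(now)
            return tool_fn(resource, *args, **kwargs)
        return guarded
```

The important property is ordering: the checks run before the tool body, so a denied action never executes at all rather than being flagged after the fact.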
import os
import requests

def safe_delete(path: str) -> str:
    # Ask Vaultak for a verdict before touching the file system.
    r = requests.post(
        "https://vaultak.com/api/check",
        headers={"x-api-key": "vtk_..."},
        json={"agent_id": "langchain-agent", "action_type": "file_delete", "resource": path},
    )
    verdict = r.json()
    if verdict["decision"] != "allow":
        return f"Blocked: {verdict['reason']}"
    os.remove(path)
    return f"Deleted {path}"
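The same check-then-act pattern generalizes to any tool via a decorator. In this sketch, `check_action` is a hypothetical stand-in for the HTTP call above, returning a dict like `{"decision": "allow"}` or `{"decision": "block", "reason": ...}`:

```python
import functools

def require_approval(action_type, check_action):
    """Gate any tool function behind an allow/deny verdict.

    check_action(action_type, resource) -> dict is an injected stand-in
    for the remote policy check, so the gating logic stays testable."""
    def decorator(tool_fn):
        @functools.wraps(tool_fn)
        def wrapper(resource, *args, **kwargs):
            verdict = check_action(action_type, resource)
            if verdict["decision"] != "allow":
                return f"Blocked: {verdict.get('reason', 'policy violation')}"
            return tool_fn(resource, *args, **kwargs)
        return wrapper
    return decorator
```

Because the blocked case returns a string instead of raising, the agent sees the denial as an ordinary tool result and can explain it to the user or try a different approach.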
After these steps, your LangChain agent has a declared permission profile, pre-execution blocking, real-time risk scoring, a full audit trail, and automatic pause on violations.