LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks

Cybersecurity researchers have disclosed three security vulnerabilities impacting LangChain and LangGraph that, if successfully exploited, could expose filesystem data, environment secrets, and conversation history. Why care? LangChain and LangGraph are widely used open-source frameworks for building LLM-powered applications. So if your AI stack leans on LangChain or LangGraph, you probably want to take note.

Here are the three specific vulnerabilities:

  • CVE-2026-34070 (CVSS score: 7.5) – A path traversal vulnerability in LangChain (“langchain_core/prompts/loading.py”) that allows access to arbitrary files without any validation via its prompt-loading API by supplying a specially crafted prompt template.
  • CVE-2025-68664 (CVSS score: 9.3) – A deserialization of untrusted data vulnerability in LangChain that leaks API keys and environment secrets by passing as input a data structure that tricks the application into interpreting it as an already serialized LangChain object rather than regular user data.
  • CVE-2025-67644 (CVSS score: 7.3) – An SQL injection vulnerability in LangGraph's SQLite checkpoint implementation that allows an attacker to manipulate SQL queries through metadata filter keys and run arbitrary SQL queries against the database.
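To make the third flaw concrete, here is a minimal sketch of the vulnerability class behind the SQL injection bug: interpolating caller-supplied metadata filter *keys* directly into query text. The function and table names below are hypothetical illustrations, not LangGraph's actual code.

```python
# Illustrative sketch of injecting SQL through metadata filter keys.
# "checkpoints" and the builder functions are hypothetical names.

def build_filter_unsafe(metadata: dict) -> str:
    # Vulnerable pattern: values are parameterized, but the *keys* are
    # concatenated straight into the WHERE clause.
    clauses = " AND ".join(
        f"json_extract(metadata, '$.{k}') = ?" for k in metadata
    )
    return f"SELECT * FROM checkpoints WHERE {clauses}"

def build_filter_safe(metadata: dict) -> str:
    # One possible mitigation: allow only alphanumeric/underscore keys
    # before interpolation; values are still bound as parameters.
    for k in metadata:
        if not k.replace("_", "").isalnum():
            raise ValueError(f"invalid metadata key: {k!r}")
    clauses = " AND ".join(
        f"json_extract(metadata, '$.{k}') = ?" for k in metadata
    )
    return f"SELECT * FROM checkpoints WHERE {clauses}"

# A malicious key smuggles SQL into the query text verbatim:
evil = {"x') = 1 OR 1=1 --": "ignored"}
print(build_filter_unsafe(evil))
```

With the unsafe builder, the attacker-controlled key lands inside the SQL string unmodified, turning an innocuous filter into an arbitrary query; the safe variant rejects any key that is not a plain identifier.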

Check out a detailed post on this topic from The Hacker News here.
