Critical LangChain Vulnerability (CVE-2025-68664) Puts LLM Apps at Risk


A critical security vulnerability in LangChain Core has been discovered that could allow attackers to steal sensitive secrets, manipulate large language model (LLM) responses, and potentially execute malicious code. The flaw highlights serious risks in AI application security, especially when LLM outputs are treated as trusted input.

The vulnerability, tracked as CVE-2025-68664, has been given a CVSS score of 9.3, making it a critical-severity issue. Security researcher Yarden Porat reported the flaw on December 4, 2025, and it has been given the codename “LangGrinch.”

LangChain has released patches to fix the issue, but experts warn that organizations using affected versions should upgrade immediately to reduce the risk of exploitation.

LangChain Core (langchain-core) is a foundational Python package within the LangChain ecosystem. It provides core abstractions and interfaces that developers use to build applications powered by large language models, including chatbots, AI agents, and automation workflows.

Because LangChain Core sits at the heart of many AI-driven systems, a vulnerability in this package can have wide-reaching consequences, especially for enterprises deploying LLMs in production environments.

According to the LangChain maintainers, the issue stems from a serialization injection flaw in the framework’s dumps() and dumpd() functions.

These functions are responsible for serializing LangChain objects, whose output is later consumed by the framework’s deserialization routines. However, they fail to properly escape user-controlled dictionaries that contain a special key named “lc.”

In LangChain, the “lc” key is used internally to mark serialized LangChain objects. When user-provided data includes this key, the framework mistakenly treats it as a trusted LangChain object rather than untrusted input.

As a result, malicious data can be deserialized as a legitimate object, opening the door to multiple attack paths.
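The core of the problem can be sketched in a few lines. The dictionary shapes below mirror LangChain’s documented serialization format (an `"lc"` marker key plus `type`, `id`, and `kwargs` fields), but the class names and the `looks_like_lc_object` check are illustrative stand-ins, not LangChain’s actual implementation:

```python
import json

# A dictionary the framework might legitimately emit for one of its objects:
trusted = {"lc": 1, "type": "constructor",
           "id": ["langchain_core", "prompts", "PromptTemplate"],
           "kwargs": {"template": "Hello {name}"}}

# Attacker-supplied data (e.g. tucked into message metadata) using the same
# marker key. If serialization does not escape user dictionaries, the two are
# indistinguishable after a dumps/loads round trip:
malicious = {"lc": 1, "type": "constructor",
             "id": ["langchain_core", "some", "OtherClass"],
             "kwargs": {"attacker": "controlled"}}

def looks_like_lc_object(d):
    """Naive marker check a deserializer might use (illustrative only)."""
    return isinstance(d, dict) and d.get("lc") == 1 and "type" in d

round_tripped = json.loads(json.dumps(malicious))
print(looks_like_lc_object(trusted))        # True
print(looks_like_lc_object(round_tripped))  # True: user data passes as trusted
```

Because the marker is the only thing distinguishing framework objects from plain data, failing to escape it in user input collapses the trust boundary entirely.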

Porat explained that once an attacker manages to insert an “lc” key into user-controlled data, they can trigger unsafe object instantiation during deserialization.

This attack can occur in common LangChain workflows where data is serialized and later deserialized, such as orchestration loops and streaming operations.

Possible outcomes of exploitation include:

If deserialization is performed with secrets_from_env=True (which was previously enabled by default), attackers could extract sensitive secrets, including API keys and credentials, from environment variables.

The flaw allows attackers to instantiate objects from pre-approved trusted namespaces, such as:

  • langchain_core

  • langchain

  • langchain_community

This can be abused to access unintended functionality.

In some scenarios, the vulnerability could be chained with Jinja2 template rendering, potentially leading to arbitrary code execution, depending on application configuration.
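To see why the `secrets_from_env` outcome is so dangerous, consider a minimal simulation of env-backed secret resolution. This is a hypothetical sketch, not LangChain’s code: the `resolve_secret` helper and the single-string `id` field are assumptions for illustration, and the API key value is fake.

```python
import os

# Pretend the host process holds a real credential:
os.environ["OPENAI_API_KEY"] = "sk-demo-not-real"

def resolve_secret(payload, secrets_from_env):
    """Illustrative: a serialized 'secret' node names an env var to read."""
    if payload.get("type") == "secret" and secrets_from_env:
        return os.environ.get(payload["id"])
    return None

# An injected "lc" object can name ANY environment variable it wants:
injected = {"lc": 1, "type": "secret", "id": "OPENAI_API_KEY"}
leaked = resolve_secret(injected, secrets_from_env=True)
print(leaked)  # the credential's value leaks to whoever crafted the payload
```

With `secrets_from_env=False` (the new default), the same payload resolves to nothing, which is why the patch flipped the default.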

One of the most concerning aspects of this vulnerability is that it can be exploited through prompt injection.

The escaping bug allows attackers to inject malicious object structures into user-controlled fields such as:

  • metadata

  • additional_kwargs

  • response_metadata

These fields are often populated by LLM-generated responses, which means attackers can influence them simply by crafting malicious prompts.
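The injection path can be sketched as follows. The `AIMessage` class and the hand-rolled serializer below are simplified stand-ins (not LangChain’s implementation), but the field name `additional_kwargs` mirrors the ones listed above:

```python
import json

class AIMessage:
    """Stand-in for an LLM response message (illustrative only)."""
    def __init__(self, content, additional_kwargs=None):
        self.content = content
        self.additional_kwargs = additional_kwargs or {}

# The model, steered by a malicious prompt, emitted structured "metadata":
msg = AIMessage(
    content="Sure, here is your summary.",
    additional_kwargs={"lc": 1, "type": "constructor",
                       "id": ["langchain", "Something"], "kwargs": {}},
)

# A vulnerable serializer dumps the field verbatim, without escaping "lc":
wire = json.dumps({"content": msg.content,
                   "additional_kwargs": msg.additional_kwargs})
restored = json.loads(wire)

# The marker survives the round trip, so a naive loader would try to
# instantiate an object from what was supposed to be inert user data.
print(restored["additional_kwargs"]["lc"])  # 1
```

No direct access to the application is needed: the attacker only has to convince the model to emit the right JSON shape.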

“This is a classic example of AI meets traditional security vulnerabilities,” Porat warned. “LLM output should always be treated as untrusted input.”


The LangChain maintainers have released a patch that introduces safer default behavior in the load() and loads() functions.

Key security changes include:

  • Allowlist-based deserialization
    A new allowed_objects parameter lets developers explicitly define which classes can be serialized and deserialized.

  • Jinja2 templates blocked by default
    This reduces the risk of code execution attacks.

  • Automatic secret loading disabled
    The secrets_from_env option is now set to False by default, preventing unintended secret exposure.

These changes significantly reduce the attack surface but require users to update their installations.
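The allowlist idea can be sketched generically. The `safe_load` function and registry below are illustrative, not LangChain’s implementation; only the `allowed_objects` parameter name is taken from the patch notes:

```python
class PromptTemplate:
    """Stand-in for an allowlisted framework class."""
    def __init__(self, template):
        self.template = template

REGISTRY = {"PromptTemplate": PromptTemplate}  # illustrative dispatch table

def safe_load(payload, allowed_objects):
    """Only instantiate classes the caller has explicitly allowlisted."""
    name = payload["id"][-1]
    if name not in allowed_objects:
        raise ValueError(f"deserialization of {name!r} is not allowed")
    return REGISTRY[name](**payload["kwargs"])

ok = safe_load({"lc": 1, "id": ["langchain_core", "PromptTemplate"],
                "kwargs": {"template": "Hi {name}"}},
               allowed_objects=["PromptTemplate"])
print(ok.template)  # Hi {name}

try:
    safe_load({"lc": 1, "id": ["langchain_core", "EvilClass"], "kwargs": {}},
              allowed_objects=["PromptTemplate"])
except ValueError as e:
    print(e)  # deserialization of 'EvilClass' is rejected
```

The key design point is default-deny: anything not explicitly named by the developer is refused, so an injected “lc” payload naming an unexpected class fails before instantiation.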

The following versions are affected by CVE-2025-68664:

  • >= 1.0.0, < 1.2.5 (fixed in 1.2.5)

  • < 0.3.81 (fixed in 0.3.81)

Organizations running these versions should upgrade immediately.
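In practice, upgrading means pinning langchain-core to at least the fixed release for the line you are on. A sketch of the pip commands, using the version numbers listed above:

```shell
# On the 0.3.x line, move to at least 0.3.81:
pip install --upgrade "langchain-core>=0.3.81,<1"

# On the 1.x line, move to at least 1.2.5:
pip install --upgrade "langchain-core>=1.2.5"
```

After upgrading, confirm the installed version with `pip show langchain-core` before redeploying.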

A related serialization injection flaw has also been identified in LangChain.js, tracked as CVE-2025-68665, with a CVSS score of 8.6.

This issue also results from improper handling of objects containing “lc” keys, enabling secret extraction and prompt injection.

Given the critical severity of these vulnerabilities, security teams should:

  • Upgrade LangChain and LangChain Core immediately

  • Treat LLM outputs as untrusted input

  • Review serialization and deserialization workflows

  • Limit allowed objects using allowlists

  • Disable unnecessary features like template rendering

As AI adoption grows, vulnerabilities like LangGrinch serve as a reminder that traditional security principles still apply—even in modern, AI-powered systems.

Failing to address these risks could leave organizations exposed to data leaks, prompt manipulation, and system compromise.
