CVE-2026-25874: LeRobot RCE Vulnerability Raises Major Security Concerns


A vulnerability in Hugging Face’s open-source robotics platform, LeRobot, is raising serious concerns across the security community. The flaw, tracked as CVE-2026-25874, carries a critical CVSS score of 9.3 and could allow attackers to execute malicious code remotely without authentication.

This issue highlights growing risks in AI-driven systems and reinforces the need for secure coding practices, especially in platforms handling robotics and machine learning workflows.

The vulnerability in LeRobot stems from unsafe data handling practices within its asynchronous inference pipeline. Specifically, the platform deserializes incoming data with Python’s pickle module via pickle.loads(), a method widely known in cybersecurity as unsafe when handling untrusted input.

In this case, the problem is even more critical because the deserialization occurs over unauthenticated gRPC channels without encryption (TLS). This means attackers can potentially send specially crafted payloads to the system and trigger arbitrary code execution.
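The danger can be illustrated with a short, benign sketch. Any object whose __reduce__ method returns a callable will have that callable invoked by pickle.loads() during deserialization; here the harmless os.getcwd stands in for something destructive like os.system:

```python
import os
import pickle


class MaliciousPayload:
    """An attacker-controlled object: __reduce__ tells pickle to call
    an arbitrary importable callable when the bytes are deserialized."""

    def __reduce__(self):
        # A real exploit would return (os.system, ("malicious command",));
        # os.getcwd is a harmless stand-in that still proves code execution.
        return (os.getcwd, ())


# What an attacker would send over the unauthenticated gRPC channel:
wire_bytes = pickle.dumps(MaliciousPayload())

# What the vulnerable server does with it:
result = pickle.loads(wire_bytes)

# The callable ran during deserialization: we get its return value back,
# not a MaliciousPayload instance.
print(result == os.getcwd())  # True
```

No authentication, sandboxing, or special privileges are needed on the attacker's side; constructing the payload is a one-liner once the endpoint is reachable.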

Security researchers have confirmed that an attacker can exploit this flaw via multiple gRPC endpoints such as:

  • SendPolicyInstructions
  • SendObservations
  • GetActions

By abusing these functions, a malicious actor can inject harmful serialized data and gain control over both the server and connected robotic clients.

The flaw affects the PolicyServer component of LeRobot, which plays a key role in AI inference operations. According to findings from Resecurity, the vulnerability is particularly dangerous due to how AI systems are typically deployed.

These systems often run with elevated privileges, allowing access to:

  • Internal enterprise networks
  • Sensitive datasets
  • High-performance computing resources

If exploited, this vulnerability could lead to severe consequences, including:

  • Unauthenticated remote code execution (RCE)
  • Full system compromise of the host machine
  • Unauthorized access to API keys and SSH credentials
  • Theft of proprietary AI models
  • Lateral movement across enterprise networks
  • Service disruption or sabotage of robotic operations

In real-world deployments, especially in industrial or research environments, such attacks could even introduce physical safety risks if robots behave unpredictably.

At its core, the issue is a classic example of insecure deserialization. The pickle format allows arbitrary Python objects to be reconstructed during deserialization. However, this also means attackers can embed malicious instructions within the serialized data.

When the system processes this data using pickle.loads(), it unknowingly executes the embedded code.
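This is not LeRobot’s planned fix, but as a general mitigation sketch, Python’s documented Unpickler.find_class hook can restrict which globals a pickle stream is allowed to resolve, blocking the os.system-style gadgets while still loading plain data:

```python
import io
import os
import pickle

# Modules/names we explicitly trust; everything else is rejected.
SAFE_GLOBALS = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allow-list."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")


def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()


# Plain data still round-trips (built-in containers need no globals):
ok = restricted_loads(pickle.dumps({"obs": [1, 2, 3]}))


# A gadget payload is rejected instead of executed:
class Gadget:
    def __reduce__(self):
        return (os.getcwd, ())


try:
    restricted_loads(pickle.dumps(Gadget()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

Even with an allow-list, avoiding pickle entirely for network input remains the safer design; the hook only narrows the attack surface.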

This risk becomes critical when:

  • Input data is not authenticated
  • Communication channels are not encrypted
  • The system blindly trusts incoming payloads

Despite industry awareness of this issue, the unsafe practice persisted in LeRobot’s implementation.



Interestingly, the vulnerability was not entirely unknown. It was independently reported in December 2025 by a researcher using the alias “chenpinji.” Later, security researcher Valentin Lobstein validated and publicly detailed the issue.

Even more ironic is the connection to Safetensors, a safer data serialization format developed by Hugging Face itself to replace pickle due to its security risks.

Despite this, LeRobot continued using pickle in a way that exposed systems to attack. Reports suggest that warnings from security tools were suppressed using comments like # nosec, which prevented detection during development.

As of now, the vulnerability remains unpatched in LeRobot version 0.4.3. However, the development team has acknowledged the issue and confirmed that a fix is planned for version 0.6.0.

According to project maintainers, the affected code requires significant refactoring. They also noted that LeRobot was initially designed as a research and prototyping tool, where security was not the primary focus.

However, with increasing adoption in production environments, the importance of security is now being recognized.

This incident serves as a critical reminder for developers, security teams, and organizations working with AI and robotics platforms:

Never use pickle or similar formats for untrusted input. Opt for safer alternatives like JSON or Safetensors.
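For plain observation or action data, a data-only format removes the code-execution channel entirely. A minimal sketch using the standard library’s json module (the field names here are illustrative, not LeRobot’s actual schema):

```python
import json

# Structured data only: no classes, no callables, no code objects.
observation = {"joint_angles": [0.12, -0.4, 1.57], "gripper": "open"}

wire = json.dumps(observation).encode("utf-8")   # serialize for the wire
decoded = json.loads(wire.decode("utf-8"))       # deserialize on receipt

# json.loads can only ever reconstruct dicts, lists, strings, numbers,
# booleans, and None -- it can never execute attacker-supplied code.
print(decoded == observation)  # True
```

For tensor payloads, Safetensors plays the same role: it reconstructs raw array data and metadata only, with no object graph and no executable content.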

Always secure communication channels using TLS and implement proper authentication mechanisms.
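TLS setup is deployment-specific, but message authentication on its own can be sketched with the standard library: sign each payload with a shared secret and verify the tag before parsing anything. The key handling and helper names below are illustrative assumptions, not LeRobot code:

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # illustrative: distribute via a real secret store


def sign(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Prepend an HMAC-SHA256 tag so the receiver can verify origin."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload


def verify(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Check the tag with a constant-time comparison before touching
    the payload; raise if it does not match."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: payload rejected")
    return payload


msg = sign(b'{"cmd": "get_actions"}')
authentic = verify(msg)          # authentic message passes verification

tampered = msg[:-1] + b"!"       # attacker flips one byte in transit
try:
    verify(tampered)
    accepted = True
except ValueError:
    accepted = False             # tampered message is rejected
```

The key point is ordering: verification happens before any deserialization, so malformed or forged payloads are discarded without ever reaching a parser.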

Do not ignore or suppress security warnings during development. Tools flag issues for a reason.

AI and robotics platforms often have deep system access, making them prime targets for attackers.

The CVE-2026-25874 flaw in LeRobot underscores a broader issue in modern software development: security often lags behind innovation. As AI systems continue to expand into real-world applications, vulnerabilities like this could have far-reaching consequences.

Organizations using LeRobot or similar platforms should take immediate precautions, restrict network exposure, and monitor for unusual activity until a patch is available.

