Google Cloud Security Alert: Vertex AI Data Exposure Risk


A newly discovered security issue in Google Cloud’s Vertex AI platform has raised major concerns for organizations using artificial intelligence (AI) in their cloud environments. Cybersecurity researchers have revealed that this vulnerability could allow attackers to misuse AI agents to access sensitive data, compromise systems, and even expose private internal resources.

The findings, published by Palo Alto Networks Unit 42, highlight how a misconfiguration in Vertex AI’s permission model can turn helpful AI agents into dangerous insider threats.

Vertex AI is Google Cloud’s platform for building and deploying machine learning and AI models. It allows developers to create AI agents that can automate tasks and interact with data. However, researchers found that the platform grants overly broad permissions to certain service agents by default.

The issue is tied to the Per-Project, Per-Product Service Agent (P4SA), a Google-managed identity that is created automatically for certain services. When an AI agent is deployed using Vertex AI’s Agent Development Kit (ADK), this service agent is provisioned with excessive permissions by default.

These permissions go beyond what is necessary, creating a security gap that attackers can exploit.
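To see what that gap looks like in a given project, one can list the roles bound to the Vertex AI service agent. The following gcloud sketch assumes the agent’s standard address format (`service-PROJECT_NUMBER@gcp-sa-aiplatform.iam.gserviceaccount.com`) and an illustrative project ID:

```shell
# Sketch: show every role granted to the Vertex AI service agent.
# "my-project" and the agent address filter are placeholders to adapt.
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:gcp-sa-aiplatform.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
```

Any broad role in the output (for example, project-wide storage access) is a candidate for the over-permissioning the researchers describe.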

According to the research, once an AI agent is deployed using Vertex AI’s Agent Engine, it interacts with Google’s metadata service. During this process, sensitive credentials linked to the service agent can be exposed.

An attacker who gains access to these credentials can:

  • Impersonate the service agent
  • Access the Google Cloud project hosting the AI agent
  • View details about the AI agent’s identity and permissions
  • Perform actions within the cloud environment

This effectively allows attackers to move from the AI agent into the broader cloud infrastructure.
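The exposure path described here runs through the standard Google Compute Engine metadata server, which hands out OAuth access tokens for a workload’s attached identity. As a minimal sketch (no request is actually sent), this is the well-known token endpoint and the required header involved:

```python
import urllib.request

# The GCE metadata endpoint that returns an access token for the
# workload's attached service account (the P4SA, in this scenario).
TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

# The metadata server only answers requests carrying this header,
# a basic guard against naive server-side request forgery.
req = urllib.request.Request(TOKEN_URL, headers={"Metadata-Flavor": "Google"})

print(req.full_url)
# urllib capitalizes only the first letter of stored header names.
print(req.get_header("Metadata-flavor"))
```

Anyone able to run code inside the agent’s runtime can issue this request and receive a usable token for the service agent, which is why the breadth of that identity’s permissions matters so much.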

One of the most serious risks identified is unauthorized access to Google Cloud Storage. Using the exposed credentials, researchers were able to bypass isolation controls and gain read access to all storage buckets within a targeted project.

This means sensitive business data, customer information, and internal files could be exposed without detection.

Such access transforms an AI agent from a productivity tool into a major security liability.
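In gcloud terms, the kind of enumeration the researchers describe looks like the sketch below. The service-agent address is a placeholder, and the `--impersonate-service-account` flag only works for a caller who holds (or has stolen) credentials able to act as that identity:

```shell
# Illustrative only: list every bucket in the project while acting as
# the exposed service agent (address and project ID are placeholders).
gcloud storage ls --project=my-project \
    --impersonate-service-account=service-123456789@gcp-sa-aiplatform.iam.gserviceaccount.com
```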

The issue doesn’t stop at customer data. Researchers also discovered that the compromised credentials could provide insights into Google’s internal infrastructure.

Vertex AI’s Agent Engine runs inside a Google-managed tenant project. When credentials are exposed, they may reveal information about internal storage buckets used by Google itself.

Although direct access to these internal buckets was restricted, the visibility alone could help attackers understand how the platform is structured.

Another critical concern involves Google Cloud’s Artifact Registry, which stores container images used in applications.

The researchers found that the same service agent credentials could be used to access restricted repositories within the Artifact Registry. These repositories contain private container images that are part of Vertex AI’s core systems.

With this access, attackers could:

  • Download sensitive container images
  • Explore proprietary code
  • Analyze internal system components
  • Identify weaknesses for future attacks

Even more concerning, attackers could view additional restricted images beyond those initially exposed in deployment logs.


This vulnerability highlights a fundamental issue in cloud security: over-permissioning. When systems grant more access than necessary, they increase the attack surface.

In this case, the excessive permissions given to AI service agents violate the principle of least privilege (PoLP), a core cybersecurity best practice.

By default, these agents should only have access to the resources they need. Instead, they were given broad permissions that could be misused if compromised.

This creates a scenario where:

  • AI agents can act as insider threats
  • Sensitive data can be silently exfiltrated
  • Cloud environments can be mapped and attacked
  • Proprietary technology can be exposed
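The principle of least privilege can be checked mechanically: compare the roles an identity actually holds against the set its workload genuinely needs. The sketch below is a hypothetical illustration; the role names are real GCP roles, but the “required” set is an assumption for demonstration only:

```python
# Hypothetical least-privilege audit: flag roles granted to a service
# agent beyond what its workload requires.

def excess_roles(granted: set[str], required: set[str]) -> set[str]:
    """Return roles granted beyond what the workload requires."""
    return granted - required

granted = {
    "roles/storage.admin",           # broad: read/write/delete on all buckets
    "roles/artifactregistry.reader",
    "roles/aiplatform.user",
}
required = {
    "roles/aiplatform.user",         # assume the agent only runs inference
}

print(sorted(excess_roles(granted, required)))
```

A non-empty result is exactly the over-permissioning pattern described above, and each surplus role is a candidate for removal.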

Following the disclosure, Google has taken steps to address the issue. The company has updated its documentation to provide clearer guidance on how Vertex AI uses service accounts, permissions, and resources.

Google now recommends several measures:

Instead of relying on default service agents, organizations should create and manage their own service accounts with controlled permissions.
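As a sketch of that first step (project, account name, and role are illustrative assumptions), a dedicated service account with a single narrow role can be created like this:

```shell
# Create a dedicated service account instead of using the default P4SA.
gcloud iam service-accounts create my-agent-sa \
    --project=my-project \
    --display-name="Vertex AI agent (least privilege)"

# Grant it one narrow role rather than broad project-wide access.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"
```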

Ensure that AI agents only have the minimum permissions required to perform their tasks.

Limit the scope of access tokens to reduce the risk of misuse.
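One mechanism Google offers for this is token downscoping via Credential Access Boundaries, where a short-lived token is exchanged for one restricted to named resources and roles. A hedged sketch of the boundary’s general shape (the bucket name and role are placeholders):

```json
{
  "accessBoundary": {
    "accessBoundaryRules": [
      {
        "availableResource": "//storage.googleapis.com/projects/_/buckets/agent-data-bucket",
        "availablePermissions": ["inRole:roles/storage.objectViewer"]
      }
    ]
  }
}
```

A token downscoped this way would be useless for the project-wide bucket enumeration described earlier, even if it leaked.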

Treat AI agent deployments like production code. Conduct proper testing, validation, and security reviews before deployment.

Continuously monitor cloud activity to detect unusual behavior or unauthorized access attempts.
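Cloud Audit Logs make the service agent’s activity visible. As an illustrative sketch (the project number in the agent’s address is a placeholder), recent audit entries attributed to the Vertex AI service agent can be pulled with:

```shell
# Sketch: fetch recent audit-log entries for the Vertex AI service agent.
gcloud logging read \
  'protoPayload.authenticationInfo.principalEmail="service-123456789@gcp-sa-aiplatform.iam.gserviceaccount.com"' \
  --project=my-project --limit=20
```

Unexpected storage or Artifact Registry reads by this identity would be the kind of anomaly worth alerting on.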

This Vertex AI vulnerability serves as a strong reminder that AI systems must be secured just like any other critical infrastructure.

Organizations adopting AI in the cloud should:

  • Review permission settings carefully
  • Avoid default configurations without validation
  • Implement strict access controls
  • Regularly audit AI systems and service accounts

AI brings powerful capabilities, but without proper security controls, it can also introduce new risks.

The discovery of this Vertex AI vulnerability shows how small misconfigurations can lead to major security risks. By exploiting overly broad permissions, attackers can turn AI agents into entry points for deeper cloud compromise.

As AI adoption continues to grow, organizations must prioritize security at every stage, from development to deployment. Proper configuration, continuous monitoring, and adherence to security best practices are essential to prevent such threats.

