▓▒░ USE-CASES / VERTEX-AI
You trust Google's infrastructure.
But who secures what you build on top of it?
Vertex AI spans the entire ML lifecycle - training, fine-tuning, deployment, and agent building. Each stage is an attack surface. Your GCP project's security depends on more than IAM.
model training · fine-tuning · Function Calling · Vertex AI Agents · GCP integration
▓▒░ SUPPLY CHAIN
Your agent is only as secure as its weakest link
Every layer in your Vertex AI stack is an attack surface.
▓▒░ ATTACK SURFACE
The attack surface IAM doesn't cover
GCP IAM is a start. It's not enough.
Training data poisoning through Cloud Storage
Vertex AI fine-tuning reads training data from GCS buckets. An attacker who can modify objects in the training bucket - through a compromised service account, misconfigured IAM, or supply chain attack - can inject poisoned examples that create model backdoors. The model passes all standard evaluations but behaves differently on trigger inputs.
Function Calling privilege escalation
Vertex AI agents with Function Calling can invoke BigQuery, Cloud SQL, and custom APIs. The function definitions often grant broader access than the agent needs. An attacker manipulating the agent's reasoning can escalate from "read recent orders" to "export all customer data" - because the BigQuery IAM role allows it.
Cross-project model serving leakage
Vertex AI endpoints can serve models across GCP projects. When multiple teams share a model endpoint, inference data from one project can influence responses in another through shared model state. Data classification boundaries don't map to GCP project boundaries.
▓▒░ SOLUTION
Scan it. Guard it. Govern it.
Three capabilities purpose-built for AI infrastructure.
Red team your Vertex AI agents before deployment
75+ vulnerability checks purpose-built for Vertex AI. Test for training data poisoning, Function Calling escalation, endpoint exposure, and cross-project leakage - all mapped to OWASP Agentic Top 10 and MITRE ATLAS with GCP-specific attack vectors.
Runtime guardrails for every prediction call
Vertex AI's built-in safety filters are a starting point. Rogue adds behavioral analysis, Function Calling monitoring, and endpoint protection on every prediction call - blocking attacks that bypass native controls.
Continuous posture for your Vertex AI estate
IAM policies drift. Endpoints get reconfigured. Training pipelines change. Rogue continuously monitors your Vertex AI deployment's security posture and alerts on configuration changes that introduce risk.
Deploys in your GCP project. VPC Service Controls compatible. Full Cloud Audit Logging integration. Learn more →
You trust Google. Verify what you build.
Red team your Vertex AI agents before they serve a single prediction.