March 24, 2026 by Rogue Security Research
Tags: supply-chain, pypi, litellm, credentials, kubernetes, MCP, agentic-security

LiteLLM Supply Chain Attack: PyPI Compromise Targets AI Infrastructure

ACTIVE SUPPLY CHAIN COMPROMISE

LiteLLM versions 1.82.7 and 1.82.8 on PyPI contain credential-stealing malware. If you installed or upgraded LiteLLM today (March 24, 2026), your environment may be compromised. See remediation steps below.

On March 24, 2026, malicious versions of LiteLLM—one of the most popular LLM proxy libraries—were published to PyPI. The attack bypassed normal release processes entirely: no corresponding tags exist on GitHub. The maintainer’s account appears to be fully compromised, with the related GitHub issue closed as “not planned” and flooded with bot spam to suppress discussion.

This is not a theoretical vulnerability. This is active exploitation of AI infrastructure at scale.

Compromised versions: 2 · Attack stages: 4 · GitHub tags: 0

How the Attack Works

The malware uses a .pth file (litellm_init.pth) that Python executes automatically on interpreter startup—no import required. Once installed, every Python process in the environment triggers the payload.
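The mechanism is easy to reproduce. A minimal, harmless sketch (using a throwaway directory instead of site-packages, and an environment variable instead of a payload): any line in a .pth file that begins with `import` is executed when the directory is processed as a site directory.

```python
import os
import site
import tempfile

# Create a throwaway "site" directory containing a .pth file whose
# payload line starts with "import" -- Python exec()s such lines.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# site.addsitedir() applies the same .pth processing that real
# site-packages directories receive at interpreter startup.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # → 1
```

In the real attack the exec'd line launched the collection payload, which is why merely having the package installed, never imported, was enough to trigger it.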

Stage 1: Collection

A Python script harvests sensitive files: SSH private keys, .env files, AWS/GCP/Azure credentials, Kubernetes configs, database passwords, .gitconfig, shell history, and crypto wallet files. It also runs commands to dump environment variables and query cloud metadata endpoints (IMDS, container credentials).
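A quick way to gauge your exposure is to check which of these file classes exist on a given machine. The paths below are common defaults chosen for illustration, not the malware's exact target list:

```python
from pathlib import Path

# Illustrative audit: which of the file classes the collector targets
# are present here? Paths are common defaults, not the malware's list.
CANDIDATES = [
    "~/.ssh/id_rsa", "~/.ssh/id_ed25519",   # SSH private keys
    "~/.aws/credentials",                    # AWS
    "~/.config/gcloud/credentials.db",       # GCP
    "~/.azure/accessTokens.json",            # Azure
    "~/.kube/config",                        # Kubernetes
    "~/.gitconfig",                          # git identity/credentials
    "~/.bash_history", "~/.zsh_history",     # shell history
]

exposed = [p for p in CANDIDATES if Path(p).expanduser().exists()]
for p in exposed:
    print("present:", p)
```

Anything this prints is what a single compromised transitive dependency would have shipped off the machine.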

Stage 2: Exfiltration

Collected data is bundled into a tar archive, encrypted with AES-256-CBC using a session key wrapped by a hardcoded 4096-bit RSA public key, and POSTed to models.litellm.cloud, a domain that is not part of legitimate LiteLLM infrastructure.
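The bundling stage can be sketched with the standard library alone (encryption and upload deliberately stubbed out; in the real payload the archive is AES-encrypted, the key RSA-wrapped, and the result POSTed):

```python
import io
import tarfile

def bundle(files: dict[str, bytes]) -> bytes:
    """Pack harvested files into an in-memory gzipped tar archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# In the malware, the steps after this are: encrypt the archive with
# AES-256-CBC, wrap the AES key with the hardcoded RSA public key,
# and POST the result to models.litellm.cloud. Stubbed here.
archive = bundle({"env.txt": b"AWS_SECRET=...", "kube/config": b"..."})
print(len(archive) > 0)  # → True
```

Because everything happens in memory, nothing suspicious is written to disk before the network call, which is part of why runtime network monitoring matters.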

Stage 3: Lateral Movement

If a Kubernetes service account token is present, the malware reads all cluster secrets across all namespaces, then attempts to create a privileged alpine:latest pod on every node in the kube-system namespace. Each pod mounts the host filesystem for full access.
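This pattern is detectable from pod specs alone. A sketch that flags privileged pods mounting the host filesystem, of the kind you would feed with the output of `kubectl get pods -A -o json` (a sample document is inlined here for illustration):

```python
import json

# Sample of the JSON shape kubectl emits; in practice, pipe in
# `kubectl get pods -A -o json` instead.
pods = json.loads("""{"items": [{
  "metadata": {"name": "node-setup-abc", "namespace": "kube-system"},
  "spec": {
    "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    "containers": [{"name": "c", "image": "alpine:latest",
                    "securityContext": {"privileged": true}}]}}]}""")

def suspicious(pod: dict) -> bool:
    """Privileged container plus a hostPath volume = full node access."""
    spec = pod["spec"]
    priv = any((c.get("securityContext") or {}).get("privileged")
               for c in spec["containers"])
    hostfs = any("hostPath" in v for v in spec.get("volumes", []))
    return priv and hostfs

flagged = [p["metadata"]["name"] for p in pods["items"] if suspicious(p)]
print(flagged)  # → ['node-setup-abc']
```

Any hit in kube-system that you did not create yourself deserves immediate investigation.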

Stage 4: Persistence

A backdoor is installed at ~/.config/sysmon/sysmon.py with a systemd user service for automatic restart. On Kubernetes nodes, the same persistence mechanism is deployed via the privileged pods.

Discovery: A Bug in the Malware

The attack was discovered when the package was pulled as a transitive dependency by an MCP plugin. The .pth launcher spawns a child Python process, but because .pth files trigger on every interpreter startup, the child re-triggers the same .pth—creating an exponential fork bomb that crashed the host machine.

The fork bomb is actually a bug in the malware. Without it, the attack might have gone unnoticed for much longer.
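The re-trigger loop can be simulated safely by adding the depth guard the malware lacked. This sketch uses a plain script standing in for the .pth launcher; the chain of interpreter startups is the same:

```python
import os
import subprocess
import sys
import tempfile

# Each interpreter "startup" spawns a child interpreter, which starts
# up and does the same. The malware's .pth had no guard, so processes
# multiplied until the host crashed; MAX_DEPTH is our stand-in guard.
script = (
    "import os, subprocess, sys\n"
    "depth = int(os.environ.get('DEPTH', '0'))\n"
    "print('interpreter started at depth', depth)\n"
    "if depth < int(os.environ['MAX_DEPTH']):\n"
    "    subprocess.run([sys.executable, sys.argv[0]],\n"
    "                   env={**os.environ, 'DEPTH': str(depth + 1)})\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)

out = subprocess.run(
    [sys.executable, f.name],
    env={**os.environ, "MAX_DEPTH": "3"},
    capture_output=True, text=True,
)
print(out.stdout.count("interpreter started"))  # → 4
```

Remove the guard and every startup begets another startup indefinitely, which is exactly the crash that exposed the campaign.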

Are You Affected?

# Check installed version
pip show litellm | grep Version

# Check uv cache for the malicious file
find ~/.cache/uv -name "litellm_init.pth" 2>/dev/null

# Check pip cache
pip cache list | grep litellm

# Check for the persistence backdoor
ls -la ~/.config/sysmon/sysmon.py 2>/dev/null
ls -la ~/.config/systemd/user/sysmon.service 2>/dev/null

# For Kubernetes environments: look for unauthorized pods and baseline secret access
kubectl get pods -n kube-system | grep node-setup
kubectl get secrets --all-namespaces -o name | wc -l

Immediate Remediation

Response Checklist
  • Remove LiteLLM 1.82.7 or 1.82.8 from all environments
  • Purge package manager caches: pip cache purge and rm -rf ~/.cache/uv
  • Remove persistence: delete ~/.config/sysmon/ and ~/.config/systemd/user/sysmon.service
  • Audit Kubernetes for unauthorized pods in kube-system matching node-setup-*
  • Rotate ALL credentials: SSH keys, cloud provider creds, K8s configs, API keys, database passwords
  • Review CI/CD pipelines that may have cached the compromised package

The Bigger Picture

LiteLLM is used by thousands of AI applications. It’s pulled as a transitive dependency by MCP plugins, LangChain integrations, and countless internal tools. A single compromised package cascades through the entire AI supply chain.

This attack demonstrates several uncomfortable truths:

  1. PyPI trust is fragile. Packages can be uploaded directly without matching GitHub releases. Maintainer account compromise = total package compromise.

  2. Transitive dependencies are invisible attack surface. The affected machine wasn’t running LiteLLM directly—it was pulled by an MCP plugin.

  3. AI infrastructure is uniquely valuable. The malware specifically targets credentials that would give access to cloud resources, Kubernetes clusters, and sensitive data—exactly what AI systems need to function.

  4. Detection relies on luck. This attack was caught because of a bug in the malware. How many similar attacks have succeeded silently?

Supply Chain Security is AI Security

At Rogue, we’ve been warning about AI supply chain risks since our founding. The attack surface isn’t just your model or your prompts—it’s every dependency your AI system touches.

If you’re building or deploying AI systems, you need:

  • Dependency pinning and lockfiles to prevent automatic upgrades to compromised versions
  • Artifact verification to detect packages that don’t match their source repositories
  • Runtime monitoring to catch unexpected network connections and file access
  • Credential isolation so a single compromise doesn’t cascade to your entire infrastructure
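Hash pinning is the cheapest of these to adopt. With pip's hash-checking mode (`--require-hashes`), an install fails if the artifact on PyPI no longer matches the recorded digest. A sketch of the requirements entry, using an assumed pre-compromise version and a placeholder rather than a real digest:

```text
# requirements.txt -- install with: pip install --require-hashes -r requirements.txt
# Version and digest below are illustrative placeholders.
litellm==1.82.6 \
    --hash=sha256:<digest recorded at pin time>
```

A maliciously re-uploaded or newly published version would then fail the install instead of silently entering your environment.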

The LiteLLM attack won’t be the last. The question is whether you’ll catch the next one before it catches you.