April 13, 2026 by Rogue Security Research
Tags: CVE · RCE · AI-toolchain · developer-tools · WebSocket · OWASP · agentic-security

Root in One WebSocket: Marimo CVE-2026-39987 and the AI Notebook Risk

CVE-2026-39987 | AI developer tooling

Marimo, a reactive Python notebook used in modern AI development workflows, shipped an integrated terminal WebSocket endpoint that accepted connections without authentication. A single WebSocket handshake could yield an interactive shell.

  • CVSS v4.0 score: 9.3
  • Authentication required: none
  • Vulnerable endpoint: /terminal/ws
  • Impact: interactive PTY shell

Why this matters (even if you do not run notebooks in production)

Most organizations still classify notebooks as “developer tools” and assume the worst outcome is leaked model code or a few prompts.

That classification is outdated.

A modern AI notebook host often contains:

  • Cloud provider credentials for training and storage
  • LLM API keys with billing authority
  • Internal tokens for data access, feature flags, and CI systems
  • Shell history, dotfiles, and environment dumps that quietly accumulate sensitive material
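A quick way to see how much of this material is sitting on a given host is to walk the usual credential locations. This is a minimal audit sketch: the path list below is illustrative, not exhaustive, and the helper name is ours.

```python
from pathlib import Path

# Common places credentials accumulate on a notebook host.
# Illustrative defaults only; extend for your own cloud/CI stack.
CANDIDATE_SECRET_PATHS = [
    ".env",
    ".aws/credentials",
    ".config/gcloud/application_default_credentials.json",
    ".netrc",
    ".bash_history",
    ".zsh_history",
]

def find_resident_secrets(home: str) -> list[str]:
    """Return the candidate secret files that actually exist under `home`."""
    base = Path(home)
    return [str(base / rel) for rel in CANDIDATE_SECRET_PATHS
            if (base / rel).is_file()]
```

Running this across your GPU servers and "demo" VMs usually answers the inventory question faster than any policy review.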

When the notebook UI itself offers an integrated terminal, you have to treat it like a remote administration interface.

The pattern

AI developer tooling is becoming agentic infrastructure. It runs where secrets live, and it is increasingly reachable from shared GPU servers, cloud VMs, containers, and “quick demo” deployments.

The bug, in one sentence

Marimo correctly enforced authentication on its primary WebSocket endpoint, but its integrated terminal WebSocket endpoint accepted connections without an auth check, granting an interactive PTY shell.
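Because the flaw is a missing gate on the handshake itself, exposure can be probed defensively with nothing but the standard library. This is a sketch, not a scanner: the /terminal/ws path comes from the advisory, and treating "101 without credentials" as vulnerable (versus 401/403 on a patched server) is our assumption about server behavior.

```python
import base64
import os
import socket

def build_ws_upgrade(host: str, path: str) -> bytes:
    """Build a minimal RFC 6455 client handshake for `path`, with no auth headers."""
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    ).encode()

def accepts_unauthenticated_ws(host: str, port: int,
                               path: str = "/terminal/ws",
                               timeout: float = 3.0) -> bool:
    """True if the server completes the upgrade (HTTP 101) for an
    anonymous client. A gated endpoint should answer 401/403 instead."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_ws_upgrade(host, path))
        status_line = s.recv(1024).split(b"\r\n", 1)[0]
    return b" 101 " in status_line
```

Only run this against hosts you own or are authorized to test.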

Attack flow (simplified)
  [ATK] Network reachability: any path to the instance is enough
    ->
  [WS] /terminal/ws: no authentication gate
    ->
  [PTY] Interactive shell: runs with Marimo process privileges
    ->
  [KEY] Credential harvest: .env, cloud SDKs, shell history

The uncomfortable part: speed

Researchers monitoring exploitation reported attempts within hours of public disclosure.

This is the operational reality now:

  • Attackers do not need a public PoC
  • A precise advisory is often enough
  • “Dev” environments tend to be less monitored and less segmented

What makes AI notebook compromises uniquely dangerous

[RISK-01] Credential density

Notebooks are where cloud, data, and model secrets concentrate because it is convenient for experiments.

[RISK-02] Privilege reality

Containers frequently run as root. GPU hosts often have broad filesystem access. Your “lab” becomes your foothold.

[RISK-03] Trust shortcuts

Teams expose instances “temporarily” for collaboration. That often means public ingress with a thin auth layer.

How this maps to OWASP Agentic Top 10 (2026)

This is not only a classic CWE-306 story. In an agentic environment it chains into agentic failure modes:

  • ASI05 (Unexpected Code Execution): the notebook becomes the execution engine
  • ASI03 (Identity and Privilege Abuse): stolen tokens turn into lateral movement
  • ASI04 (Agentic Supply Chain): notebook images, extensions, and AI assistants become dependencies

A useful mental model

Treat any integrated terminal in an AI product as a high-risk tool. If it is reachable, it is an attack surface. If it is unauthenticated, it is an incident.

Practical checklist for this week

Marimo and similar notebook hosts: immediate controls

[PATCH] Upgrade Marimo to a patched release (0.23.0 or later). Do not rely on perimeter controls alone.
[EXPOSURE] Inventory notebook instances on shared GPU servers, cloud VMs, and “demo” subdomains. Assume you missed some.
[NETWORK] Block public ingress by default. Require VPN, SSO, or a bastion. “Temporary” exposure should still be gated.
[RUNTIME] Monitor for interactive shell sessions and unexpected outbound traffic from notebook hosts.
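One concrete runtime signal for this class of bug is a shell spawned as a child of the notebook server. The sketch below works on pre-collected process records so it stays self-contained; in production those records would come from /proc or an EDR agent, and the shell and server names are assumptions you should tune.

```python
SHELLS = {"bash", "sh", "zsh", "dash", "fish"}

def flag_shell_children(procs: list[dict]) -> list[dict]:
    """Flag shell processes whose direct parent is a marimo server process.

    Each record is {"pid": int, "ppid": int, "name": str}.
    """
    by_pid = {p["pid"]: p for p in procs}
    suspicious = []
    for p in procs:
        parent = by_pid.get(p["ppid"])
        if p["name"] in SHELLS and parent and "marimo" in parent["name"]:
            suspicious.append(p)
    return suspicious
```

A legitimate user opening the integrated terminal produces the same signal, so treat hits as alerts to triage, not automatic blocks.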
[SECRETS] Rotate tokens that might have lived on notebook hosts: cloud keys, LLM API keys, data warehouse creds, CI tokens.
[CONTAINMENT] Run notebook services as non-root. Constrain filesystem access. Separate training credentials from general dev sessions.
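The containment item can be turned into a startup gate. This is an illustrative policy check, not a Marimo feature: the function, thresholds, and mount allowlist are all ours.

```python
def containment_findings(uid: int, mounts: list[str],
                         allowed_mounts: set[str]) -> list[str]:
    """Return containment gaps for a notebook container.

    `uid` is the effective UID the service runs as; `mounts` are the
    host paths exposed to the container. Policy choices here are
    illustrative, not Marimo defaults.
    """
    findings = []
    if uid == 0:
        findings.append("service runs as root; use a dedicated non-root user")
    for m in mounts:
        if m not in allowed_mounts:
            findings.append(f"unexpected mount exposed to notebook: {m}")
    return findings
```

Wiring a check like this into the container entrypoint (refuse to start on any finding) makes the "lab becomes foothold" failure mode loud instead of silent.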

Bottom line

This incident is not “just a notebook CVE”.

It is a reminder that AI teams are building new, credential-rich remote surfaces faster than security teams can inventory them. The fastest wins are boring: patch, segment, and assume your AI toolchain is production-adjacent.