kubectl get secrets -o json | jq '.items[].data | keys'

Every secret in your cluster is one kubectl away from anyone with RBAC read access. They're base64-encoded in etcd — encoded, not encrypted (encryption at rest is opt-in, and the provider typically holds the keys anyway). Your cloud provider has root access to that etcd. And under the CLOUD Act and FISA Section 702, it can be legally compelled to hand it all over without telling you.

CloudTaser makes that command return nothing.

The 30-Second Pitch

CloudTaser is a Helm chart. You install it, add two annotations to your pod spec, and your secrets stop going through Kubernetes entirely. Instead, they go straight from Vault/OpenBao into your process's environment variables at startup — via a mutating webhook that rewrites your entrypoint.

No sidecar. No init container writing to shared volumes. No code changes. No SDK.

Your app reads os.Getenv("DB_PASSWORD") exactly like it does today. The only difference: that value was never stored in etcd, never touched a Kubernetes Secret object, and never hit disk.
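To make the mechanism concrete, here is a sketch of what the mutation might look like on a container spec. The wrapper path is illustrative, not the chart's actual default:

```yaml
# Before mutation — your original container spec
containers:
  - name: app
    image: myapp:1.4
    command: ["/app/server"]

# After mutation (conceptually) — the webhook prepends a wrapper that
# authenticates to vault, fetches the annotated secrets, exports them
# as env vars, then exec()s your original entrypoint.
# "/cloudtaser/wrapper" is a hypothetical path.
containers:
  - name: app
    image: myapp:1.4
    command: ["/cloudtaser/wrapper", "--", "/app/server"]
```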

Install

helm install cloudtaser oci://registry.cloudtaser.io/charts/cloudtaser

Annotate your deployment

metadata:
  annotations:
    cloudtaser.io/inject: "true"
    cloudtaser.io/secrets: "secret/data/myapp"

That's it. Next pod rollout picks it up. Existing pods are unaffected until restarted.
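In a Deployment, the annotations need to reach the pod spec the webhook actually sees — a minimal sketch, assuming they belong on the pod template rather than the Deployment's own metadata:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        cloudtaser.io/inject: "true"
        cloudtaser.io/secrets: "secret/data/myapp"
    spec:
      containers:
        - name: app
          image: myapp:1.4   # illustrative image
```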

Why Should I Care?

Platform Engineer

  • Stop managing K8s Secret objects and their RBAC
  • No more kubectl create secret in CI/CD pipelines
  • Secret rotation without pod restarts (lease renewal)
  • One vault, all clusters — no per-cluster secret sync

SRE / On-Call

  • Webhook failure = pods start without injection (fail-open, configurable)
  • Vault outage = cached secrets + grace period, no pod crashes
  • eBPF agent shows exactly which process accessed which secret, when
  • No more "who rotated the DB password and didn't tell anyone"

Security / Compliance

  • Secrets never in etcd = nothing for the cloud provider to hand over
  • Full audit trail: vault access logs + eBPF syscall monitoring
  • GDPR Transfer Impact Assessments get far simpler — secrets never transit the cloud provider
  • /proc/pid/environ reads detected and blocked by eBPF

How Is This Different From...?

You've seen secrets management tools. Here's where CloudTaser sits:

| Tool | What it does | Secrets in etcd? | Code changes? | Runtime protection? |
|---|---|---|---|---|
| K8s Secrets | Base64 in etcd | Yes | No | No |
| Sealed Secrets | Encrypted at rest, decrypted into K8s Secret | Yes (after unseal) | No | No |
| External Secrets Operator | Syncs external vault → K8s Secret | Yes (synced copy) | No | No |
| vault-agent sidecar | Sidecar writes secrets to shared volume | No | File reads | No |
| CSI Secret Store | Mounts secrets as files via CSI driver | Optional sync | File reads | No |
| CloudTaser | Webhook injects wrapper, fetches from vault at startup | No | No | eBPF |

The key difference

External Secrets Operator and Sealed Secrets still end up as Kubernetes Secret objects. They solve the "secrets in Git" problem but not the "secrets in etcd" problem. CloudTaser skips the Secret object entirely — secrets go from vault to process memory, period.

ESO gets secrets out of Git. CloudTaser gets secrets out of Kubernetes.

What about vault-agent sidecar?

vault-agent avoids etcd, but writes secrets to a shared tmpfs volume. Anyone who can kubectl exec into the pod can cat /vault/secrets/db-password and read them. No runtime protection. CloudTaser's eBPF agent detects and blocks attempts to read /proc/pid/environ — the secret is in the process but can't be read from outside it.

Booth Demo (3 minutes)

What you'd see at our booth:

1. Deploy a standard Postgres + app stack

   kubectl apply -f demo/postgres-app.yaml — normal deployment, secrets in K8s Secrets

2. Show the problem

   kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d — there's your password, readable by any cluster-admin

3. Install CloudTaser

   helm install cloudtaser oci://registry.cloudtaser.io/charts/cloudtaser --set vault.addr=https://vault.eu.demo

4. Add annotations, delete the K8s Secret, rollout restart

   kubectl annotate deploy/myapp cloudtaser.io/inject=true cloudtaser.io/secrets=secret/data/db-creds
   kubectl delete secret db-creds && kubectl rollout restart deploy/myapp

5. Prove it works

   kubectl get secrets — nothing there
   kubectl exec deploy/myapp -- env | grep DB_PASSWORD — secret is in the process, delivered from vault
   App connects to Postgres successfully

6. Try to steal it

   kubectl exec deploy/myapp -- cat /proc/1/environ — blocked by eBPF agent, audit event generated
   The secret is in the process but can't be read from outside it

What Changes In Your Stack

| Layer | Before CloudTaser | After CloudTaser |
|---|---|---|
| Application code | os.Getenv("DB_PASSWORD") | os.Getenv("DB_PASSWORD") — identical |
| Dockerfile | No change | No change |
| Deployment YAML | envFrom: secretRef: db-creds | 2 annotations, remove envFrom |
| CI/CD | kubectl create secret | vault kv put (or keep existing vault workflow) |
| Secret storage | K8s Secrets in etcd | Vault/OpenBao (self-hosted or managed) |
| RBAC | Secret read permissions per namespace | Vault policies per service account |

Failure Modes (The Stuff You Actually Ask About)

What if the webhook is down?

Configurable: fail-open (pods start without injection, run without secrets — app decides what to do) or fail-closed (pods don't start). Default is fail-open with an alert. Running pods are unaffected — the webhook only acts on pod creation.
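In Kubernetes terms, fail-open vs. fail-closed maps to the admission webhook's failurePolicy. A sketch of the relevant fragment — the webhook, service, and path names here are illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: cloudtaser-injector        # illustrative name
webhooks:
  - name: inject.cloudtaser.io
    failurePolicy: Ignore          # fail-open: pods start uninjected
    # failurePolicy: Fail          # fail-closed: pod creation is rejected
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: cloudtaser-webhook
        namespace: cloudtaser
        path: /mutate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Because the rule only matches CREATE on pods, already-running pods are never touched — exactly the behavior described above.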

What if Vault/OpenBao is unreachable?

Three-tier cache: hot in-memory cache → sealed local cache → grace period. Running pods keep working with cached secrets. New pods retry with exponential backoff. Configurable timeout before giving up. No pod crashes from transient vault outages.
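As hypothetical Helm values — every key name below is invented for illustration and is not the chart's actual schema — the three tiers and retry behavior might be tuned like this:

```yaml
# Illustrative values.yaml fragment (hypothetical keys)
cache:
  memoryTTL: 5m        # tier 1: hot in-memory cache
  sealedCacheTTL: 1h   # tier 2: sealed on-node cache
  gracePeriod: 24h     # tier 3: keep serving last-known-good secrets
retry:
  initialBackoff: 1s   # new pods retry vault with exponential backoff
  maxBackoff: 30s
  timeout: 5m          # give up on a new pod's injection after this
```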

What if someone deletes the webhook?

Running pods are unaffected (secrets are already in process memory). New pods start normally without injection. The eBPF agent detects the missing webhook and alerts. Reinstall via Helm to restore. This is a security event, not a data loss event.

What about startup latency?

The wrapper adds one vault API call at pod startup. Typical overhead: 50–200ms depending on network latency to vault. After startup, zero overhead — the wrapper fork+execs and gets out of the way. No sidecar, no proxy, no ongoing interception.

What about multi-container pods?

Annotate cloudtaser.io/containers: "app,worker" to specify which containers get injection. Init containers and sidecars are left alone. Each container gets its own wrapper instance with independent vault authentication.
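A sketch of a pod where only two of three containers get injection — container names and images are illustrative:

```yaml
metadata:
  annotations:
    cloudtaser.io/inject: "true"
    cloudtaser.io/secrets: "secret/data/myapp"
    cloudtaser.io/containers: "app,worker"   # log-shipper is left alone
spec:
  containers:
    - name: app
      image: myapp:1.4
    - name: worker
      image: myworker:1.4
    - name: log-shipper
      image: fluent-bit:2.2
```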

Can I use my own Vault instead of your SaaS?

Yes. CloudTaser works with any Vault-compatible API: HashiCorp Vault, OpenBao, HCP Vault. Point the Helm chart at your vault address. The managed SaaS is optional — it's for teams that don't want to run vault themselves.
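Pointing the chart at your own vault comes down to the vault address. Only vault.addr appears elsewhere in this document; the auth keys below are an assumption about how such a chart would be shaped:

```yaml
# Illustrative values.yaml fragment
vault:
  addr: https://vault.internal.example.com
  # hypothetical keys — auth configuration depends on your setup
  auth:
    method: kubernetes
    role: cloudtaser
```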

Does it work on EKS / AKS / GKE?

Yes. The webhook and wrapper are pure Go, no kernel dependencies. The eBPF agent needs Linux 5.8+ (all managed K8s providers support this). eBPF enforcement mode (blocking, not just detection) needs CONFIG_BPF_KPROBE_OVERRIDE — available on most kernels, graceful fallback if not.

Open Source Strategy

Core is open source

The operator (mutating webhook), wrapper, Helm chart, and CLI will be open source. You can run CloudTaser without paying us. We make money on the managed vault SaaS and enterprise eBPF features.

| Component | Availability |
|---|---|
| Operator + Webhook | Open source |
| Wrapper (fork+exec) | Open source |
| Helm chart | Open source |
| CLI | Open source |
| eBPF agent | Enterprise |
| Managed OpenBao SaaS | Enterprise |

Roadmap

Shipped

  • Mutating webhook + wrapper — core secret injection via fork+exec, Vault/OpenBao auth, Helm chart
  • eBPF detection agent — syscall monitoring, secret access detection, audit events
  • CLI — cloudtaser deploy, cloudtaser check for pre-flight validation
  • deb/rpm packages — systemd support for non-K8s workloads (bare metal, VMs)

Next

  • ArtifactHub listing — Helm chart discoverable in the CNCF ecosystem
  • kubectl plugin — kubectl taser status showing injection status per pod, vault health, eBPF coverage
  • Grafana dashboard — pre-built panels for secret access patterns, injection success rate, vault latency
  • ESO migration tool — scan existing External Secrets, generate CloudTaser annotations, validate vault paths

Planned

  • eBPF enforcement — block (not just detect) secret exfiltration via kprobe + bpf_override_return
  • Managed OpenBao SaaS — EU-hosted vault as a service with HSM-backed keys, audit dashboard, compliance reporting
  • Confidential computing — AMD SEV-SNP / Intel TDX attestation for hardware-level memory protection

KubeCon Readiness Checklist

What would make CloudTaser a hit at the booth:

1. One-command demo environment

A single curl ... | bash or kubectl apply that spins up a kind cluster with CloudTaser + demo vault + sample app. Attendees try it on their laptop in 2 minutes. Deploy as a Killercoda interactive scenario for those without a laptop.

2. Helm chart on ArtifactHub

KubeCon attendees search ArtifactHub first. List the chart there with proper annotations, screenshots, and a "Getting Started" section. This is the #1 discovery channel for K8s tools.

3. kubectl plugin

kubectl taser status showing: which pods are injected, vault connection health, eBPF coverage per node, recent secret access events. Platform engineers love kubectl plugins — it meets them where they already work.

4. "Migrate from ESO in 5 minutes" story

Many teams use External Secrets Operator. Build a migration tool: cloudtaser migrate --from=eso that scans ExternalSecret CRDs, maps them to CloudTaser annotations, and generates a migration plan. Low-friction adoption path.

5. Grafana dashboard

Pre-built Grafana panels showing: secrets injected per minute, vault latency p50/p99, eBPF events, injection failures. Export as a dashboard JSON that attendees can import. DevOps folk trust tools they can observe.

6. CNCF Sandbox application

Apply for CNCF Sandbox. Even if pending, "CNCF Sandbox candidate" on the booth banner signals legitimacy. The project fits the cloud-native security landscape perfectly alongside Falco, cert-manager, and OPA.

Talk Proposal

"Your Kubernetes Secrets Aren't Secret: A Live Demo of What Your Cloud Provider Can See"

Format: 35-min session + live demo

Track: Security + Identity

Abstract: We'll start by showing exactly how easy it is to extract every secret from a GKE cluster — as the cloud provider, not as an attacker. Then we'll show what a CLOUD Act subpoena actually looks like and what data it compels. Finally, we'll demonstrate an alternative architecture where secrets never enter Kubernetes at all: from external vault to process memory via mutating webhook, with eBPF-based runtime detection of any exfiltration attempt. All open source, all live, no slides.

Key moment: Side-by-side terminals. Left: kubectl get secrets on a standard cluster (everything visible). Right: same command on a CloudTaser cluster (nothing there). Then: kubectl exec into the protected pod, env | grep PASSWORD — the secret is there, in the process, but never touched Kubernetes.

Come to the booth

Bring your laptop. We'll install CloudTaser on your cluster in 3 minutes. If your secrets are still in etcd after that, the stickers are on us.