
Kubernetes Security Best Practices for 2026

Kubernetes is the infrastructure substrate for over 70% of containerized workloads in production. Its default configuration is not secure. Here are the 10 practices that distinguish a hardened cluster from a breach waiting to happen.

18 min read · Updated April 2026

Why Kubernetes Security Matters

Kubernetes adoption has accelerated to the point where it now underpins critical infrastructure across every industry. With that adoption comes an expanded attack surface that adversaries actively exploit. Kubernetes-related breaches are no longer theoretical: Tesla's 2018 cryptojacking incident began with an exposed, unauthenticated Kubernetes dashboard, and numerous organizations since have suffered production compromises through misconfigured clusters.

  • 70%+ of organizations run K8s in production (CNCF 2025)
  • 94% of security incidents involve misconfigurations, not zero-days
  • 83+ CIS Kubernetes Benchmark controls — most clusters fail over 40%

The good news: the majority of Kubernetes security incidents are preventable. The CIS Kubernetes Benchmark v1.8.0 provides 83 scored controls covering API server hardening, etcd security, RBAC configuration, worker node settings, and pod security. Organizations that implement these controls systematically reduce their exploitable attack surface by an estimated 80%.

Most Common K8s Misconfigurations (Red Team Data, 2025)

Privileged containers (allows full host access)
hostNetwork: true (shares node network namespace)
Containers running as root (UID 0)
No resource limits (enables DoS)
Default service account with cluster-admin
No Network Policies (any-to-any traffic allowed)
etcd without TLS client authentication
Secrets stored in ConfigMaps
Audit logging disabled
Public container registries with no image signing

10 Essential Security Practices

These practices are ordered by impact relative to implementation difficulty — the highest-leverage items come first. Start with the first five for immediate risk reduction.

#1

Implement Role-Based Access Control (RBAC)

Apply least-privilege permissions to every service account, user, and group in your cluster.

Implementation Checklist

  • Audit all ClusterRoleBindings and RoleBindings quarterly — wildcards (* verbs or resources) are a critical misconfiguration
  • Use dedicated service accounts per workload, never the default service account
  • Disable automounting of service account tokens unless explicitly required via automountServiceAccountToken: false
  • Prefer namespace-scoped Roles over cluster-wide ClusterRoles for application workloads
  • Use tools like rbac-audit or kubectl-who-can to visualize effective permissions
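The checklist above can be sketched as a minimal least-privilege manifest. The workload name `api-server` and namespace `payments` are illustrative; the pattern is a dedicated ServiceAccount with token automounting disabled, a namespace-scoped Role with no wildcards, and a RoleBinding tying them together:

```yaml
# Dedicated service account per workload -- never reuse "default"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-server
  namespace: payments
automountServiceAccountToken: false   # opt in per-pod only when the API token is needed
---
# Namespace-scoped Role: explicit verbs and resources, no "*" wildcards
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: api-server-role
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-server-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: api-server
    namespace: payments
roleRef:
  kind: Role
  name: api-server-role
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is namespace-scoped, a token stolen from this workload grants read access to ConfigMaps in `payments` and nothing else.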

Common Misconfiguration

ClusterRoleBinding granting cluster-admin to a default service account — seen in 23% of K8s clusters (CNCF 2025 survey).

#2

Enforce Network Policies

Default-deny all ingress and egress traffic, then allowlist only required communication paths.

Implementation Checklist

  • Apply a default-deny NetworkPolicy to every namespace immediately after creation
  • Use Cilium or Calico for L7-aware policies that can filter by HTTP method and path
  • Isolate the kube-system namespace from application namespaces with explicit deny policies
  • Restrict egress to known CIDR ranges and DNS only — prevent pod-to-internet exfiltration
  • Monitor network policy violations with Hubble (Cilium) or similar observability tooling
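The default-deny posture from the first bullet is a single NetworkPolicy that selects every pod in a namespace and allowlists nothing; each permitted path is then added as a separate, more specific policy (the `payments` namespace is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments        # apply one of these per namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules are listed, so all traffic is denied.
```

Note that denying all egress also blocks DNS lookups — a follow-up policy allowing UDP/TCP 53 to kube-dns is almost always needed before application allowlists will work.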

Common Misconfiguration

No NetworkPolicy defined: all pods in a cluster can communicate with all other pods and external internet endpoints by default.
#3

Apply Pod Security Standards

Use Kubernetes Pod Security Admission to enforce Baseline or Restricted profiles on all namespaces.

Implementation Checklist

  • Label namespaces with pod-security.kubernetes.io/enforce: restricted for production workloads
  • The Restricted profile prohibits privilege escalation, running as root, and hostPath volumes, and requires containers to drop ALL Linux capabilities (only NET_BIND_SERVICE may be added back)
  • Use the Baseline profile for workloads that require some flexibility — it still blocks privileged containers and hostNetwork
  • Enforce at cluster level with a default policy and grant namespace overrides only for specific system workloads
  • Validate compliance with kube-bench or Polaris before workloads reach production
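Pod Security Admission is driven entirely by namespace labels, so enforcement from the first bullet is one manifest (namespace name illustrative). Setting all three modes to the same profile rejects violations, records them in the audit log, and warns kubectl users:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant pods
    pod-security.kubernetes.io/audit: restricted     # record violations in the audit log
    pod-security.kubernetes.io/warn: restricted      # surface warnings on kubectl apply
```

A common rollout pattern is to set audit and warn to restricted first, fix the violations they surface, and only then flip enforce.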

Common Misconfiguration

Containers running as root (UID 0) — allows any file system vulnerability to compromise the host OS.
#4

Scan Container Images Before Deployment

Block images with critical CVEs from being pulled into production clusters.

Implementation Checklist

  • Integrate Trivy or Grype into your CI pipeline and fail builds on CVSS >= 7.0 findings
  • Use an Admission Webhook (e.g., Kyverno or Gatekeeper) to reject unscanned images at deploy time
  • Enforce image signing with Cosign and verify signatures in the admission controller
  • Use distroless or minimal base images to reduce attack surface — Alpine, Chainguard, or Google Distroless
  • Scan images weekly even for unchanged deployments — new CVEs are disclosed against existing packages daily
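As one admission-time guardrail from the checklist, here is a sketch of a Kyverno ClusterPolicy (modeled on Kyverno's sample disallow-latest-tag policy) that rejects pods whose images use the mutable latest tag:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # start in Audit mode, then switch
  rules:
    - name: require-pinned-tag
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must use a pinned tag or digest, not :latest."
        pattern:
          spec:
            containers:
              # "!*:latest" matches any image reference except those ending in :latest
              - image: "!*:latest"
```

Kyverno's full sample also requires that a tag be present at all; pinning by digest (`@sha256:...`) is stricter still and pairs naturally with the scan results from CI.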

Common Misconfiguration

Using the mutable latest tag — prevents reproducible builds and makes it impossible to track which CVEs are present in deployed images.
#5

Manage Secrets Securely

Never store secrets in ConfigMaps or container environment variables — use external secrets stores.

Implementation Checklist

  • Enable Kubernetes Secrets encryption at rest via EncryptionConfiguration with AES-GCM or KMS provider
  • Use External Secrets Operator to sync secrets from AWS Secrets Manager, GCP Secret Manager, or HashiCorp Vault
  • Mount secrets as volumes, not environment variables — environment variables are exposed in /proc/$PID/environ
  • Rotate secrets automatically — use Vault dynamic secrets or cloud-native rotation with zero downtime
  • Audit secret access with Kubernetes audit logs filtered on secrets resource operations
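Encryption at rest from the first bullet is configured by pointing kube-apiserver's --encryption-provider-config flag at a file like the sketch below. The key shown is a placeholder — generate a real one (e.g. 32 random bytes, base64-encoded) and never commit it:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:                 # or a kms provider backed by cloud KMS for managed keys
          keys:
            - name: key1
              secret: "<base64-encoded-32-byte-key>"   # placeholder only
      - identity: {}            # fallback so existing unencrypted secrets stay readable
```

Provider order matters: the first provider encrypts new writes, while later providers are tried for reads, which is how you migrate existing plaintext secrets and rotate keys without downtime.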

Common Misconfiguration

Database connection strings stored in ConfigMaps — readable by any pod in the namespace without RBAC controls.
#6

Deploy Runtime Monitoring with eBPF

Monitor process execution, file access, network connections, and privilege changes at the kernel level.

Implementation Checklist

  • Deploy an eBPF agent (Falco, Tetragon, or TigerGate) as a DaemonSet — runs on every node with <3% CPU overhead
  • Monitor for container escape indicators: unexpected host filesystem access, namespace transitions, /proc manipulation
  • Detect privilege escalation: setuid/setgid execution, capability changes, securityContext overrides at runtime
  • Alert on unexpected outbound connections — a Node.js container connecting to a cryptocurrency mining pool is a clear indicator of compromise
  • Use Linux 5.7+ with LSM BPF to actively block policy violations — not just detect them
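To make the detection bullets concrete, here is a sketch of a custom Falco rule in Falco's YAML rule syntax (mirroring its stock "terminal shell in container" detection) that fires when a shell is spawned inside a container — a common post-exploitation indicator:

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a running container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
  output: >
    Shell in container (user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]
```

Build-tooling containers that legitimately run shells will trip this rule, so real deployments typically add exceptions per image or namespace rather than loosening the condition globally.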

Common Misconfiguration

No runtime monitoring: a compromised pod can exfiltrate data, mine cryptocurrency, or pivot laterally for minutes to hours before detection.
#7

Configure Admission Controllers

Use policy engines like Kyverno or OPA/Gatekeeper to enforce security standards at admission time.

Implementation Checklist

  • Deploy Kyverno or Gatekeeper as a ValidatingAdmissionWebhook to enforce organizational policies
  • Write policies to: require resource limits/requests, prohibit latest tag, require security context, validate image registries
  • Use OPA policies to enforce cross-cutting concerns like mandatory labels, namespace naming conventions, and PodDisruptionBudgets
  • Enable the built-in admission controllers: PodSecurity, NodeRestriction, LimitRanger, ResourceQuota
  • Test policies in audit mode before switching to enforce — use kyverno test to validate policy logic
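The resource-limits requirement from the second bullet can be sketched as a Kyverno ClusterPolicy. It runs in Audit mode first, as the last bullet recommends, so violations are logged without blocking deploys:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Audit    # switch to Enforce once existing workloads comply
  rules:
    - name: limits-required
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "CPU and memory limits are required on every container."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"       # any non-empty value
                    memory: "?*"
```

Because the rule matches Pods, it also catches pods created by Deployments, Jobs, and CronJobs — Kyverno auto-generates the corresponding rules for those controllers.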

Common Misconfiguration

No resource limits on pods — a single misbehaving container can exhaust cluster CPU/memory and cause a node-wide DoS.
#8

Enable Comprehensive Audit Logging

Log all API server requests at the Metadata level, and sensitive operations at the RequestResponse level.

Implementation Checklist

  • Configure kube-apiserver with an audit policy that captures at minimum: secrets reads, RBAC changes, exec/portforward operations
  • Ship audit logs to an immutable external store (S3, CloudWatch Logs, GCS) — never rely on node-local log storage
  • Set log retention to 90 days minimum for SOC 2 / ISO 27001 compliance
  • Alert on: kubectl exec to production pods, secret enumeration, cluster-admin binding changes, new CronJobs or DaemonSets
  • Use Falco or TigerGate to correlate audit log events with runtime eBPF events for enriched incident context
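The policy described above — metadata for everything, full detail for sensitive operations — can be sketched as a kube-apiserver audit Policy. Secrets stay at Metadata level so their values never land in the log:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Secrets and ConfigMaps: Metadata only, so stored values are never logged
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # RBAC changes: capture the full request and response bodies
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["clusterrolebindings", "rolebindings"]
  # exec and port-forward sessions into pods: full detail
  - level: RequestResponse
    verbs: ["create"]
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward"]
  # Everything else: request metadata, skipping the noisy RequestReceived stage
  - level: Metadata
    omitStages: ["RequestReceived"]
```

Rules are evaluated top-down and the first match wins, so the specific rules must precede the catch-all.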

Common Misconfiguration

Audit logging disabled (default in many managed K8s offerings) — forensics becomes impossible post-incident.
#9

Benchmark Against CIS Kubernetes Standard

Run kube-bench regularly to measure cluster compliance against CIS Kubernetes Benchmark v1.8.0.

Implementation Checklist

  • Run kube-bench as a Job in your cluster to test API server, etcd, controller manager, scheduler, and worker node configuration
  • Prioritize Level 1 (Scored) findings — these have direct security impact and are required for most compliance frameworks
  • Common failures: anonymous-auth not disabled, insecure-port enabled, profiling endpoints exposed, etcd without client cert auth
  • Use TigerGate Cloud Scanner's Kubernetes checks (83+ controls) for continuous automated compliance assessment
  • Map findings to CIS controls in compliance reports for SOC 2 Type II and ISO 27001 auditor evidence
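Running kube-bench as a Job, per the first bullet, looks roughly like the sketch below (a simplified version of the job.yaml shipped in the kube-bench repository — the upstream manifest mounts several additional host paths such as the kubelet directory):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                 # kube-bench inspects host processes and config files
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:latest   # pin a version/digest in production
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```

Fetch the report with kubectl logs on the completed pod; on managed platforms (EKS, GKE, AKS) many control-plane checks are skipped because the provider owns those components.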

Common Misconfiguration

etcd without client certificate authentication — etcd contains all cluster secrets and configuration in plaintext.
#10

Secure Your Supply Chain

Verify the provenance and integrity of every artifact deployed to your cluster.

Implementation Checklist

  • Sign container images with Sigstore/Cosign and verify signatures at admission with policy enforcement
  • Generate and attest SBOMs (CycloneDX or SPDX) for all production images — required for NIST SSDF compliance
  • Use a private container registry with vulnerability scanning enabled (ECR, GAR, ACR) — block public registry pulls in production
  • Pin Helm chart versions and use Artifact Hub to verify chart provenance
  • Enable Binary Authorization (GKE) or signature verification for ECR images (e.g., via AWS Signer) to prevent unsigned images from running
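Signature verification at admission time can be sketched with Kyverno's verifyImages rule, which checks Cosign signatures before a pod is scheduled. The registry pattern and key below are hypothetical placeholders for your own:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"    # hypothetical private registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key>
                      -----END PUBLIC KEY-----
```

Kyverno also mutates the verified image reference to its digest, so a signed tag cannot silently change between admission and pull — closing exactly the gap described in the common misconfiguration below.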

Common Misconfiguration

Pulling images from public Docker Hub without digest pinning — images with the same tag can change between pulls.

Tools and Frameworks

The Kubernetes security ecosystem is rich with open source and commercial tooling. Here are the key categories and recommended tools for each area.

CIS Benchmark Assessment

  • kube-bench — automated CIS K8s Benchmark scanning
  • TigerGate Cloud Scanner — 83+ K8s controls with continuous monitoring
  • Checkov — IaC scanning for K8s manifests and Helm charts

Policy Enforcement

  • Kyverno — Kubernetes-native policy engine with YAML policies
  • OPA/Gatekeeper — Rego-based policies via admission webhook
  • Kubewarden — WebAssembly-based policy execution

Runtime Monitoring

  • Falco — syscall-based runtime threat detection
  • Tetragon — eBPF-based security observability (Cilium)
  • TigerGate Agent — eBPF monitoring + LSM enforcement

Network Security

  • Cilium — eBPF-based CNI with L7 Network Policies and Hubble observability
  • Calico — NetworkPolicy implementation with enterprise features
  • Istio — service mesh with mTLS and L7 traffic policy

Image & Supply Chain Security

  • Trivy — comprehensive image and IaC scanner
  • Cosign (Sigstore) — container image signing and verification
  • Syft — SBOM generation for container images

Secrets Management

  • External Secrets Operator — sync from Vault, AWS SSM, GCP Secret Manager
  • Sealed Secrets — encrypt secrets for GitOps workflows
  • HashiCorp Vault — centralized secrets and dynamic credentials

How TigerGate Secures Kubernetes

TigerGate provides both pre-deployment scanning and runtime monitoring specifically designed for Kubernetes environments, covering the full attack surface from IaC misconfiguration through production threat detection.

Static Analysis (Shift-Left)

  • IaC scanning for K8s manifests and Helm charts
  • 83 CIS Kubernetes Benchmark checks
  • Privileged container detection
  • hostNetwork / hostPID / hostIPC checks
  • Missing resource limits and security context
  • Secrets in ConfigMaps and environment variables
  • RBAC wildcard permission detection
  • CI/CD integration with PR blocking

Runtime Monitoring (Shift-Right)

  • eBPF DaemonSet — <3% CPU overhead per node
  • Process execution monitoring per pod
  • File integrity monitoring for critical paths
  • Network egress anomaly detection
  • Container escape attempt detection
  • Privilege escalation alerts
  • LSM BPF enforcement (Linux 5.7+)
  • Pod metadata enrichment on all events

One-Line Kubernetes Deployment

The TigerGate eBPF agent deploys as a DaemonSet and begins monitoring within minutes:

kubectl apply -f https://install.tigergate.dev/agent/kubernetes.yaml

Requires Linux 4.15+ for monitoring, 5.7+ for LSM enforcement. Works with EKS, GKE, AKS, and self-managed clusters.

Harden Your Kubernetes Clusters Today

TigerGate scans your K8s manifests and Helm charts for the 83 CIS Kubernetes Benchmark controls, and deploys an eBPF agent to monitor runtime behavior — all from one platform.