NOVEMBER 9, 2022

Kubernetes Security: Containers Need Protection Too

Author: Aaron Smith

Kubernetes moved from “innovative” to “expected” fast. By 2022, many teams were running production workloads on it or actively migrating. Platform engineering also matured: internal developer platforms, golden paths, and self-service deployments became common.

That improved delivery speed—but not automatically security.

A recurring pattern in 2022: organizations modernized delivery while keeping VM-era security assumptions. Teams hardened cloud accounts, ran periodic image scans, and assumed Kubernetes “handled the rest.” It doesn’t.

Containers need protection across the full lifecycle: build, deploy, and runtime. Focus on one phase alone and critical blind spots remain.

Why traditional controls break down in Kubernetes

Traditional infrastructure tolerated slower control cycles because systems were static. Kubernetes changes that:

  • Workloads are ephemeral and rescheduled frequently
  • Teams deploy continuously
  • Service communication is dynamic and east-west
  • Ownership is distributed across app, platform, and security teams

Controls based on manual review and static assumptions won’t keep up. Security has to be codified and enforced automatically wherever possible.

Build phase: reduce risk before code ships

Build security is the right starting point, but image scanning alone is not enough.

1) Establish trusted base images

Use a small approved image set maintained centrally and patched on a clear cadence. Random public base images increase inherited risk and reduce consistency.

Practical baseline:

  • Maintain an internal approved image catalog
  • Prefer minimal images where feasible
  • Version and deprecate old bases with clear timelines

2) Enforce composition and image scanning in CI

Run dependency and image scans automatically in CI/CD and fail builds on critical policy violations.

What works:

  • Gate on severity plus exploitability, not CVSS alone
  • Track exceptions with owner and expiration
  • Re-scan periodically because vulnerability intelligence changes
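
The gating step above can be sketched in CI configuration. As a hedged example assuming Trivy as the scanner and GitHub Actions syntax (both illustrative choices, not prescribed here), with `IMAGE` set earlier in the workflow:

```yaml
# Hypothetical CI job: fail the pipeline on unfixed CRITICAL/HIGH findings.
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Scan image with Trivy
        run: |
          # --exit-code 1 turns any matching finding into a failed build;
          # --ignore-unfixed skips findings that have no available patch yet.
          trivy image \
            --severity CRITICAL,HIGH \
            --exit-code 1 \
            --ignore-unfixed \
            "${IMAGE}"
```

The `--exit-code 1` flag is what turns a finding into a failed build; tracked exceptions then live in an ignore file with an owner and expiration rather than in reviewers' heads.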

3) Sign artifacts and verify provenance

Supply chain risk is an operational reality. Sign images and generate provenance metadata so deployment systems can verify source and build integrity.

Baseline implementation:

  • Sign images in pipeline
  • Store signatures with artifacts
  • Require signature verification before production deploys
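
A minimal sketch of the signing steps, assuming Sigstore's cosign and GitHub Actions syntax (illustrative choices; `IMAGE`, `DIGEST`, and the key names are hypothetical):

```yaml
      # Sign by digest, not by tag: tags are mutable, digests are not.
      - name: Sign image by digest
        env:
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}
        run: cosign sign --key env://COSIGN_PRIVATE_KEY "${IMAGE}@${DIGEST}"

      # Gate promotion to production on a successful verification.
      - name: Verify signature before promotion
        run: cosign verify --key cosign.pub "${IMAGE}@${DIGEST}"
```

Signing the digest rather than a tag matters: a tag can be repointed after signing, while a digest identifies exactly one artifact.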

4) Shift policy left for Kubernetes manifests

Treat manifests as code and evaluate pre-merge.

Examples:

  • Block privileged containers
  • Require resource requests and limits
  • Block `latest` tags
  • Require `runAsNonRoot` and read-only root filesystem where possible

If enforcement depends on manual reviewer attention, it will fail under release pressure.
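
One way to codify rules like these is a policy engine that runs both pre-merge and in the cluster. As a hedged sketch assuming Kyverno (an OPA/conftest setup works similarly), a policy blocking `latest` tags might look like:

```yaml
# Hedged sketch of a Kyverno ClusterPolicy; the same file can be evaluated
# pre-merge with the Kyverno CLI against rendered manifests.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-pinned-image-tag
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must use a pinned tag, not 'latest'."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```

Running the same policy in CI means violations surface in the pull request, not at deploy time.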

Deploy phase: enforce guardrails at admission

Deployment is where intent meets reality.

1) Use admission controls for baseline policy

Admission policies can prevent non-compliant workloads from entering clusters.

Enforce:

  • Namespace boundaries and approved registries
  • Required ownership/traceability labels
  • Pod security constraints by workload tier

Secure defaults from platform teams reduce rework and speed up delivery.
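
For pod security constraints specifically, the built-in Pod Security Admission controller (stable as of Kubernetes 1.25) covers the baseline without extra tooling. A sketch, with a hypothetical namespace name:

```yaml
# Pod Security Admission: enforce the "restricted" profile per namespace
# via standard labels; non-compliant pods are rejected at admission.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod   # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Tiering then becomes a labeling decision: `restricted` for production namespaces, `baseline` where legacy workloads still need exceptions.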

2) Segment network traffic intentionally

Flat networking increases blast radius. Implement network policy to constrain east-west traffic by design.

Practical approach:

  • Start with deny-by-default in sensitive namespaces
  • Explicitly allow required flows
  • Review policy on service changes

This reduces lateral movement opportunities during compromise.
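
A minimal sketch of deny-by-default plus one explicit allow, using standard `NetworkPolicy` resources (namespace and labels are illustrative):

```yaml
# Deny all ingress and egress for every pod in the namespace.
# Note: default-deny egress also blocks DNS; add an explicit DNS allow rule.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments            # illustrative namespace
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
# Explicitly allow the one required flow: api pods -> db pods on 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - port: 5432
```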

3) Harden secrets and workload identities

Kubernetes makes secret mounting easy, but broad access and static credentials remain common breach drivers.

Improve by:

  • Using short-lived credentials where possible
  • Mapping workloads to least-privilege service accounts
  • Auditing who can create/read/mount secrets

Overbroad secret access eliminates key containment boundaries.
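
Restricting secret access can be expressed in standard RBAC. A hedged sketch mapping one workload's service account to exactly one secret (all names are illustrative):

```yaml
# Least privilege: this Role grants read access to a single named secret.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-credentials
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials"]   # one secret, not all secrets
    verbs: ["get"]
---
# Bind it to the workload's dedicated service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-reads-db-credentials
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: api
    namespace: payments
roleRef:
  kind: Role
  name: read-db-credentials
  apiGroup: rbac.authorization.k8s.io
```

Auditing is then a query over who holds `get`/`list` on `secrets` without a `resourceNames` restriction.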

4) Separate environments and privilege domains

Production should not be a shared playground.

At minimum:

  • Isolate dev/stage/prod by cluster or hardened namespace model
  • Restrict direct production kubectl access
  • Require auditable pipeline-based changes

These controls improve both security and incident reliability.

Runtime phase: detect and respond when prevention fails

Prevention is never perfect. Runtime resilience is where mature programs stand out.

1) Build behavior-based runtime visibility

Logs help, but behavior signals are essential in dynamic environments.

Monitor for:

  • Unexpected process execution in containers
  • Privilege escalation attempts
  • Suspicious outbound connections
  • Access to sensitive files/tokens/secrets

Behavioral detection catches compromise patterns missed by pre-deploy controls.
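
As one concrete option, tools like Falco express these behavior signals as rules. A hedged sketch in Falco's rule syntax, alerting on an interactive shell inside a container:

```yaml
# Hedged sketch of a Falco rule: an interactive shell spawned in a
# container is rare in normal operation and common during compromise.
- rule: Shell spawned in container
  desc: Detect an interactive shell started inside a running container
  condition: >
    spawned_process and container and
    proc.name in (bash, sh, zsh) and proc.tty != 0
  output: >
    Shell in container (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: WARNING
```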

2) Alert on high-signal abuse patterns

Prioritize events with strong attacker correlation:

  • New privileged pod creation in production
  • Abnormal `kubectl exec` spikes
  • Cross-namespace service account token misuse
  • Untrusted registry image pulls

Low-signal noise trains responders to ignore real incidents.
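
Several of these signals can be sourced from the API server's audit log. A sketch of an audit policy fragment recording `exec`/`attach` and production pod creation at full detail (the namespace name is illustrative):

```yaml
# Kubernetes audit policy fragment: capture the events worth alerting on
# at RequestResponse detail for downstream detection rules.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  - level: RequestResponse
    verbs: ["create"]
    resources:
      - group: ""
        resources: ["pods"]
    namespaces: ["prod"]          # illustrative namespace
```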

3) Prepare Kubernetes-native containment playbooks

VM runbooks don’t translate directly.

Playbooks should cover:

  • Namespace/workload isolation with network policy
  • Service account credential revocation
  • Node quarantine for investigation
  • Clean redeploy from trusted signed artifacts

Test these playbooks before a real incident.
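
As a sketch of the first playbook step, a pre-staged quarantine `NetworkPolicy` lets responders isolate a workload by adding a single label (the namespace is illustrative):

```yaml
# Pre-staged quarantine: any pod labeled quarantine=true loses all
# ingress and egress. Responders apply it during an incident with:
#   kubectl label pod <name> quarantine=true
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
  namespace: payments            # illustrative namespace
spec:
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes: ["Ingress", "Egress"]
```

Because network policies are additive-allow, any other policy that still selects the pod keeps its flows open; quarantine procedures usually also strip the pod's app labels so those allow rules stop matching.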

4) Validate controls through simulation

Use controlled attack simulation or red/blue exercises to identify gaps.

Measure:

  • Time to detect
  • Time to contain
  • Control failures by lifecycle phase

Then feed results back into build and deploy guardrails.

The operating model matters as much as tooling

Kubernetes security is often treated as a tooling problem: buy a scanner, a policy engine, a runtime detector. Helpful, but insufficient without ownership clarity.

Strong programs in 2022 align on roles:

  • Platform engineering: owns secure paved roads and defaults
  • Application teams: own service-level risk and remediation timelines
  • Security teams: define policy outcomes and automate assurance

Without this alignment, security becomes either a bottleneck or an afterthought.

A practical maturity path for two quarters

Don’t try to deploy every control at once.

  1. Quarter 1: Standardize base images, CI scan gates, and core admission policies.
  2. Quarter 2: Expand network segmentation, identity hardening, and runtime detection in high-value namespaces.
  3. Ongoing: Test incident playbooks and tune controls from real telemetry.

This sequence produces meaningful risk reduction without operational paralysis.

Final thought

Kubernetes improved delivery speed and platform consistency. Security has to evolve at the same pace. Protecting containers only at build time is like locking the front door and leaving the windows open.

A lifecycle model—build, deploy, runtime—creates layered protection aligned to modern delivery. If you’re investing in platform engineering, make the secure path the easiest path, and make response fast when prevention misses.

A practical next step: map your current controls to lifecycle phases and enforce the first two high-impact gaps this quarter.

Want to Learn More?

For detailed implementation guides and expert consultation on cybersecurity frameworks, contact our team.
