Container Security: A Short Dive into Kubernetes
Software Engineer, Schild Technologies
Kubernetes orchestrates containers across clusters of machines, automating deployment, scaling, and management of distributed applications. The platform handles service discovery, load balancing, storage orchestration, and self-healing—abstracting away individual servers to treat the entire cluster as a unified compute resource. This distributed architecture creates a broader attack surface than single-host Docker deployments: the API server controls the entire cluster, network traffic flows between pods across nodes, role-based access controls determine who can deploy what, and secrets management becomes critical when credentials must be distributed across multiple containers and nodes. Let's dive into Kubernetes security.
Kubernetes security architecture
Beyond the container-level controls that apply to single-host Docker deployments, Kubernetes security encompasses API server authentication and authorization, network policies, pod security, and secrets management across distributed systems.
API server security
The Kubernetes API server provides the control plane interface for cluster management. Securing API access prevents unauthorized cluster manipulation and protects sensitive cluster data. All cluster components and users interact through the API server, making it the critical security perimeter.
API server authentication verifies user and service identities before processing requests. Kubernetes supports multiple authentication mechanisms including client certificates, bearer tokens, and external identity providers via OpenID Connect (OIDC). Production clusters typically integrate with an external OIDC identity provider (such as Keycloak, Okta, or Dex) for centralized user management.
# kubeconfig user entry using the built-in OIDC auth provider
# (removed in kubectl 1.26 in favor of exec credential plugins such as kubelogin)
apiVersion: v1
kind: Config
users:
- name: oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        client-id: kubernetes
        client-secret: secret
        id-token: eyJhbGc...
        idp-issuer-url: https://accounts.example.com
Role-Based Access Control (RBAC)
RBAC implements the principle of least privilege by granting users minimum permissions required for their roles. Kubernetes RBAC uses Roles and ClusterRoles to define permissions, and RoleBindings and ClusterRoleBindings to assign those permissions to users or service accounts.
Roles define permissions within a namespace, while ClusterRoles apply cluster-wide. Each role specifies allowed actions (verbs) on specific resources.
# Role allowing read access to pods in namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding granting role to user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
ClusterRoles provide cluster-wide permissions for resources like nodes, persistent volumes, and namespaces themselves.
# ClusterRole for viewing cluster-wide resources
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-viewer
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes", "namespaces"]
  verbs: ["get", "list", "watch"]
---
# ClusterRoleBinding granting cluster-wide permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-viewers
subjects:
- kind: Group
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-viewer
  apiGroup: rbac.authorization.k8s.io
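Once the bindings are in place, the grants can be sanity-checked with `kubectl auth can-i` and impersonation. A quick sketch, run as a cluster admin and assuming the `alice` user and `viewers` group from the examples above:

```shell
# Should print "yes": the Role/RoleBinding let alice read pods in production
kubectl auth can-i list pods -n production --as alice

# Should print "no": nothing grants alice delete on pods
kubectl auth can-i delete pods -n production --as alice

# Should print "yes": the ClusterRoleBinding gives the viewers group read access to nodes
kubectl auth can-i get nodes --as someone --as-group viewers
```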
Service accounts provide identities for pods to access the Kubernetes API. Each namespace has a default service account, but dedicated service accounts with minimal permissions strengthen security.
# Create dedicated service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: production
---
# Grant specific permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
# Bind service account to role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: production
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io
---
# Use service account in deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  selector:            # selector and template labels are required for a valid Deployment
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      serviceAccountName: myapp-sa
      containers:
      - name: myapp
        image: myapp:v1.0
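To confirm the service account received only the intended permissions, impersonate it with `kubectl auth can-i` (using the `myapp-sa` account defined above; run as a user allowed to impersonate):

```shell
# Should print "yes": the role grants get on configmaps
kubectl auth can-i get configmaps -n production \
  --as system:serviceaccount:production:myapp-sa

# Should print "no": secrets were never granted
kubectl auth can-i get secrets -n production \
  --as system:serviceaccount:production:myapp-sa
```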
Pod Security Standards
Pod Security Standards replace the deprecated Pod Security Policies (removed in Kubernetes 1.25), defining three levels of security restrictions: Privileged (unrestricted), Baseline (minimally restrictive), and Restricted (highly restrictive). These standards prevent dangerous pod configurations like privileged containers, host network access, and privilege escalation.
# Enforce Restricted standard at namespace level
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
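Before enforcing on an existing namespace, a server-side dry run previews which running pods would violate the Restricted standard without changing anything:

```shell
# Preview violations; warnings list non-compliant pods, the label is not applied
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```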
Pod Security Admission enforces these standards automatically, rejecting pods that violate the configured level without requiring explicit policy authoring. For rules beyond the built-in standards, policy engines such as Kyverno or OPA Gatekeeper support custom admission policies:
# Kyverno policy blocking privileged containers
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce
  rules:
  - name: validate-privileged
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed"
      pattern:
        spec:
          containers:
          # =() marks securityContext optional; if present, privileged must be false
          - =(securityContext):
              =(privileged): "false"
Network policies
Network policies control traffic between pods, implementing microsegmentation to limit lateral movement after container compromise. By default, Kubernetes allows all pod-to-pod communication. Network policies can be used to implement default-deny approaches that permit only explicitly authorized traffic.
# Default deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow traffic from frontend to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# Restrict egress to specific services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  # Allow DNS lookups via kube-dns; combining the selectors in one
  # entry restricts traffic to kube-dns pods in kube-system specifically
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
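A quick way to check that the policies behave as intended is to probe connectivity from pods on either side of the rules. A sketch, assuming a `frontend` deployment exists, the images include BusyBox `wget`, and the backend serves HTTP on port 8080:

```shell
# From a frontend pod, backend:8080 should respond (explicitly allowed)
kubectl exec -n production deploy/frontend -- wget -qO- -T 3 http://backend:8080/

# From an unlabeled throwaway pod, the same request should time out (default deny)
kubectl run netcheck -n production --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 3 http://backend:8080/
```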
Secrets management
Kubernetes Secrets store sensitive data like passwords, tokens, and keys. Default secrets configuration stores values base64-encoded but unencrypted in etcd. Encryption at rest for secrets protects against unauthorized etcd access in production clusters.
# Create secret from literals
kubectl create secret generic database-credentials \
--from-literal=username=admin \
--from-literal=password=secretpassword
# Create secret from file
kubectl create secret generic tls-cert \
--from-file=tls.crt=cert.pem \
--from-file=tls.key=key.pem
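Base64 is an encoding, not encryption: anyone who can read the Secret object (or etcd) recovers the plaintext trivially, as a quick demonstration shows:

```shell
# Reading a secret value back from the cluster and decoding it
kubectl get secret database-credentials -o jsonpath='{.data.password}' | base64 -d

# The same round trip without a cluster
printf 'secretpassword' | base64       # c2VjcmV0cGFzc3dvcmQ=
printf 'c2VjcmV0cGFzc3dvcmQ=' | base64 -d  # secretpassword
```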
Pods reference secrets via environment variables or mounted volumes. Volumes are the safer choice because environment variables can leak through process listings, logs, and crash dumps.
# Mount secrets as files
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1.0
    volumeMounts:
    - name: db-credentials
      mountPath: "/etc/secrets"
      readOnly: true
  volumes:
  - name: db-credentials
    secret:
      secretName: database-credentials
For production-grade secret management, the External Secrets Operator can synchronize secrets from external stores such as HashiCorp Vault into native Kubernetes Secrets:
# ExternalSecret synchronizing from Vault
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: database-credentials
    creationPolicy: Owner
  data:
  - secretKey: username
    remoteRef:
      key: secret/data/database
      property: username
  - secretKey: password
    remoteRef:
      key: secret/data/database
      property: password
An EncryptionConfiguration on the API server can be used to enable encryption at rest for secrets in etcd.
# EncryptionConfiguration for secrets
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
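After enabling encryption and rewriting existing secrets (`kubectl get secrets -A -o json | kubectl replace -f -`), reading a secret directly from etcd should show the `k8s:enc:aescbc:v1:` provider prefix rather than plaintext. A sketch assuming a kubeadm-style control plane with the usual certificate paths:

```shell
# Read a secret straight from etcd; the value should no longer contain plaintext
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/production/database-credentials | hexdump -C | head
```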
Container runtime security monitoring
Runtime security monitoring detects anomalous behavior indicating compromise or attack. Unlike static analysis that examines images before deployment, runtime monitoring observes executing containers for potentially malicious activities like unexpected network connections, file modifications, or privilege escalations.
Falco uses eBPF or kernel modules to monitor system calls, detecting potentially malicious behavior through customizable rules. Default rules detect common attack patterns like shell execution in containers, unexpected file access, and privilege escalation.
# Falco rule detecting shell spawning in containers
- rule: Terminal shell in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and container and
    shell_procs and proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container_id=%container.id
    container_name=%container.name shell=%proc.name)
  priority: WARNING
# Deploy Falco with Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco --namespace falco-system --create-namespace
# View Falco alerts
kubectl logs -n falco-system -l app.kubernetes.io/name=falco
Security monitoring and auditing
Comprehensive security monitoring correlates events across container infrastructure to detect attacks spanning multiple components. Kubernetes audit logs can be configured to record all API server requests, providing detailed trails of cluster access and modifications.
# Audit policy logging all metadata
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Metadata
  verbs: ["create", "update", "delete", "patch"]
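The policy only takes effect once the API server is started with audit flags pointing at it. A sketch with illustrative paths and retention values:

```shell
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100
```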
Forwarding audit logs to centralized logging systems enables long-term retention and analysis. Elasticsearch, Splunk, or cloud-native logging services provide search, correlation, and alerting capabilities. Here's an example with Elasticsearch:
# Fluent Bit configuration forwarding to Elasticsearch
[OUTPUT]
    Name   es
    Match  kube.*
    Host   elasticsearch.logging.svc
    Port   9200
    Index  kubernetes
    Type   _doc
Compliance and security benchmarks
Industry security benchmarks provide prescriptive guidance for container security hardening. The Center for Internet Security (CIS) publishes benchmarks for Kubernetes covering hundreds of specific configuration recommendations.
# Run CIS Kubernetes benchmark with kube-bench
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
# View results
kubectl logs -l app=kube-bench
# Run node-level CIS checks with kube-bench directly on a host
docker run --rm --net host --pid host --userns host --cap-add audit_control \
  -v /etc:/etc:ro \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /usr/lib/systemd:/usr/lib/systemd:ro \
  aquasec/kube-bench:latest run --targets node
NIST, PCI-DSS, HIPAA, and SOC 2 compliance frameworks include container-specific requirements covering access controls, encryption, logging, and vulnerability management. Automated compliance scanning tools verify configuration adherence and generate evidence for audits.
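For audit evidence, benchmark results can be exported in machine-readable form; kube-bench, for example, supports JSON output that can be archived or fed into compliance tooling:

```shell
# Machine-readable benchmark results, timestamped for the audit trail
kube-bench run --json > cis-results-$(date +%F).json
```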
Conclusion
Kubernetes enables the management of containerized workloads at scale through automated orchestration, declarative infrastructure, and built-in resilience mechanisms that would be impractical to implement manually. The platform supports diverse workloads—from microservices architectures and stateful databases to machine learning pipelines and multi-tenant SaaS platforms—across on-premises data centers and cloud environments. Securing Kubernetes clusters requires implementing defense-in-depth: RBAC to control who can access the API, network policies to segment pod communication, pod security standards to prevent risky configurations, and runtime detection to identify potentially malicious activity. Effective security implementation transforms Kubernetes from a complex distributed system into a production-ready platform that developers and organizations can operate with manageable risk.