DevOps January 17, 2026 18 min read

Container Security: A Short Dive into Docker

Software Engineer

Schild Technologies

Docker packages applications with their dependencies into lightweight, portable containers that run consistently across different environments—from a developer's laptop to production servers. This containerization approach isolates applications from each other and the host system using Linux namespaces and cgroups, while sharing the host's kernel for efficiency. The shared kernel model and layered container architecture create unique security considerations: vulnerable base images propagate to all containers built from them, containers running as root can potentially escape isolation, and the Docker daemon itself becomes a critical attack surface since it typically runs with elevated privileges. Let's dive into Docker security.

Docker security architecture

Docker's security model builds on Linux kernel security features while adding container-specific protections. The Docker daemon runs as root by default, making daemon access control critical since anyone with daemon access can effectively execute commands as root. Docker supports rootless mode, which runs both the daemon and containers without root privileges, significantly reducing the impact of daemon or container compromise.
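
Because any account that can reach the daemon socket effectively has root on the host, it is worth auditing who holds that access. A quick check on a typical Linux install (the group name and socket path below are the defaults and may differ on your system):

```shell
# List members of the "docker" group -- each one is effectively root on the host
getent group docker

# Inspect ownership and permissions on the daemon socket
ls -l /var/run/docker.sock
```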

User namespaces provide additional isolation by mapping the container's root user to a non-privileged user on the host system. When enabled, a process running as root inside a container (UID 0) maps to a high-numbered UID on the host (e.g., UID 100000), preventing container breakout from granting host root access. This feature isn't enabled by default due to compatibility concerns with some Docker features, but it's worth enabling for security-sensitive deployments.
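
Enabling user namespaces is a daemon-level setting. A sketch of the configuration (the subordinate UID range shown is illustrative and distribution-dependent):

```shell
# Enable remapping in /etc/docker/daemon.json:
#   { "userns-remap": "default" }
# Docker then uses the "dockremap" user, whose subordinate ID ranges
# live in /etc/subuid and /etc/subgid, e.g.:
#   dockremap:100000:65536

# After restarting the daemon, root inside a container still reports UID 0...
docker run --rm alpine id -u
# ...but on the host the process runs as an unprivileged UID
# from the remapped range (e.g., 100000)
```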

Docker Content Trust provides image signing and verification, ensuring containers run only images signed by trusted entities. When enabled, Docker refuses to pull or run unsigned images, preventing deployment of tampered or malicious images. Content Trust uses The Update Framework (TUF) for secure key management and Notary for storing and distributing signed metadata.

# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Pull image with signature verification
docker pull ubuntu:22.04

# Sign and push image
docker push myregistry.com/myapp:v1.0

Security scanning identifies vulnerabilities in container images before deployment. Docker includes scanning capabilities that check images against vulnerability databases, reporting known CVEs in base images and installed packages.
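
Docker's built-in scanning is exposed through the Docker Scout CLI (availability depends on your Docker version and installed plugins):

```shell
# List known CVEs in a local or registry image
docker scout cves myapp:latest

# Compare an image's vulnerabilities against a baseline tag
docker scout compare myapp:latest --to myapp:stable
```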

Securing Docker images

Container images form the foundation of container security. Vulnerable or malicious base images affect every container built from them, making image security a top priority.

Image vulnerability scanning

Vulnerability scanning identifies known security issues in container images by comparing installed packages against databases of documented vulnerabilities. These scans detect CVEs (Common Vulnerabilities and Exposures) present in base images, application dependencies, and system libraries that attackers could exploit to compromise containers or escape to the host system.

Scanning at multiple points in the container lifecycle provides layered protection. Image scans during the build process catch vulnerabilities before they enter your infrastructure. Registry scans verify images remain secure before deployment, while periodic scans of running containers detect newly disclosed vulnerabilities in already-deployed workloads. This multi-stage approach ensures vulnerabilities are caught early when they're cheapest to fix, rather than discovered in production where they pose immediate risk.

Tools like Trivy, Clair, and Grype automate this scanning process by maintaining up-to-date vulnerability databases and checking images against them. Trivy, for example, scans container images, filesystems, and Git repositories:

# Scan image with Trivy
trivy image myapp:latest

# Scan and fail on high/critical vulnerabilities
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest

# Generate vulnerability report
trivy image --format json --output report.json myapp:latest

Integrating scanners into CI/CD pipelines transforms vulnerability detection from a manual audit task into an automated quality gate. When a build produces an image containing high or critical severity vulnerabilities, the pipeline fails the build automatically, preventing the vulnerable image from reaching the registry or production environment. This forces developers to address security issues during the development phase, when fixing them requires changing application dependencies or base images—actions that are straightforward during development but risky in production.

# GitLab CI example
security_scan:
  stage: test
  image: aquasec/trivy:latest
  script:
    - trivy image --severity HIGH,CRITICAL --exit-code 1 $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only:
    - merge_requests

This pipeline configuration runs security scans on every merge request, blocking merges that would introduce vulnerable images into the codebase. The automated enforcement removes the need for manual security reviews at this stage while maintaining consistent security standards across all builds.

Best Dockerfile practices

Dockerfile construction directly impacts image security. Running containers as non-root users prevents privilege escalation if attackers compromise the container. Creating a dedicated user provides this protection, rather than relying on the default root user.

# Create non-root user
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser

# Set working directory ownership
WORKDIR /app
RUN chown appuser:appuser /app

# Switch to non-root user
USER appuser

# Application runs as appuser, not root
CMD ["node", "server.js"]

Multi-stage builds separate build dependencies from runtime dependencies, producing smaller final images with reduced attack surface. Build stages compile code and install build tools, then copy only compiled artifacts into minimal runtime images.

# Build stage
FROM node:20-alpine AS builder
WORKDIR /build
COPY package*.json ./
# Install all dependencies; build tooling usually lives in devDependencies
RUN npm ci
COPY . .
RUN npm run build
# Remove devDependencies so the runtime stage copies only production deps
RUN npm prune --omit=dev

# Runtime stage
FROM node:20-alpine
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser
WORKDIR /app
COPY --from=builder --chown=appuser:appuser /build/dist ./dist
COPY --from=builder --chown=appuser:appuser /build/node_modules ./node_modules
USER appuser
CMD ["node", "dist/server.js"]

Image signing and verification

Image signing verifies that container images haven't been modified between build and deployment, protecting against supply chain attacks where attackers inject malicious code into images during transit or storage. Without signing, a compromised registry or man-in-the-middle attack could replace legitimate images with trojanized versions that container orchestrators would deploy without detecting the tampering.

Cryptographic signing works by creating a digital signature of the image manifest using a private key at build time. Before deploying an image, container runtimes verify this signature using the corresponding public key. If the image content has changed since signing, the signature verification fails and deployment is blocked. This ensures only approved images from trusted sources run in production environments.

Docker Content Trust provides signing for Docker Hub and private registries. When enabled, Docker automatically verifies signatures during image pulls and refuses to run unsigned images. Cosign, part of the Sigstore project, offers similar capabilities with additional features like keyless signing and integration with cloud key management services:

# Sign image with Cosign
cosign sign --key cosign.key myregistry.com/myapp:v1.0

# Verify image signature before deployment
cosign verify --key cosign.pub myregistry.com/myapp:v1.0

# Generate and verify attestations
cosign attest --key cosign.key --predicate sbom.json myregistry.com/myapp:v1.0
cosign verify-attestation --key cosign.pub myregistry.com/myapp:v1.0

Software Bill of Materials (SBOM) documents inventory every software component within a container image—base OS packages, application dependencies, libraries, and their specific versions. When a new vulnerability is disclosed (for example, a critical OpenSSL CVE), security teams can query SBOMs across all deployed images to identify which containers contain the affected version, rather than manually inspecting each image. This dramatically reduces the time between vulnerability disclosure and remediation.

SBOMs also enable license compliance tracking by documenting which open source licenses apply to software running in production. Tools like Syft generate SBOMs in standardized formats (SPDX, CycloneDX) that automated vulnerability scanners and compliance tools can process:

# Generate SBOM with Syft
syft myapp:latest -o spdx-json > sbom.json

# Scan SBOM for known vulnerabilities
grype sbom:sbom.json

Attestations link SBOMs cryptographically to specific image versions through signing. This prevents attackers from substituting a falsified SBOM that conceals vulnerable components. The combination of signed images and signed SBOMs provides both integrity verification (the image hasn't been tampered with) and transparency (you know exactly what's in the image).

Docker runtime security

Hardening runtime configuration strengthens a container's security posture in production. Default Docker settings prioritize compatibility over security, so explicit hardening is required.

Resource limits

Resource limits prevent individual containers from exhausting host resources through denial of service or resource starvation attacks.

# Limit container to 1 CPU and 512MB memory
docker run --cpus="1.0" --memory="512m" myapp

# Set memory reservation and limit
docker run --memory="512m" --memory-reservation="256m" myapp

# Limit I/O operations
docker run --device-read-iops=/dev/sda:1000 myapp

Read-only filesystems

Read-only root filesystems prevent attackers from modifying binaries or installing malware after compromising containers. Applications requiring temporary file storage can use tmpfs mounts for specific directories.

# Run with read-only root filesystem
docker run --read-only --tmpfs /tmp myapp

# Mount specific directories as read-only
docker run -v /app/static:/static:ro myapp

Security profiles

Seccomp profiles restrict the system calls available to containers, blocking dangerous kernel interfaces that attackers exploit for privilege escalation or information disclosure. The minimal profile below is illustrative only; real applications need a far larger allowlist, and Docker ships a permissive default profile for compatibility.

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat", "fstat"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

# Apply custom seccomp profile
docker run --security-opt seccomp=profile.json myapp

AppArmor and SELinux provide mandatory access control, confining containers to predefined security policies that restrict file access, network operations, and process capabilities.

# Apply AppArmor profile
docker run --security-opt apparmor=docker-default myapp

# Apply SELinux label
docker run --security-opt label=level:s0:c100,c200 myapp

Docker daemon security

Docker daemon security prevents unauthorized access to the primary control point for container operations. Daemon compromise grants attackers control over all containers and potentially the host system.

Daemon socket protection

The Docker daemon socket (/var/run/docker.sock) provides full Docker API access. Mounting this socket into containers grants those containers complete control over the Docker daemon, equivalent to root access on the host. When remote access is required, protecting the daemon with TLS mutual authentication ensures only clients with valid certificates can connect, and prevents eavesdropping on API communications.

# Generate certificates for daemon and clients
# Configure daemon to use TLS
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376

# Connect with client certificates
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=daemon.example.com:2376 ps

Rootless Docker

Rootless mode runs the Docker daemon as an unprivileged user, eliminating many privilege escalation vectors. Both the daemon and containers execute without root privileges, significantly reducing impact if compromise occurs.

# Install rootless Docker
curl -fsSL https://get.docker.com/rootless | sh

# Start daemon as regular user
systemctl --user start docker

# Containers run as current user, not root
docker run myapp

Rootless mode has limitations including restricted network functionality and incompatibility with some storage drivers, but provides substantial security improvements for compatible workloads.
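
To confirm the daemon is actually running in rootless mode, inspect its reported security options (output format varies by Docker version and configuration):

```shell
# "rootless" appears among the security options when rootless mode is active
docker info --format '{{.SecurityOptions}}'
```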

Conclusion

Docker's benefits—consistent deployment across environments, rapid application delivery, efficient resource utilization, and simplified microservices architectures—explain its widespread adoption across development and production environments. Docker containers are useful for everything from local development workflows to large-scale production deployments, CI/CD pipelines, and multi-tenant platforms. With proper security controls in place, Docker's advantages can be leveraged while managing the inherent risks of containerized environments.

© 2026 Schild Technologies