Pod Security Standards (PSS)

Pod Security Standards define three security levels - Privileged, Baseline, and Restricted - and replace Pod Security Policies (PSP), which were removed in Kubernetes 1.25. The standards are enforced by the built-in Pod Security Admission controller through namespace labels.

Understanding the Three Security Profiles

| Profile | Description | Use Case |
|---|---|---|
| Privileged | Unrestricted - allows all capabilities and privilege escalations | System daemons, CNI plugins, infrastructure components |
| Baseline | Minimally restrictive - prevents known privilege escalations | Development environments, most applications |
| Restricted | Heavily restricted - current Pod hardening best practices | Production workloads, security-critical applications |

Enabling Pod Security Standards

Pod Security Admission is enabled by default in Kubernetes 1.25+. Configure enforcement through namespace labels:

# Enable PSS Restricted profile for production namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforcement mode - reject violating pods
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.34

    # Audit mode - log violations but allow
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.34

    # Warn mode - show warning to user
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.34
Best Practice: Start with warn and audit modes in existing clusters to identify violations before enabling enforce mode.
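
A rollout along those lines can be sketched with plain kubectl (the namespace name is an example):

```shell
# Add warn and audit labels first - violations are reported but pods still run
kubectl label namespace production \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted

# Server-side dry run: see which existing workloads would break under enforce,
# without actually changing the namespace
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```

The dry-run output lists every pod that would violate the Restricted profile, so you can fix workloads before flipping the enforce label for real.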

Security Context for Restricted Profile

Pods must meet these requirements to pass Restricted profile validation:

# Complete security context for Restricted profile compliance
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true           # Must run as non-root user
    runAsUser: 1000              # Non-zero UID
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault       # Or Localhost with custom profile
  containers:
  - name: app
    image: ghcr.io/myorg/myapp:v1.0.0@sha256:abc123...  # Pin by digest
    securityContext:
      allowPrivilegeEscalation: false  # Must be false
      readOnlyRootFilesystem: true     # Recommended
      capabilities:
        drop: ["ALL"]                  # Must drop ALL capabilities
        add: ["NET_BIND_SERVICE"]      # Only allowed addition
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
      requests:
        cpu: "100m"
        memory: "128Mi"
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}  # One of the volume types allowed by Restricted

Allowed Volume Types (Restricted): configMap, csi, downwardAPI, emptyDir, ephemeral, persistentVolumeClaim, projected, secret.

Network Policies and Service Mesh

Network Policies should start with default-deny-all and explicitly allow only required traffic flows using labels instead of IP addresses. This implements the principle of least privilege at the network layer.

Step 1: Default Deny All Traffic

# Apply this to every production namespace first
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Applies to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress/egress rules = deny all

Step 2: Allow DNS Egress (Required)

# Pods need DNS resolution - always allow this
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}  # All pods need DNS
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Step 3: Allow Specific Application Traffic

# Allow frontend to backend communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
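
A quick connectivity check for the policy above (the deployment and service names are assumptions):

```shell
# Allowed: frontend pods can reach backend on 8080
kubectl exec -n production deploy/frontend -- \
  wget -qO- --timeout=3 http://backend:8080/

# Denied: a pod without the app=frontend label should time out
kubectl run probe -n production --rm -it --restart=Never --image=busybox -- \
  wget -qO- --timeout=3 http://backend:8080/
```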

CNI Plugin Selection

| CNI Plugin | Network Policy | Advanced Features |
|---|---|---|
| Cilium | Full + L7 | eBPF-native, L7 policies, Hubble observability |
| Calico | Full | BGP, eBPF mode, GlobalNetworkPolicy |
| Weave | Full | Encryption, multicast support |
| Flannel | None | Simple overlay only - avoid for security |

Recommendation: Use Cilium for production environments requiring L7 policies and observability. Use Calico for established enterprise deployments with BGP requirements.

Service Mesh mTLS with Istio

Service mesh solutions like Linkerd provide automatic mTLS with zero configuration, while Istio offers more comprehensive but complex security controls.

# Istio PeerAuthentication for strict mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # Require mTLS for all traffic

RBAC and Access Control

Every Kubernetes workload should use a dedicated service account with only the minimum permissions required - never use the default service account.

Core RBAC Principles

  1. Least Privilege: Grant minimum permissions required for each workload
  2. Separation of Duties: Different service accounts for different functions
  3. Namespace Scoping: Prefer Role over ClusterRole when possible
  4. Regular Auditing: Review RoleBindings periodically

Create Dedicated Service Accounts

# Never use the default service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: production
automountServiceAccountToken: false  # Disable if no API access needed
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: production
spec:
  serviceAccountName: my-app-sa
  automountServiceAccountToken: false  # Explicit pod-level override
  containers:
  - name: app
    image: myapp:v1.0.0

Least-Privilege Role Example

# Read-only access to pods in specific namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Dangerous Permissions to Avoid

| Permission | Risk |
|---|---|
| create on pods | Can create privileged pods for cluster compromise |
| * (wildcard) verbs | Unrestricted access to resources |
| escalate verb | Can grant permissions beyond own level |
| bind verb | Can bind roles beyond own permissions |
| impersonate verb | Can act as any user/group/serviceaccount |

Verify Permissions

# Check what a service account can do
kubectl auth can-i --list --as=system:serviceaccount:production:my-app-sa

# Check specific permission
kubectl auth can-i create pods --as=system:serviceaccount:production:my-app-sa

# Check a permission across all namespaces
kubectl auth can-i delete secrets --all-namespaces --as=system:serviceaccount:production:my-app-sa

Secrets Management

Kubernetes Secrets are NOT encrypted by default - they are only Base64 encoded. Anyone with etcd access or API permissions can read them in plaintext.

# This is NOT secure - Secrets are just Base64 encoded
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d

Enable etcd Encryption at Rest

KMS v2 encryption uses envelope encryption where data encryption keys (DEKs) are protected by key encryption keys (KEKs) stored in external KMS.

# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  - configmaps
  providers:
  - kms:
      apiVersion: v2
      name: aws-kms
      endpoint: unix:///var/run/kmsplugin/socket.sock
      cachesize: 1000
      timeout: 3s
  - identity: {}  # Fallback for reading old unencrypted data

Add to kube-apiserver:

--encryption-provider-config=/etc/kubernetes/encryption-config.yaml
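
Encryption applies only on write, so Secrets created before the configuration change remain in plaintext in etcd until rewritten. The standard remediation from the Kubernetes documentation:

```shell
# Read and rewrite every Secret so it is re-stored encrypted
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```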

External Secrets Operator (ESO)

External Secrets Operator synchronizes secrets from AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault into native Kubernetes Secrets automatically.

# SecretStore - Connection to AWS Secrets Manager
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
# ExternalSecret - Syncs secrets from AWS to K8s
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 1h  # Auto-sync every hour
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
  - secretKey: username
    remoteRef:
      key: production/database
      property: username
  - secretKey: password
    remoteRef:
      key: production/database
      property: password
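
Once applied, the sync status can be checked directly (names match the manifests above):

```shell
# READY should become True once the provider secret has been fetched
kubectl get externalsecret database-credentials -n production

# The target Kubernetes Secret is created and kept in sync
kubectl get secret db-credentials -n production
```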

HashiCorp Vault Integration

# Vault Secrets Operator (VSO) example
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-secret
  namespace: production
spec:
  vaultAuthRef: default
  mount: secret
  type: kv-v2
  path: production/app
  destination:
    name: app-credentials
    create: true
  refreshAfter: 30s  # Short refresh for security
Best Practice: Use dynamic secrets with short TTLs where possible. Vault can generate database credentials on-demand that automatically expire.
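
As a sketch of what that looks like with Vault's database secrets engine (the role name is an assumption, and the engine must already be configured against your database):

```shell
# Each read mints a brand-new database user with its own TTL;
# the credentials are revoked automatically when the lease expires
vault read database/creds/app-role
```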

Image Security and Admission Controllers

Kyverno uses YAML for policy definitions while OPA Gatekeeper requires learning Rego - making Kyverno more accessible for Kubernetes-native teams.

Kyverno: Block Latest Tag

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must use a specific tag, not 'latest'"
      pattern:
        spec:
          containers:
          - image: "!*:latest"
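
A quick smoke test once the policy is active (expected behavior, not captured output):

```shell
# Rejected by the Kyverno admission webhook: mutable tag
kubectl run bad --image=nginx:latest

# Admitted: pinned tag
kubectl run good --image=nginx:1.25
```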

Kyverno: Verify Image Signatures

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: verify-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "ghcr.io/myorg/*"
      attestors:
      - entries:
        - keyless:
            subject: "https://github.com/myorg/*"
            issuer: "https://token.actions.githubusercontent.com"
            rekor:
              url: https://rekor.sigstore.dev

OPA Gatekeeper: Trusted Registries

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: require-trusted-registries
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    repos:
    - "gcr.io/my-project/"
    - "ghcr.io/myorg/"
    - "registry.internal.company.com/"

Image Scanning with Trivy

# Scan image for vulnerabilities
trivy image nginx:1.25

# Scan with severity filter (CKS exam focus)
trivy image --severity HIGH,CRITICAL nginx:1.25

# Generate SBOM (Software Bill of Materials)
trivy image --format spdx-json -o sbom.json nginx:1.25

# Scan entire Kubernetes cluster
trivy k8s --report summary cluster

GitHub Actions Integration

# .github/workflows/scan.yml
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@0.28.0  # pin to a release tag, not master
  with:
    image-ref: 'myapp:${{ github.sha }}'
    format: 'sarif'
    output: 'trivy-results.sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'  # Fail build on vulnerabilities

Runtime Security with Falco

Falco monitors Linux system calls using eBPF and alerts on suspicious runtime behavior like shell spawns, file access, and network connections.

Deploy Falco with Helm

# Install Falco with eBPF driver
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/..."

Custom Falco Rules

# Detect shell spawned in container
- rule: Shell Spawned in Container
  desc: Detect shell execution in container - potential breach
  condition: >
    spawned_process and
    container and
    shell_procs
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name
    shell=%proc.name parent=%proc.pname
    cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]

# Detect sensitive file access
- rule: Read Sensitive File
  desc: Detect read of sensitive files like /etc/shadow
  condition: >
    open_read and
    container and
    sensitive_files
  output: >
    Sensitive file opened for reading
    (user=%user.name file=%fd.name container=%container.name)
  priority: WARNING
  tags: [filesystem, confidentiality]
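
To see these rules fire, trigger them from any running workload (the deployment name is an assumption) and tail the Falco logs:

```shell
# Trigger both rules: spawn a shell and read a sensitive file
kubectl exec -it deploy/my-app -n production -- sh -c 'cat /etc/shadow'

# The alerts appear in the Falco pod logs
kubectl logs -n falco -l app.kubernetes.io/name=falco | grep -E 'Shell spawned|Sensitive file'
```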

Kubernetes Audit Logging

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all requests to secrets at RequestResponse level
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets"]

# Log pod exec/attach at Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach"]

# Log changes to RBAC
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]

# Don't log read-only requests to configmaps (reduce noise)
- level: None
  resources:
  - group: ""
    resources: ["configmaps"]
  verbs: ["get", "list", "watch"]

# Catch-all for everything else
- level: Metadata
  omitStages:
  - RequestReceived
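
Audit events are written as JSON lines, so they can be filtered with jq; for example, to see who touched Secrets (log path as configured on the API server):

```shell
# Who accessed which Secret, and how
sudo jq -c 'select(.objectRef.resource == "secrets")
            | {user: .user.username, verb: .verb, secret: .objectRef.name}' \
  /var/log/kubernetes/audit.log
```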

Cluster Hardening and CIS Benchmarks

CIS Kubernetes Benchmarks provide two security levels: L1 for essential requirements and L2 for defense-in-depth hardening.

Run kube-bench

# Run as Kubernetes Job
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# View results
kubectl logs job/kube-bench

# Run locally for faster feedback
kube-bench run --targets master,node

Key CIS Recommendations

| Control | Recommendation |
|---|---|
| 1.2.1 | Ensure anonymous-auth is disabled |
| 1.2.6 | Ensure authorization-mode is not AlwaysAllow |
| 1.2.16 | Ensure PodSecurity admission is enabled |
| 2.1 | Ensure etcd data directory permissions are 700 |
| 5.1.5 | Ensure default service accounts are not used |

API Server Hardening Flags

# Essential kube-apiserver security flags
--anonymous-auth=false
--authorization-mode=Node,RBAC
--enable-admission-plugins=NodeRestriction,PodSecurity
--audit-log-path=/var/log/kubernetes/audit.log
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml
--tls-min-version=VersionTLS12
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

Supply Chain Security

Sigstore provides keyless signing using OIDC identities - eliminating the complexity of traditional PKI for container image verification.

Sign Images with Cosign

# Keyless signing with OIDC (recommended)
cosign sign --yes ghcr.io/myorg/myapp:v1.0.0

# Sign with key (traditional)
cosign sign --key cosign.key ghcr.io/myorg/myapp:v1.0.0

Verify Image Signatures

# Keyless verification
cosign verify \
  --certificate-identity="https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  ghcr.io/myorg/myapp:v1.0.0

# Verify with key
cosign verify --key cosign.pub ghcr.io/myorg/myapp:v1.0.0

Generate and Attest SBOM

# Generate SBOM with Syft
syft ghcr.io/myorg/myapp:v1.0.0 -o spdx-json > sbom.spdx.json

# Attach SBOM as attestation
cosign attest --predicate sbom.spdx.json --type spdxjson ghcr.io/myorg/myapp:v1.0.0

# Verify attestation
cosign verify-attestation --type spdxjson ghcr.io/myorg/myapp:v1.0.0

SLSA Framework Levels

| Level | Requirements |
|---|---|
| SLSA 1 | Documentation of the build process exists |
| SLSA 2 | Version control, hosted build service, authenticated provenance |
| SLSA 3 | Source-verified history (18 months), isolated, parameterless builds |
| SLSA 4 | Two-person review, hermetic builds |

(These are the original SLSA v0.1 levels; SLSA v1.0 reorganizes them into Build track levels L0-L3.)

GitHub Actions SLSA Generator

jobs:
  build:
    outputs:
      digest: ${{ steps.build.outputs.digest }}
    steps:
    - uses: actions/checkout@v4
    - id: build
      run: |
        docker build -t myapp:${{ github.sha }} .
        echo "digest=$(docker inspect --format='{{index .RepoDigests 0}}' myapp:${{ github.sha }})" >> $GITHUB_OUTPUT

  provenance:
    needs: build
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v1.9.0
    with:
      image: myapp
      digest: ${{ needs.build.outputs.digest }}
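
Consumers can then validate the generated provenance with the slsa-verifier CLI (image reference and repository are placeholders):

```shell
# Verify that the image's provenance was produced by the expected source repo
slsa-verifier verify-image myregistry/myapp@sha256:<digest> \
  --source-uri github.com/myorg/myrepo
```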

Security Tools Comparison

| Tool | Category | Function | CKS Exam |
|---|---|---|---|
| Trivy | Image Scanning | Vulnerability, SBOM, misconfiguration | Yes (critical) |
| kube-bench | Compliance | CIS Kubernetes Benchmark | Yes |
| Falco | Runtime Security | System call monitoring, threat detection | Yes (critical) |
| Kyverno | Policy Engine | YAML-based admission control | Partially |
| OPA Gatekeeper | Policy Engine | Rego-based admission control | Partially |
| Cilium | CNI + Security | eBPF networking, L7 policies | No |
| Cosign | Supply Chain | Image signing and verification | No |
| Kubescape | Posture Management | CIS, NSA/CISA compliance, risk scoring | No |

Recommended Tool Stack

For CKS Exam:

  • Trivy (image scanning - almost guaranteed)
  • kube-bench (CIS compliance)
  • Falco (runtime detection)
  • Network Policies (native)
  • RBAC verification with kubectl auth can-i

For Production:

  • Trivy + Kubescape (comprehensive scanning)
  • Falco + observability platform (runtime + alerting)
  • Kyverno (admission control, image verification)
  • External Secrets Operator (secrets management)
  • Cilium (advanced networking)
  • Sigstore/Cosign (supply chain)

CKS Certification Preparation

CKS Exam Requirements: 67% to pass, 2 hours, performance-based. Prerequisite: valid CKA certification.

Exam Domain Weights (2025-2026)

| Domain | Weight | Key Topics |
|---|---|---|
| Cluster Setup | 10% | Network policies, CIS benchmarks, ingress security |
| Cluster Hardening | 15% | RBAC, service accounts, cluster upgrades |
| System Hardening | 15% | Host OS security, kernel hardening |
| Minimize Microservice Vulnerabilities | 20% | PSS, SecurityContext, secrets management |
| Supply Chain Security | 20% | Image scanning, signing, admission controllers |
| Monitoring, Logging, and Runtime Security | 20% | Audit logs, Falco, container immutability |

High-Priority Study Topics

  1. Network Policies - Default deny first, always allow DNS egress
  2. Trivy Image Scanning - Almost guaranteed on the exam
  3. Falco Rules - Understanding alerts and rule syntax
  4. RBAC - Verify with kubectl auth can-i
  5. Audit Logs - Configuration and analysis
  6. Pod Security Standards - Restricted profile enforcement
  7. Secrets Encryption - EncryptionConfiguration for etcd
  8. crictl - Container runtime investigation

Practice Commands

# Image scanning with Trivy
trivy image --severity HIGH,CRITICAL nginx:latest

# Check CIS compliance
kube-bench run --targets master

# Verify RBAC permissions
kubectl auth can-i create pods --as=system:serviceaccount:default:mysa

# Check pod security violations
kubectl get pods -A -o json | jq '.items[] | select(.spec.securityContext.runAsNonRoot != true)'

# View Falco alerts
kubectl logs -n falco -l app.kubernetes.io/name=falco

# Check secrets encryption (supply --cacert/--cert/--key for your etcd)
ETCDCTL_API=3 etcdctl get /registry/secrets/default/mysecret | hexdump -C

Frequently Asked Questions

What is the difference between Pod Security Standards and Pod Security Policies?

Pod Security Standards (PSS) replaced the deprecated Pod Security Policies (PSP) starting in Kubernetes 1.25. PSS defines three security profiles (Privileged, Baseline, Restricted) enforced through the built-in Pod Security Admission controller using namespace labels, eliminating the need for complex custom policies.

How do I encrypt Kubernetes Secrets at rest?

Kubernetes Secrets are only Base64 encoded by default. To encrypt at rest, configure the kube-apiserver with an EncryptionConfiguration using either a local encryption key (aescbc/aesgcm) or preferably a KMS v2 provider integration with AWS KMS, Google Cloud KMS, or Azure Key Vault for production environments.

What is the best admission controller for Kubernetes - Kyverno or OPA Gatekeeper?

Kyverno is recommended for teams wanting Kubernetes-native YAML policies with a lower learning curve, while OPA Gatekeeper suits organizations needing complex compliance logic using Rego. Many production environments use Kyverno for operational policies and image verification with Sigstore integration.

How does Falco detect runtime threats in Kubernetes?

Falco uses eBPF (extended Berkeley Packet Filter) to monitor Linux kernel system calls in real-time. It compares events against customizable rules to detect suspicious behavior like shell spawns in containers, sensitive file access, or unexpected network connections, triggering alerts without signature-based scanning.

What security tools should I use for CKS exam preparation?

Focus on Trivy for image vulnerability scanning (almost guaranteed on the exam), kube-bench for CIS benchmark compliance, Network Policies for microsegmentation, RBAC with kubectl auth can-i for permission verification, and Falco for runtime security alerts and audit log analysis.

What are the CKS exam domains and their weightage?

The CKS exam covers six domains: Cluster Setup (10%), Cluster Hardening (15%), System Hardening (15%), Minimize Microservice Vulnerabilities (20%), Supply Chain Security (20%), and Monitoring, Logging, Runtime Security (20%). Supply chain security and runtime security have the highest combined weightage at 40%.

How do I implement default-deny Network Policies in Kubernetes?

Create a NetworkPolicy with empty podSelector and specify both Ingress and Egress in policyTypes without any allow rules. This blocks all traffic to and from pods in the namespace. Then create specific NetworkPolicies to allow only required traffic flows using label selectors instead of IP addresses.

Conclusion

Kubernetes security in 2026 requires a comprehensive defense-in-depth strategy across all eight security domains. The key principles to remember:

  1. Pod Security Standards "Restricted" - Use namespace labels for automatic enforcement
  2. Default-Deny Networking - Start with deny-all, then explicitly allow required traffic
  3. Least-Privilege RBAC - Dedicated service accounts, verify with kubectl auth can-i
  4. Secrets Encryption - KMS v2 for etcd, External Secrets Operator for management
  5. Image Security - Sign with Sigstore, scan with Trivy, enforce with Kyverno
  6. Runtime Protection - Falco with eBPF for real-time threat detection
  7. Cluster Hardening - CIS Benchmarks with kube-bench automation
  8. Supply Chain - SBOMs with Syft, attestations with Cosign, SLSA compliance

For CKS certification, focus your preparation on Trivy image scanning, Network Policies, Falco rules, and RBAC - these topics represent the majority of exam questions.

Next Steps

Ready to Master Kubernetes Security for CKS?

Subscribe to our YouTube channel for hands-on security tutorials, CKS exam prep, and DevOps best practices.

Subscribe to Gheware DevOps AI