GitHub Actions is a powerful CI/CD platform that automates your software development workflow directly within your GitHub repository. It's like having a dedicated DevOps engineer that never sleeps, never skips a step, and works for free on public repositories.
🎯 Real-World Impact: Teams using GitHub Actions report 85% faster deployment cycles, 90% reduction in deployment errors, and 40% less time spent on manual testing compared to traditional CI/CD tools.
✅ Advantages Over Traditional CI/CD: workflows live next to your code and react natively to repository events, there is no CI server to install or maintain, a large marketplace of reusable actions covers common tasks, and public repositories get free build minutes.
Let's build a complete CI/CD pipeline for a real-world application. I'll use a Python weather application as our example, but the patterns apply to any technology stack.
weather-py/
├── .github/
│   └── workflows/
│       └── ci-cd.yml        # Main workflow file
├── app/
│   ├── __init__.py
│   ├── main.py              # Flask application
│   └── requirements.txt     # Dependencies
├── k8s/
│   ├── deployment.yaml      # Kubernetes manifest
│   ├── service.yaml         # Service definition
│   └── ingress.yaml         # Ingress configuration
├── Dockerfile               # Container image definition
└── README.md
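Before wiring up CI, it helps to see the app the pipeline exercises. Here is a minimal sketch of what `app/main.py` might look like, consistent with the endpoints used later (`/health`, `/ready`, `/weather`); the forecast payload is a placeholder, not the article's actual implementation.

```python
# app/main.py — minimal Flask sketch (hypothetical implementation).
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/health")
def health():
    # Liveness probe target used by the Dockerfile HEALTHCHECK and k8s probe
    return jsonify(status="ok")


@app.route("/ready")
def ready():
    # Readiness probe target referenced by the Kubernetes manifest
    return jsonify(status="ready")


@app.route("/weather")
def weather():
    # Placeholder response; a real implementation would query a weather API
    city = request.args.get("city", "London")
    return jsonify(city=city, forecast="unknown")


if __name__ == "__main__":
    # Bind to all interfaces on the port exposed by the Dockerfile
    app.run(host="0.0.0.0", port=5000)
```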
Create .github/workflows/ci-cd.yml in your repository:
name: CI/CD Pipeline

# Trigger conditions
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
  # Manual trigger
  workflow_dispatch:

# Environment variables available to all jobs
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # Job 1: Code Quality & Testing
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'
          cache: 'pip'  # Cache pip dependencies

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r app/requirements.txt
          pip install pytest flake8 black

      - name: Code formatting check
        run: black --check app/

      - name: Lint code
        run: flake8 app/

      - name: Run unit tests
        run: |
          cd app
          python -m pytest tests/ -v --junitxml=../test-results.xml

      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: test-results.xml
💡 Pro Tip: Use different triggers for different environments:
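As a sketch of what that separation might look like (the branch names and schedule are illustrative):

```yaml
on:
  push:
    branches: [ develop ]   # every push to develop -> staging pipeline
  pull_request:
    branches: [ main ]      # PRs into main -> test-only run
  release:
    types: [ published ]    # tagged releases -> production deploy
  schedule:
    - cron: '0 2 * * *'     # nightly full test suite
```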
Let's add Docker image building and push to our workflow. This creates a portable, consistent deployment artifact.
First, create an optimized Dockerfile for our Python app:
# Use a specific, still-supported base image for reproducibility
# (slim-buster is end-of-life and no longer receives security updates)
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Install system dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first (better layer caching)
COPY app/requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Create non-root user for security
RUN adduser --disabled-password --gecos '' appuser

# Copy application code
COPY app/ .

# Change ownership to non-root user
RUN chown -R appuser:appuser /app
USER appuser

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:5000/health || exit 1

# Expose port
EXPOSE 5000

# Run application
CMD ["python", "main.py"]
Add this job to your workflow file:
  # Job 2: Build and Push Docker Image
  build:
    needs: test  # Only run if tests pass
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      image-digest: ${{ steps.build.outputs.digest }}
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix={{branch}}-
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push Docker image
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          # Multi-platform builds
          platforms: linux/amd64,linux/arm64
Add vulnerability scanning to catch security issues early:
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'
Kubernetes IN Docker (KIND) creates lightweight local clusters perfect for testing your Kubernetes manifests in CI.
  # Job 3: Kubernetes Testing with KIND
  k8s-test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Create KIND cluster
        uses: helm/kind-action@v1.8.0
        with:
          cluster_name: test-cluster
          node_image: kindest/node:v1.29.0
          config: |
            kind: Cluster
            apiVersion: kind.x-k8s.io/v1alpha4
            nodes:
              - role: control-plane
                kubeadmConfigPatches:
                  - |
                    kind: InitConfiguration
                    nodeRegistration:
                      kubeletExtraArgs:
                        node-labels: "ingress-ready=true"
                extraPortMappings:
                  - containerPort: 80
                    hostPort: 80
                    protocol: TCP
                  - containerPort: 443
                    hostPort: 443
                    protocol: TCP

      - name: Load Docker image into KIND
        run: |
          # Get the image tag from the build job
          IMAGE_TAG="${{ needs.build.outputs.image-tag }}"
          echo "Loading image: $IMAGE_TAG"
          # Pull the image (it was built and pushed in the previous job)
          docker pull "$IMAGE_TAG"
          # Load it into the KIND cluster
          kind load docker-image "$IMAGE_TAG" --name test-cluster

      - name: Install NGINX Ingress Controller
        run: |
          kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
          kubectl wait --namespace ingress-nginx \
            --for=condition=ready pod \
            --selector=app.kubernetes.io/component=controller \
            --timeout=90s

      - name: Deploy application to KIND
        run: |
          # Update the image tag in the deployment manifest
          sed -i "s|IMAGE_TAG|${{ needs.build.outputs.image-tag }}|g" k8s/deployment.yaml
          # Apply all Kubernetes manifests
          kubectl apply -f k8s/
          # Wait for the deployment to become ready
          kubectl wait --for=condition=available --timeout=300s deployment/weather-app

      - name: Run integration tests
        run: |
          # Port-forward to reach the app
          kubectl port-forward svc/weather-app 8080:80 &
          sleep 10
          # Test health endpoint
          curl -f http://localhost:8080/health || exit 1
          # Test weather API
          curl -f "http://localhost:8080/weather?city=London" || exit 1
          echo "✅ All integration tests passed!"

      - name: Debug on failure
        if: failure()
        run: |
          echo "=== Pod Status ==="
          kubectl get pods -o wide
          echo "=== Pod Logs ==="
          kubectl logs -l app=weather-app
          echo "=== Events ==="
          kubectl get events --sort-by=.metadata.creationTimestamp
⚠️ Common KIND Pitfall: KIND nodes run as Docker containers with their own image store, so images built or pulled in previous jobs must be explicitly loaded with kind load docker-image. Don't forget this step!
Now let's deploy to a real Kubernetes cluster. This example shows deployment to multiple environments.
  # Job 4: Deploy to Staging
  deploy-staging:
    needs: [build, k8s-test]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    environment: staging
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure kubectl
        run: |
          echo "${{ secrets.STAGING_KUBECONFIG }}" | base64 -d > kubeconfig
          # Persist KUBECONFIG for later steps; a plain `export` would not
          # survive past this step, since each run step gets a fresh shell
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"

      - name: Deploy to staging
        run: |
          # Update image tag
          sed -i "s|IMAGE_TAG|${{ needs.build.outputs.image-tag }}|g" k8s/deployment.yaml
          # Apply to staging namespace
          kubectl apply -f k8s/ -n staging
          # Wait for rollout to complete
          kubectl rollout status deployment/weather-app -n staging --timeout=600s
          # Verify deployment
          kubectl get pods -n staging -l app=weather-app
  # Job 5: Deploy to Production
  deploy-production:
    needs: [build, k8s-test]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://weather.company.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure kubectl
        run: |
          echo "${{ secrets.PROD_KUBECONFIG }}" | base64 -d > kubeconfig
          # Persist KUBECONFIG for later steps (a plain `export` only
          # lasts for the current step's shell)
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"

      - name: Blue-Green Deployment
        run: |
          # Update image tag
          sed -i "s|IMAGE_TAG|${{ needs.build.outputs.image-tag }}|g" k8s/deployment.yaml
          # Create new deployment with version label
          kubectl apply -f k8s/ -n production
          # Wait for the new version to be ready
          kubectl rollout status deployment/weather-app -n production --timeout=600s
          # Run smoke tests
          ./scripts/smoke-tests.sh production
          # Update the service selector to point at the new version (blue-green)
          kubectl patch service weather-app -n production -p '{"spec":{"selector":{"version":"'${{ github.sha }}'"}}}'

      - name: Notify deployment success
        uses: 8398a7/action-slack@v3
        with:
          status: success
          text: '✅ Production deployment successful! Version: ${{ github.sha }}'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
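The production job calls `./scripts/smoke-tests.sh`, which isn't shown in the article. Here is one possible sketch, assuming hypothetical per-environment base URLs; adjust the hostnames and endpoints to your setup.

```shell
#!/usr/bin/env bash
# scripts/smoke-tests.sh — minimal post-deploy smoke tests (illustrative).
set -u

base_url() {
  # Map an environment name to its base URL (hypothetical hostnames)
  case "$1" in
    staging)    echo "https://weather-staging.company.com" ;;
    production) echo "https://weather.company.com" ;;
    *)          echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

check_endpoint() {
  # Fail fast if the endpoint does not return a 2xx response
  curl -fsS --max-time 10 "$1" > /dev/null
}

run_smoke_tests() {
  local env="$1" base
  base="$(base_url "$env")" || return 1
  check_endpoint "$base/health" || { echo "health check failed" >&2; return 1; }
  check_endpoint "$base/weather?city=London" || { echo "weather API failed" >&2; return 1; }
  echo "smoke tests passed for $env"
}

# Run only when an environment name is supplied, e.g. ./smoke-tests.sh production
if [ "$#" -gt 0 ]; then
  run_smoke_tests "$@"
fi
```

In the workflow above, the script is invoked as `./scripts/smoke-tests.sh production` after the rollout completes, so a failing smoke test aborts the job before the service selector is switched.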
Here are the production-ready Kubernetes manifests:
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-app
  labels:
    app: weather-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: weather-app
  template:
    metadata:
      labels:
        app: weather-app
        # Note: kubectl does not expand ${{ }} expressions; have the
        # pipeline substitute this value (as it does for IMAGE_TAG)
        version: ${{ github.sha }}
    spec:
      containers:
        - name: weather-app
          image: IMAGE_TAG  # Replaced by CI/CD
          ports:
            - containerPort: 5000
          env:
            - name: ENV
              value: production
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: weather-secrets
                  key: api-key
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 5
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            allowPrivilegeEscalation: false
Security should be built into every step of your CI/CD pipeline. Here are essential practices for GitHub Actions.
✅ Secrets Best Practices:
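One core pattern worth illustrating: expose a secret to a step through `env` rather than interpolating it into the shell command, so it stays masked in logs and never appears in the process list (the secret name and URL below are hypothetical):

```yaml
steps:
  - name: Call deployment API
    env:
      DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}  # hypothetical secret
    run: |
      # Reference the variable; never echo it or pass it inline in the command
      curl -fsS -H "Authorization: Bearer ${DEPLOY_TOKEN}" \
        https://deploy.company.com/api/trigger
```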
name: Secure CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

# Restrict permissions by default
permissions:
  contents: read

jobs:
  secure-build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      security-events: write  # For SARIF uploads
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@v2
        with:
          egress-policy: audit
          disable-sudo: true
          disable-file-monitoring: false

      - name: Checkout code
        uses: actions/checkout@v4
        with:
          persist-credentials: false  # Don't persist git credentials

      - name: Dependency Review
        uses: actions/dependency-review-action@v4
        if: github.event_name == 'pull_request'

      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: python

      - name: Build application
        run: |
          # Your build commands here
          python -m pip install -r requirements.txt

      - name: Run CodeQL Analysis
        uses: github/codeql-action/analyze@v3

      - name: Build Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          tags: temp-image:latest
          # Security optimizations
          build-args: |
            BUILDKIT_INLINE_CACHE=1
          secrets: |
            GIT_AUTH_TOKEN=${{ secrets.GITHUB_TOKEN }}

      - name: Run container security scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'temp-image:latest'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'  # Fail the build on critical vulnerabilities

      - name: Run Snyk container scan
        uses: snyk/actions/docker@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          image: temp-image:latest
          args: --severity-threshold=high
🔒 Security Tip: Use GitHub's dependency review action to automatically scan for vulnerable dependencies in pull requests. This catches security issues before they reach your main branch.
Test your application across multiple versions and configurations:
jobs:
  test-matrix:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Quote versions so YAML doesn't parse them as floats
        # (an unquoted 3.10 would become 3.1)
        python-version: ['3.8', '3.9', '3.10', '3.11']
        django-version: ['3.2', '4.0', '4.1']
        exclude:
          # Exclude unsupported combinations
          - python-version: '3.8'
            django-version: '4.1'
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Django ${{ matrix.django-version }}
        run: |
          pip install Django==${{ matrix.django-version }}
          pip install -r requirements.txt

      - name: Run tests
        run: python manage.py test
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://app.company.com
    # Only deploy on main branch AND when the commit message contains [deploy]
    if: github.ref == 'refs/heads/main' && contains(github.event.head_commit.message, '[deploy]')
    steps:
      - name: Wait for approval
        uses: trstringer/manual-approval@v1
        with:
          secret: ${{ secrets.GITHUB_TOKEN }}
          approvers: rajesh,devops-team
          minimum-approvals: 2
          issue-title: "Deploy version ${{ github.sha }} to production"
          issue-body: |
            Please review the changes and approve deployment:

            **Commit**: ${{ github.sha }}
            **Author**: ${{ github.actor }}
            **Changes**: ${{ github.event.head_commit.message }}

            [View Changes](https://github.com/${{ github.repository }}/commit/${{ github.sha }})

      - name: Deploy to production
        run: |
          echo "Deploying to production..."
          # Your deployment steps here
🚀 Performance Optimizations:
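Two of the highest-impact optimizations can be sketched as follows: cancelling superseded runs with `concurrency`, and caching dependencies with `actions/cache` (the cache paths and keys are illustrative):

```yaml
# Cancel in-progress runs when a newer commit arrives on the same branch
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  fast-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Cache pip downloads keyed on the requirements file hash
      - uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          key: pip-${{ runner.os }}-${{ hashFiles('app/requirements.txt') }}
          restore-keys: pip-${{ runner.os }}-
```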
Symptoms: Workflow file exists but doesn't run on push/PR
Common Causes & Solutions:
Workflow file not located in the .github/workflows/ directory

# Debug workflow triggers
name: Debug Workflow

on:
  push:
  pull_request:
  workflow_dispatch:  # Add a manual trigger for testing

jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - name: Print trigger info
        run: |
          echo "Event: ${{ github.event_name }}"
          echo "Ref: ${{ github.ref }}"
          echo "Actor: ${{ github.actor }}"
Debug Docker issues systematically:
- name: Debug Docker build
  if: failure()
  run: |
    echo "=== Docker Info ==="
    docker version
    docker system df
    echo "=== Build Context ==="
    ls -la
    echo "=== Dockerfile Content ==="
    cat Dockerfile
    echo "=== Available Space ==="
    df -h
Comprehensive K8s debugging:
- name: Debug Kubernetes deployment
  if: failure()
  run: |
    echo "=== Cluster Info ==="
    kubectl cluster-info
    echo "=== Nodes Status ==="
    kubectl get nodes -o wide
    echo "=== Pod Status ==="
    kubectl get pods -A
    echo "=== Recent Events ==="
    kubectl get events --sort-by=.metadata.creationTimestamp
    echo "=== Deployment Logs ==="
    kubectl logs deployment/weather-app -n default --tail=100
Join 10,000+ engineers building bulletproof CI/CD pipelines. Get our complete GitHub Actions template library and advanced Kubernetes deployment strategies.
💬 Share Your Experience: Have you built GitHub Actions workflows for production? What challenges did you face? Share your insights in the comments below!
🎬 Watch the Tutorial: See this GitHub Actions CI/CD pipeline in action with our complete video walkthrough.
Rajesh Gheware is a Senior DevOps Architect with over 20 years of experience building enterprise CI/CD pipelines. He has helped Fortune 500 companies migrate from manual deployments to fully automated GitHub Actions workflows, reducing deployment times from hours to minutes.