CI/CD Pipeline Best Practices for Modern Development Teams
Author: ZTABS Team
A well-built CI/CD pipeline is the difference between teams that ship with confidence and teams that dread deployments. In 2026, the tooling has matured — GitHub Actions is the dominant CI/CD platform, container-based builds are standard, and security scanning is no longer optional. The challenge is not the tools but how you compose them into a pipeline that is fast, reliable, and secure.
This guide covers the practices that matter, with production-ready GitHub Actions workflows you can adapt to your stack.
Pipeline Architecture
A production CI/CD pipeline has distinct stages, each with a clear purpose and failure mode.
```
┌──────────┐     ┌──────────┐     ┌──────────┐     ┌──────────┐     ┌──────────┐
│  Commit  │────▶│  Build   │────▶│   Test   │────▶│ Security │────▶│  Deploy  │
│  & Lint  │     │ & Compile│     │  Suite   │     │   Scan   │     │  Stages  │
└──────────┘     └──────────┘     └──────────┘     └──────────┘     └──────────┘
     │                │                │                │                │
 Type check       Container        Unit tests       Dependency       Staging
 ESLint/Prettier  Docker build     Integration      SAST/DAST        Production
 Commit lint      Artifacts        E2E              Secret scan      Rollback
```
Each stage is a gate. If linting fails, tests do not run. If tests fail, security scans do not run. If security scans fail, deployment does not happen. This fail-fast approach saves compute and gives developers the fastest possible feedback.
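In GitHub Actions terms, this gating is expressed through `needs:` dependencies between jobs. A minimal sketch of the chain (job names and placeholder steps are illustrative; real job bodies appear later in this guide):

```yaml
# Sketch: strict fail-fast gating. If `lint` fails, nothing downstream runs.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: echo "lint & type check"
  test:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - run: echo "test suite"
  security:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "security scans"
  deploy:
    needs: security
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy"
```

Independent gates (for example, running security scans in parallel with tests) are a deliberate trade-off: faster feedback at the cost of some wasted compute when an earlier gate fails.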
GitHub Actions: Foundation Workflow
Start with a workflow that runs on every pull request and covers the essential checks.
```yaml
name: CI Pipeline

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  NODE_VERSION: "20"
  PNPM_VERSION: "9"

jobs:
  lint:
    name: Lint & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: ${{ env.PNPM_VERSION }}
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - name: Type check
        run: pnpm tsc --noEmit
      - name: Lint
        run: pnpm eslint . --max-warnings 0
      - name: Format check
        run: pnpm prettier --check .

  test:
    name: Test Suite
    needs: lint
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: ${{ env.PNPM_VERSION }}
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - name: Run migrations
        run: pnpm db:migrate
        env:
          DATABASE_URL: postgres://test:test@localhost:5432/testdb
      - name: Unit & integration tests
        run: pnpm vitest run --coverage
        env:
          DATABASE_URL: postgres://test:test@localhost:5432/testdb
          REDIS_URL: redis://localhost:6379
      - name: Upload coverage
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/
          retention-days: 7
```
Key patterns in this workflow:
- `concurrency` with `cancel-in-progress` — if you push a new commit to a PR, the previous workflow run is canceled. This saves compute and ensures you only care about the latest code.
- `--frozen-lockfile` — ensures CI installs the exact dependency versions from your lockfile, preventing "works on my machine" issues.
- Service containers — PostgreSQL and Redis spin up as Docker containers alongside your tests. No mocking the database, no shared test environments.
- `needs: lint` — tests only run if linting passes, failing fast on trivial issues.
Testing Strategy in CI
The Testing Pyramid in Practice
| Layer | Tool | Runs When | Speed | Coverage |
|-------|------|-----------|-------|----------|
| Unit tests | Vitest, Jest | Every PR | Seconds | Individual functions and modules |
| Integration tests | Vitest + real DB | Every PR | Minutes | Service boundaries, API contracts |
| E2E tests | Playwright, Cypress | Pre-deploy | Minutes | Critical user flows |
| Performance tests | k6, Artillery | Nightly or pre-release | Minutes | Latency, throughput under load |
Parallel Test Execution
Split your test suite across multiple runners to keep feedback fast.
```yaml
test:
  name: Tests (Shard ${{ matrix.shard }})
  needs: lint
  runs-on: ubuntu-latest
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - uses: actions/checkout@v4
    - uses: pnpm/action-setup@v4
      with:
        version: ${{ env.PNPM_VERSION }}
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ env.NODE_VERSION }}
        cache: pnpm
    - run: pnpm install --frozen-lockfile
    - name: Run tests (shard ${{ matrix.shard }}/4)
      run: pnpm vitest run --shard=${{ matrix.shard }}/4
```
E2E Tests with Playwright
```yaml
e2e:
  name: E2E Tests
  needs: [test, build]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: pnpm/action-setup@v4
      with:
        version: ${{ env.PNPM_VERSION }}
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ env.NODE_VERSION }}
        cache: pnpm
    - run: pnpm install --frozen-lockfile
    - name: Install Playwright browsers
      run: pnpm exec playwright install --with-deps chromium
    - name: Run E2E tests
      # Assumes the app is reachable at BASE_URL — e.g. via the webServer
      # option in playwright.config, which starts it before tests run.
      run: pnpm exec playwright test
      env:
        BASE_URL: http://localhost:3000
    - name: Upload test report
      if: failure()
      uses: actions/upload-artifact@v4
      with:
        name: playwright-report
        path: playwright-report/
        retention-days: 7
```
Security Scanning
Security scanning in CI catches vulnerabilities before they reach production. Three layers cover the critical attack surface.
Dependency Scanning
```yaml
security:
  name: Security Scan
  needs: lint
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run Trivy vulnerability scanner
      uses: aquasecurity/trivy-action@master
      with:
        scan-type: fs
        scan-ref: .
        severity: CRITICAL,HIGH
        exit-code: 1
        format: sarif
        output: trivy-results.sarif
    - name: Upload Trivy scan results
      if: always()
      uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: trivy-results.sarif
```
Secret Detection
Prevent accidental commits of API keys, passwords, and tokens.
```yaml
- name: Scan for secrets
  uses: trufflesecurity/trufflehog@main
  with:
    extra_args: --only-verified --fail
```
Container Image Scanning
If you build Docker images, scan them for OS-level vulnerabilities.
```yaml
scan-image:
  name: Scan Container Image
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Download image artifact
      uses: actions/download-artifact@v4
      with:
        name: docker-image
    - name: Load image
      run: docker load -i image.tar
    - name: Scan image with Trivy
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: myapp:${{ github.sha }}
        severity: CRITICAL,HIGH
        exit-code: 1
```
Deployment Automation
Environment Promotion
Code should flow through environments with increasing confidence requirements.
```yaml
deploy-staging:
  name: Deploy to Staging
  needs: [test, security, e2e]
  if: github.ref == 'refs/heads/main'
  runs-on: ubuntu-latest
  environment:
    name: staging
    url: https://staging.example.com
  steps:
    - uses: actions/checkout@v4
    - name: Deploy to staging
      run: |
        pnpm install --frozen-lockfile
        pnpm build
        pnpm deploy:staging
      env:
        DEPLOY_TOKEN: ${{ secrets.STAGING_DEPLOY_TOKEN }}

deploy-production:
  name: Deploy to Production
  needs: deploy-staging
  runs-on: ubuntu-latest
  environment:
    name: production
    url: https://example.com
  steps:
    - uses: actions/checkout@v4
    - name: Deploy to production
      run: |
        pnpm install --frozen-lockfile
        pnpm build
        pnpm deploy:production
      env:
        DEPLOY_TOKEN: ${{ secrets.PRODUCTION_DEPLOY_TOKEN }}
```
GitHub Environments give you:
- Required reviewers — production deployments require manual approval
- Wait timers — enforce a minimum delay after staging deployment before production
- Environment-specific secrets — staging and production have separate credentials
- Deployment protection rules — require status checks to pass before deployment
Database Migrations in CI/CD
Database migrations are the trickiest part of deployment automation. They must be backward-compatible with the currently running code.
```yaml
migrate:
  name: Run Migrations
  needs: deploy-staging
  runs-on: ubuntu-latest
  environment: staging
  steps:
    - uses: actions/checkout@v4
    - name: Run database migrations
      run: pnpm db:migrate
      env:
        DATABASE_URL: ${{ secrets.STAGING_DATABASE_URL }}
    - name: Verify migration
      run: pnpm db:verify
      env:
        DATABASE_URL: ${{ secrets.STAGING_DATABASE_URL }}
```
Follow the expand-and-contract pattern for schema changes:
- Expand — add the new column or table without removing anything
- Migrate data — backfill the new column from existing data
- Deploy code — update the application to use the new schema
- Contract — remove the old column or table in a subsequent deployment
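As a concrete sketch, here is what the pattern might look like for consolidating two name columns into one (table and column names are hypothetical; PostgreSQL syntax):

```sql
-- 1. Expand: add the new column. Running code ignores it; new code can write it.
ALTER TABLE users ADD COLUMN full_name text;

-- 2. Migrate data: backfill from the existing columns
--    (batch this on large tables to avoid long locks).
UPDATE users
SET full_name = first_name || ' ' || last_name
WHERE full_name IS NULL;

-- 3. Deploy application code that reads and writes full_name exclusively.

-- 4. Contract (a later deployment, after verifying nothing reads the old columns):
ALTER TABLE users DROP COLUMN first_name;
ALTER TABLE users DROP COLUMN last_name;
```

Because each step is individually backward-compatible, the application and the schema never have to change in lockstep, and any step can be rolled back without data loss.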
Artifact Management
Container Image Workflow
```yaml
build:
  name: Build Container Image
  needs: lint
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  steps:
    - uses: actions/checkout@v4
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    - name: Login to GitHub Container Registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - name: Build and push
      uses: docker/build-push-action@v6
      with:
        context: .
        push: ${{ github.ref == 'refs/heads/main' }}
        tags: |
          ghcr.io/${{ github.repository }}:${{ github.sha }}
          ghcr.io/${{ github.repository }}:latest
        cache-from: type=gha
        cache-to: type=gha,mode=max
```
Key patterns:
- `cache-from: type=gha` — use GitHub Actions cache for Docker layer caching. This dramatically reduces build times for images where only application code changes.
- Tag with SHA — every image is tagged with the exact commit SHA for traceability. The `latest` tag is a convenience but should never be used for production deployments.
- Push only on main — PR builds create images but do not push them, saving registry storage.
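If you want richer tagging (branch names, semver from git tags) without hand-maintaining the `tags:` list, `docker/metadata-action` can generate tags and OCI labels for you. A sketch, to be adapted to your tagging policy:

```yaml
- name: Generate image tags and labels
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/${{ github.repository }}
    tags: |
      type=sha
      type=ref,event=branch
      type=raw,value=latest,enable={{is_default_branch}}
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    push: ${{ github.ref == 'refs/heads/main' }}
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
```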
Pipeline Optimization
Caching Dependencies
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: ${{ env.NODE_VERSION }}
    cache: pnpm
```
The built-in cache support in `setup-node` handles pnpm, npm, and yarn. For custom caches:
```yaml
- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ hashFiles('pnpm-lock.yaml') }}
```
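`actions/cache` also supports `restore-keys`, which lets a cache miss fall back to the most recent partial match instead of starting cold, so a lockfile change only re-downloads what is new:

```yaml
- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ hashFiles('pnpm-lock.yaml') }}
    restore-keys: |
      playwright-
```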
Conditional Job Execution
Skip expensive jobs when only documentation or non-code files change.
```yaml
changes:
  name: Detect Changes
  runs-on: ubuntu-latest
  outputs:
    src: ${{ steps.filter.outputs.src }}
    docs: ${{ steps.filter.outputs.docs }}
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v3
      id: filter
      with:
        filters: |
          src:
            - 'src/**'
            - 'package.json'
            - 'pnpm-lock.yaml'
          docs:
            - 'docs/**'
            - '*.md'

test:
  needs: [lint, changes]
  if: needs.changes.outputs.src == 'true'
  # ... test job config
```
Monitoring Your Pipeline
Key Metrics to Track
| Metric | Target | Why It Matters |
|--------|--------|----------------|
| Pipeline duration (P50) | < 10 minutes | Developer wait time on every PR |
| Pipeline success rate | > 95% | Flaky pipelines erode trust |
| Deployment frequency | Daily+ | Proxy for team velocity |
| Lead time (commit to production) | < 1 hour | Measures pipeline efficiency |
| Mean time to recovery (MTTR) | < 30 minutes | How fast you fix production issues |
| Change failure rate | < 5% | Percentage of deployments causing incidents |
Four of these — deployment frequency, lead time, change failure rate, and MTTR — are the DORA metrics; pipeline duration and success rate are supporting signals. Teams that excel at the DORA four deliver software faster with fewer incidents.
Production Readiness Checklist
- Every PR runs lint, type check, and tests — no exceptions, no manual overrides
- Tests use real dependencies — containerized databases, not mocks, for integration tests
- Security scanning blocks deployment — critical and high vulnerabilities are gates, not warnings
- Staging mirrors production — same infrastructure, same configuration, different data
- Production deployments require approval — GitHub Environment protection rules
- Rollback is automated — one command or one click to revert
- Pipeline duration is under 10 minutes — measure it, budget it, optimize it
- Secrets are never logged — audit your workflows for accidental secret exposure
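One way to make rollback a one-click operation is a manually triggered workflow that redeploys a previously built image by commit SHA. A sketch, assuming images are tagged with SHAs as in the build workflow and substituting your real deploy command:

```yaml
name: Rollback Production

on:
  workflow_dispatch:
    inputs:
      sha:
        description: "Commit SHA of the image to roll back to"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Redeploy previous image
        # Placeholder: replace the echo with your deploy mechanism, e.g.
        # kubectl set image deployment/myapp app=ghcr.io/OWNER/REPO:${{ inputs.sha }}
        run: echo "Deploying ghcr.io/${{ github.repository }}:${{ inputs.sha }}"
```

Because the job targets the `production` environment, it inherits the same approval gates as a normal deployment, so a rollback is fast but never unreviewed.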
Getting Started
A CI/CD pipeline is the most leveraged investment a development team can make. Every improvement to the pipeline pays dividends on every commit, every PR, and every deployment for the life of the project.
To understand the investment involved, explore our DevOps setup cost guide.
If you need help designing a CI/CD pipeline for your team or want to modernize an existing deployment process, talk to our team. We build deployment pipelines that ship code safely and quickly — from GitHub Actions workflows to Kubernetes deployment strategies to multi-environment promotion with automated rollback.
Ship fast. Ship safely. Ship often.