How to Build a Production-Ready DevOps Pipeline with Free Tools: Complete Step by Step Guide
By Braincuber Team
Published on March 7, 2026
A D2C brand we work with was paying $4,700/month for a DevOps setup cobbled together from CircleCI Pro, Datadog, and a Jenkins server running on a dedicated EC2 instance. Two developers spent 11 hours/week maintaining it. We rebuilt the entire pipeline using GitHub Actions (free for public repos, 2,000 min/month for private), Grafana Cloud free tier, Terraform Cloud free tier, and Trivy for container scanning. Total cost: $0/month. Total migration time: 3 days. This complete tutorial shows you how to build the same production-grade pipeline from scratch.
What You'll Learn:
- How to structure a GitHub repo with branch protection, PR templates, and pre-commit hooks
- Step by step CI pipeline setup with GitHub Actions — build, test, lint, and cache
- How to optimize Docker builds with multi-stage builds and BuildKit
- Infrastructure as Code with Terraform on free cloud providers
- Lightweight Kubernetes orchestration with K3d and GitOps via Flux
- Monitoring with Grafana Cloud + Prometheus + UptimeRobot at zero cost
- Security scanning with CodeQL, OWASP ZAP, and Trivy in your CI pipeline
The Free Tool Stack That Replaces $4,700/Month in Paid DevOps
| Pipeline Stage | Free Tool | Replaces | Free Tier Limit |
|---|---|---|---|
| Source Control | GitHub | GitLab Premium, Bitbucket | Unlimited public repos |
| CI/CD | GitHub Actions | CircleCI, Jenkins, Travis CI | 2,000 min/month (private) |
| Containerization | Docker + BuildKit | Paid container registries | Unlimited local builds |
| Infrastructure | Terraform Cloud | Pulumi Teams, CloudFormation | 500 managed resources |
| Orchestration | K3d (K3s in Docker) | EKS, GKE ($74+/month) | Unlimited (local) |
| Monitoring | Grafana Cloud + Prometheus | Datadog ($15/host/month) | 10K metrics, 50GB logs |
| Security | CodeQL + Trivy + ZAP | Snyk Pro, SonarQube | Unlimited (open source) |
Step by Step: Building Your Zero-Cost DevOps Pipeline
Structure Your GitHub Repository Like a Professional Team
Create separate folders for frontend/, backend/, infrastructure/, and a .github/ directory for workflow configs. Enable branch protection on main — require pull requests, status checks, and linear history. Add a PULL_REQUEST_TEMPLATE.md to standardize code reviews. Set up Husky pre-commit hooks to lint code and run tests before anything reaches GitHub. Use Conventional Commits for semantic versioning. Add GitHub Issue templates for bugs and features, and use GitHub Projects as a free Kanban board. This foundation takes 20 minutes and saves 3+ hours/week in code review friction.
my-app/
├── frontend/ # React/Vue/Angular app
├── backend/ # Node.js/Express API
├── infrastructure/ # Terraform configs
├── .github/
│ ├── workflows/ # GitHub Actions YAML
│ ├── PULL_REQUEST_TEMPLATE.md
│ └── ISSUE_TEMPLATE/
├── docker-compose.yml
└── README.md
# Set up pre-commit hooks with Husky
npx husky-init && npm install
npx husky add .husky/pre-commit "npm test"
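As a starting point, a minimal PULL_REQUEST_TEMPLATE.md might look like the sketch below. The section names are suggestions, not a standard; adapt them to your team's review habits.

```markdown
## What changed
<!-- One-sentence summary of the change -->

## Why
<!-- Link the issue this closes, e.g. Closes #123 -->

## How to test
<!-- Commands or steps a reviewer can run locally -->

## Checklist
- [ ] Tests added or updated
- [ ] Lint passes locally
- [ ] Commit messages follow Conventional Commits (e.g. `feat: add cart API`)
```

Once this file lives in .github/, GitHub auto-populates it into every new PR description.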
Build Your CI Pipeline with GitHub Actions — Free 2,000 Minutes/Month
Create .github/workflows/ci.yml to auto-build, test, and lint on every push and PR to main. Use actions/cache@v3 to cache node_modules — this cuts install time from 47 seconds to 8 seconds per run. Add matrix builds to test across Node 18, 20, and 22 simultaneously. For private repos, you get 2,000 free minutes/month. Public repos get unlimited. Cache dependencies aggressively, trigger only on meaningful branches, and skip redundant steps to stay well within limits. One D2C client's CI runs dropped from 6.5 minutes to 2.1 minutes after we added dependency caching and parallelized their test suite.
# .github/workflows/ci.yml
name: CI Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
      - run: npm ci
      - run: npm test
      - run: npm run lint
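The matrix builds mentioned above are a small addition to the same job. A sketch that runs the test suite against Node 18, 20, and 22 in parallel:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
```

Each matrix entry runs as a separate job, so a failure on Node 22 is reported independently of Node 18. Note that matrix jobs multiply your minutes usage: a 2-minute job across 3 versions bills 6 minutes.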
Optimize Docker Builds — Multi-Stage Builds Cut Image Size by 73%
A naive Dockerfile produces 800MB+ images bloated with dev dependencies. Multi-stage builds install the full dev toolchain in a builder stage, then copy only the compiled output and runtime dependencies into a slim final image. Result: 217MB instead of 800MB. Enable BuildKit for parallel layer processing and better caching. Order your Dockerfile layers so that package.json is copied and dependencies installed before source code — so source changes don't invalidate the dependency cache. We had a client whose CI Docker builds dropped from 4 minutes 23 seconds to 1 minute 11 seconds after multi-stage + BuildKit + layer ordering.
# Stage 1: Build (installs all deps, including dev deps, to compile)
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production (runtime deps only, no build toolchain)
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Build with BuildKit for 2x faster builds:
# DOCKER_BUILDKIT=1 docker build -t my-app .
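Layer caching works best with a small build context. A typical .dockerignore (entries here are illustrative) keeps node_modules and local artifacts out of `COPY . .`, so local state never invalidates a cached layer:

```
node_modules
dist
.git
.env
npm-debug.log
```

Without this, a stale local node_modules gets copied over the freshly installed one in the image — a classic source of "works locally, breaks in the container" bugs.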
Infrastructure as Code with Terraform — Stop Clicking Cloud Dashboards
Manual infrastructure means you can't reproduce it, can't audit what changed, and can't roll back when someone fat-fingers a security group rule at 3 AM. Terraform defines your entire infrastructure in .tf files: EC2 instances, VPCs, security groups, databases. Version it in Git. Review changes in PRs. Apply with terraform apply. Use the Terraform Cloud free tier (up to 500 managed resources) for remote state management. Target the AWS Free Tier (t2.micro, 750 hours/month) or Oracle Cloud's always-free tier (2 small AMD instances, plus Arm instances with up to 24GB RAM). We provision a full D2C staging environment in 4 minutes flat — repeatable, auditable, and disposable.
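A minimal sketch of what infrastructure/main.tf could look like for an AWS Free Tier instance. The organization, workspace, region, AMI ID, and tags below are placeholders, not values from the original setup:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote state in Terraform Cloud (free tier)
  cloud {
    organization = "my-org" # placeholder: your Terraform Cloud org
    workspaces {
      name = "staging"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "staging" {
  ami           = "ami-xxxxxxxx" # placeholder: pick an AMI for your region
  instance_type = "t2.micro"     # AWS Free Tier eligible

  tags = {
    Name      = "d2c-staging"
    ManagedBy = "terraform"
  }
}
```

Run `terraform init`, `terraform plan`, and `terraform apply` from the infrastructure/ folder; the plan output in your PR is the audit trail.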
Lightweight Kubernetes with K3d — No $74/Month EKS Bill
Full Kubernetes (EKS, GKE) costs $74+/month before you even deploy a pod. K3d runs K3s inside Docker containers on your laptop or a $5 VPS. Install K3d, create a cluster with k3d cluster create, and deploy with standard kubectl manifests. Set resource limits (128Mi memory, 100m CPU) to keep it running on minimal hardware. Add Flux for GitOps — it watches your Git repo and auto-deploys changes to your cluster. Perfect for dev/staging environments and small production workloads. When you outgrow it, your manifests work on any Kubernetes cluster without modification.
# Install K3d and create a lightweight cluster
k3d cluster create my-app-cluster \
--servers 1 \
--agents 2 \
--port "8080:80@loadbalancer"
# Verify the cluster is running
kubectl get nodes
# Deploy your app with resource limits
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "100m"
          ports:
            - containerPort: 3000
EOF
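For the GitOps piece, after bootstrapping Flux into the cluster you point it at a path in your repo. A sketch of the Flux Kustomization resource — the repo source name and manifest path are assumptions, not from the original setup:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m           # how often Flux reconciles against the repo
  path: ./deploy/staging # assumed path to your Kubernetes manifests
  prune: true            # delete cluster objects removed from Git
  sourceRef:
    kind: GitRepository
    name: my-app-repo    # placeholder GitRepository resource name
```

With this in place, merging a manifest change to main is the deployment — no kubectl from laptops.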
Monitoring with Grafana Cloud + Prometheus + UptimeRobot — All Free
Grafana Cloud free tier gives you 10,000 metrics, 50GB logs, and 50GB traces. That covers most D2C applications easily. Set up Prometheus to scrape your app's /metrics endpoint every 15 seconds. Create dashboards for request latency, error rates, CPU/memory usage. Write PromQL alert rules — fire a Slack notification if p95 latency exceeds 500ms or error rate crosses 1%. Add UptimeRobot (50 free monitors) for external endpoint checks every 5 minutes. Track SLOs with custom dashboards: target 99.5% uptime and 200ms p95 latency. This setup replaced a $15/host/month Datadog bill for one client running 7 services.
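The p95-latency alert described above can be expressed as a Prometheus rule. This sketch assumes your app exports a standard `http_request_duration_seconds` histogram; adjust the metric name to whatever your instrumentation library emits:

```yaml
groups:
  - name: latency
    rules:
      - alert: HighP95Latency
        expr: >
          histogram_quantile(0.95,
            sum(rate(http_request_duration_seconds_bucket[5m])) by (le)
          ) > 0.5
        for: 5m # must stay above threshold for 5 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "p95 request latency above 500ms for 5 minutes"
```

Route the firing alert to Slack via Grafana Cloud's built-in contact points rather than running your own Alertmanager.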
Security Scanning — CodeQL + OWASP ZAP + Trivy in CI
Security isn't an afterthought — bake it into every PR. Enable GitHub CodeQL for static code analysis (catches SQL injection, XSS, insecure deserialization). Run OWASP ZAP baseline scans against your staging URL to find web vulnerabilities. Scan Docker images with Trivy — it checks OS packages and application dependencies for known CVEs. Set threshold-based pipeline failures: if Trivy finds any CRITICAL-severity vulnerability, the build fails. No exceptions. One scan caught CVE-2024-38816 in a Spring dependency that would have exposed customer payment data. The fix took 4 minutes. The breach would have cost $87,000.
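A sketch of the threshold-based failure as a CI step, calling the Trivy CLI directly (the image name is a placeholder, and this assumes Trivy is installed on the runner):

```yaml
- name: Scan image with Trivy
  run: |
    # Exit non-zero (failing the job) if any CRITICAL CVE is found
    trivy image --severity CRITICAL --exit-code 1 my-app:latest
```

Start with CRITICAL only; widening to HIGH on day one tends to produce so many failures that teams disable the gate entirely.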
When Free Tiers Hit the Ceiling
Free tools scale surprisingly far. But know the limits: GitHub Actions caps at 2,000 min/month for private repos (that's ~33 hours of CI). Grafana Cloud free tier caps at 10K active metrics. AWS Free Tier expires after 12 months. When you hit these walls, expect $50-200/month for the next tier. Plan the migration before you're stuck with failed builds on the 28th of the month because you burned through your minutes.
Performance Optimization: Make Your Pipeline Fast, Not Just Free
Dependency Caching
Cache npm, pip, or Maven dependencies across builds. actions/cache@v3 keys the cache on a hash of package-lock.json and restores ~/.npm, so `npm ci` skips network downloads whenever the lockfile hasn't changed. Saves 30-90 seconds per CI run — that's 15-45 minutes/day on a team pushing 30 PRs.
Parallel Test Execution
Split your test suite across multiple GitHub Actions jobs using matrix strategy. Run unit tests, integration tests, and linting in parallel instead of sequentially. A 7-minute serial pipeline becomes 2.5 minutes when parallelized across 3 jobs.
Docker Layer Caching
Order Dockerfile commands so rarely-changing layers come first. COPY package.json before COPY source code. When only your source changes, Docker reuses the cached dependency install layer. Cuts rebuild time by 60-80%.
Conditional Workflows
Don't rebuild the frontend when only backend code changed. Use path filters in GitHub Actions to trigger workflows selectively. paths: ['backend/**'] saves minutes per run and stays within free tier limits easily.
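The path filter above looks like this in a workflow file, so the backend workflow fires only when backend code (or the workflow itself) changes:

```yaml
on:
  push:
    branches: [main]
    paths:
      - 'backend/**'
      - '.github/workflows/backend-ci.yml'
  pull_request:
    paths:
      - 'backend/**'
```

Mirror this in a frontend workflow with `paths: ['frontend/**']` and the two halves of the monorepo stop billing minutes for each other's changes.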
Frequently Asked Questions
Can I really build a production-ready pipeline for free?
Yes, for small-to-medium projects. GitHub Actions (2,000 min/month), Grafana Cloud (10K metrics), Terraform Cloud (500 resources), and open-source security tools cover most D2C applications. You'll hit limits around 50+ daily builds on private repos or 15+ monitored services.
Is K3d suitable for production workloads?
K3d is ideal for dev/staging and small production workloads. For high-traffic D2C stores, migrate to managed Kubernetes (EKS, GKE) when you need multi-node HA clusters, auto-scaling node pools, or managed control planes. Your K3d manifests transfer directly without modification.
How do I keep my GitHub Actions within the free tier limit?
Cache dependencies aggressively, use path filters to skip irrelevant builds, trigger only on main and release branches, parallelize tests to reduce total runtime, and use public repos when possible (unlimited free minutes). Monitor usage in Settings > Billing > Actions.
Should I use Trivy or Snyk for container scanning?
Trivy is open-source, free, and scans OS packages + application dependencies in one pass. Snyk's free tier limits you to 200 tests/month and requires account setup. For zero-cost pipelines, Trivy wins. Add Snyk later if you need license compliance scanning or private registry support.
What's the first paid upgrade I should make when outgrowing free tiers?
GitHub Actions minutes, typically. The $4/month GitHub Pro plan gives you 3,000 minutes. After that, Grafana Cloud's $29/month tier if you exceed 10K metrics. Avoid upgrading infrastructure (EC2, EKS) until you have actual traffic that demands it — most D2C apps run fine on free tier for 6-12 months.
Still Paying $4,700/Month for a DevOps Stack You Could Build for Free?
We've migrated D2C teams from CircleCI + Datadog + Jenkins combos costing $4,700/month to GitHub Actions + Grafana Cloud + Terraform pipelines at $0/month. Same CI reliability. Same monitoring coverage. Same security scanning. Zero monthly bill. If your DevOps costs more than your cloud infrastructure, something is backwards. We'll audit your pipeline, identify the free-tier replacements, and migrate you in under a week.
