
GitHub Actions CI/CD Tutorial: Workflows, Jobs, and Real-World Pipelines

In Plain English 🔥
Imagine you run a bakery. Every time a new recipe is approved, you want your staff to automatically test it, bake a sample, taste it, and ship it to stores — without you lifting a finger. GitHub Actions is that automated staff for your code. Every time you push code, it kicks off a chain of tasks: run tests, build the app, deploy to a server. You write the instructions once, and it just happens every single time.

Every professional software team ships code multiple times a day. But between writing a feature and it reaching real users, there's a gauntlet: run the tests, check code style, build a Docker image, deploy to staging, maybe prod. Doing all of that by hand is how bugs sneak out on a Friday night. GitHub Actions bakes that gauntlet directly into the place you already store your code — no extra CI server to manage, no third-party account to juggle.

Before GitHub Actions (2019), teams stitched together Jenkins pipelines, CircleCI configs, and custom bash scripts hosted on separate machines. The problem wasn't just complexity — it was drift. Your CI config lived somewhere other than your code, so they'd slowly get out of sync. GitHub Actions fixes this by treating your entire pipeline as code that lives in the same repository, is reviewed in the same pull request, and is versioned in the same git history.

By the end of this article you'll understand how GitHub Actions actually works under the hood, not just which YAML keys to copy-paste. You'll build a complete CI/CD pipeline that runs tests on every pull request and deploys to a server on merge to main. You'll know how to use secrets safely, cache dependencies to cut build times, and avoid the three mistakes that burn most teams in production.

How GitHub Actions Actually Works: Events, Workflows, Jobs, and Steps

The mental model is a clean hierarchy, and getting it right changes everything. At the top is a Workflow — a YAML file in .github/workflows/. A workflow is triggered by an Event: a push, a pull request, a schedule, or even a manual button click in the GitHub UI. One event can trigger many workflows.

Inside a workflow are Jobs. Jobs are the parallel units of work. By default they run simultaneously — so your 'run tests' job and your 'lint code' job can race each other. That's a huge speed win. If you need sequencing (don't deploy until tests pass), you declare explicit dependencies with needs.
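
A minimal sketch of that sequencing (job names here are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test

  deploy:
    runs-on: ubuntu-latest
    needs: test        # deploy waits for test — without this line, both jobs run in parallel
    steps:
      - run: echo "Deploying only after tests pass"
```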

Inside each job are Steps. Steps are sequential within a job — they share the same runner machine and filesystem, which is why you can install Node in step 1 and use it in step 2. Each step is either a shell command (run) or a pre-built Action (uses). Those pre-built Actions are the real superpower: the community has published Actions for deploying to AWS, sending Slack messages, caching npm dependencies — thousands of them on the GitHub Marketplace.

The runner is just a virtual machine spun up on demand by GitHub. It's clean every run — nothing carries over between workflow runs unless you explicitly cache it or upload an artifact.
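
Because runners start clean, passing files between jobs requires an artifact. A sketch using the official upload/download Actions (job names and paths are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist          # artifact name the downloading job will reference
          path: dist/

  smoke-test:
    runs-on: ubuntu-latest
    needs: build              # must wait for build, or the artifact won't exist yet
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ls dist/         # files from the build job, now on this fresh runner
```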

.github/workflows/ci-pipeline.yml · YAML
# This workflow runs on every push to any branch and on every pull request targeting main.
# It has two jobs: one for testing, one for linting — they run in parallel to save time.

name: CI Pipeline

on:
  push:
    branches:
      - '**'          # Trigger on every branch push
  pull_request:
    branches:
      - main          # Extra scrutiny on PRs targeting main

jobs:

  # ── JOB 1: Run the test suite ───────────────────────────────────────────────
  run-tests:
    name: Run Unit & Integration Tests
    runs-on: ubuntu-latest   # GitHub-hosted runner — fresh VM every time

    steps:
      - name: Check out repository code
        uses: actions/checkout@v4   # Clones your repo onto the runner

      - name: Set up Node.js 20
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'              # Caches node_modules between runs — huge speed win

      - name: Install dependencies
        run: npm ci                 # 'ci' is stricter than 'install' — uses package-lock.json exactly

      - name: Run tests with coverage
        run: npm test -- --coverage
        env:
          NODE_ENV: test            # Set environment variables inline per step

  # ── JOB 2: Lint the codebase (runs in PARALLEL with run-tests) ───────────────
  lint-code:
    name: ESLint Code Quality Check
    runs-on: ubuntu-latest

    steps:
      - name: Check out repository code
        uses: actions/checkout@v4

      - name: Set up Node.js 20
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint
        run: npm run lint           # Fails the job (and blocks the PR) if lint errors exist
▶ Output
✓ Run Unit & Integration Tests (42s)
✓ Check out repository code
✓ Set up Node.js 20 [cache hit]
✓ Install dependencies
✓ Run tests with coverage — 48 passed, 0 failed

✓ ESLint Code Quality Check (38s)
✓ Check out repository code
✓ Set up Node.js 20 [cache hit]
✓ Install dependencies
✓ Run ESLint — No lint errors found

All checks passed. PR is ready to merge.
⚠️ Pro Tip: npm ci vs npm install
Always use `npm ci` in CI pipelines, not `npm install`. It reads package-lock.json exactly, fails if there's a mismatch, and never writes to it. This means your CI tests the exact dependency tree your teammates agreed on — not whatever npm resolves on the day the job runs.

Handling Secrets, Environment Variables, and Multi-Environment Deployments

Here's where most tutorials fail you: they show you how to reference a secret but not how to think about secrets architecture for a real project. Let's fix that.

GitHub has three levels of secrets: Organization secrets (shared across repos), Repository secrets (just this repo), and Environment secrets (scoped to a named deployment environment like 'staging' or 'production'). Environment secrets are the most powerful for CI/CD because GitHub won't hand them to a workflow unless it's deploying to that specific named environment — and you can add required reviewers, meaning a human must approve before prod secrets are ever exposed.

The environment key on a job is what unlocks this. When you add environment: production to a deployment job, GitHub checks if that environment exists, applies its protection rules (required reviewers, wait timers), and only then injects its secrets into the job's environment variables.

Never log secrets. GitHub automatically redacts known secret values from logs, but if you base64-encode a secret and then decode it in a run step and echo it, GitHub has no idea that string is sensitive. The redaction is string-match based, not magic.
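
If a step derives a new sensitive value at runtime, you can register it for redaction yourself with the `add-mask` workflow command. A sketch (the secret name and the transformation are illustrative):

```yaml
steps:
  - name: Mask a derived secret before any later step can log it
    run: |
      # DERIVED_TOKEN stands in for any sensitive value computed at runtime;
      # GitHub only knows to redact the original secrets.API_KEY string.
      DERIVED_TOKEN=$(echo "${{ secrets.API_KEY }}" | base64)
      echo "::add-mask::$DERIVED_TOKEN"   # from now on, GitHub redacts this string in logs
```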

.github/workflows/deploy-pipeline.yml · YAML
# This workflow deploys to staging on every merge to main,
# then requires a manual approval before deploying to production.
# Secrets are scoped per environment so prod credentials are never
# exposed during a staging deploy.

name: Deploy Pipeline

on:
  push:
    branches:
      - main   # Only deploys on merges to main — not on feature branches

jobs:

  # ── JOB 1: Tests must pass before anything deploys ──────────────────────────
  run-tests:
    name: Test Gate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test

  # ── JOB 2: Deploy to Staging (runs after tests pass) ────────────────────────
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: run-tests          # Will not start until run-tests job succeeds
    environment: staging      # Unlocks staging environment secrets + protection rules

    steps:
      - uses: actions/checkout@v4

      - name: Build production bundle
        run: npm run build
        env:
          VITE_API_URL: ${{ vars.API_URL }}   # 'vars' = non-secret config variables (visible in logs)

      - name: Deploy to staging server via SSH
        run: |
          # Write the SSH private key from secrets to a temp file
          echo "${{ secrets.STAGING_SSH_PRIVATE_KEY }}" > /tmp/deploy_key
          chmod 600 /tmp/deploy_key

          # Sync build output to the staging server
          rsync -avz --delete \
            -e "ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no" \
            ./dist/ \
            ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }}:/var/www/app/

          # Clean up the key file immediately after use
          rm /tmp/deploy_key
        # secrets.STAGING_SSH_PRIVATE_KEY is ONLY available because environment: staging is set above

  # ── JOB 3: Deploy to Production (requires a human to approve in GitHub UI) ──
  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging     # Staging must succeed before prod is even offered
    environment: production   # 'production' environment has required reviewers set in GitHub settings
                              # The workflow PAUSES here until a reviewer approves in the GitHub UI

    steps:
      - uses: actions/checkout@v4

      - name: Build production bundle
        run: npm run build
        env:
          VITE_API_URL: ${{ vars.API_URL }}

      - name: Deploy to production server via SSH
        run: |
          echo "${{ secrets.PROD_SSH_PRIVATE_KEY }}" > /tmp/deploy_key
          chmod 600 /tmp/deploy_key

          rsync -avz --delete \
            -e "ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no" \
            ./dist/ \
            ${{ secrets.PROD_USER }}@${{ secrets.PROD_HOST }}:/var/www/app/

          rm /tmp/deploy_key
▶ Output
Workflow: Deploy Pipeline — triggered by push to main

✓ Test Gate (45s)
✓ Run tests — 48 passed

✓ Deploy to Staging (1m 12s)
✓ Build production bundle
✓ Deploy to staging server via SSH — 23 files transferred

⏸ Deploy to Production — Waiting for review
Reviewer '@alice' approved (3m later)

✓ Deploy to Production (1m 08s)
✓ Build production bundle
✓ Deploy to production server via SSH — 23 files transferred

All deployments complete.
⚠️ Watch Out: Environment vs Repository Secrets
If you define `PROD_SSH_PRIVATE_KEY` as a repository secret instead of an environment secret, it's accessible to EVERY job in EVERY workflow in the repo. Fork PRs triggered by `pull_request` don't receive secrets, but any workflow using `pull_request_target` — or any compromised third-party Action in any job — can read it. Use environment secrets with protection rules for anything that touches production, so a human must approve before the key is ever injected.

Caching, Build Matrices, and Reusable Workflows — Scaling Without Pain

Once your pipeline works, the next battle is speed and maintainability. Three features change the game at scale.

Caching is the fastest win. Without it, npm ci downloads every package fresh on every run. With actions/cache (or the built-in cache on actions/setup-node, which caches npm's download cache rather than node_modules itself), dependencies are restored using a cache key built from your package-lock.json hash. If the lock file hasn't changed, you skip the network downloads entirely. The same principle works for pip, Maven, Gradle, and Cargo.

Build matrices let you run the same job across multiple configurations in parallel without duplicating YAML. Testing against Node 18 and 20? Two browsers? Three operating systems? A matrix expands one job definition into N parallel jobs automatically. Failed combinations are clearly labeled, passing ones don't block each other.

Reusable workflows solve the DRY problem at the organization level. Instead of copy-pasting a 'deploy via SSH' job across 12 microservice repos, you define it once in a central repo and call it with uses: your-org/devops-workflows/.github/workflows/ssh-deploy.yml@main. Update the template once, every repo benefits. This is the pattern that separates organizations that maintain CI/CD well from those that have 12 slightly-different-and-all-broken pipelines.
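
The callee side — the template living in the central repo — declares its interface with `on: workflow_call`. A sketch of what such an `ssh-deploy.yml` might look like (the inputs and the deploy script are illustrative assumptions):

```yaml
# .github/workflows/ssh-deploy.yml in your-org/devops-workflows
name: SSH Deploy (reusable)

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      app-name:
        required: true
        type: string
    # No 'secrets:' declarations needed when callers pass 'secrets: inherit'

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}   # protection rules of the caller's environment apply
    steps:
      - uses: actions/checkout@v4
      - name: Deploy ${{ inputs.app-name }}
        run: ./scripts/deploy.sh "${{ inputs.app-name }}"   # illustrative deploy script
```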

.github/workflows/matrix-and-cache.yml · YAML
# This workflow demonstrates a build matrix — running tests across multiple
# Node.js versions and OS combinations simultaneously.
# It also shows manual cache control for fine-grained cache invalidation.

name: Cross-Platform Test Matrix

on:
  pull_request:
    branches:
      - main

jobs:

  test-matrix:
    name: "Node ${{ matrix.node-version }} / ${{ matrix.os }}"
    # ↑ GitHub uses this as the job label in the UI — makes failures obvious at a glance

    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]     # Run on both Linux and Windows
        node-version: ['18', '20']              # And on both Node 18 and 20
        # This creates 2 × 2 = 4 parallel jobs automatically

      fail-fast: false
      # ↑ IMPORTANT: Without this, if Node 18/Linux fails, GitHub cancels
      # the other 3 jobs immediately. Set fail-fast: false to see ALL results.

    runs-on: ${{ matrix.os }}   # Each job uses the OS from its matrix slot

    steps:
      - uses: actions/checkout@v4

      - name: Set up Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          # We're NOT using the built-in cache here — we'll manage it manually
          # to show you exactly what's happening under the hood

      - name: Cache npm download cache
        uses: actions/cache@v4
        with:
          path: ~/.npm
          # Cache ~/.npm (npm's download cache), NOT node_modules — npm ci wipes
          # node_modules before installing, so caching node_modules buys nothing.
          # Cache key = OS + Node version + hash of package-lock.json.
          # If ANY of those change, the cache is invalidated and rebuilt.
          key: ${{ runner.os }}-node-${{ matrix.node-version }}-${{ hashFiles('package-lock.json') }}
          # Fallback: if the exact key isn't found, try any key from the same OS+version.
          # npm tops up the slightly stale cache — still faster than a cold install.
          restore-keys: |
            ${{ runner.os }}-node-${{ matrix.node-version }}-

      - name: Install dependencies
        run: npm ci
        # npm ci always rebuilds node_modules from scratch, but with a warm
        # ~/.npm cache it skips the network downloads entirely.

      - name: Run tests
        run: npm test

  # ── Reusable Workflow Call — deploy using a shared template ─────────────────
  # Instead of writing the deploy steps here, we call a workflow defined
  # in a central devops repo. All 12 microservices call this same template.
  deploy-via-shared-template:
    name: Deploy (Shared Workflow)
    needs: test-matrix
    uses: your-org/devops-workflows/.github/workflows/ssh-deploy.yml@main
    # ↑ References a reusable workflow in another repo — pinned to main branch
    with:
      environment: staging
      app-name: 'user-service'
    secrets: inherit
    # ↑ 'inherit' passes all secrets from the calling workflow to the reusable one
    # Without this, the reusable workflow has no access to any secrets
▶ Output
Workflow: Cross-Platform Test Matrix

Running 4 parallel jobs:
✓ Node 18 / ubuntu-latest (52s) — 48 passed
✓ Node 20 / ubuntu-latest (49s) — 48 passed
✓ Node 18 / windows-latest (1m 4s) — 48 passed
✗ Node 20 / windows-latest (58s) — 47 passed, 1 FAILED
✗ test/fileUtils.test.js — path separator mismatch (\ vs /)

Note: fail-fast: false allowed the other 3 jobs to complete.
Without it, all 4 would have been cancelled on first failure.

Deploy (Shared Workflow): skipped — test-matrix did not fully pass.
🔥 Interview Gold: Matrix + fail-fast
Interviewers love asking how you'd test across multiple Node versions without copy-pasting jobs. The answer is `strategy.matrix`. The follow-up gotcha: `fail-fast: true` is the default, which cancels surviving matrix jobs the moment one fails — you lose visibility. In debugging scenarios, set it to `false` so you can see which configurations are broken.
| Feature / Aspect      | GitHub Actions                                        | Jenkins                                                  |
|-----------------------|-------------------------------------------------------|----------------------------------------------------------|
| Setup time            | Zero — lives in your repo, GitHub hosts it            | Hours — install, configure, maintain a server            |
| Config language       | YAML in .github/workflows/                            | Groovy (Jenkinsfile) or GUI-based                        |
| Marketplace / plugins | 16,000+ community Actions                             | 1,800+ plugins (older ecosystem)                         |
| Cost model            | Free tier: 2,000 min/month; then per-minute           | Self-hosted = server costs only, no per-minute fee       |
| Secrets management    | Built-in, org/repo/env scoped with protection rules   | Credentials plugin — works but more manual wiring        |
| Parallel jobs         | Native matrix strategy, simple syntax                 | Parallel stages in Jenkinsfile — more verbose            |
| Audit trail           | Workflow run logs tied to git SHA and PR              | Build logs separate from code history                    |
| Best for              | Teams already on GitHub wanting zero ops overhead     | Large orgs needing on-premise or highly custom pipelines |

🎯 Key Takeaways

  • The hierarchy is Workflow → Job → Step — jobs are parallel by default, steps within a job are sequential and share a filesystem. Getting this model wrong is the root of most pipeline bugs.
  • Use environment secrets with required reviewers for production deployments — repository-level secrets are accessible to every workflow and every job, which is a credential leak waiting to happen.
  • Pin third-party Actions to a commit SHA, not a branch or floating tag — branch-pinning means someone else's commit can break your deploy pipeline without you touching a single file.
  • The concurrency key with cancel-in-progress: true is a one-liner that prevents deployment race conditions — skip it and you'll eventually get two deploys colliding on the same server.

⚠ Common Mistakes to Avoid

  • Mistake 1: Pinning Actions to a branch tag like uses: actions/checkout@main — If the Action maintainer pushes a breaking change to that branch, your pipeline breaks at 2am with no code change from you. Fix: pin to a specific commit SHA like uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683. Check GitHub's own security hardening guide — they recommend SHA pinning for all third-party Actions.
  • Mistake 2: Putting secrets in env at the workflow level instead of the job level — A value defined in the top-level env block is injected into every job in every run, so every step — including third-party Actions you don't control — can read it. The blast radius is far larger than needed. Fix: scope secrets to the exact job or step that needs them using the env key at the job or step level, not the workflow level.
  • Mistake 3: Not using concurrency groups on deployment workflows — If two devs push to main within seconds of each other, two deploy workflows run simultaneously and race to overwrite the same server. The second deploy can corrupt a half-deployed first one. Fix: add concurrency: group: deploy-${{ github.ref }} / cancel-in-progress: true at the workflow level. This cancels any in-progress run of the same group and lets only the newest run proceed.
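
The concurrency guard from Mistake 3, spelled out at the top level of a deploy workflow:

```yaml
name: Deploy Pipeline

on:
  push:
    branches: [main]

# Only one deploy per ref at a time; a newer push cancels the older in-flight run
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true
```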

Interview Questions on This Topic

  • Q: What's the difference between a job and a step in GitHub Actions, and why does it matter for sharing data between tasks?
  • Q: How would you prevent two simultaneous deploys from racing each other in a GitHub Actions workflow?
  • Q: A pull request from a forked repository can't access repository secrets — why is that, and how do you safely run integration tests that need credentials on fork PRs?

Frequently Asked Questions

How much does GitHub Actions cost for private repositories?

GitHub gives every account 2,000 free minutes per month for private repos on the Free plan (3,000 on Team, 50,000 on Enterprise Cloud). Minutes on macOS runners are billed at 10× the Linux rate, and Windows at 2×. Public repositories get unlimited free minutes — which is why most open-source projects use GitHub Actions without a second thought about cost.

Can GitHub Actions deploy to AWS, GCP, or Azure?

Yes — and the recommended approach for cloud providers is OIDC (OpenID Connect) rather than storing long-lived cloud credentials as secrets. With OIDC, your workflow requests a short-lived token directly from the cloud provider for each run. AWS, GCP, and Azure all support this natively. Search the GitHub Marketplace for 'aws-actions/configure-aws-credentials' or 'google-github-actions/auth' for ready-made OIDC Actions.
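
A sketch of the OIDC pattern for AWS using the official `aws-actions/configure-aws-credentials` Action (the role ARN, region, and bucket are placeholders you'd substitute with your own):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write    # required so the job can request an OIDC token from GitHub
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy  # placeholder ARN
          aws-region: us-east-1
      # Later steps now hold short-lived AWS credentials — no stored access keys
      - run: aws s3 sync ./dist s3://my-app-bucket   # placeholder bucket
```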

What's the difference between `on: push` and `on: pull_request` triggers?

Both fire when code is involved, but with key differences in context. on: push fires after code lands on a branch — it has full access to repository secrets. on: pull_request fires when a PR is opened or updated — for security, workflows triggered by a fork's PR run with read-only permissions and no access to secrets by default. Use on: pull_request_target if you genuinely need secrets in a fork PR context, but read the security implications carefully first as it introduces risks.

TheCodeForge Editorial Team — Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
