How to Set Up CI/CD and Automated Tests for a React SPA Monorepo
Before you begin
- Basic Git knowledge, including branches, pull requests, and merges
- Node.js 22 installed locally
- pnpm 10 or later installed
- Docker installed for running E2E tests locally
- A GitHub or GitLab repository with permission to configure CI/CD variables and secrets
- Access to a staging and/or production server over SSH if you want to follow the deploy steps
What you'll learn
- Map a monorepo CI/CD pipeline from pull request validation through production deploy
- Configure automated validation gates for formatting, linting, type checking, unit tests, builds, and security checks
- Run Playwright E2E tests in a production-like local environment
- Use change detection so only affected apps are built and deployed
- Separate staging and production deployment behavior by branch
- Add approval gates, health checks, and rollback thinking to your release process
Most teams get the monorepo structure right on day one and then spend the next six months fighting the release process. A clean workspace layout does not protect you from broken merges, wasted CI minutes rebuilding unchanged apps, or production deploys that skip verification. What actually matters is turning the monorepo into a reliable release system with automated validation, environment-aware deploy rules, and reproducible end-to-end tests.
This tutorial is the hands-on companion to React SPA Monorepo CI/CD: How to Automate Testing and Deploy Only What Changed. It walks through the full pipeline used by the react-spa-monorepo-cicd repository: how pull requests are validated, how checks re-run after merge on staging and main, how only changed apps are deployed, and how Playwright E2E tests and post-deploy health checks reduce bad releases. By the end, you will have a working CI/CD flow from feature branch to production with every gate documented.
Before you start, clone the repo:
git clone https://github.com/InkByteStudio/react-spa-monorepo-cicd.git
cd react-spa-monorepo-cicd
Step 1: Map the React SPA monorepo CI/CD flow
Before touching any configuration, understand the full release path. The core idea of this repository is not “multiple apps in one folder.” It is “multiple apps with a controlled path from code change to deploy.” The pipeline detects changes, runs validation gates, executes E2E tests, deploys only changed apps, and closes with a health check.
Understand trigger points
The pipeline has three main entry points, each with a distinct purpose:
- Pull request to main or staging — validates the proposed change during code review
- Push to staging — re-validates the merge result and deploys to the staging environment
- Push to main — re-validates again and deploys to production
Tests intentionally re-run after merge. A pull request validates the branch in isolation, but the merged state can differ due to conflicts or concurrent changes from other contributors. Post-merge validation on staging or main ensures the exact merged code still passes every gate before anything is deployed.
Follow the branch promotion model
The repository enforces a linear promotion path:
- Work on a feature branch
- Open a pull request — CI validates formatting, linting, types, unit tests, build, security audit, and E2E
- Merge into staging for a pre-production deploy
- Verify the staging deployment manually or with automated smoke tests
- Merge staging into main for the production deploy
- Post-deploy health checks confirm the release is live and healthy
This repository has a clear opinion on release flow: staging is not just a preview branch — it is part of the quality gate before production.
Understand why selective deploys matter
The monorepo contains three deployable targets: a static marketing main site, an admin SPA, and a portal SPA. Without change detection, every push would rebuild and redeploy all three even if only one file changed. This repository uses intelligent change detection so a commit touching apps/admin-spa/ only triggers the admin SPA build and deploy. Changes under packages/, which holds shared code, trigger rebuilds for both SPA apps since they share those dependencies. The main site is independent.
You should now understand when this pipeline validates code, when it deploys, and why it re-runs checks after merge.
Step 2: Run the full validation pipeline locally
The fastest way to understand a CI/CD pipeline is to run it on your own machine before pushing a branch. This keeps feedback loops short and prevents wasted CI minutes on commits that were never going to pass.
Install required tools
Confirm you have the documented prerequisites:
node -v # Should output v22.x
pnpm -v # Should output 10.x or later
docker -v # Should output Docker version 27.x or later
Then install dependencies:
corepack enable
pnpm install
Run the local pipeline
The repository provides a single command that reproduces the full CI validation flow:
bash scripts/run-all.sh
This script runs each validation stage in sequence and prints a pass/fail summary at the end. It is the closest thing to a local preflight check. If this command passes, you can reasonably trust that CI will pass too.
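The shape of such a preflight runner can be sketched as follows. This is an assumed structure, not the repo's actual script; only the pnpm script names are taken from the documented validation commands. Failures are recorded rather than aborting immediately, so one run surfaces every broken gate at once.

```shell
#!/usr/bin/env bash
# Hypothetical sketch -- the real scripts/run-all.sh may differ in structure
# and output. Runs each gate in sequence and prints a pass/fail summary.
summary=""
failures=0

run_gate() {
  name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    summary="$summary
PASS  $name"
  else
    summary="$summary
FAIL  $name"
    failures=$((failures + 1))
  fi
}

# The seven gate categories, in the order the tutorial describes.
# Guarded so the sketch is a no-op outside the repository root:
if [ -f package.json ]; then
  run_gate "format"       pnpm format:check
  run_gate "lint"         pnpm lint
  run_gate "types"        pnpm typecheck
  run_gate "unit"         pnpm test
  run_gate "build:admin"  pnpm build:admin-spa
  run_gate "build:portal" pnpm build:portal-spa
  run_gate "build:main"   pnpm build:main-site
  run_gate "audit"        pnpm audit
  run_gate "e2e"          pnpm test:e2e
fi

printf 'Summary:%s\n' "$summary"
```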
Understand the validation categories
The pipeline enforces seven categories of checks, in order:
- Format check — Prettier ensures consistent code style
- Lint — ESLint catches code quality issues and potential bugs
- Type check — TypeScript compiler verifies type correctness across all packages
- Unit tests — Vitest runs fast, isolated component and utility tests
- Build — Vite compiles each app into production-ready static assets
- Security audit — dependency vulnerability scanning flags known CVEs
- E2E tests — Playwright runs browser-level tests against the built apps
A strong CI/CD process starts with a single entry command. If developers need five separate commands to guess whether CI will pass, the pipeline is harder to adopt and easier to bypass.
Run the full command successfully before moving on. You should now have a local way to reproduce the same check categories that the CI server enforces.
Step 3: Configure automated test gates
This step is the heart of the pipeline. The goal is not just to have jobs that run, but to understand why they are ordered this way and what each gate protects against.
Organize gates by purpose
Think of the seven validation gates as a test ladder with four tiers:
Fast feedback gates catch trivial issues in seconds:
pnpm format:check # Prettier formatting
pnpm lint # ESLint code quality
pnpm typecheck # TypeScript compiler
Confidence gates verify that code works and produces valid output:
pnpm test # Unit tests via Vitest
pnpm build:admin-spa # Build admin SPA
pnpm build:portal-spa # Build portal SPA
pnpm build:main-site # Build main site
Security gates check for known vulnerabilities (see Harden Your CI/CD Pipeline with Sigstore, SLSA, and SBOMs for deeper artifact signing and provenance):
pnpm audit # Dependency security scan
Release gates validate the deployed artifact in a realistic environment:
pnpm test:e2e # Playwright browser tests
The ordering matters. Format and lint checks cost almost nothing to run. If they fail, there is no reason to spend time building three apps and launching a Docker stack for E2E. Fail early on cheap checks so you only spend resources on expensive checks when the basics are already clean.
Map local commands to CI jobs
Each of the commands above maps directly to a job in the CI workflow. When you read .github/workflows/validate-and-deploy.yml or .gitlab-ci.yml, you will see the same scripts called in the same order. This alignment between local and CI behavior is intentional — it means a local pass is a reliable predictor of a CI pass.
Know that local commits are also checked
The repository enforces Conventional Commits through commitlint and runs lint-staged via Husky pre-commit hooks. Every local commit automatically runs Prettier and ESLint on staged files before the commit is created. This means formatting and lint issues are caught before code is even pushed, reducing noise in CI.
Understand why E2E comes after build
E2E tests need built artifacts to test against. They do not run against a dev server because dev servers behave differently from production builds — they use hot module replacement, skip certain optimizations, and serve assets differently. Running E2E against actual build output catches problems that only appear under production-like conditions.
This repository’s E2E design goes further than running browser tests against localhost. It uses Docker and nginx to serve the apps in a topology closer to the real deployment, which makes the CI signal more trustworthy.
You should now have a clear test ladder: static checks first, build validation second, and browser-level tests after deployable artifacts exist.
Step 4: Run Playwright E2E tests in a production-like environment
E2E tests are the final confidence layer before deploy. This step covers the exact sequence to run them locally so you can reproduce and debug failures without waiting for CI.
Build the SPAs first
Playwright tests need production build output. Build both SPAs before starting the test environment:
pnpm build:admin-spa
pnpm build:portal-spa
Start the test environment
The repository uses Docker Compose with nginx to serve the built apps in a topology that mirrors the production deployment:
docker compose -f docker/docker-compose.yml up -d
This starts an nginx container serving the admin SPA and portal SPA at the same paths they will occupy in production. That means route boundaries, asset paths, and cross-app navigation all behave the way they will after a real deploy.
In CI, the pipeline uses docker/docker-compose.ci.yml as an override that removes port exposure and creates an isolated bridge network. This prevents port conflicts on shared runners while maintaining the same nginx topology. You do not need the CI override when running locally.
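One way to express that local-versus-CI split is to layer the compose files; Docker Compose merges multiple -f files left to right, so later files win for overlapping keys. The helper below is hypothetical (not from the repo) and relies on the convention that CI systems export CI=true:

```shell
# Hypothetical helper -- not from the repo. Selects the compose file list
# based on the conventional CI=true environment variable.
compose_files() {
  if [ "${CI:-false}" = "true" ]; then
    # CI layers the override on top of the base file
    echo "-f docker/docker-compose.yml -f docker/docker-compose.ci.yml"
  else
    # Locally, only the base file is needed
    echo "-f docker/docker-compose.yml"
  fi
}

# Usage: docker compose $(compose_files) up -d
```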
Run Playwright
With the Docker stack running, execute the E2E suite:
pnpm test:e2e
The e2e/ directory contains Playwright tests covering:
- App loading — each deployed SPA renders its root component
- Route boundaries — client-side routing works within each app
- Content rendering — dashboard elements like stat cards, headings, and page content display correctly
- Cross-app navigation — links between admin and portal SPAs resolve without errors
- Visual regression — screenshots are compared against baselines to catch unintended UI changes
- Accessibility — axe-core checks flag WCAG 2.0/2.1 AA violations before deploy
Shut down cleanly
After tests complete, stop the Docker stack:
docker compose -f docker/docker-compose.yml down
Do not treat E2E as a replacement for unit tests. In this pipeline, E2E is the final confidence layer before deploy. Unit tests catch logic bugs quickly. E2E tests catch integration and deployment problems that unit tests cannot see.
You should now be able to run the same kind of browser-level validation that the pipeline uses before allowing a deploy.
Step 5: Add change detection so only affected apps are deployed
Selective deployment is one of the biggest practical benefits of a well-structured monorepo. Without it, every push rebuilds and redeploys everything, which wastes compute, slows feedback, and increases the blast radius of every release.
Understand the deployment decision model
The repository maps file changes to deploy targets with straightforward rules:
| Changed path | What gets deployed |
|---|---|
| apps/main-site/ | Main site only |
| apps/admin-spa/ | Admin SPA only |
| apps/portal-spa/ | Portal SPA only |
| packages/ | Both SPAs (shared dependency) |
| docs/ or markdown only | Minimal validation, skip deploy |
Changes in packages/ trigger both SPA deploys because those packages are shared dependencies. A bug in shared code could break either consumer, so both must be rebuilt and retested. The main site does not depend on the shared packages and is therefore unaffected.
Know the pivot script
The repository includes scripts/changed-files.sh, which is the decision point for both selective validation and selective deployment. The CI workflow calls this script to determine which apps were affected by the current commit range, then conditionally runs only the relevant build, test, and deploy jobs.
You do not need to modify this script to follow the tutorial, but understanding that it exists — and that it is the single source of truth for “what changed” — is important for extending the pipeline later.
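The decision logic can be sketched as a path-to-target mapping. This is an assumed implementation, not the repo's actual script; in CI the changed-path list would come from something like `git diff --name-only` over the commit range:

```shell
#!/usr/bin/env bash
# Hypothetical sketch -- the real scripts/changed-files.sh may differ.
# Maps a newline-separated list of changed paths to the apps that must
# rebuild and redeploy.
targets_for() {
  main_site=false admin=false portal=false
  while IFS= read -r path; do
    case "$path" in
      apps/main-site/*) main_site=true ;;
      apps/admin-spa/*) admin=true ;;
      apps/portal-spa/*) portal=true ;;
      packages/*)       admin=true; portal=true ;;  # shared code hits both SPAs
      *.md|docs/*)      ;;                          # docs never trigger a deploy
    esac
  done <<EOF
$1
EOF
  $main_site && echo main-site
  $admin && echo admin-spa
  $portal && echo portal-spa
  return 0
}
```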
Recognize the business case
Selective deploy logic directly affects three things teams care about:
- CI cost — building and deploying one app instead of three cuts compute time proportionally
- Feedback speed — a targeted pipeline finishes faster, which means faster code review cycles
- Release risk — deploying only the changed app reduces the surface area for regressions in unrelated code
Without selective deploys, monorepos quickly become slow and expensive to validate and release as the number of apps grows.
You should now understand how this repo avoids deploying everything on every change.
Step 6: Configure staging and production release rules
This step shows how the same pipeline logic produces different deployment behavior depending on which branch received the push. No application code changes between environments — only the build mode, secrets, and deploy target differ.
Understand the environment separation
The repository separates environments by branch:
- staging branch — builds with staging mode, uses staging secrets, deploys to the staging server
- main branch — builds with production mode, uses production secrets, deploys to the production server
Use environment-specific builds
The build scripts accept an environment argument that maps to a Vite mode:
bash scripts/build-spa.sh admin-spa staging
bash scripts/build-spa.sh admin-spa production
Each mode can define its own environment variables (API endpoints, feature flags, analytics keys) through .env.staging and .env.production files. The CI workflow passes the correct mode automatically based on the branch that triggered the pipeline.
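Internally, a script like this typically validates its arguments before forwarding the mode to the build tool. The sketch below is hypothetical; the real scripts/build-spa.sh may differ:

```shell
#!/usr/bin/env bash
# Hypothetical sketch -- the real scripts/build-spa.sh may differ.
resolve_mode() {
  case "$1" in
    staging|production) echo "$1" ;;
    *) echo "unknown mode: $1 (expected staging or production)" >&2; return 1 ;;
  esac
}

app="${1:-admin-spa}"
mode=$(resolve_mode "${2:-staging}") || exit 1

# Vite picks up .env.staging or .env.production based on --mode, e.g.:
# pnpm --filter "$app" exec vite build --mode "$mode"
echo "would build $app in $mode mode"
```

Rejecting unknown modes up front matters: a typo like "prodution" should fail the pipeline loudly rather than silently falling back to default environment variables.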
Configure required secrets
Both GitHub Actions and GitLab CI need the following secrets configured per environment:
| Secret | Purpose |
|---|---|
| SSH_PRIVATE_KEY | Authentication for rsync-over-SSH deployment |
| DEPLOY_HOST | Target server hostname or IP |
| DEPLOY_PORT | SSH port (typically 22) |
| DEPLOY_USER | SSH user on the target server |
| DEPLOY_MAIN_SITE_PATH | Remote path for the main site (e.g., /var/www/html) |
| DEPLOY_ADMIN_SPA_PATH | Remote path for the admin SPA (e.g., /var/www/html/admin) |
| DEPLOY_PORTAL_SPA_PATH | Remote path for the portal SPA (e.g., /var/www/html/portal) |
Configure separate values for staging and production so each environment deploys to its own server or directory.
Set up approval gates
For GitHub Actions, use GitHub Environments to control deployment behavior:
- Staging environment — configure for automatic deploy after validation passes. No manual approval needed since staging is a pre-production verification step, not a customer-facing release.
- Production environment — configure with required reviewers. After validation passes, the deploy job pauses and waits for an authorized team member to approve the release in the Actions UI.
This is controlled automation: deployments are fully automated, but production releases can still require a human approval gate. The pipeline handles the mechanics; humans handle the judgment.
You should now see how the same pipeline logic behaves differently based on branch without changing application code.
Step 7: Deploy, verify, and prepare for rollback
Deployment is not complete when files are successfully transferred. It is complete when the target app responds correctly and you have a tested path back to the previous version if something goes wrong.
Understand the deploy target model
The repository deploys each app to a distinct path on the target server using rsync over SSH:
DEPLOY_MAIN_SITE_PATH=/var/www/html
DEPLOY_ADMIN_SPA_PATH=/var/www/html/admin
DEPLOY_PORTAL_SPA_PATH=/var/www/html/portal
Each deploy script only transfers the build output for its target app. The other apps on the server are untouched, which is why selective deployment is safe — deploying the admin SPA does not risk overwriting the portal SPA’s files.
Use dry-run mode before deploying
The deploy scripts support a dry-run flag that shows exactly what rsync would transfer without actually modifying the remote server:
DRY_RUN=true bash scripts/deploy-spa.sh admin-spa
Use this to verify deploy intent before allowing the file transfer, especially the first time you configure a new target path or server.
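A DRY_RUN flag typically maps to rsync's own --dry-run option. The helper below is an assumed sketch, not the repo's actual script; --dry-run lists what would transfer without touching the remote, and --itemize-changes shows why each file would move:

```shell
# Hypothetical sketch -- the real scripts/deploy-spa.sh may differ.
rsync_opts() {
  opts="-az --delete"
  if [ "${DRY_RUN:-false}" = "true" ]; then
    opts="$opts --dry-run --itemize-changes"
  fi
  echo "$opts"
}

# Hypothetical invocation using the secrets from Step 6:
# rsync $(rsync_opts) -e "ssh -p $DEPLOY_PORT" \
#   "dist/admin-spa/" "$DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_ADMIN_SPA_PATH/"
```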
Know the rollback path
The repository creates timestamped backups before each deployment. If a deploy introduces a problem, roll back to the previous version:
bash scripts/rollback-spa.sh admin-spa
This restores the most recent backup for the specified app. Rollback is part of the CI/CD design, not a last-minute emergency script. The fact that the repo includes it by default means the team expects rollbacks to happen and has made them a single-command operation.
Always verify that backups are being created before relying on rollback. Run a deploy to staging and confirm the backup directory contains the expected files before trusting the rollback path in production.
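The timestamped-backup idea can be sketched in a few lines. The layout and naming below are assumptions; the repo's deploy and rollback scripts may differ:

```shell
#!/usr/bin/env bash
# Hypothetical sketch -- the real scripts/deploy-spa.sh and
# scripts/rollback-spa.sh may use different paths and naming.
backup_release() {
  src="$1"; backup_root="$2"
  stamp=$(date +%Y%m%d-%H%M%S)    # timestamped snapshot directory
  mkdir -p "$backup_root"
  cp -a "$src" "$backup_root/$stamp"
  echo "$backup_root/$stamp"
}

rollback_release() {
  dest="$1"; backup_root="$2"
  latest=$(ls "$backup_root" | sort | tail -n 1)   # newest snapshot wins
  if [ -z "$latest" ]; then
    echo "no backups found in $backup_root" >&2
    return 1
  fi
  rm -rf "$dest"
  cp -a "$backup_root/$latest" "$dest"
}
```

A deploy script would call backup_release before transferring files; rollback_release then restores the newest snapshot in a single command.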
Verify with post-deploy health checks
After each deployment, the pipeline runs a health check against the deployed URL to confirm the app responds. A successful file transfer does not guarantee a working app — the server configuration could be wrong, environment variables could be missing, or the build could have been created with the wrong mode.
Health checks close the loop. If the check fails, the pipeline reports a failure even though the deploy itself succeeded, giving the team an immediate signal to investigate and potentially roll back.
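A minimal health check with retries can be sketched like this. The function is an assumed implementation, not the pipeline's actual check; curl's -f flag turns HTTP error statuses (4xx/5xx) into a nonzero exit, so a deployed-but-broken app fails the check rather than passing on a mere connection:

```shell
# Hypothetical sketch -- the real pipeline's health check may differ.
health_check() {
  url="$1"; attempts="${2:-5}"; delay="${3:-2}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy: $url"
      return 0
    fi
    sleep "$delay"        # give the server a moment before retrying
    i=$((i + 1))
  done
  echo "unhealthy after $attempts attempts: $url" >&2
  return 1
}

# Hypothetical usage after a staging deploy:
# health_check "https://staging.example.com/admin/" || bash scripts/rollback-spa.sh admin-spa
```

Retries matter because a freshly deployed app can take a few seconds to serve its first request; a single immediate probe would produce false alarms.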
You should now understand the full lifecycle: deploy, verify, and recover if needed.
Common setup issues
Docker-based E2E tests fail locally
Symptom: Playwright does not start cleanly, or tests cannot reach the apps.
Likely cause: Docker is not running, the nginx test stack is not started, or the SPAs were not built first.
Solution: Re-run in the documented order: build the SPAs with pnpm build:admin-spa and pnpm build:portal-spa, start Docker Compose with docker compose -f docker/docker-compose.yml up -d, run pnpm test:e2e, then shut down the stack with docker compose -f docker/docker-compose.yml down.
A shared package change did not trigger the expected SPA rebuild
Symptom: A package update passes validation, but one of the SPAs was not rebuilt or redeployed.
Likely cause: The change detection rules do not correctly account for packages/.
Solution: Verify that your pipeline maps packages/ changes to both SPAs. The scripts/changed-files.sh script should treat any file change under packages/ as impacting both apps/admin-spa/ and apps/portal-spa/. Check the script logic and the CI workflow’s conditional job triggers.
Production deploy never starts even though CI passed
Symptom: Validation completes successfully, but the production deploy job stays blocked.
Likely cause: GitHub Environment protection rules require reviewer approval.
Solution: Check the configured production environment in your repository settings under Settings > Environments > production. Approve the pending deployment in the Actions UI. If nobody on the team has the approval permission, update the environment’s required reviewers list.
Deploy succeeds but the app is broken in the browser
Symptom: Files were transferred successfully, but the deployed app does not load or behaves incorrectly.
Likely cause: Wrong environment mode (staging build deployed to production), incorrect target path, or stale server configuration such as an nginx config that does not serve the SPA’s index.html for client-side routes.
Solution: Confirm the branch-to-environment mapping in the CI workflow. Verify that scripts/build-spa.sh received the correct mode argument. Check that deploy paths match the nginx configuration on the server. Use the health check and rollback flow to recover while you investigate.
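The SPA fallback mentioned above usually looks something like this in nginx. This is a hypothetical snippet, not the repo's actual config; try_files sends unknown paths to the app's index.html so client-side routes survive a page refresh:

```nginx
# Hypothetical -- the repo's real nginx config may differ.
location /admin/ {
    try_files $uri $uri/ /admin/index.html;
}
```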
The pipeline is too slow after adding more apps
Symptom: CI times increase as the monorepo grows beyond three or four apps.
Likely cause: Selective validation or selective deployment is not being applied aggressively enough. New apps may not be wired into the change detection logic, causing the pipeline to rebuild everything.
Solution: Extend scripts/changed-files.sh to include the new app’s directory. Map the new path to its own build and deploy jobs in the CI workflow. The existing model scales well as long as each app has its own conditional path — the key is keeping change detection central to the pipeline design.
Wrap-Up
You now have a complete picture of how this repository turns a React SPA monorepo into a release system. The pipeline validates every pull request with seven automated gates, re-validates after merge to catch integration issues, deploys only the apps affected by each change, separates staging from production through branch-based environment rules, and closes the loop with post-deploy health checks and single-command rollback.
From here, consider these next steps:
- Annotate the workflow files. Read .github/workflows/validate-and-deploy.yml and .gitlab-ci.yml side by side to see how the same pipeline logic is expressed in both CI systems.
- Add a new app to the monorepo. Create a fourth SPA under apps/, wire it into scripts/changed-files.sh, and add its build, deploy, and E2E targets to the CI workflow. That is the real test of whether the selective deploy model scales.
- Tighten the approval process. Experiment with branch protection rules, commit signing requirements, and CODEOWNERS files to add more structure around who can merge into staging and main.
The strongest monorepo CI/CD pipelines are not the ones with the most jobs. They are the ones where every gate has a clear purpose, every deploy is selective, and every release has a tested path back to the previous version.
Related guides
- Software Supply Chain Security in the AI Era — add SBOM generation and dependency integrity checks to your pipeline
- Harden Your CI/CD Pipeline with Sigstore, SLSA, and SBOMs — sign artifacts and enforce provenance in your monorepo releases
- Securing AI Coding Agent Workflows — sandbox and review AI-generated code before it enters your monorepo
Frequently asked questions
Do I need Docker installed to follow this tutorial?
Yes. The Playwright E2E tests run against built SPAs served by an nginx container through Docker Compose. Docker is required for Step 4 and for the full local pipeline script in Step 2. If you skip the E2E steps, you can complete the validation gate configuration without Docker.
Can I use this pipeline with GitHub Actions only, or do I need GitLab CI too?
You only need one CI system. The repository includes both .github/workflows/validate-and-deploy.yml and .gitlab-ci.yml so teams can choose whichever platform they use. The pipeline architecture is the same in both — only the syntax differs.
How long does the full CI pipeline take to run?
On a typical GitHub Actions runner, the full pipeline including E2E tests completes in 3 to 6 minutes depending on which apps changed. Selective deployment means most pushes only build and test one app, which keeps the feedback loop fast.
What happens if a post-deploy health check fails?
The pipeline reports a failure even though the file transfer succeeded. This gives the team an immediate signal to investigate. You can then run the rollback script to restore the previous version while diagnosing the issue.