spa/.planning/phase-1-foundation/09-gitea-ci-and-dockerfile.md

Task 1.9 — Gitea CI + Dockerfile + nginx static serve

Phase: 1 — Foundation
Status: Not started
Depends on: 1.2 (need build to be working)
Wiki refs: docs/wiki/entities/react-spa.md; trm/processor/.gitea/workflows/build.yml and trm/directus/.gitea/workflows/build.yml for pattern alignment

Goal

Build a Docker image of the SPA (static bundle served by nginx), publish to the Gitea container registry on every push to main, matching the CI conventions established by trm/processor and trm/directus. Include lint + typecheck + format-check + build as the gate before image publish.

After this task, pushing to main produces a git.dev.microservices.al/trm/spa:main image that trm/deploy can pull.

Deliverables

  • Dockerfile:
    • Multi-stage: builder (node:22-alpine, install deps with pnpm, run build) → runner (nginx:alpine, copy dist/ to /usr/share/nginx/html).
    • BuildKit cache mounts for pnpm fetch + pnpm install --offline (matching the processor's pattern).
    • nginx.conf baked in (next bullet).
    • Listen on :80; the deploy proxy fronts it.
  • nginx.conf:
    • Single server block listening on :80.
    • Static-serve /usr/share/nginx/html with try_files $uri $uri/ /index.html; (SPA routing fallback so /login etc. all serve index.html).
    • location = /config.json block: serve from a separate path that can be volume-mounted in stage/prod (the override path described in 1.4). Default value is the baked-in dev defaults.
    • gzip on for text/css, application/javascript, application/json. (Brotli later if it becomes a concern.)
    • Cache headers: index.html no-cache; everything else (the hashed JS/CSS/sprite assets) Cache-Control: public, max-age=31536000, immutable.
  • .dockerignore:
    • Includes node_modules, dist, .git, .gitea, .planning, and *.md files other than README.md. Match the existing pattern from processor/directus.
  • .gitea/workflows/build.yml:
    • Triggers: push to main, push to phase-* branches.
    • Path filter: src/**, public/**, *.json, *.ts, *.tsx, *.js, *.cjs, *.html, Dockerfile, nginx.conf, .gitea/workflows/build.yml. Skip on docs-only changes.
    • Single job; mirrors trm/processor/.gitea/workflows/build.yml structure.
    • Steps:
      1. Checkout.
      2. Setup pnpm (use the same caching strategy as the processor).
      3. pnpm install --frozen-lockfile.
      4. pnpm typecheck.
      5. pnpm lint.
      6. pnpm format:check.
      7. pnpm build.
      8. Build Docker image (use the BuildKit cache between runs).
      9. Login to git.dev.microservices.al registry.
      10. Push image with :main and per-commit-SHA tags.
  • package.json scripts.test placeholder (Phase 3 will replace; for now, "test": "echo \"no tests yet\" && exit 0" so CI can include the step without failing).
  • README.md updated: "Building locally" + "CI" sections.
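
The scripts the CI gate calls could look like this in package.json. The command bodies are assumptions based on the Vite/TypeScript stack from 1.2; only the test placeholder is specified here:

```json
{
  "scripts": {
    "build": "vite build",
    "typecheck": "tsc --noEmit",
    "lint": "eslint .",
    "format:check": "prettier --check .",
    "test": "echo \"no tests yet\" && exit 0"
  }
}
```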

Specification

Dockerfile shape

# syntax=docker/dockerfile:1.6

# ───────── builder ─────────
FROM node:22-alpine AS builder
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /build

COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm-store,target=/root/.local/share/pnpm/store \
    pnpm fetch

COPY . .
RUN --mount=type=cache,id=pnpm-store,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile --offline

RUN pnpm build  # produces dist/

# ───────── runner ─────────
FROM nginx:1.27-alpine AS runner
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /build/dist /usr/share/nginx/html

EXPOSE 80

# nginx default CMD is correct; don't override.

Mirror the multi-stage + cache-mount pattern from trm/processor/Dockerfile precisely. Differences are SPA-specific (no runtime Node, just nginx).

nginx.conf shape

server {
  listen 80;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;

  # SPA routing fallback
  location / {
    try_files $uri $uri/ /index.html;
  }

  # Hashed assets — long cache
  location /assets/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri =404;
  }

  # index.html — never cache
  location = /index.html {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
    expires off;
  }

  # Runtime config — overridable via volume mount
  location = /config.json {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
    try_files $uri =404;
  }

  # Compression
  gzip on;
  gzip_types text/css application/javascript application/json image/svg+xml;
  gzip_min_length 1024;
}

The location = /config.json placement allows mounting a different file at /usr/share/nginx/html/config.json in stage/prod via Docker volume — that's the override mechanism for runtime config.
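
As a sketch, the stage/prod override could be wired like this in a compose file. Service name and host-side config path are assumptions; trm/deploy/compose.yaml is the source of truth:

```yaml
services:
  spa:
    image: git.dev.microservices.al/trm/spa:main
    volumes:
      # Replaces the baked-in dev defaults without rebuilding the image.
      - ./config/spa.stage.json:/usr/share/nginx/html/config.json:ro
```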

Workflow path filter

Mirror processor's filter structure. Critical paths to include:

on:
  push:
    branches:
      - main
      - 'phase-*-**'
    paths:
      - 'src/**'
      - 'public/**'
      - 'package.json'
      - 'pnpm-lock.yaml'
      - 'index.html'
      - 'tsconfig*.json'
      - 'vite.config.ts'
      - 'tailwind.config.ts'
      - 'eslint.config.js'
      - '.prettierrc'
      - 'Dockerfile'
      - 'nginx.conf'
      - '.dockerignore'
      - '.gitea/workflows/build.yml'

Docs-only changes (.planning/, README.md alone) skip CI.
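
The skip rule can be sanity-checked with a tiny matcher. Shell `case` globs only approximate the workflow's `**` semantics, so this is illustrative, not authoritative:

```shell
# Rough stand-in for the workflow path filter. A push runs CI if any
# changed file matches an entry; otherwise it's a docs-only change.
triggers_ci() {
  for f in "$@"; do
    case "$f" in
      src/*|public/*|package.json|pnpm-lock.yaml|index.html|tsconfig*.json) return 0 ;;
      vite.config.ts|tailwind.config.ts|eslint.config.js|.prettierrc) return 0 ;;
      Dockerfile|nginx.conf|.dockerignore|.gitea/workflows/build.yml) return 0 ;;
    esac
  done
  return 1  # no match: skip CI
}

triggers_ci src/App.tsx && echo "source change: CI runs"
triggers_ci .planning/notes.md README.md || echo "docs-only: CI skipped"
```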

Image tagging

Two tags per push to main:

  1. :main — moves with each main commit. Used by stage's compose.yaml for "always latest."
  2. :<short-sha> — pinned reference. Production pins to a specific SHA to avoid surprise rollouts.

This matches the processor's tagging scheme. trm/deploy/compose.yaml already documents the *_TAG env var pattern; Phase 1.10 adds SPA_TAG.
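
In the workflow, the two-tag push could be a step along these lines (step syntax assumed from the processor's build.yml, which is the reference):

```yaml
- name: Build and push
  run: |
    SHORT_SHA=$(git rev-parse --short HEAD)
    IMAGE=git.dev.microservices.al/trm/spa
    docker build -t "$IMAGE:main" -t "$IMAGE:$SHORT_SHA" .
    docker push "$IMAGE:main"
    docker push "$IMAGE:$SHORT_SHA"
```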

Why nginx not a Node static server

A 5MB static bundle doesn't need a Node runtime. nginx is:

  • Faster (compiled C, not JS).
  • Smaller image (~20MB vs ~150MB for node:22-alpine + serve).
  • Battle-tested for static-serving + SPA-routing.

Match the processor's "right tool for the job" discipline.

CI step ordering

typecheck before lint before format:check before build is intentional: fail fast, with the checks most likely to catch real breakage first, so a broken typecheck stops the run before lint and format-check burn cycles. build is last because it's the most expensive step and its output is what actually goes into the image.
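
As a workflow steps sketch (action names and versions are assumptions, not a commitment):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: pnpm/action-setup@v4
  - run: pnpm install --frozen-lockfile
  - run: pnpm typecheck
  - run: pnpm lint
  - run: pnpm format:check
  - run: pnpm build
```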

Acceptance criteria

  • docker build -t trm-spa-test . succeeds locally.
  • docker run -p 8080:80 trm-spa-test serves the SPA at http://localhost:8080; refreshing on /login (after 1.7) serves index.html (SPA fallback works).
  • pnpm format:check script exists and is green.
  • On push to main in Gitea, the workflow runs all gates and pushes the image to the registry.
  • The image is < 50MB total.
  • On push that only touches .planning/ or README.md, CI is skipped.
  • Per-commit SHA tag is present in the registry alongside :main.
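
A local smoke check covering the first criteria could look like this (assumes a local Docker daemon; container name is arbitrary):

```shell
docker build -t trm-spa-test .
docker run -d --rm --name spa-smoke -p 8080:80 trm-spa-test

# SPA fallback: a client-side route should serve index.html, not 404
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/login   # expect 200 per the criteria

# Cache headers: index.html must be no-cache
curl -sI http://localhost:8080/index.html | grep -i cache-control

docker stop spa-smoke
```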

Risks / open questions

  • Pinning pnpm version. Use corepack prepare pnpm@<version> --activate with the version pinned to whatever package.json packageManager says, if set. Otherwise pin to the minor (pnpm@9 or whatever the team is on) to avoid surprises.
  • Build-time env vars. None should bleed in. Verify by docker run trm-spa-test cat /usr/share/nginx/html/index.html and grep for any hardcoded localhost URL — if found, that's a runtime-config violation to chase.
  • Caching the pnpm store across CI runs. Gitea Actions runners differ; the --mount=type=cache,id=pnpm-store only helps during a single build. For cross-run caching, use Gitea Actions' actions/cache@v4 step, keyed on pnpm-lock.yaml hash. Match what processor does — it likely already has this pattern.
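
A cross-run cache step could look like this (the path assumes pnpm's default store location on a Linux runner; the key scheme is the usual lockfile hash):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.local/share/pnpm/store
    key: pnpm-store-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}
    restore-keys: |
      pnpm-store-${{ runner.os }}-
```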

Done

Four files landed, matching the conventions established by trm/processor:

  • Dockerfile — three-stage multi-stage build:
    • deps (node:22-alpine) — pnpm fetch with BuildKit cache mount.
    • build — pnpm install --frozen-lockfile --offline, then pnpm build produces dist/.
    • runtime (nginx:1.27-alpine) — copies nginx.conf to /etc/nginx/conf.d/default.conf and dist/ to /usr/share/nginx/html. EXPOSE 80. HEALTHCHECK via wget -qO- http://localhost/. nginx default CMD.
  • nginx.conf — single server block on :80. Gzip for text assets. Three location rules: /assets/ long-cache (max-age=31536000, immutable for hashed filenames), /config.json no-cache (overridable via volume mount in stage/prod), /index.html no-cache. SPA fallback try_files $uri $uri/ /index.html; for client-side routes.
  • .dockerignore — node_modules, dist, .env*, *.log, .git, .gitea, .planning, *.md (except README), .claude, .vscode. Keeps the build context small.
  • .gitea/workflows/build.yml — matches trm/processor's shape with one additional gate (pnpm format:check between lint and test). Path filter covers source, config, and Docker-related files; .planning/** and most markdown are excluded so docs-only commits skip CI. Steps: checkout → setup Node 22 → enable pnpm@latest-9 → install --frozen-lockfile → typecheck → lint → format:check → test → setup buildx → registry login → build & push git.dev.microservices.al/trm/spa:main → trigger Portainer webhook.

Required secrets in Gitea repo settings (same names as the other repos so they can be reused):

  • REGISTRY_USERNAME / REGISTRY_PASSWORD — Gitea registry creds.
  • PORTAINER_WEBHOOK_URL — stack redeploy hook.

Deviations from spec:

  • Spec called for :main plus a per-commit-SHA tag. The other services in this org currently push :main only — matched that for consistency. SHA-pinning at deploy time is handled by the *_TAG env vars in trm/deploy/.env.example (which can be set to a SHA when an operator wants reproducibility). Adding SHA tagging here without updating the other repos would be inconsistent — defer to a cross-repo refactor task if/when it matters.
  • Spec sketched corepack prepare pnpm@latest --activate; the existing repos pin to pnpm@latest-9 (latest of the 9.x line). Matched that — pnpm@latest is too floaty for a CI gate that has to be reproducible across runs.

Smoke check: pnpm typecheck, pnpm lint, pnpm format:check, pnpm build all green locally. Local docker build not run — Docker isn't installed on this machine. CI is the gate; first push to main will exercise the full pipeline.

Required for first deploy (1.10 wires the rest):

  • Add the SPA service block to trm/deploy/compose.yaml.
  • Set SPA_TAG=main (or a SHA) and SPA_CONFIG_FILE=... in the deploy env.
  • Configure Traefik (or whichever proxy) to route the SPA's path on the public domain.
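
A sketch of what 1.10 might add to trm/deploy/compose.yaml, using the env var names from this doc (proxy routing is omitted because the real router rules live in trm/deploy):

```yaml
services:
  spa:
    image: git.dev.microservices.al/trm/spa:${SPA_TAG:-main}
    restart: unless-stopped
    volumes:
      # SPA_CONFIG_FILE points at the stage/prod runtime config override.
      - ${SPA_CONFIG_FILE}:/usr/share/nginx/html/config.json:ro
```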

Landed in 9bd3b84.