Task 1.10 — Compose service block in trm/deploy
Phase: 1 — Foundation
Status: ⬜ Not started
Depends on: 1.9 (image must be publishable)
Wiki refs: docs/wiki/entities/react-spa.md; trm/deploy/compose.yaml; trm/deploy/README.md
Goal
Wire the SPA into the platform stack: add a service block to trm/deploy/compose.yaml, document SPA_TAG in .env.example, update the deploy README's Currently / First-deploy / Network sections to reference it. After this task, redeploying the stack pulls the SPA image and serves it under the same origin as Directus, behind the reverse proxy.
This task touches trm/deploy, not trm/spa — but it's a SPA Phase 1 deliverable because the SPA isn't operationally complete until it's wired into the stack.
Deliverables
- `trm/deploy/compose.yaml` updated:
  - New `spa` service block (full shape below).
  - Internal-only (`expose: '80'`, no `ports:`) — same pattern as `directus`. The reverse proxy fronts it.
  - Volume mount for the runtime-config override: `/usr/share/nginx/html/config.json`, overridable from a host file.
- `trm/deploy/.env.example` updated:
  - New `SPA_TAG=main` (default).
  - Section header for SPA-specific config (currently just the tag).
- `trm/deploy/README.md` updated:
  - "Services in the stack" section: move the SPA from Planned to Currently.
  - "Network model" section: add the SPA paragraph (internal-only, served by the reverse proxy).
  - "First-deploy checklist" section: add a "Verify SPA loads" step (browse to the public URL, expect the login page).
  - New "Runtime config override" subsection: how the `config.json` volume mount works for setting per-environment URLs / the Google Maps key.
Specification
Compose service block
```yaml
spa:
  image: git.dev.microservices.al/trm/spa:${SPA_TAG:-main}
  expose:
    - '80'
  volumes:
    # Override the baked-in dev config with the per-environment one.
    # The host path is whatever the operator configures in Portainer or .env;
    # the default points at a sibling file in this repo.
    - ${SPA_CONFIG_FILE:-./spa-config.json}:/usr/share/nginx/html/config.json:ro
  restart: unless-stopped
  networks:
    - default
  depends_on:
    # The SPA can boot independently of Directus / Processor — it's just static files.
    # The reverse proxy is what wires them together; the SPA loading without backends
    # would just show a "Failed to load" error, which is the right UX.
    []
```
The `:ro` mount means the container can't accidentally write to its own config. Defensive.
Per-environment config file
A sibling file `trm/deploy/spa-config.json` (NOT committed; in `.gitignore`) is created per environment. Operators copy from `spa-config.example.json` (committed) and edit:
```json
{
  "directusUrl": "https://stage.trmtracking.org/api",
  "liveWsUrl": "wss://stage.trmtracking.org/ws-live",
  "businessWsUrl": "wss://stage.trmtracking.org/ws-business",
  "env": "stage"
}
```
For stage with the proxy in place, the URLs are relative (just `/api`, `/ws-live`, etc.) — same pattern as the dev defaults. Absolute URLs are only needed if the SPA ever runs cross-origin to its backends, which it shouldn't.
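To make the same-origin point concrete, here is a small illustration (a sketch; `stage.trmtracking.org` is the example host from the config above, and in the browser the origin would come from `window.location`) of how a relative config value resolves against whatever origin serves the SPA:

```typescript
// A relative value from config.json, e.g. "directusUrl": "/api".
const directusUrl = "/api";

// In the real SPA this would be window.location.origin; hard-coded here
// with the stage host from the example config.
const origin = "https://stage.trmtracking.org";

// Resolving against the page's own origin yields the absolute URL, so the
// same config file works unchanged on any host the proxy serves.
const apiUrl = new URL(directusUrl, origin).toString();
// apiUrl === "https://stage.trmtracking.org/api"

// WebSocket endpoints resolve the same way, with the scheme swapped.
const wsUrl = new URL("/ws-live", origin.replace(/^https/, "wss")).toString();
// wsUrl === "wss://stage.trmtracking.org/ws-live"
```

This is why the committed example file can ship relative paths: nothing in it names a host.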
`spa-config.example.json` (committed):
```json
{
  "directusUrl": "/api",
  "liveWsUrl": "/ws-live",
  "businessWsUrl": "/ws-business",
  "env": "stage"
}
```
Operators copy the example, then edit: set `env` to `prod` for production, add `googleMapsKey` if needed, etc.
Reverse proxy routing
The reverse proxy (Traefik / Caddy / nginx — operator's choice; not part of this stack) is responsible for:
- `/` → `http://spa:80` (everything under root that isn't a more specific match).
- `/api/*` → `http://directus:8055/...` (REST + GraphQL).
- `/ws-business` → `ws://directus:8055/websocket` (Directus WS).
- `/ws-live` → `ws://processor:8081` (Processor WS — when Phase 1.5 lands).
The proxy itself is documented in trm/deploy/README.md but not part of the compose stack — it's a sibling stack or a host-level service. Different operators will use different proxies; the README gives examples but doesn't prescribe.
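Purely as an illustration of the route table above, here is one possible shape using Caddy. This is not prescriptive: the hostname is an example, and whether `/api` is stripped before forwarding to Directus is an assumption (the route table elides the upstream path).

```
stage.trmtracking.org {
	# REST + GraphQL. handle_path strips the /api prefix before forwarding —
	# adjust if Directus is expected to see the prefix.
	handle_path /api/* {
		reverse_proxy directus:8055
	}

	# Directus realtime WS: rewrite to its /websocket endpoint.
	handle /ws-business {
		rewrite * /websocket
		reverse_proxy directus:8055
	}

	# Processor live WS (502s until Phase 1.5 lands).
	handle /ws-live {
		reverse_proxy processor:8081
	}

	# Everything else: the SPA.
	handle {
		reverse_proxy spa:80
	}
}
```

Caddy upgrades WebSocket connections through `reverse_proxy` automatically; a Traefik or nginx equivalent would need the upgrade headers spelled out.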
.env.example addition
```bash
# ---------------------------------------------------------------------
# spa
# ---------------------------------------------------------------------
# Image tag to pull. `main` auto-tracks the latest commit on the main branch.
# In production, pin to a specific commit SHA for reproducibility.
# Example: SPA_TAG=ab12cd3
SPA_TAG=main

# Path on the host to the runtime config file mounted into the SPA container
# at /usr/share/nginx/html/config.json. Defaults to a sibling file in this repo;
# create it from spa-config.example.json before first deploy.
# SPA_CONFIG_FILE=/srv/trm/spa-config.json
```
trm/deploy/README.md updates
In "Services in the stack" (under Currently): add the SPA row, remove from Planned.
In "Network model": add the SPA paragraph:
- `spa` — static bundle served by nginx. Internal-only on `:80`. The reverse proxy serves the SPA at `/` (default route). Same-origin with Directus and the Processor's WS, so cookie auth flows naturally to all three.
In "First-deploy checklist":
- Step 1 (generate secrets): add a callout that no SPA secrets are needed.
- Step 5 (watch the first boot): add "the SPA container starts in seconds — no internal migrations to run".
- New step 8, "Verify SPA loads": browse to `https://<your-domain>/` → expect to land on `/login`.
Add a new "Runtime config override" subsection after "First-deploy checklist":
The SPA reads `/config.json` at boot for environment-specific URLs and optional API keys. The image bakes in a default for dev; in stage/prod, override it by mounting a custom file:

- Copy `spa-config.example.json` to `spa-config.json` (or wherever `SPA_CONFIG_FILE` points).
- Edit `env` (stage/prod) and any optional keys.
- Redeploy the stack — no SPA rebuild needed.
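For orientation, the boot-time read on the SPA side might look roughly like this. A sketch only: the function and type names are invented here, not taken from the repo, and the field shape mirrors `spa-config.example.json`.

```typescript
// Runtime config shape — field names taken from spa-config.example.json.
interface RuntimeConfig {
  directusUrl: string;
  liveWsUrl: string;
  businessWsUrl: string;
  env: "dev" | "stage" | "prod";
  googleMapsKey?: string; // optional per-environment key
}

// Narrow an unknown JSON value into a RuntimeConfig. Validation is
// deliberately minimal but loud: a bad or missing mount should fail at
// boot, not mid-session.
function parseRuntimeConfig(raw: unknown): RuntimeConfig {
  const obj = raw as Record<string, unknown> | null;
  for (const key of ["directusUrl", "liveWsUrl", "businessWsUrl", "env"]) {
    if (typeof obj?.[key] !== "string") {
      throw new Error(`config.json missing or invalid field: ${key}`);
    }
  }
  return obj as unknown as RuntimeConfig;
}

// Fetched once at boot, before the app renders. cache: "no-store" so a
// redeployed override is picked up on the next hard load.
async function loadRuntimeConfig(): Promise<RuntimeConfig> {
  const res = await fetch("/config.json", { cache: "no-store" });
  if (!res.ok) throw new Error(`config.json fetch failed: ${res.status}`);
  return parseRuntimeConfig(await res.json());
}
```

Since the stack already includes zod, the hand-rolled check could equally be a zod schema; the point is only that the config is fetched, not baked in.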
Acceptance criteria
- `compose.yaml` parses cleanly (`docker compose config` returns no errors).
- After a Portainer redeploy with the new compose, `docker compose ps` shows the SPA container running.
- `curl -i http://<reverse-proxy-host>/` returns the SPA's `index.html` (status 200, content-type text/html).
- Browsing the public URL in a browser shows the login page.
- `curl http://<reverse-proxy-host>/config.json` returns the override config (NOT the baked-in dev defaults).
- After login + navigation to `/`, the home page renders. The end-to-end Phase 1 happy path works against a stage stack that also has `directus` running.
- Phase 1.5 of the Processor hasn't landed yet → the `/ws-live` proxy route 502s, but the SPA's home page still loads (no live-map UI to try-and-fail yet).
Risks / open questions
- Reverse-proxy choice not in scope. The deploy README documents Traefik / Caddy / nginx as options; this task doesn't prescribe one. If the operator hasn't set up a proxy, this task's acceptance can't be verified end-to-end. Add a note in the deploy README's "First-deploy checklist" step pointing at the proxy-setup gap.
- `spa-config.json` not in version control. Each operator maintains theirs; it lives in their secret store (1Password, Vaultwarden, or Portainer's environment-files feature). Worth flagging in the README.
- WebSocket sticky sessions. A multi-replica SPA plus multiple Processor instances in Phase 3 may need sticky sessions at the reverse proxy so a client's WS stays on the same Processor instance across reconnects. Out of scope for Phase 1 (single Processor, single SPA replica).
Done
(Filled in when the task lands.)