# Task 1.10 — Compose service block in trm/deploy

**Phase:** 1 — Foundation

**Status:** ⬜ Not started

**Depends on:** 1.9 (image must be publishable)

**Wiki refs:** `docs/wiki/entities/react-spa.md`; `trm/deploy/compose.yaml`; `trm/deploy/README.md`

## Goal

Wire the SPA into the platform stack: add a service block to `trm/deploy/compose.yaml`, document `SPA_TAG` in `.env.example`, and update the deploy README's Currently / First-deploy / Network sections to reference it. After this task, redeploying the stack pulls the SPA image and serves it under the same origin as Directus, behind the reverse proxy.

This task touches `trm/deploy`, not `trm/spa` — but it's an SPA Phase 1 deliverable because the SPA isn't operationally complete until it's wired into the stack.

## Deliverables

- `trm/deploy/compose.yaml` updated:
  - New `spa` service block (full shape below).
  - Internal-only (`expose: '80'`, no `ports:`) — same pattern as `directus`. The reverse proxy fronts it.
  - Volume mount for the runtime-config override: `/usr/share/nginx/html/config.json` overridable from a host file.
- `trm/deploy/.env.example` updated:
  - New `SPA_TAG=main` (default).
  - Section header for SPA-specific config (currently just the tag).
- `trm/deploy/README.md` updated:
  - "Services in the stack" section: move SPA from Planned to Currently.
  - "Network model" section: add the SPA paragraph (internal-only, served by the reverse proxy).
  - "First-deploy checklist" section: add a "Verify SPA loads" step (browse to public URL, expect login page).
  - "Runtime config override" subsection: how the `config.json` volume mount works for setting per-environment URLs / Google Maps key.

## Specification

### Compose service block

```yaml
spa:
  image: git.dev.microservices.al/trm/spa:${SPA_TAG:-main}
  expose:
    - '80'
  volumes:
    # Override the baked-in dev config with the per-environment one.
    # The host path is whatever the operator configures in Portainer or .env;
    # default points at a sibling file in this repo.
    - ${SPA_CONFIG_FILE:-./spa-config.json}:/usr/share/nginx/html/config.json:ro
  restart: unless-stopped
  networks:
    - default
  depends_on:
    # SPA can boot independently of Directus / Processor — it's just static files.
    # The reverse proxy is what wires them together; SPA loading without backends
    # would just show a "Failed to load" error, which is the right UX.
    []
```

The `:ro` mount means the container can't accidentally write to its own config. Defensive.
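
A container healthcheck fits naturally in this block as well; a minimal sketch, assuming the image's nginx base ships busybox `wget` (true of `nginx:alpine`):

```yaml
  healthcheck:
    # Any 200 from nginx at / means the static bundle is being served.
    test: ['CMD', 'wget', '-qO-', 'http://localhost/']
    interval: 30s
    timeout: 5s
    retries: 3
```
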

### Per-environment config file

A sibling file `trm/deploy/spa-config.json` (NOT committed; in `.gitignore`) is created per environment. Operators copy from `spa-config.example.json` (committed) and edit:

```json
{
  "directusUrl": "https://stage.trmtracking.org/api",
  "liveWsUrl": "wss://stage.trmtracking.org/ws-live",
  "businessWsUrl": "wss://stage.trmtracking.org/ws-business",
  "env": "stage"
}
```

For stage with the proxy in place, the URLs are relative (just `/api`, `/ws-live`, etc.) — same pattern as the dev defaults. Absolute URLs are only needed if the SPA ever runs cross-origin to its backends, which it shouldn't.

`spa-config.example.json` (committed):

```json
{
  "directusUrl": "/api",
  "liveWsUrl": "/ws-live",
  "businessWsUrl": "/ws-business",
  "env": "stage"
}
```

Operators copy it, then edit: set `env` to `prod` for production, add `googleMapsKey` if needed, etc.

### Reverse proxy routing

The reverse proxy (Traefik / Caddy / nginx — operator's choice; not part of this stack) is responsible for:

1. `/` → `http://spa:80` (everything under root that isn't a more specific match).
2. `/api/*` → `http://directus:8055/...` (REST + GraphQL).
3. `/ws-business` → `ws://directus:8055/websocket` (Directus WS).
4. `/ws-live` → `ws://processor:8081` (Processor WS — when Phase 1.5 lands).

The proxy itself is documented in `trm/deploy/README.md` but not part of the compose stack — it's a sibling stack or a host-level service. Different operators will use different proxies; the README gives examples but doesn't prescribe.
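
One concrete illustration (not prescribed — any proxy works): the four routes in a Caddyfile. The domain and the `/api` prefix-stripping are assumptions about the operator's setup:

```caddyfile
stage.trmtracking.org {
    # 2. REST + GraphQL — strip the public /api prefix before handing off.
    handle_path /api/* {
        reverse_proxy directus:8055
    }

    # 3. Directus WS — rewrite the public path to Directus's /websocket endpoint.
    handle /ws-business {
        rewrite * /websocket
        reverse_proxy directus:8055
    }

    # 4. Processor WS — 502s until Phase 1.5 lands, which is expected.
    handle /ws-live {
        reverse_proxy processor:8081
    }

    # 1. Everything else → the SPA's nginx.
    handle {
        reverse_proxy spa:80
    }
}
```

Caddy's `reverse_proxy` upgrades WebSocket connections automatically, so the two WS routes need no extra directives.
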

### `.env.example` addition

```bash
# ---------------------------------------------------------------------
# spa
# ---------------------------------------------------------------------

# Image tag to pull. `main` auto-tracks the latest commit on the main branch.
# In production, pin to a specific commit SHA for reproducibility.
# Example: SPA_TAG=ab12cd3
SPA_TAG=main

# Path on the host to the runtime config file mounted into the SPA container
# at /usr/share/nginx/html/config.json. Defaults to a sibling file in this repo;
# create it from spa-config.example.json before first deploy.
# SPA_CONFIG_FILE=/srv/trm/spa-config.json
```

### `trm/deploy/README.md` updates

In "Services in the stack" (under Currently): add the SPA row, remove from Planned.

In "Network model": add the SPA paragraph:

> - **spa** — static bundle served by nginx. Internal-only on `:80`. The reverse proxy serves the SPA at `/` (default route). Same-origin with Directus and Processor's WS so cookie auth flows naturally to all three.

In "First-deploy checklist": add to step 1 (generate secrets) a callout that no SPA secrets are needed; in step 5 (watch the first boot), add "the SPA container starts in seconds — no internal migrations to run"; add a step 8, "Verify SPA loads": browse to `https://<your-domain>/` → expect to land on `/login`.

Add a new "Runtime config override" subsection after "First-deploy checklist":

> The SPA reads `/config.json` at boot for environment-specific URLs and optional API keys. The image bakes a default for dev; in stage/prod, override by mounting a custom file:
>
> 1. Copy `spa-config.example.json` to `spa-config.json` (or wherever `SPA_CONFIG_FILE` points).
> 2. Edit `env` (`stage` / `prod`) and any optional keys.
> 3. Redeploy the stack — no SPA rebuild needed.
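
Before redeploying, it's worth sanity-checking the edited file — broken JSON here means a broken SPA boot. A hypothetical pre-deploy check (shown against the example file's contents; run it on your real `spa-config.json`, with `python3` assumed available on the host):

```shell
# Recreate the example's contents for illustration; on a real deploy host the
# file already exists from step 1 above.
cat > spa-config.json <<'EOF'
{
  "directusUrl": "/api",
  "liveWsUrl": "/ws-live",
  "businessWsUrl": "/ws-business",
  "env": "stage"
}
EOF

# Fail loudly if the file is invalid JSON or missing a required key.
python3 - <<'PY'
import json
cfg = json.load(open("spa-config.json"))
missing = {"directusUrl", "liveWsUrl", "businessWsUrl", "env"} - cfg.keys()
assert not missing, f"missing keys: {sorted(missing)}"
print("spa-config.json OK")
PY
```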

## Acceptance criteria

- [ ] `compose.yaml` parses cleanly (`docker compose config` returns no errors).
- [ ] After Portainer redeploy with the new compose, `docker compose ps` shows the SPA container running.
- [ ] `curl -i http://<reverse-proxy-host>/` returns the SPA's `index.html` (status 200, content-type `text/html`).
- [ ] Browsing the public URL in a browser shows the login page.
- [ ] `curl http://<reverse-proxy-host>/config.json` returns the override config (NOT the baked-in dev defaults).
- [ ] After login + navigation to `/`, the home page renders. The end-to-end Phase 1 happy path works against a stage stack that also has `directus` running.
- [ ] Phase 1.5 of [[processor]] hasn't landed yet → the `/ws-live` proxy route 502s, but the SPA's home page still loads (no live map UI to try-and-fail yet).

## Risks / open questions

- **Reverse-proxy choice not in scope.** The deploy README documents Traefik / Caddy / nginx as options; this task doesn't prescribe one. If the operator hasn't set up a proxy, this task's acceptance can't be verified end-to-end. Add a note in the deploy README's "First-deploy checklist" step pointing at the proxy-setup gap.
- **`spa-config.json` not in version control.** Each operator maintains theirs; it lives in their secret store (1Password, Vaultwarden, or Portainer's environment-files feature). Worth flagging in the README.
- **WebSocket sticky sessions.** Multi-replica SPA + multiple Processor instances in Phase 3 may need sticky sessions at the reverse proxy so a client's WS stays on the same Processor instance across reconnects. Out of scope for Phase 1 (single Processor, single SPA replica).
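
If Phase 3 does go multi-replica, cookie stickiness at the proxy is the usual answer (the WS upgrade request carries cookies). A hypothetical Traefik v2 sketch — the service name `processor` and the cookie name are assumptions:

```yaml
  processor:
    labels:
      # Pin each browser to one replica across WS reconnects.
      - traefik.http.services.processor.loadbalancer.sticky.cookie=true
      - traefik.http.services.processor.loadbalancer.sticky.cookie.name=trm_processor
```
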
## Done

Cross-repo task — all changes land in `trm/deploy`:

- **`compose.yaml`** — new `spa` service block right after `directus`. Image `git.dev.microservices.al/trm/spa:${SPA_TAG:-main}`. Internal-only (`expose: '80'`, no host publish; reverse proxy fronts it). Volume mount `${SPA_CONFIG_FILE:-./spa-config.json}:/usr/share/nginx/html/config.json:ro` for the runtime-config override. `restart: unless-stopped`. Healthcheck via `wget -qO- http://localhost/`. Replaced the old "Future services land here: react-spa" placeholder comment.
- **`spa-config.example.json`** (new, committed) — dev-default-equivalent file for stage/prod with `env: "stage"`. Operators `cp spa-config.example.json spa-config.json` and edit before first deploy.
- **`.env.example`** — new "spa" section documenting `SPA_TAG` and `SPA_CONFIG_FILE`. Notes the SPA-is-internal-only design and the same-origin requirement.
- **`.gitignore`** — added `spa-config.json` so per-environment configs never get committed.
- **`README.md`**:
  - "Services in the stack" — moved the SPA from Planned to Currently with a one-liner about the volume-mount override.
  - "Set Portainer stack environment variables" optional table — added `SPA_TAG` and `SPA_CONFIG_FILE` rows. Reframed the `DIRECTUS_CORS_*` row as "leave disabled — same-origin proxy means CORS isn't in play."
  - New step 8, "Verify the SPA loads", in the first-deploy checklist with the expected happy-path behaviour (redirect to `/login`, login → `/`, hard refresh survives, sign out → `/login`).
  - "Network model" — added the spa to the no-host-publish list and rewrote the proxy paragraph to spell out the four routes the proxy must wire (`/api/*`, `/ws-business`, `/ws-live`, default → SPA), plus a same-origin-is-non-negotiable callout.
  - New "Runtime config override (SPA)" section — the copy → edit → mount workflow, when to use absolute URLs (don't, unless cross-origin, which breaks cookie auth), and how to redeploy after editing.

**Deviations from spec:**

None significant. The spec also mentioned wanting the proxy stack to be a "sibling Portainer stack or run on the host" — that text was already in the existing README's Network model section before this task touched it; no change needed.

**Smoke check:** `docker compose config` not run (Docker isn't installed on this machine). The yaml is hand-validated against the existing service blocks' shape — same env-var pattern, same volume-mount pattern, same healthcheck pattern. CI doesn't validate the compose file directly; Portainer will surface yaml errors on first deploy.

**Required for first deploy** (operator action):

1. Push the SPA's CI to `main` (1.9) so the `:main` image exists in the registry.
2. In `trm/deploy`, copy `spa-config.example.json` to `spa-config.json` and edit (`env: "stage"`, optional `googleMapsKey`, etc.).
3. Set Gitea repo secrets in `trm/spa` if not already present: `REGISTRY_USERNAME`, `REGISTRY_PASSWORD`, `PORTAINER_WEBHOOK_URL`.
4. Configure the reverse proxy with the four routes documented in the README's Network model section. The SPA needs to be reachable at the public domain root (everything under one origin).
5. Redeploy the stack via Portainer.
6. Walk step 8 of the first-deploy checklist — verify the SPA loads.

Landed in `trm/deploy` `68ab08f`.