Scaffold directus service planning structure
Initial commit. Establishes the `.planning/` tree mirroring processor's shape (ROADMAP.md as nav hub + per-phase folders with READMEs and granular task files). Six phases:

1. Slice 1 schema + deploy pipeline — what Rally Albania 2026 needs. Org catalog (orgs, users, vehicles, devices) + event participation (events, classes, entries, entry_crew, entry_devices). `db-init/` for the positions hypertable + faulty column. Snapshot/apply tooling. Gitea CI dry-run. Dogfood seed of Rally Albania 2026. Nine task files with full Goal / Deliverables / Specification / Acceptance criteria / Risks / Done sections.
2. Course definition — stages, segments, geofences, waypoints, SLZs. PostGIS extension introduced here.
3. Timing & penalty tables — co-developed with processor Phase 2. entry_segment_starts, entry_crossings, entry_penalties, stage_results, penalty_formulas.
4. Permissions & policies — Directus 11 dynamic-filter Policies per logical role. Deployment-time work, deferred to keep early phases focused on the data model.
5. Custom extensions — TypeScript hooks/endpoints implementing the cross-plane workflows the schema implies (faulty-flag → Redis stream emit, stage-open materializer, etc.).
6. Future / optional — retroactivity preview UI, command-routing Flows, audit trails, federation rule import. Not committed.

Non-negotiable design rules captured in ROADMAP.md: schema authority in Directus + snapshot-as-code + db-init for non-Directus DDL + sequential idempotent migrations + entrypoint apply order + no application logic in Flows + permissions deferred to Phase 4.

Architectural anchors point at the wiki at `../docs/wiki/` — the schema draft, the Rally Albania 2025 source page, plus the existing processor/postgres-timescaledb/live-channel pages. Each task file calls out the wiki refs an implementing agent should read first.

README.md mirrors the processor service README structure: quick start, local Docker test, prod/stage deployment notes, env vars, CI behavior.
# directus — Roadmap

The TRM business plane. Directus 11 instance owning the relational schema and exposing it via REST/GraphQL/WebSockets/Admin UI. Schema-as-code via `snapshots/` + `db-init/`, applied at container startup.

This file is the single navigation hub for all implementation planning. Each phase has its own folder with a README and granular task files. Update statuses here as work lands.

## Status legend

| Symbol | Meaning |
|--------|---------|
| ⬜ | Not started |
| 🟦 | Planned (designed, not coded) |
| 🟨 | In progress |
| 🟩 | Done |
| ⏸ | Paused / blocked |
| ❄️ | Frozen / future / optional |

## Architectural anchors

The service is specified by the wiki at `../docs/wiki/`. Implementing agents should read these pages before starting any task:

- **Architecture** — `docs/wiki/sources/gps-tracking-architecture.md`, `docs/wiki/concepts/plane-separation.md`, `docs/wiki/concepts/failure-domains.md`
- **This service** — `docs/wiki/entities/directus.md`
- **Schema design** — `docs/wiki/synthesis/directus-schema-draft.md`
- **Reference rulebook** — `docs/wiki/sources/rally-albania-regulations-2025.md` (canonical real-world fixture for federation rule shapes)
- **Downstream / sibling** — `docs/wiki/entities/postgres-timescaledb.md`, `docs/wiki/entities/processor.md`, `docs/wiki/concepts/live-channel-architecture.md`

## Non-negotiable design rules

These rules govern every task. Any deviation must be discussed and documented as a decision before code lands.

1. **Schema authority lives in Directus.** Collections, fields, and relations are defined through Directus and round-tripped via `directus schema snapshot`. The exception is the `positions` hypertable (owned by [[processor]]) and any other DDL Directus cannot represent (PostGIS-specific syntax, custom indexes, hypertable creation) — those live in `db-init/*.sql`.
2. **`db-init/*.sql` is sequential, idempotent, and guarded.** Files are numbered `NNN_name.sql`. Each is internally idempotent (`IF NOT EXISTS`, `ADD COLUMN IF NOT EXISTS`). The runner skips files already recorded in `migrations_applied`. Manual application of out-of-order files is forbidden.
3. **Apply order at boot:** db-init runner → `directus schema apply --yes` → `directus start`. Any failure halts boot. Implemented in `entrypoint.sh`.
4. **Snapshot lives in git, edited only via the admin UI.** Hand-editing `snapshots/schema.yaml` is forbidden — round-tripping through the UI keeps the format consistent with what `directus schema snapshot` produces.
5. **One PR = one snapshot regeneration.** PRs that change schema include the regenerated snapshot. CI verifies the snapshot matches what `directus schema snapshot` would produce against an applied database.
6. **No application logic in Flows.** Flows are reserved for declarative orchestration (notifications, simple field updates, webhook routing). Domain logic lives in `extensions/` (TypeScript hooks/endpoints) where it is reviewed, tested, and version-controlled like any other code.
7. **Permissions are a separate phase.** Adding a collection in Phases 1–3 does NOT come with its access policies — those land deliberately in Phase 4. Until then, collections are admin-only by default. This avoids premature commitment to role definitions before the data model is settled.
8. **Image starts from `directus/directus:11.x`.** No forking the upstream image. Customizations are limited to: bundled extensions under `/directus/extensions/`, snapshot/db-init artifacts under `/directus/snapshots/` and `/directus/db-init/`, and an entrypoint wrapper.

## Phases

### Phase 1 — Slice 1 schema + deploy pipeline

**Status:** ⬜ Not started
**Outcome:** A Directus instance with the org-level catalog (orgs, users, organization_users, vehicles, devices and their org junctions) and event-participation collections (events, classes, entries, entry_crew, entry_devices) live and snapshot-tracked. `db-init/` covers the TimescaleDB extension, the `positions` hypertable, and the `faulty` column. The image builds via Gitea Actions with a CI dry-run that catches snapshot drift before deploy. Rally Albania 2026 is registered as the first event in the admin UI to dogfood the registration workflow. **This is what Rally Albania 2026 needs.**

[**See `phase-1-slice-1-schema/README.md`**](./phase-1-slice-1-schema/README.md)

| # | Task | Status | Landed in |
|---|------|--------|-----------|
| 1.1 | [Project scaffold](./phase-1-slice-1-schema/01-project-scaffold.md) | ⬜ | — |
| 1.2 | [db-init runner script](./phase-1-slice-1-schema/02-db-init-runner.md) | ⬜ | — |
| 1.3 | [Initial migrations (extensions, positions hypertable, faulty column)](./phase-1-slice-1-schema/03-initial-migrations.md) | ⬜ | — |
| 1.4 | [Org-level catalog collections](./phase-1-slice-1-schema/04-org-catalog-collections.md) | ⬜ | — |
| 1.5 | [Event-participation collections](./phase-1-slice-1-schema/05-event-participation-collections.md) | ⬜ | — |
| 1.6 | [Schema snapshot/apply tooling](./phase-1-slice-1-schema/06-snapshot-tooling.md) | ⬜ | — |
| 1.7 | [Image build & entrypoint](./phase-1-slice-1-schema/07-image-and-dockerfile.md) | ⬜ | — |
| 1.8 | [Gitea CI dry-run workflow](./phase-1-slice-1-schema/08-gitea-ci-dryrun.md) | ⬜ | — |
| 1.9 | [Rally Albania 2026 dogfood seed](./phase-1-slice-1-schema/09-rally-albania-2026-seed.md) | ⬜ | — |

### Phase 2 — Course definition

**Status:** ⬜ Not started — depends on Phase 1
**Outcome:** Stages, segments, geofences (PostGIS polygons), waypoints, and speed_limit_zones as data-layer collections. Operators can define an event's full course before each stage. No processor logic yet — Phase 2 of [[processor]] consumes this data and writes crossings/penalties.

[**See `phase-2-course-definition/README.md`**](./phase-2-course-definition/README.md)

### Phase 3 — Timing & penalty tables

**Status:** ⬜ Not started — co-developed with processor Phase 2
**Outcome:** `entry_segment_starts`, `entry_crossings`, `entry_penalties`, `stage_results`, and `penalty_formulas` collections. The schema half of the paired schema/code work that produces real timing results. The penalty evaluator registry ships on the [[processor]] side; the rules' numeric values ship here.

[**See `phase-3-timing-and-penalty-tables/README.md`**](./phase-3-timing-and-penalty-tables/README.md)

### Phase 4 — Permissions & policies

**Status:** ⬜ Not started — depends on Phases 1–3
**Outcome:** Dynamic-filter Policies per logical role (org-admin, race-director, marshal, timekeeper, participant, …) covering each collection × action. Multi-tenant isolation enforced by Directus, not by application code. Deployment-time work, not architectural.

[**See `phase-4-permissions-and-policies/README.md`**](./phase-4-permissions-and-policies/README.md)

### Phase 5 — Custom extensions

**Status:** ⬜ Not started — depends on Phase 3
**Outcome:** TypeScript extensions implementing the cross-plane workflows the schema implies: faulty-flag → `recompute:requests` stream emit; `events.discipline` validation hook; stage-open trigger materializing `entry_segment_starts`; CP closing-time computation; entry registration "copy crew from previous entry" custom endpoint.

[**See `phase-5-custom-extensions/README.md`**](./phase-5-custom-extensions/README.md)

### Phase 6 — Future / optional

**Status:** ❄️ Not committed

[**See `phase-6-future/README.md`**](./phase-6-future/README.md)

Ideas on radar: retroactivity preview UI for geometry edits (Phase 2.5 of [[processor]] — needs a UI counterpart here), command-routing Flows ([[phase-2-commands]]), audit trail extensions, federation rule import tooling.

## Operating model

- **Implementation agent contract.** Each task file is self-sufficient: goal, deliverables, specification, acceptance criteria. An agent should be able to complete one task without reading the whole wiki — but should skim the wiki references at the top of the task before starting.
- **Sequence within a phase.** Task numbering reflects intended order. Soft dependencies are explicit in each task's "Depends on" field. Tasks with no dependencies on each other can be done in parallel.
- **Status updates.** When a task is started, change its row in this ROADMAP to 🟨 and update the task file's status badge accordingly. When done, set 🟩 and add a one-line note in the task file's "Done" section pointing at the merging commit/PR.
- **Drift control.** If implementation diverges from a task's spec, update the task file *before* the diverging code lands, with a note explaining why. Do not let plans rot — either fix the plan or fix the code.

---

# Task 1.1 — Project scaffold

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** None
**Wiki refs:** `docs/wiki/entities/directus.md`, `docs/wiki/synthesis/directus-schema-draft.md`

## Goal

Initialize the `directus/` service folder with the directory layout from the Phase 1 README, the config files needed for local Docker Compose dev, and a minimal `compose.dev.yaml` that boots Directus + TimescaleDB so the next tasks have something to iterate against. **No Directus collections are created in this task** — that starts in 1.4.

## Deliverables

- `directus/Dockerfile` — `FROM directus/directus:11.x`, copies `snapshots/`, `db-init/`, `scripts/`, `entrypoint.sh`, and `extensions/` into the image. Sets `ENTRYPOINT ["/directus/entrypoint.sh"]`. (Concrete entrypoint contents land in task 1.7; for now create a placeholder that just `exec`s the upstream entrypoint.)
- `directus/compose.dev.yaml` — two services:
  - `db`: `timescale/timescaledb-ha:pg16-latest` (or equivalent), volume-mapped Postgres data dir, healthcheck.
  - `directus`: built from the local `Dockerfile`, depends on `db` healthy, env vars for DB connection + `KEY` + `SECRET` + admin bootstrap, port `8055` exposed.
- `directus/package.json` — minimal, only for npm scripts (no runtime deps). Scripts:
  - `schema:snapshot` — `bash scripts/schema-snapshot.sh` (script body lands in 1.6)
  - `schema:apply` — `bash scripts/schema-apply.sh`
  - `db:init` — `bash scripts/apply-db-init.sh`
  - `dev` — `docker compose -f compose.dev.yaml up --build`
  - `dev:down` — `docker compose -f compose.dev.yaml down`
  - `dev:reset` — `docker compose -f compose.dev.yaml down -v && docker compose -f compose.dev.yaml up --build`
- `directus/.env.example` — full list of env vars with descriptions and defaults. Required: `DB_HOST`, `DB_PORT`, `DB_DATABASE`, `DB_USER`, `DB_PASSWORD`, `KEY`, `SECRET`, `ADMIN_EMAIL`, `ADMIN_PASSWORD`, `PUBLIC_URL`. Optional: `LOG_LEVEL`, `LOG_STYLE`, `CACHE_ENABLED`, `CORS_ENABLED`, `CORS_ORIGIN`, `WEBSOCKETS_ENABLED`.
- `directus/.gitignore` — `node_modules/`, `.env`, `.env.local`, `*.log`, `directus-data/` (the local Postgres volume mount, if used).
- `directus/.dockerignore` — `.git/`, `.planning/`, `node_modules/`, `.env*`, `*.md` except `README.md`, `compose.dev.yaml` (compose isn't part of the image), `directus-data/`.
- Empty placeholder directories with `.gitkeep`:
  - `snapshots/` (1.6 fills it)
  - `db-init/` (1.3 fills it)
  - `scripts/` (1.2 and 1.6 fill it)
  - `extensions/` (Phase 5)
- `directus/entrypoint.sh` — placeholder that simply `exec`s `/directus/cli.js start` (or whatever the upstream image's start command is). The real wrapper lands in 1.7.
- `directus/README.md` — already exists from this scaffold pass; verify it's accurate.
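
A minimal sketch of the `compose.dev.yaml` shape the deliverables describe, assuming the env-var names from `.env.example`. The image tag, data-dir path, and healthcheck timings are placeholders the implementer pins:

```yaml
# Sketch only — pin a concrete image tag and verify the data-dir path
# against the chosen timescaledb-ha tag before committing.
services:
  db:
    image: timescale/timescaledb-ha:pg16-latest
    environment:
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - directus-pg-data:/home/postgres/pgdata/data   # named volume per the volume policy
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_DATABASE}"]
      interval: 5s
      timeout: 3s
      retries: 12

  directus:
    build: .
    depends_on:
      db:
        condition: service_healthy
    env_file: .env          # gitignored; .env.example documents the keys
    ports:
      - "8055:8055"

volumes:
  directus-pg-data:
```

The named volume is what makes `dev:down` data-preserving and `dev:reset` (`down -v`) a clean wipe.
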

## Specification

- **Postgres image choice.** Pin to a TimescaleDB image that includes PostgreSQL 16. PostGIS will be installed via `db-init/` in Phase 2; the base image must support `CREATE EXTENSION postgis` (most TimescaleDB-HA images do). Document the pinned tag in `compose.dev.yaml`.
- **Volume policy in `compose.dev.yaml`.** Use a named volume (`directus-pg-data`) so `dev:down` preserves data and `dev:reset` wipes it.
- **No secrets committed.** `.env` is gitignored. `.env.example` carries placeholder values only.
- **No bind mounts of `snapshots/` or `db-init/` in `compose.dev.yaml`.** The image bakes them in. (The implementer can override with a bind mount during local iteration, but the committed file does not.)
- **Entrypoint is a placeholder in this task.** The real flow (db-init → schema apply → start) lands in 1.7. Keep the placeholder simple to unblock 1.4 testing.

## Acceptance criteria

- [ ] `pnpm install` succeeds (no runtime deps; lockfile generated).
- [ ] `docker compose -f compose.dev.yaml up --build` boots Directus successfully against a fresh TimescaleDB container.
- [ ] `http://localhost:8055` serves the Directus admin login.
- [ ] First-time bootstrap with `ADMIN_EMAIL` / `ADMIN_PASSWORD` from `.env` works.
- [ ] `pnpm dev:down` stops the stack and preserves the volume.
- [ ] `pnpm dev:reset` wipes the volume and reboots clean.
- [ ] No collection definitions exist yet — the Directus instance is empty by design.

## Risks / open questions

- **TimescaleDB-HA image PostGIS support.** Verify the chosen tag includes `postgis` extension binaries (or document the alternative — e.g. switching to `postgis/postgis:16-master` with a manual TimescaleDB install). Capture the answer in this task's Done section.
- **Directus 11.x patch version.** Pin a specific tag (e.g. `11.5.1`) rather than `11.x` for reproducible builds. Update the pin via PR when bumping.

## Done

(Fill in commit SHA + one-line note when this lands.)

---

# Task 1.2 — db-init runner script

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.1
**Wiki refs:** `docs/wiki/entities/postgres-timescaledb.md`, `docs/wiki/entities/directus.md` (Schema management section)

## Goal

Implement `scripts/apply-db-init.sh` — the boot-time runner that walks `db-init/*.sql` in numeric order, applies each via `psql` against the configured Postgres, and records successful applications in a `migrations_applied` guard table so re-runs are no-ops. This is the foundation Phase 1 (and every later phase) depends on for non-Directus DDL.

## Deliverables

- `scripts/apply-db-init.sh` — POSIX-compatible bash. Does the following, in order:
  1. **Wait for Postgres readiness.** Loop calling `pg_isready -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_DATABASE` until success or timeout (configurable, default 60 s). Exit non-zero on timeout with a clear log message.
  2. **Bootstrap the guard table.**
     ```sql
     CREATE TABLE IF NOT EXISTS migrations_applied (
       filename   TEXT PRIMARY KEY,
       applied_at TIMESTAMPTZ NOT NULL DEFAULT now(),
       checksum   TEXT NOT NULL
     );
     ```
  3. **Walk `db-init/*.sql` in numeric-prefix order** (sorted lexically; the `NNN_` prefix enforces order). For each file:
     - Compute `sha256sum` of the file contents → `checksum`.
     - Query `migrations_applied WHERE filename = <basename>`.
     - If a row exists and the checksums match → log `skip <filename>` and continue.
     - If a row exists and the checksums DON'T match → log an error and exit non-zero. (Migrations are append-only; never edit a file once applied.)
     - If no row exists → apply the file via `psql -v ON_ERROR_STOP=1 -f <path>`. On success, insert the row. On failure, exit non-zero with the SQL error.
  4. **Log a one-line summary** at the end: `db-init complete: <N> applied, <M> skipped`.
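
The walk-and-decide core of step 3 can be sketched standalone. This simplification records applied files in a plain-text ledger so the logic is testable without a database; the real script replaces the ledger reads/writes with `psql` queries against `migrations_applied`:

```shell
set -euo pipefail

# Simplified sketch of apply-db-init.sh's step-3 loop. The ledger file
# ("<filename> <checksum>" per line) stands in for the migrations_applied
# table; the real script uses psql for both the lookup and the insert.
plan_migrations() {
  local dir="$1" ledger="$2" f base sum prev
  for f in "$dir"/*.sql; do            # lexical glob order == NNN_ prefix order
    base=$(basename "$f")
    sum=$(sha256sum "$f" | cut -d' ' -f1)
    prev=$(awk -v k="$base" '$1==k{print $2}' "$ledger")
    if [ -z "$prev" ]; then
      echo "apply $base"               # real script: psql -v ON_ERROR_STOP=1 -1 -f "$f"
      echo "$base $sum" >> "$ledger"   # real script: INSERT INTO migrations_applied
    elif [ "$prev" = "$sum" ]; then
      echo "skip $base"
    else
      echo "checksum mismatch: $base" >&2
      return 2                         # migrations are append-only; never edit applied files
    fi
  done
}
```

Running it twice against the same ledger makes the second pass all-skip, which is exactly the idempotency the specification below demands.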

## Specification

- **Environment variables expected:** `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_DATABASE`. Plus `DB_INIT_DIR` (default `/directus/db-init`) and `DB_INIT_TIMEOUT_SECONDS` (default `60`).
- **Use `PGPASSWORD` for psql auth** — exported in the script before `psql` calls, never printed in logs.
- **Each migration runs in a single transaction** by virtue of `psql -v ON_ERROR_STOP=1 -1 -f`. The `-1` flag wraps the whole file in `BEGIN`/`COMMIT`. (Some statements like `CREATE EXTENSION` or `CREATE INDEX CONCURRENTLY` can't run in a transaction — those go in their own files without `-1` if needed. Document the exception inline.)
- **Numeric-prefix convention.** `001_`, `002_`, …, `999_`. Pad to 3 digits; that gives 999 slots, well beyond what we'll need.
- **Filename uniqueness.** Two files can't share a prefix. Lint check at script start: detect collisions and error out before applying anything.
- **Logging.** One line per file at INFO level. Failure logs include the psql exit code and the offending file. No SQL output to stdout (verbose `psql` output goes to stderr and is suppressed unless `DEBUG=1` is set).
- **Idempotency.** Running the script twice in a row → the second run does zero psql work beyond the readiness check + guard-table query.
- **Exit codes.** `0` = success, `1` = readiness timeout, `2` = checksum mismatch, `3` = psql error, `4` = filename collision.
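
The filename-uniqueness lint is a few lines; a sketch, assuming the 3-digit `NNN_` convention above:

```shell
set -euo pipefail

# Exit-code-4 lint: refuse to apply anything if two db-init files share a
# numeric prefix. The function name and shape are illustrative, not a spec.
check_prefix_collisions() {
  local dir="$1" dups
  dups=$(for f in "$dir"/*.sql; do basename "$f"; done | cut -c1-3 | sort | uniq -d)
  if [ -n "$dups" ]; then
    echo "duplicate db-init prefixes: $dups" >&2
    return 4
  fi
}
```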

## Acceptance criteria

- [ ] Script is executable (`chmod +x`), shebang is `#!/usr/bin/env bash`.
- [ ] `set -euo pipefail` at the top.
- [ ] Against a fresh Postgres with no `db-init/*.sql` files yet → the script creates the `migrations_applied` table, prints "0 applied, 0 skipped", exits 0.
- [ ] After 1.3 lands, the script applies all three migrations on first run (3 applied, 0 skipped) and no-ops on the second run (0 applied, 3 skipped).
- [ ] Manually editing an applied file → the next run exits 2 with a clear "checksum mismatch" error.
- [ ] Adding two files with the same numeric prefix → the script exits 4 before applying anything.
- [ ] Killing Postgres mid-run during file 002 → the script exits 3 with the psql error; on the next run, file 002 retries cleanly.

## Risks / open questions

- **`CREATE EXTENSION` inside a transaction.** Some Postgres extensions can be created inside a transaction (timescaledb, postgis), some cannot (pg_partman with parallel apply). For Phase 1 the only extension is timescaledb, which is fine. Re-evaluate per phase.
- **Concurrent boots.** If two Directus containers boot against the same DB at the same time (rolling deploy), both will try to apply migrations. The guard table's `PRIMARY KEY` on `filename` makes the insert race-safe, but two containers running the *same* `psql -f` at once is risky. Mitigation for Phase 1: assume single-replica boot during deploy; revisit in Phase 3+ if rolling deploy becomes a goal.

## Done

(Fill in commit SHA + one-line note when this lands.)

---

# Task 1.3 — Initial migrations

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.2
**Wiki refs:** `docs/wiki/entities/postgres-timescaledb.md`, `docs/wiki/concepts/position-record.md`, `docs/wiki/entities/processor.md` (Faulty position handling)

## Goal

Author the three Phase 1 migrations under `db-init/`: the TimescaleDB extension, the `positions` hypertable creation, and the `faulty` boolean column. Each is internally idempotent so that environments where they were applied ad-hoc (e.g. the existing stage) absorb them as no-ops.

## Deliverables

- `db-init/001_extensions.sql`:

  ```sql
  CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
  ```

- `db-init/002_positions_hypertable.sql`:

  ```sql
  CREATE TABLE IF NOT EXISTS positions (
    device_id  TEXT NOT NULL,
    ts         TIMESTAMPTZ NOT NULL,
    latitude   DOUBLE PRECISION NOT NULL,
    longitude  DOUBLE PRECISION NOT NULL,
    altitude   DOUBLE PRECISION,
    angle      SMALLINT,
    speed      SMALLINT,
    satellites SMALLINT,
    priority   SMALLINT,
    attributes JSONB NOT NULL DEFAULT '{}'::jsonb,
    PRIMARY KEY (device_id, ts)
  );

  -- Idempotent hypertable creation: if_not_exists => true
  SELECT create_hypertable(
    'positions', 'ts',
    chunk_time_interval => INTERVAL '7 days',
    if_not_exists => TRUE
  );

  CREATE INDEX IF NOT EXISTS positions_device_ts_idx
    ON positions (device_id, ts DESC);
  ```
- `db-init/003_faulty_column.sql`:

  ```sql
  ALTER TABLE positions
    ADD COLUMN IF NOT EXISTS faulty BOOLEAN NOT NULL DEFAULT FALSE;

  CREATE INDEX IF NOT EXISTS positions_faulty_idx
    ON positions (device_id, ts DESC) WHERE faulty = FALSE;
  ```

## Specification

- **Schema must match what `processor` writes.** Cross-check column names, types, and nullability against `docs/wiki/concepts/position-record.md` and the actual `processor` writer code (`processor/src/db/migrations/0001_positions.sql`). If any field differs, this task is **blocked** until [[directus-schema-draft]] and the processor's existing migration are reconciled — fix the divergence in the doc first, then this task.
- **`attributes` is `JSONB NOT NULL DEFAULT '{}'`** — never null, always an object. Keeps query plans simple.
- **`(device_id, ts)` primary key** — natural key, idempotent for the processor's `ON CONFLICT DO NOTHING` writer.
- **Chunk interval = 7 days.** Tunable later; 7 days is a reasonable default for hundreds of devices emitting at multi-Hz.
- **Faulty index is a partial index with `WHERE faulty = FALSE`.** Optimizes the [[processor]] hot-path read, which always filters faulty rows out. Operator queries that select faulty rows specifically use the broader `(device_id, ts DESC)` index.
- **`CASCADE` on `CREATE EXTENSION`** so that any dependent extensions install transparently. TimescaleDB has no required deps, so CASCADE is a no-op for now, but harmless and future-proof.
- **No `IF EXISTS` shortcuts that hide schema drift.** The migrations are idempotent at the *DDL* level (`IF NOT EXISTS`), but if a column type already differs from what the file declares, the migration silently passes — leaving stage in an inconsistent state. Add a final `DO $$ ... $$` block per file that asserts the table shape is what the migration intends:

  ```sql
  -- end of 002_positions_hypertable.sql
  DO $$ BEGIN
    IF NOT EXISTS (
      SELECT 1 FROM information_schema.columns
      WHERE table_name = 'positions' AND column_name = 'attributes' AND data_type = 'jsonb'
    ) THEN
      RAISE EXCEPTION 'positions.attributes is not JSONB — schema drift';
    END IF;
  END $$;
  ```

  One assertion per critical column shape. Catches the case where stage has the table but with subtly different types.

## Acceptance criteria

- [ ] Against a fresh Postgres + TimescaleDB image, `apply-db-init.sh` runs all three files cleanly.
- [ ] `\d positions` shows the expected columns (including `faulty`).
- [ ] `SELECT * FROM timescaledb_information.hypertables WHERE hypertable_name = 'positions';` returns one row.
- [ ] Both indexes (`positions_device_ts_idx`, `positions_faulty_idx`) exist (`\di+`).
- [ ] Re-running the script is a no-op (verified via the `migrations_applied` table contents).
- [ ] Against a Postgres that *already* has `positions` from a prior ad-hoc run, the migration absorbs it as a no-op (provided the existing schema matches; otherwise the assertion blocks deploy).
- [ ] Cross-checked against `processor/src/db/migrations/0001_positions.sql` — column names, types, and indexes match.

## Risks / open questions

- **Existing stage Postgres may have a slightly different schema.** Run `pg_dump --schema-only -t positions` on stage before this task lands and compare to the migration above. Reconcile differences in this file (or document them as known-divergent).
- **Hypertable may already exist.** `create_hypertable` with `if_not_exists` should accept it, but the chunk interval can't be retroactively changed via this call. If stage's chunk interval differs from `7 days`, that's a non-blocking divergence (functional, just suboptimal). Don't try to migrate it via SQL; leave it as a follow-up.

## Done

(Fill in commit SHA + one-line note when this lands.)

---

# Task 1.4 — Org-level catalog collections

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.3 (db-init applied so Directus can boot)
**Wiki refs:** `docs/wiki/synthesis/directus-schema-draft.md` (Org-level catalog section), `docs/wiki/sources/rally-albania-regulations-2025.md`

## Goal

Create the durable, org-level collections in the Directus admin UI: `organizations`, `users` (using Directus's built-in users with custom fields), `organization_users`, `vehicles`, `organization_vehicles`, `devices`, `organization_devices`. These are the resources that exist independently of any single event.

This task happens against a locally running Directus instance (from `pnpm dev`). The output is a snapshot YAML that captures the collection definitions; that snapshot lands in git in task 1.6.

## Deliverables

Create the following collections via the admin UI (Settings → Data Model). Field shapes per [[directus-schema-draft]]. Required fields are marked `*`.

### `organizations`

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | primary key, auto-generated |
| `name` * | string | display name |
| `slug` * | string | URL-friendly identifier, unique |
| `created_at` | timestamp | Directus standard |
| `updated_at` | timestamp | Directus standard |

Singleton: false. Sort: `name asc`.

### `users` (extending Directus built-in `directus_users`)

Use the built-in user collection. Add custom fields (Settings → Data Model → `directus_users`):

| Field | Type | Notes |
|---|---|---|
| `phone` | string | optional |
| `birth_date` | date | optional, used for age-derived class eligibility (M-5/M-6/M-7) |
| `nationality` | string | ISO 3166-1 alpha-2 country code |

Do NOT add an `organization_id` here — multi-tenancy goes through `organization_users`.

### `organization_users` (junction)

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `user_id` * | M2O → directus_users | |
| `role` * | string (dropdown) | enum: `org-admin`, `race-director`, `marshal`, `timekeeper`, `participant`, `viewer` |
| `joined_at` | timestamp | default `now()` |

Unique constraint: `(organization_id, user_id)` — a user can only have one row per org. Multiple roles per user in the same org are not supported yet (single role per tenant; revisit if needed).

### `vehicles`

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `make` * | string | "Toyota" |
| `model` * | string | "Land Cruiser 70" |
| `year` | integer | |
| `engine_cc` | integer | engine displacement, used for class assignment |
| `vin` | string | optional |
| `plate_number` | string | optional |
| `notes` | text | |

No `owner_user_id` / `owner_team_id` — vehicles are org-scoped only; ownership is not modeled (per [[directus-schema-draft]] decision).

### `organization_vehicles` (junction)

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `vehicle_id` * | M2O → vehicles | |
| `registered_at` | timestamp | default `now()` |

Unique constraint: `(organization_id, vehicle_id)`.

### `devices`

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `imei` * | string | unique, the canonical device identifier |
| `model` * | string | "FMB920", "FMB003", etc. — drives IO mapping in [[processor]] |
| `serial_number` | string | optional |
| `notes` | text | |

`imei` is UNIQUE — the same IMEI can't be registered twice anywhere in the system.

### `organization_devices` (junction)

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `device_id` * | M2O → devices | |
| `registered_at` | timestamp | default `now()` |

Unique constraint: `(organization_id, device_id)`.
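
The junction uniqueness rules above correspond to DDL along these lines — illustrative only, with hypothetical constraint names. In practice Directus generates the constraints from what is configured in the admin UI, and the snapshot is the authority (rule 1 forbids hand-applying DDL to Directus-owned tables):

```sql
-- Illustrative equivalent of the three junction unique constraints.
-- Constraint names are made up; Directus picks its own when it emits the DDL.
ALTER TABLE organization_users
  ADD CONSTRAINT organization_users_org_user_uq UNIQUE (organization_id, user_id);

ALTER TABLE organization_vehicles
  ADD CONSTRAINT organization_vehicles_org_vehicle_uq UNIQUE (organization_id, vehicle_id);

ALTER TABLE organization_devices
  ADD CONSTRAINT organization_devices_org_device_uq UNIQUE (organization_id, device_id);
```
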

## Specification

- **Use UUIDs for all primary keys** (Directus offers UUID v4 generation natively). Avoids leaking row counts and simplifies cross-env data sync.
- **All M2O relations have `ON DELETE` set to `RESTRICT`** by default — accidentally deleting an org or vehicle should require the operator to clean up dependents first. Override per relation only with an explicit reason.
- **No permission policies** — Phase 4 territory. Set every collection to "All Access" → none (admin only) for now.
- **No interface customization beyond defaults** — the SPA isn't using these collections directly yet, and admin UI usability for operators happens after Phase 4 (when policies define what they see).
- **Do not commit `.env` or any secrets.** This task only modifies the Directus schema, which is captured in the snapshot.
|
||||
|
||||
## Acceptance criteria
|
||||
|
||||
- [ ] All seven collections exist in the admin UI with the fields listed above.
|
||||
- [ ] Required fields are flagged required.
|
||||
- [ ] All unique constraints are enforced (test by trying to create a duplicate row — should error).
|
||||
- [ ] M2O relations are visible and clickable in the admin UI's relational fields.
|
||||
- [ ] No permission policies attached (admin-only).
|
||||
- [ ] Manually create one organization, one user, one organization_user row → the relationships work end-to-end.
|
||||
- [ ] `pnpm run schema:snapshot` produces a `snapshots/schema.yaml` with all seven collections present (verified by grep).
|
||||
- [ ] Booting a brand-new Directus instance (fresh DB, fresh containers) and running `directus schema apply --yes snapshots/schema.yaml` recreates the seven collections identically.
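The grep check in the criteria above can be scripted; a minimal sketch, assuming the snapshot lists each collection as a `collection: <name>` line (verify the exact line shape against a real `schema.yaml`):

```sh
#!/usr/bin/env bash
# Sketch: check that every expected collection made it into the snapshot.
# The `collection: <name>` line shape is an assumption about the Directus
# snapshot format, not a documented guarantee.
set -u

COLLECTIONS="organizations users organization_users vehicles organization_vehicles devices organization_devices"

check_snapshot() {
  local file="$1" missing=0 c
  for c in $COLLECTIONS; do
    if ! grep -q "collection: $c" "$file"; then
      echo "MISSING: $c"
      missing=1
    fi
  done
  return "$missing"
}
```

A non-zero exit with `MISSING:` lines makes the failure obvious in CI logs.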

## Risks / open questions

- **`directus_users` field additions** — Directus does allow adding fields to its built-in user collection, but the snapshot/apply behavior for those additions has historically been finicky across versions. Verify on the pinned Directus version that custom user fields round-trip cleanly via `schema snapshot` + `schema apply`. If they don't, fall back to a separate `user_profiles` collection M2O'd to `directus_users`.
- **Slug uniqueness on `organizations`** — Directus enforces this at the field level. Confirm it generates a unique-index DDL in the snapshot.

## Done

(Fill in commit SHA + one-line note when this lands.)

@@ -0,0 +1,125 @@

# Task 1.5 — Event-participation collections

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.4
**Wiki refs:** `docs/wiki/synthesis/directus-schema-draft.md` (Event-level participation section), `docs/wiki/sources/rally-albania-regulations-2025.md` (§2.2–§2.5 for class taxonomy reference)

## Goal

Create the per-event participation collections in the Directus admin UI: `events`, `classes`, `entries`, `entry_crew`, `entry_devices`. These are scoped to a single event and form the unit of timing.

## Deliverables

Create the following collections via the admin UI. Field shapes per [[directus-schema-draft]].

### `events`

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | event lives in exactly one org |
| `name` * | string | "Rally Albania 2026" |
| `slug` * | string | unique within an org |
| `discipline` * | string (dropdown) | enum: `rally`, `time-trial`, `regatta`, `trail-run`, `hike` — drives validation |
| `starts_at` * | timestamp | event window begin |
| `ends_at` * | timestamp | event window end |
| `regulation_doc_url` | string | external URL to the rulebook PDF/page (e.g. `wiki/sources/rally-albania-regulations-2025.md`) |
| `notes` | text | |

Unique constraint: `(organization_id, slug)`.

### `classes`

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `event_id` * | M2O → events | classes are per-event |
| `code` * | string | "M-1", "C-2", "S-1", … |
| `name` * | string | human-readable |
| `description` | text | eligibility rules in plain text |
| `sort_order` | integer | for display ordering |

Unique constraint: `(event_id, code)`.

### `entries`

The unit of timing. One row per vehicle (or solo participant) registered for an event.

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `event_id` * | M2O → events | |
| `vehicle_id` | M2O → vehicles | nullable — null for foot races (trail-run, hike) |
| `team_id` | M2O → teams | nullable — no `teams` collection exists in Phase 1, so leave the field nullable and unwired (per the schema draft, teams are an org-level catalog item; Phase 2 territory if needed) |
| `class_id` * | M2O → classes | required: every entry has a class |
| `race_number` * | integer | per Rally Albania §5: 1–199 moto, 2xx quad, 3xx car, 4xx SSV |
| `status` * | string (dropdown) | enum: `registered`, `confirmed`, `started`, `finished`, `dnf`, `dns`, `dq`, `withdrawn` |
| `registered_at` | timestamp | default `now()` |
| `notes` | text | |

Unique constraint: `(event_id, race_number)` — no two entries share a race number in the same event.

> **Status enum semantics** (from the schema draft):
> - `registered` — paid, not yet confirmed at scrutineering
> - `confirmed` — passed scrutineering, eligible to start
> - `started` — has begun the first stage
> - `finished` — completed all stages within MTA
> - `dnf` — did not finish (started but couldn't complete)
> - `dns` — did not start (confirmed but absent at start)
> - `dq` — disqualified (rule violation, see Rally Albania §12.13)
> - `withdrawn` — voluntary withdrawal (Rally Albania §12.15 — MTA penalty for remaining stages)

> **`teams` deferred:** Phase 1 doesn't define a `teams` collection. The `team_id` field on `entries` is nullable and the FK target is intentionally unwired in Phase 1. Drop the field entirely if it complicates the snapshot — re-add in Phase 2 if a real team relationship is needed.

### `entry_crew` (junction)

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `entry_id` * | M2O → entries | |
| `user_id` * | M2O → directus_users | |
| `role` * | string (dropdown) | enum: `pilot`, `co-pilot`, `navigator`, `mechanic`, `rider`, `runner`, `hiker` |

Unique constraint: `(entry_id, user_id)` — a user can't appear twice in the same entry's crew.

### `entry_devices` (junction)

| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `entry_id` * | M2O → entries | |
| `device_id` * | M2O → devices | |
| `assigned_user_id` | M2O → directus_users | nullable. null = vehicle-mounted; set = body-worn by this crew member |
| `mount_position` | string | optional free text: "panic_button_pilot", "hardwired_dash", "backup_chassis" |

Unique constraint: `(entry_id, device_id)` — a device can't appear twice in the same entry.

## Specification

- **All M2O `ON DELETE`:** `RESTRICT` by default. Cascading from event → entries is appealing but risky for audit/historical purposes — leave `RESTRICT` and require explicit operator action.
- **`status` enum order matters for display.** Set the dropdown's option order to match the lifecycle: `registered` → `confirmed` → `started` → `finished` → `dnf` → `dns` → `dq` → `withdrawn`.
- **`race_number` is an integer**, not a string. Plate background color (white/yellow/green/red per Rally Albania §5.5) is derivable from the number range; it is not a stored field.
- **No permission policies yet** — Phase 4 territory. Admin-only access.
- **No `team_id` field if it adds complexity** — the schema draft leaves teams as an org-level catalog item that's not yet defined. Phase 1 ships entries without team support.

## Acceptance criteria

- [ ] All five collections exist in the admin UI with the fields listed above.
- [ ] Required fields are flagged required.
- [ ] Unique constraints are enforced.
- [ ] M2O relations work in the admin UI.
- [ ] `entries.status` dropdown shows all eight values in lifecycle order.
- [ ] Manually walk through a registration: create an event → create classes → create one entry referencing a vehicle, class, and race number → add two `entry_crew` rows (pilot + co-pilot) → add three `entry_devices` rows (one with `assigned_user_id` set, two with null). All FKs resolve.
- [ ] Try to create a second entry with the same `race_number` in the same event → error.
- [ ] `pnpm run schema:snapshot` produces a snapshot containing the new collections.
- [ ] Cross-checked against the schema draft: every field that should exist does, every nullable field is nullable, every unique constraint is in place.

## Risks / open questions

- **`assigned_user_id` on `entry_devices`** — Directus represents this as an M2O. Verify the snapshot encodes its nullable / non-required nature correctly.
- **Cascading deletes vs RESTRICT** — RESTRICT is the safe default but may make admin UX painful (you can't delete an event without first deleting all its entries, etc.). Phase 4 / Phase 5 may revisit this with custom Flows that walk the dependency graph.

## Done

(Fill in commit SHA + one-line note when this lands.)

@@ -0,0 +1,59 @@

# Task 1.6 — Schema snapshot/apply tooling

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.4, 1.5 (collections must exist before there's anything to snapshot)
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)

## Goal

Wrap Directus's native `schema snapshot` and `schema apply` commands in repo-local scripts and npm aliases so the snapshot/apply lifecycle is one command, ergonomic for daily dev, and reliable in the entrypoint and CI. Commit the first generated `snapshots/schema.yaml` containing the 12 Phase 1 collections.

## Deliverables

- `scripts/schema-snapshot.sh`:
  - Runs against a *running* Directus container (the local `directus` service from compose.dev.yaml).
  - Invokes `directus schema snapshot --yes /tmp/snapshot.yaml` inside the container.
  - Copies the generated snapshot out to `./snapshots/schema.yaml`.
  - Exits non-zero if Directus isn't reachable or the snapshot command fails.
  - One-line success log: `snapshot written to snapshots/schema.yaml (<size> bytes)`.
- `scripts/schema-apply.sh`:
  - Used at boot (entrypoint) and in the CI dry-run.
  - Invokes `directus schema apply --yes /directus/snapshots/schema.yaml`.
  - Logs the diff before applying (`directus schema apply --dry-run`, then the real apply).
  - Exits non-zero on failure.
- `package.json` scripts (already stubbed in 1.1):
  - `schema:snapshot` → runs the snapshot script (dev-time only).
  - `schema:apply` → runs the apply script (used by the entrypoint; also useful for a local "apply this committed snapshot to my running dev DB").
  - `schema:diff` → wraps `directus schema apply --dry-run` to preview pending changes without applying.
- `snapshots/schema.yaml` — first committed snapshot, containing the 12 Phase 1 collections from tasks 1.4 + 1.5.
- `snapshots/README.md` — short note explaining: this directory is **generated**; edit Directus via the admin UI and re-snapshot, do not hand-edit the YAML.

## Specification

- **The snapshot script runs against a running container, not via Node.** The `directus` CLI requires the same env (DB connection, KEY, SECRET) the server uses; the easiest approach is `docker compose exec directus directus schema snapshot ...`. Document this assumption — the script fails clearly if no compose stack is running.
- **The apply script is environment-agnostic.** It runs inside the image at boot (where Directus is in PATH) and in CI (where it runs against a throwaway Postgres). Don't assume compose; the script just calls `directus schema apply` with paths injected via env or arguments.
- **Snapshot format.** Directus 11 snapshots are YAML by default. Pin the format explicitly via the `--format=yaml` flag if available — otherwise rely on the default. Verify the chosen Directus 11 patch version's snapshot format is stable across patch bumps.
- **Diff before apply, always.** The apply script logs `directus schema apply --dry-run` output before the real apply. This makes container boot logs self-explanatory: "applying these changes". On a clean re-deploy, the diff is empty.
- **Snapshot regeneration is a manual, conscious action.** Don't auto-regenerate on file save. The dev edits the schema in the admin UI, decides the change is good, then runs `pnpm run schema:snapshot` to capture it.
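The diff-then-apply flow can be sketched as below. This is an illustration, not the final script: the injectable `directus_cmd` parameter exists only so the flow can be exercised without a live instance; the real script would invoke the `directus` binary directly.

```sh
#!/usr/bin/env bash
# Sketch of scripts/schema-apply.sh: log the pending diff, then apply.
set -euo pipefail

apply_schema() {
  # $1: the directus command (normally "directus"), $2: snapshot path.
  local directus_cmd="$1"
  local snapshot="${2:-/directus/snapshots/schema.yaml}"

  echo "[schema-apply] pending changes (dry-run):"
  $directus_cmd schema apply --dry-run "$snapshot"

  echo "[schema-apply] applying snapshot"
  $directus_cmd schema apply --yes "$snapshot"
  echo "[schema-apply] done"
}

# Real entrypoint usage would be roughly:
#   apply_schema directus /directus/snapshots/schema.yaml
```

Because the dry-run output lands in the log before the real apply, a failing boot shows exactly which pending change broke.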

## Acceptance criteria

- [ ] With Phase 1's 12 collections in the running dev Directus, `pnpm run schema:snapshot` produces a `snapshots/schema.yaml` file.
- [ ] `snapshots/schema.yaml` contains all 12 collections (verified by grep for `collection: organizations`, `collection: events`, etc.).
- [ ] The snapshot is < 200 KB (sanity check — much larger suggests something is wrong, like committed data).
- [ ] `pnpm run schema:diff` against the same running Directus shows "no changes".
- [ ] Wipe the Directus DB (`pnpm dev:reset`) → boot fresh → `pnpm run schema:apply` recreates the 12 collections from the committed snapshot.
- [ ] Snapshot a second time after no admin UI changes → the result is byte-identical to the first.
- [ ] Make a trivial admin UI change (add a description to a field) → snapshot → diff against committed → exactly that change shows up.
- [ ] `snapshots/schema.yaml` is committed; `snapshots/README.md` warns against hand-editing.
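The byte-identical check above is easy to automate; a minimal sketch (file paths are placeholders for the committed snapshot and a fresh re-snapshot):

```sh
#!/usr/bin/env bash
# Sketch: verify a re-snapshot is byte-identical to the committed one,
# and show the first lines of drift when it isn't.
set -uo pipefail

snapshot_unchanged() {
  local committed="$1" fresh="$2"
  if cmp -s "$committed" "$fresh"; then
    echo "snapshot is byte-identical"
  else
    echo "snapshot drifted:" >&2
    diff -u "$committed" "$fresh" | head -n 20 >&2 || true
    return 1
  fi
}
```

If the determinism risk below materializes, this check is where a key-sort normalization step would slot in before the comparison.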

## Risks / open questions

- **Snapshot determinism across runs.** Some Directus versions have re-ordered keys in their snapshot output between identical runs, producing noisy diffs. If this happens on the pinned version, document it as a known issue and consider a post-snapshot `yq` key-sort normalization step.
- **Permission policies in the snapshot.** Phase 1 has no policies set; verify those sections of the snapshot are empty. When Phase 4 adds policies, re-evaluate whether snapshot/apply round-trips them faithfully.
- **`directus_users` custom-field round-trip.** Already flagged in task 1.4. If those fields don't round-trip, the workaround (a separate `user_profiles` collection) needs to be applied before this snapshot lands.

## Done

(Fill in commit SHA + one-line note when this lands.)

@@ -0,0 +1,85 @@

# Task 1.7 — Image build & entrypoint

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.2, 1.3, 1.6 (need the runner, migrations, and snapshot tooling all in place)
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)

## Goal

Build a production-ready Directus image that bakes in the snapshot, db-init migrations, extensions directory, and entrypoint script. Replace the placeholder entrypoint from 1.1 with the real boot sequence: db-init → schema apply → directus start.

## Deliverables

- `Dockerfile` (replacing the placeholder from 1.1):

```dockerfile
# Pin a specific patch version (Dockerfile comments must start the line)
FROM directus/directus:11.5.1

USER root
RUN apk add --no-cache postgresql16-client bash coreutils
USER node

COPY --chown=node:node snapshots/ /directus/snapshots/
COPY --chown=node:node db-init/ /directus/db-init/
COPY --chown=node:node extensions/ /directus/extensions/
COPY --chown=node:node scripts/ /directus/scripts/
COPY --chown=node:node entrypoint.sh /directus/entrypoint.sh
RUN chmod +x /directus/entrypoint.sh /directus/scripts/*.sh

ENTRYPOINT ["/directus/entrypoint.sh"]
```

  Adjust `apk` / `apt-get` based on the upstream image's distro. `postgresql-client` is required for `psql` in the db-init runner.
- `entrypoint.sh`:

```sh
#!/usr/bin/env bash
set -euo pipefail

echo "[entrypoint] running db-init"
/directus/scripts/apply-db-init.sh

echo "[entrypoint] applying Directus schema snapshot"
/directus/scripts/schema-apply.sh

echo "[entrypoint] starting Directus"
exec /directus/cli.js start
```

  (Verify `/directus/cli.js start` is the correct upstream command for the pinned version. Some versions use `node /directus/server.js`.)
- Update `compose.dev.yaml` so the dev image uses the same Dockerfile (no special path in dev). The local image has identical boot semantics to prod — only env vars differ.

## Specification

- **Pin the Directus version exactly** (e.g. `11.5.1`, not `11`). Version bumps land via PR.
- **Layer ordering for cache friendliness.**
  1. `FROM` + apk install (rarely changes).
  2. `COPY scripts/` (changes occasionally).
  3. `COPY entrypoint.sh` (rarely changes).
  4. `COPY db-init/` (changes per migration PR).
  5. `COPY snapshots/` (changes per schema PR — most volatile).
  6. `COPY extensions/` (Phase 5+).

  Putting the most-changed layer last maximizes cache reuse for the rest.
- **`USER node`** for runtime (matches the upstream image's non-root convention).
- **Health check.** Add a `HEALTHCHECK` instruction calling `wget -qO- http://localhost:8055/server/ping` (or the upstream's health endpoint), with a sensible interval/timeout. Useful in compose and Portainer.
- **Entrypoint failure modes.** If db-init fails → exit; the container restarts (Docker will retry). If schema apply fails → same. Both failures should produce clear log lines so an operator looking at Portainer logs can diagnose them.
- **No `EXPOSE` change** — the upstream image already exposes `8055`.
- **No `ENV` overrides** for Directus runtime config in the Dockerfile — that's the deployer's concern via env vars at runtime.
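The failure-mode requirement can be sketched as a small step wrapper inside entrypoint.sh. The `run_step` helper is an illustration, not existing code:

```sh
#!/usr/bin/env bash
# Sketch: run each boot step with an unambiguous log line on failure,
# so Portainer logs show exactly which stage killed the container.
set -uo pipefail

run_step() {
  local label="$1" rc=0; shift
  echo "[entrypoint] ${label}"
  "$@" || rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "[entrypoint] FATAL: ${label} failed (exit ${rc})" >&2
    exit "$rc"
  fi
}

# The boot sequence from the deliverables would become:
#   run_step "running db-init" /directus/scripts/apply-db-init.sh
#   run_step "applying Directus schema snapshot" /directus/scripts/schema-apply.sh
```

Exiting with the failing step's own status keeps Docker's restart/backoff behavior intact while the FATAL line pinpoints the stage.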

## Acceptance criteria

- [ ] `docker build -t trm-directus:dev .` succeeds.
- [ ] Image size is reasonable (< 600 MB; upstream image + tooling).
- [ ] Booting against a fresh Postgres: db-init applies all three migrations, schema apply creates the 12 collections, Directus starts and serves on `:8055`.
- [ ] Re-booting against the same Postgres (warm DB): db-init reports "0 applied, 3 skipped", schema apply reports "no changes", Directus starts.
- [ ] Killing Postgres mid-db-init → the container exits non-zero with a clear error in the logs.
- [ ] Killing Postgres mid-schema-apply → the container exits non-zero with a clear error in the logs.
- [ ] HEALTHCHECK reports "healthy" once Directus is serving.
- [ ] The `compose.dev.yaml` `directus` service uses the local Dockerfile build and works end-to-end (`pnpm dev:reset` → fresh boot → admin UI loads).
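The "0 applied, 3 skipped" behavior comes from the db-init runner's guard. A file-based simulation of that guard logic follows; the real runner records applied migrations in the `migrations_applied` Postgres table, and the state file here is purely a stand-in for illustration:

```sh
#!/usr/bin/env bash
# Sketch: idempotent migration loop. Each migration runs once; re-runs
# count it as skipped. A state file stands in for migrations_applied.
set -uo pipefail

run_migrations() {
  local dir="$1" state="$2" applied=0 skipped=0 m name
  touch "$state"
  for m in "$dir"/*.sql; do
    [ -e "$m" ] || continue
    name=$(basename "$m")
    if grep -qxF "$name" "$state"; then
      skipped=$((skipped + 1))
    else
      # Real runner: psql -v ON_ERROR_STOP=1 -f "$m", then record in the DB.
      echo "$name" >> "$state"
      applied=$((applied + 1))
    fi
  done
  echo "[db-init] ${applied} applied, ${skipped} skipped"
}
```

Lexical glob order doubles as migration order, which is why the files carry numeric prefixes.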

## Risks / open questions

- **Upstream image distro.** Directus's official image has used both Alpine- and Debian-based bases over the years. Verify the current 11.x base and adjust `apk` vs `apt-get` accordingly.
- **`/directus/cli.js start` path.** Confirm against the upstream Dockerfile / docs for the pinned version. Bake the right command into entrypoint.sh.
- **Permissions on `/directus/snapshots/` etc.** If the upstream user is `node` (uid 1000), the `--chown=node:node` flag is right. Verify with `docker run --rm trm-directus:dev id`.

## Done

(Fill in commit SHA + one-line note when this lands.)

@@ -0,0 +1,129 @@

# Task 1.8 — Gitea CI dry-run workflow

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.7
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)

## Goal

Build a Gitea Actions workflow that, on push to `main` (when relevant paths change), builds the image, spins up a throwaway Postgres + TimescaleDB in CI, runs the entrypoint flow as a **dry-run** to catch snapshot/migration breakage, and only publishes the image to the registry if the dry-run succeeds. Mirrors the `processor` and `tcp-ingestion` workflow shape.

## Deliverables

- `.gitea/workflows/build.yml`:

```yaml
name: Build directus image

on:
  push:
    branches: [main]
    paths:
      - 'snapshots/**'
      - 'db-init/**'
      - 'extensions/**'
      - 'scripts/**'
      - 'entrypoint.sh'
      - 'Dockerfile'
      - '.gitea/workflows/build.yml'
  workflow_dispatch:

jobs:
  build-and-publish:
    runs-on: ubuntu-22.04
    services:
      postgres:
        image: timescale/timescaledb-ha:pg16-latest
        env:
          POSTGRES_USER: directus
          POSTGRES_PASSWORD: directus
          POSTGRES_DB: directus
        ports: ['5432:5432']
        options: >-
          --health-cmd "pg_isready -U directus"
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10

    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t trm-directus:ci .

      - name: Dry-run boot against throwaway Postgres
        env:
          # localhost, not the service name: the dry-run container below
          # joins the host network and reaches the published 5432 port.
          DB_HOST: localhost
          DB_PORT: 5432
          DB_USER: directus
          DB_PASSWORD: directus
          DB_DATABASE: directus
          KEY: ci-key-not-secret
          SECRET: ci-secret-not-secret
          ADMIN_EMAIL: ci@example.com
          ADMIN_PASSWORD: ci-password-not-secret
          PUBLIC_URL: http://localhost:8055
        run: |
          docker run --rm \
            -e DB_CLIENT=pg \
            -e DB_HOST=$DB_HOST -e DB_PORT=$DB_PORT \
            -e DB_USER=$DB_USER -e DB_PASSWORD=$DB_PASSWORD -e DB_DATABASE=$DB_DATABASE \
            -e KEY=$KEY -e SECRET=$SECRET \
            -e ADMIN_EMAIL=$ADMIN_EMAIL -e ADMIN_PASSWORD=$ADMIN_PASSWORD \
            -e PUBLIC_URL=$PUBLIC_URL \
            --network host \
            --entrypoint bash \
            trm-directus:ci \
            -c '/directus/scripts/apply-db-init.sh && /directus/scripts/schema-apply.sh && echo "dry-run ok"'

      - name: Login to Gitea registry
        uses: docker/login-action@v3
        with:
          registry: git.dev.microservices.al
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Tag and push
        run: |
          docker tag trm-directus:ci git.dev.microservices.al/trm/directus:main
          docker tag trm-directus:ci git.dev.microservices.al/trm/directus:${{ github.sha }}
          docker push git.dev.microservices.al/trm/directus:main
          docker push git.dev.microservices.al/trm/directus:${{ github.sha }}

      - name: Trigger Portainer redeploy (optional)
        if: ${{ secrets.PORTAINER_WEBHOOK_URL != '' }}
        run: curl -X POST "${{ secrets.PORTAINER_WEBHOOK_URL }}"
```

## Specification

- **The dry-run runs the entrypoint scripts only**, not `directus start`. Starting the server and waiting for it to serve is slow and unnecessary — the goal is to catch DDL / snapshot-apply errors. Override the `ENTRYPOINT` and run the two scripts directly.
- **The service container is the throwaway Postgres.** Use the `services:` block in Gitea Actions (syntax-compatible with GitHub Actions) and the pinned TimescaleDB image; a mismatch with prod hides bugs.
- **Path filter on `on.push.paths`** keeps CI quiet for unrelated repo changes (docs-only commits, etc.). Mirrors the processor workflow.
- **Two image tags published:** `:main` (always points at the latest main) and `:<sha>` (a specific commit, immutable). The deploy stack can pin to either.
- **The Portainer webhook is optional** (gated by secret presence). If unset, no auto-deploy.
- **No integration tests in CI for Phase 1.** The dry-run boot *is* the integration test — it proves the snapshot + db-init combination works against a fresh Postgres. Phase 5+ adds extension-specific tests as those land.
- **Required Gitea secrets:**
  - `REGISTRY_USERNAME`, `REGISTRY_PASSWORD` — for the image push.
  - `PORTAINER_WEBHOOK_URL` — optional, for auto-deploy.

## Acceptance criteria

- [ ] The workflow file is committed at `.gitea/workflows/build.yml`.
- [ ] The first push to `main` after this lands triggers the workflow.
- [ ] Workflow steps run in order: checkout → build → dry-run boot → registry login → tag/push → optional Portainer ping.
- [ ] The dry-run step exits 0 with logs showing db-init completion and a successful schema apply. (Every CI run starts from a fresh Postgres, so the apply always runs from scratch in CI; the "no changes" path is exercised by warm re-boots in task 1.7 — verify both.)
- [ ] Intentionally break the snapshot (manually edit `snapshots/schema.yaml` into malformed YAML) → the workflow fails at the dry-run step → the image is NOT pushed.
- [ ] Intentionally break a migration (introduce a SQL syntax error in `db-init/`) → the workflow fails at the dry-run step → the image is NOT pushed.
- [ ] Push a docs-only change → the workflow does NOT trigger.
- [ ] The image is pushed to the registry under `git.dev.microservices.al/trm/directus:main` and `:<sha>`.
- [ ] The Portainer webhook fires if configured.

## Risks / open questions

- **Gitea Actions `services:` syntax compatibility.** Gitea's runner is mostly GitHub-Actions-compatible but has historically had quirks with the `services:` block (especially around image pulls from private registries). If the throwaway Postgres can't be brought up via `services:`, fall back to a `docker run` step that backgrounds the container, plus a wait-loop on `pg_isready`. Document the chosen approach.
- **Network access between the job container and the service container.** `--network host` is the simplest solution if Gitea's runner allows it. If not, use the Docker network created by the runner and reference the service by name (`postgres:5432`).
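The `pg_isready` wait-loop fallback mentioned above can be sketched as a generic retry helper (names and timings are illustrative):

```sh
#!/usr/bin/env bash
# Sketch: poll a readiness command until it succeeds or attempts run out.
set -uo pipefail

wait_for() {
  local attempts="$1" delay="$2"; shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@" > /dev/null 2>&1; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up after ${attempts} attempts" >&2
  return 1
}

# CI fallback usage would be roughly:
#   docker run -d --name ci-postgres ... timescale/timescaledb-ha:pg16-latest
#   wait_for 30 2 pg_isready -h localhost -U directus
```

Failing non-zero after the attempt budget lets the workflow step fail cleanly instead of hanging.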

## Done

(Fill in commit SHA + one-line note when this lands.)

@@ -0,0 +1,106 @@

# Task 1.9 — Rally Albania 2026 dogfood seed

**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.5, 1.7 (need the event-participation collections live; need a deployable image to run them on stage)
**Wiki refs:** `docs/wiki/sources/rally-albania-regulations-2025.md` (§2.2–§2.5 class catalog, §1 event metadata), memory `project_rally_albania_2026.md`

## Goal

Seed the stage instance with real data: the "Motorsport Club Albania" organization, the "Rally Albania 2026" event, the full class catalog from the regulations, and at least one fully-registered test entry. Walk the registration workflow end-to-end through the admin UI to confirm the slice-1 schema actually supports a real event registration without surprises. **This is the dogfood gate.**

## Deliverables

Done via the admin UI on the stage Directus instance (no code changes — this task is operational, not a build). Capture screenshots / brief notes in this task's "Done" section.

### 1. Organization

| Field | Value |
|---|---|
| `name` | Motorsport Club Albania |
| `slug` | msc-albania |

### 2. Event

| Field | Value |
|---|---|
| `organization_id` | (the org from step 1) |
| `name` | Rally Albania 2026 |
| `slug` | rally-albania-2026 |
| `discipline` | rally |
| `starts_at` | 2026-06-06T00:00:00Z |
| `ends_at` | 2026-06-13T23:59:59Z |
| `regulation_doc_url` | https://www.rallyalbania.org or the wiki source page URL |

### 3. Class catalog (per Rally Albania §2.2–§2.5)

Create one row per class. `event_id` = the event from step 2.

| code | name | sort_order |
|---|---|---|
| M-1 | MOTO Under 450cc | 10 |
| M-2 | MOTO 450–600cc | 20 |
| M-3 | MOTO over 600cc, single cylinder | 30 |
| M-4 | MOTO over 600cc, bi-cylinder | 40 |
| M-5 | MOTO Senior, under 450cc | 50 |
| M-6 | MOTO Senior, over 450cc | 60 |
| M-7 | MOTO Veteran (any bike) | 70 |
| M-8 | MOTO Female driver | 80 |
| Q-1 | QUAD 2WD | 90 |
| Q-2 | QUAD 4WD | 100 |
| Q-3 | QUAD Female pilot | 110 |
| C-1 | CAR Modified | 120 |
| C-2 | CAR Production | 130 |
| C-A | CAR Standard Automobiles | 140 |
| C-3 | CAR All-female team | 150 |
| S-1 | SSV Single pilot | 160 |
| S-2 | SSV Two-driver team | 170 |
| S-3 | SSV All-female team | 180 |

> **Numbering note:** The regulations doc uses `M-7` for both Veteran and Female driver — an apparent typo. This seed renames the Female driver class to **M-8** to disambiguate. Flag this in the post-event review with the organizer; if they confirm M-8 is wrong, rename later.
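If entering the 18 classes by hand gets tedious, the catalog can also be kept as plain data and pushed in a loop. The sketch below only builds and counts the seed lines; the commented push loop against the Directus items API is hypothetical, so confirm the route and auth against the running instance before using it:

```sh
#!/usr/bin/env bash
# Sketch: class catalog as data, one "code|name|sort_order" line per class.
set -uo pipefail

CLASS_CATALOG='M-1|MOTO Under 450cc|10
M-2|MOTO 450–600cc|20
M-3|MOTO over 600cc, single cylinder|30
M-4|MOTO over 600cc, bi-cylinder|40
M-5|MOTO Senior, under 450cc|50
M-6|MOTO Senior, over 450cc|60
M-7|MOTO Veteran (any bike)|70
M-8|MOTO Female driver|80
Q-1|QUAD 2WD|90
Q-2|QUAD 4WD|100
Q-3|QUAD Female pilot|110
C-1|CAR Modified|120
C-2|CAR Production|130
C-A|CAR Standard Automobiles|140
C-3|CAR All-female team|150
S-1|SSV Single pilot|160
S-2|SSV Two-driver team|170
S-3|SSV All-female team|180'

class_count() {
  printf '%s\n' "$CLASS_CATALOG" | wc -l | tr -d ' '
}

# Hypothetical push loop (verify route and auth first):
#   while IFS='|' read -r code name sort; do
#     curl -s -X POST "$DIRECTUS_URL/items/classes" ...
#   done <<< "$CLASS_CATALOG"
```

Keeping the catalog as data also makes the "all 18 class rows" acceptance check trivial to assert.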

### 4. Test entry — full registration walkthrough

Pick (or create) a test user in `directus_users`, a test vehicle in `vehicles`, and two test devices in `devices`. Register them all in the event:

1. Add the test user to `organization_users` with role `participant`.
2. Add the test vehicle to `organization_vehicles`.
3. Add the test devices to `organization_devices`.
4. Create an `entries` row: `event_id` = Rally Albania 2026, `vehicle_id` = the test vehicle, `class_id` = M-1 (or whatever fits the test vehicle), `race_number` = 1, `status` = `registered`.
5. Create one `entry_crew` row: `entry_id` = the entry, `user_id` = the test user, `role` = `pilot`.
6. Create two `entry_devices` rows: one with `assigned_user_id` = the test user (panic button), one with `assigned_user_id` = null (vehicle-mounted). `mount_position` filled in for both.
7. Verify the live map (Phase 1 of [[processor]]) still renders the test devices' positions correctly under the new entry-aware schema. (If the SPA isn't yet wired to look up entries, that's fine — verify in the DB / processor logs that the device IDs match what the entry registered.)

### 5. Post-walkthrough checklist

In this task's "Done" section, capture:

- [ ] Any field that was awkward to enter via the admin UI (interface improvements for Phase 5 hooks).
- [ ] Any constraint that fired unexpectedly (data model bugs to fix in a follow-up).
- [ ] Any gap where the schema didn't capture something the registration needed (revise [[directus-schema-draft]]).
- [ ] How long the full registration took — a realistic baseline for "register N entries" planning.

## Specification

- **Stage env, not local.** This task verifies the deploy pipeline end-to-end: the image was built by the task 1.8 CI, pulled by Portainer, and booted with snapshot + db-init applied; then the operator interacts with the live admin UI.
- **Real-ish data.** Use plausible names / IMEIs / VINs — not "test1", "foo", "bar". The data will be reviewed by the organizer eventually; quality matters.
- **One full crew, not many.** A single pilot entry is enough to dogfood. Save the multi-crew rally car case for a Phase 2 dogfood.
- **No SPA work in this task.** The registration is admin-UI only. SPA-side work (operator-friendly registration UX) is a separate workstream not blocked on Phase 1.

## Acceptance criteria

- [ ] All 18 class rows are visible in the admin UI under the Rally Albania 2026 event.
- [ ] One complete entry exists with vehicle + class + crew + devices.
- [ ] The live map shows the test devices' positions tagged with their device IDs (existing Phase 1 [[processor]] behavior).
- [ ] The post-walkthrough checklist is filled in.
- [ ] Any schema bugs surfaced are tracked as new tasks (or as revisions to existing task files).
- [ ] Decision: does the slice-1 schema support Rally Albania 2026 as a test event, or does it need revisions before June? Captured as a one-line verdict in this task's Done section.

## Risks / open questions

- **Phase 4 (permissions) hasn't landed yet.** Operators using the admin UI for registration are doing so as Directus admins, which is fine for dogfood but obviously not for production use. Phase 4 is the gate for non-admin users.
- **The "live map" verification step** depends on Phase 1 [[processor]] being deployed and pointed at the same database. Confirm this before starting.

## Done

(Fill in commit SHA / dogfood date + one-line verdict when this lands.)
|
||||

# Phase 1 — Slice 1 schema + deploy pipeline

Stand up a Directus 11 instance with the minimum schema needed to register entries and tie them to devices, plus the schema-as-code pipeline (snapshots + db-init) and Gitea Actions CI. **This is what Rally Albania 2026 needs to run as a test event.**

## Outcome statement

When Phase 1 is done:

- Directus runs locally via `docker compose -f compose.dev.yaml up`, against a Postgres 16 + TimescaleDB + PostGIS container.
- `db-init/` contains three migrations applied at boot: TimescaleDB extension, `positions` hypertable creation, `faulty boolean` column on positions. All idempotent, all guarded by a `migrations_applied` table.
- `snapshots/schema.yaml` contains 12 collections: `organizations`, `users`, `organization_users`, `vehicles`, `organization_vehicles`, `devices`, `organization_devices`, `events`, `classes`, `entries`, `entry_crew`, `entry_devices`. Relations and required fields per [[directus-schema-draft]] (the org-level catalog and event-participation sections).
- The image entrypoint runs db-init, then `directus schema apply --yes`, then `directus start`. All three exit 0 against a fresh Postgres.
- Gitea Actions builds the image on push to `main` (when `snapshots/`, `db-init/`, `extensions/`, `Dockerfile`, or workflow file changes), runs the apply pipeline against a throwaway Postgres in CI, and pushes the image to `git.dev.microservices.al/trm/directus:main` only if the dry-run passes.
- "Motorsport Club Albania" exists as an organization, "Rally Albania 2026" exists as an event under it, and the Rally Albania class catalog is seeded (M-1..M-7, Q-1..Q-3, C-1/C-2/C-A/C-3, S-1/S-2/S-3 from `wiki/sources/rally-albania-regulations-2025.md` §2.2–§2.5). At least one test entry registered with vehicle + crew + devices, used to dogfood the registration workflow.

Phase 1 deliberately stops short of:
- Course definition (stages, segments, geofences, SLZs) — Phase 2.
- Penalty system tables and timing tables — Phase 3.
- Permission policies — Phase 4 (collections are admin-only by default).
- Custom extension code — Phase 5.

## Sequencing

```
1.1 Project scaffold
 └─→ 1.2 db-init runner script
      └─→ 1.3 Initial migrations
           ├─→ 1.4 Org-level catalog collections (admin UI work)
           │    └─→ 1.5 Event-participation collections (admin UI work)
           │         └─→ 1.6 Schema snapshot/apply tooling
           │              └─→ 1.7 Image build & entrypoint
           │                   └─→ 1.8 Gitea CI dry-run
           │                        └─→ 1.9 Rally Albania 2026 seed
```

Tasks 1.1 → 1.3 are pure infrastructure and can land before any Directus admin UI work begins. Tasks 1.4 + 1.5 happen against a locally running Directus instance. Tasks 1.6 → 1.8 wire the artifacts together. Task 1.9 is dogfood verification.

## Files modified

Phase 1 produces this layout in `directus/`:

```
directus/
├── .gitea/workflows/build.yml
├── snapshots/
│   └── schema.yaml                  # generated; edits via admin UI + pnpm run schema:snapshot
├── db-init/
│   ├── 001_extensions.sql           # CREATE EXTENSION timescaledb (postgis added in Phase 2)
│   ├── 002_positions_hypertable.sql
│   └── 003_faulty_column.sql
├── extensions/                      # empty — Phase 5 fills this
├── scripts/
│   ├── apply-db-init.sh             # numeric-order, guard-table-protected runner
│   ├── schema-snapshot.sh           # wraps `directus schema snapshot --yes`
│   └── schema-apply.sh              # wraps `directus schema apply --yes`
├── entrypoint.sh                    # apply-db-init.sh && directus schema apply && directus start
├── Dockerfile                       # FROM directus/directus:11.x + bundled artifacts
├── compose.dev.yaml                 # local dev: directus + timescaledb container
├── package.json                     # only for the snapshot/apply npm scripts and tooling
├── pnpm-lock.yaml
├── .env.example
├── .dockerignore
├── .gitignore
└── README.md
```
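
As a concrete example, one plausible shape for the smallest of the three migrations — illustrative only; task 1.3 pins the real SQL, and whether the guard row is written by the migration itself or by `apply-db-init.sh` is a runner-design decision:

```sql
-- db-init/003_faulty_column.sql (sketch)
-- Idempotent on its own (IF NOT EXISTS), and skipped entirely on re-runs
-- once its name appears in migrations_applied.
CREATE TABLE IF NOT EXISTS migrations_applied (
    name        text        PRIMARY KEY,
    applied_at  timestamptz NOT NULL DEFAULT now()
);

ALTER TABLE positions
    ADD COLUMN IF NOT EXISTS faulty boolean NOT NULL DEFAULT false;

INSERT INTO migrations_applied (name)
VALUES ('003_faulty_column')
ON CONFLICT (name) DO NOTHING;
```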

## Tech stack (decided)

- **Directus 11.x** (latest stable on the 11.x line at time of build). Pinned in `Dockerfile` `FROM` line.
- **Postgres 16 + TimescaleDB + PostGIS** as the database (PostGIS extension added in Phase 2; Phase 1 only uses TimescaleDB).
- **pnpm** for any local dev scripts (snapshot wrappers, lint).
- **bash** (POSIX-compatible) for `apply-db-init.sh` and `entrypoint.sh`. No Node dependency at runtime — only Directus needs Node, and that's the upstream image's responsibility.
- **psql** (from `postgresql-client` package) inside the image for db-init application.
- **Gitea Actions** for CI, matching the `processor` and `tcp-ingestion` workflow shape.

If an implementer wants to deviate, they must update the relevant task file first.

## Key design decisions inherited from `processor`

- **Image is bundled, not assembled at runtime.** `snapshots/`, `db-init/`, and `extensions/` are baked into the image, not mounted as volumes. Reproducible across envs.
- **Slim Dockerfile.** Multi-stage if extensions need a build step (Phase 5+); for Phase 1 a single stage is enough.
- **CI workflow** — single-job pattern matching `processor/.gitea/workflows/build.yml`. Use `services:` for the throwaway Postgres in the dry-run step.
- **No `.env` in image.** All env vars come from the deploy stack (Portainer / compose) at runtime.
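
A hedged sketch of what the workflow might look like — the service image, step commands, and labels are all illustrative (task 1.8 pins the real file, and the dry-run step needs the Directus CLI available, which this sketch glosses over):

```yaml
# .gitea/workflows/build.yml (sketch — names and tags illustrative)
on:
  push:
    branches: [main]
    paths: ['snapshots/**', 'db-init/**', 'extensions/**', 'Dockerfile', '.gitea/workflows/**']
jobs:
  build:
    runs-on: docker
    services:
      postgres:
        image: timescale/timescaledb:latest-pg16   # throwaway CI database
        env:
          POSTGRES_PASSWORD: ci
          POSTGRES_DB: directus
    steps:
      - uses: actions/checkout@v4
      - name: Dry-run apply pipeline against throwaway Postgres
        run: ./scripts/apply-db-init.sh && ./scripts/schema-apply.sh
      - name: Build and push (only reached if the dry-run passed)
        run: |
          docker build -t git.dev.microservices.al/trm/directus:main .
          docker push git.dev.microservices.al/trm/directus:main
```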

## Open questions blocking task-level detail

None. The schema draft pinned the org-level catalog and event-participation shape; Phase 1 implements exactly that subset.

# Phase 2 — Course definition

**Status:** ⬜ Not started — depends on Phase 1
**Outcome:** A complete data layer for defining the spatial and procedural shape of an event: stages, segments (typed: liaison / special-stage / parc-ferme), geofences (PostGIS polygons), waypoints, and speed_limit_zones. Operators can populate the full course before each stage from the roadbook. No processor logic yet — Phase 2 of [[processor]] consumes this data and writes crossings/penalties.

## Why this is a separate phase

- **Phase 1 ships a usable system.** Live tracking + entry registration is enough for the Rally Albania 2026 dogfood. Course definition adds operator workload but no live functionality until [[processor]] Phase 2 ships.
- **PostGIS introduction.** Phase 2 is where `CREATE EXTENSION postgis` lands in `db-init/`. Geometry columns require it; no other Phase 1 work does.
- **Scope creep risk.** Course definition has surprising depth (geofence editing UX, polygon validation, segment ordering UX). Isolating it as its own phase prevents Phase 1 from ballooning.

## Tasks (sketched, not detailed)

These tasks will get full task files when Phase 2 starts. For now, this is the planned shape:

| # | Task | Notes |
|---|------|-------|
| 2.1 | PostGIS extension migration | `db-init/004_postgis.sql` adds the extension. Idempotent. |
| 2.2 | `stages` collection | Per [[directus-schema-draft]]: `event_id`, `name`, `sort_order`, `role`, `starts_at`, `start_interval_seconds`, `start_order_strategy`, `start_order_strategy_params`, `start_order_input_stage_id`. |
| 2.3 | `segments` collection | `stage_id`, `sort_order`, `type` (liaison/special-stage/parc-ferme), `entry_geofence_id`, `exit_geofence_id`, `target_duration_seconds`. |
| 2.4 | `geofences` collection | PostGIS polygon column. `event_id`, `name`, `kind`, `geometry`, `manual_verification`, `retroactive`. |
| 2.5 | `waypoints` collection | PostGIS point. `segment_id`, `location`, `tolerance_meters`, `sort_order`. |
| 2.6 | `speed_limit_zones` collection | PostGIS polygon. `segment_id`, `geometry`, `max_speed_kmh`, `evaluation_window_meters`, `retroactive`. |
| 2.7 | Geometry editor evaluation | Directus's built-in map interface vs. a custom extension panel. Decide based on usability for operators authoring real geofences from a roadbook. |
| 2.8 | Snapshot regeneration + CI verification | All new collections in the snapshot; CI dry-run still passes. |
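
To make the geometry work concrete, a hedged sketch of the task-2.1 migration plus the kind of consumer query the columns enable (column names follow the task table above; SRID 4326 for roadbook WGS84 coordinates is an assumption):

```sql
-- db-init/004_postgis.sql (task 2.1): the whole migration is one statement.
CREATE EXTENSION IF NOT EXISTS postgis;

-- Illustrative consumer query: which geofences of an event contain a given
-- lon/lat point? (The real crossing logic lives in processor, not here.)
SELECT g.id, g.name, g.kind
FROM geofences g
WHERE g.event_id = :event_id
  AND ST_Contains(g.geometry, ST_SetSRID(ST_MakePoint(:lon, :lat), 4326));
```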

## Open questions blocking task-level detail

(These get answered when Phase 2 starts.)

1. Is Directus's built-in map interface adequate for authoring geofence polygons, or does Phase 2 need a custom extension with a richer editor (drawing tools, vertex snapping, polygon validation)?
2. How are roadbook coordinates (typically WGS84 lat/lon) imported — bulk upload, copy-paste, manual click-to-place?
3. Are checkpoint geofences a separate `kind` value or is `manual_verification = true` enough to identify them?
4. Should `start_order_strategy_params` be a JSON field (flexible) or a dedicated table (queryable)? JSON is the schema-draft default; revisit if querying is needed.

# Phase 3 — Timing & penalty tables

**Status:** ⬜ Not started — co-developed with [[processor]] Phase 2
**Outcome:** The schema half of the paired schema/code work that produces real timing results. Adds `entry_segment_starts`, `entry_crossings`, `entry_penalties`, `stage_results`, and `penalty_formulas` collections. Penalty evaluator registry ships on the [[processor]] side; the rule numeric values ship here. After Phase 3, operators can review computed penalties, override them, and publish official stage results.

## Why this is co-developed

- **Schema and writer must land together.** `entry_crossings` rows are written by [[processor]] Phase 2; defining the collection without the writer is dead weight, defining the writer without the collection is broken code. Land both in the same release window.
- **Penalty formula seeding is event-specific.** Rally Albania 2026's SLZ brackets come from the Supplementary Regulation (published 60 days before the event per regs §1.8). Phase 3 needs that data, or a reasonable placeholder, before the rally.
- **Snapshot pattern requires care.** `entry_penalties` snapshots its inputs (peak speed, count, formula values). The schema must capture this faithfully so the recompute strategy in [[processor]] works.

## Tasks (sketched, not detailed)

| # | Task | Notes |
|---|------|-------|
| 3.1 | `penalty_formulas` collection | Per [[directus-schema-draft]]: `event_id`, `belongs_to_type`, `belongs_to_id`, `type`, `offence_min`, `offence_max`, `operator`, `penalty`, `retroactive`, `enabled`. Both bracket-style and flat-style coexist. |
| 3.2 | `entry_segment_starts` collection | `entry_id`, `segment_id`, `start_position`, `target_at`, `manual_override`. Materialized at stage open. |
| 3.3 | `entry_crossings` collection | Per-position-derived crossing events. Idempotent on `(entry_id, geofence_id, ts)`. Written by [[processor]]. |
| 3.4 | `entry_penalties` collection | `entry_id`, `type`, `formula_id`, `formula_snapshot` (JSONB), `inputs` (JSONB), `seconds`, `evaluated_at`, `recomputed_at`, `manual_override`. Snapshot inputs and rule rows for cheap recompute. |
| 3.5 | `stage_results` collection | `entry_id`, `stage_id`, `clean_time`, `penalty_seconds`, `total_time`, `position_in_class`, `position_overall`, `published_at`. The next-stage seeding input is `clean_time`. |
| 3.6 | Custom interface for penalty review | Operator-facing panel showing all `entry_penalties` rows for a stage, with diff between auto-computed and manual override. Likely a custom extension (Phase 5 territory). Phase 3 produces the schema; the UI follows. |
| 3.7 | Snapshot regeneration + CI verification | All new collections in the snapshot; CI dry-run still passes. |
| 3.8 | Rally Albania SLZ bracket seed | Once the Supplementary Regulation is published, seed `penalty_formulas` rows with the actual brackets. Single SQL/Directus-API script. |
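
To make the bracket semantics concrete, a hedged sketch of how an evaluator might select a bracket-style `penalty_formulas` row. Field names come from task 3.1; the half-open `[offence_min, offence_max)` convention and the example values in the comments are assumptions, not regulation data:

```typescript
interface PenaltyFormula {
  offence_min: number;        // inclusive lower bound of the offence magnitude (e.g. km/h over the SLZ limit)
  offence_max: number | null; // exclusive upper bound; null = open-ended top bracket
  penalty: number;            // penalty in seconds
  enabled: boolean;
}

// Pick the first enabled bracket whose [offence_min, offence_max) range
// contains the offence value; null means no bracket matched (no penalty).
function evaluateBrackets(formulas: PenaltyFormula[], offence: number): number | null {
  for (const f of formulas) {
    if (!f.enabled) continue;
    if (offence >= f.offence_min && (f.offence_max === null || offence < f.offence_max)) {
      return f.penalty;
    }
  }
  return null;
}
```

Because `entry_penalties` snapshots both the inputs and the matched formula row (task 3.4), a recompute is just re-running this selection against the snapshotted values.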

## Open questions blocking task-level detail

1. Does `entry_crossings` need PostGIS metadata on each row (e.g. the exact geometry-relative position), or just `(entry_id, geofence_id, ts, kind)`? Default: minimal; the position is in `positions` and the join key gets you back to it.
2. Where do `position_in_class` and `position_overall` get computed — DB view, materialized view, or [[processor]]-written column? Trade-off: a view is simpler but slower; a column is faster but needs invalidation.
3. Penalty review workflow UX — is the operator approving each row individually, or bulk-approving the auto-computed set with manual exceptions? Drives whether `manual_override` is a single bool or a richer state.

# Phase 4 — Permissions & policies

**Status:** ⬜ Not started — depends on Phases 1–3 (all collections must exist before policies are drafted)
**Outcome:** Every collection × action combination has an explicit Directus 11 Policy attached. Multi-tenant isolation is enforced by Directus's dynamic-filter mechanism, not by application code. Operators see only data scoped to orgs they belong to, with the actions allowed by their role. Non-admin users can register entries, view live tracking, review their own results — all without ever needing the admin role.

## Why this is a separate phase

- **Premature policy commitment is expensive.** Defining policies before the data model has shaken out leads to filters that break when a collection's shape changes. Phases 1–3 get one to two iterations on the schema; Phase 4 lands when the model is stable.
- **Policy filters are tedious but not architectural.** This is admin-UI configuration work, not design. Six roles × ~20 collections × 4 CRUD actions ≈ 480 (role × collection × action) cells, most of which are templated repeats of "user is in this org via `organization_users`".
- **Testable as a unit.** End-state: a non-admin test user with `participant` role can perform exactly the operations they should, and zero others. Phase 4's CI / dogfood verification is a permission-boundary test suite.

## Roles to support (per [[directus-schema-draft]])

| Role | Power |
|---|---|
| `org-admin` | Full CRUD within their org. Can manage `organization_users`, classes, events. |
| `race-director` | Manage entries, segments, geofences, penalties for events in their org. Approve / publish stage results. Cannot create new orgs. |
| `marshal` | Read-only on most collections; can flag faulty positions and write notes on entries during the event. Time-limited (only during active event). |
| `timekeeper` | Edit `entry_segment_starts.target_at` (late-arrival reseeding); read all entries; cannot modify penalties. |
| `participant` | Read-only on entries they appear in (via `entry_crew`); read on the events they're registered for; no writes. |
| `viewer` | Read-only on public-facing event data (live map, published results). Lowest privilege; default for any user not otherwise scoped. |

## Tasks (sketched, not detailed)

| # | Task | Notes |
|---|------|-------|
| 4.1 | Draft the canonical "user is in this org with this role" filter expression | One JSON filter that gets reused. Lives in a template / snippet for copy-paste. |
| 4.2 | `org-admin` policy | All CRUD on org-scoped collections, scoped via the canonical filter. |
| 4.3 | `race-director` policy | CRUD on events / entries / classes / penalties for events in their org. |
| 4.4 | `marshal` policy | Field-level write on `positions.faulty`; entry notes; otherwise read-only. |
| 4.5 | `timekeeper` policy | Field-level write on `entry_segment_starts.target_at` and `manual_override`; otherwise read-only. |
| 4.6 | `participant` policy | Filter on entries via `entry_crew.user_id = $CURRENT_USER`. |
| 4.7 | `viewer` policy | Public read on a curated subset (live positions for active events, published `stage_results`). |
| 4.8 | Snapshot regeneration + CI verification | All policies round-trip via `directus schema snapshot` (verify the format is faithful — Directus's policy serialization has historically been finicky). |
| 4.9 | Permission-boundary test suite | Custom test that creates a user per role, attempts a series of CRUD operations, asserts allowed/denied per a fixture. Runs in CI alongside the dry-run. |
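
One plausible shape for the task-4.1 canonical filter. The relational path (an `organization_id` M2O plus an `organization_users` alias traversal) is an assumption about how the relations get named in Phase 1; the real filter must match the snapshot:

```json
{
  "organization_id": {
    "organization_users": {
      "user_id": {
        "_eq": "$CURRENT_USER"
      }
    }
  }
}
```

`$CURRENT_USER` is Directus's dynamic variable for the authenticated user; role-specific policies would extend the same filter with a condition on the `organization_users` row's role field (assuming roles live there).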

## Open questions blocking task-level detail

1. **Marshal time-limiting.** Marshal access tied to "during active event" — does Directus's dynamic filter support time-bounded conditions natively, or does this need a custom hook (Phase 5)?
2. **Field-level vs row-level restrictions.** Some collections (`positions`, `entry_segment_starts`) need field-level write restrictions (only one column writable). Verify Directus 11 supports field-level policies in the dynamic-filter mechanism, or fall back to a hook that rejects writes to other fields.
3. **Snapshot fidelity.** Does `directus schema snapshot` faithfully capture all policy filter JSON? If not, policies might need to live in a separate seed script applied alongside the snapshot.

# Phase 5 — Custom extensions

**Status:** ⬜ Not started — depends on Phase 3 (timing tables); some extensions can predate Phase 4 (permissions)
**Outcome:** TypeScript extensions implementing the cross-plane workflows the schema implies. Each extension is small, focused, well-tested, and bundled into the image via `extensions/`. Together they let Directus react to operator actions in ways that ripple to [[processor]] (recompute requests) and to the SPA (live updates).

## Why these are extensions, not Flows

- **Reviewable, testable, version-controlled.** Extensions are TypeScript modules under `extensions/`, ESLint-checked, unit-tested, reviewed in PR. Flows are admin-UI configuration that round-trips through snapshots but isn't readable as code.
- **Domain logic doesn't belong in declarative orchestration.** Flows shine for "on event-write, send a Slack message". They're inadequate for "on `positions.faulty` UPDATE, compute the affected window, emit a Redis Stream message with payload shape X."
- **Performance and correctness.** Extensions run in the Directus Node process with full access to `services`, `database`, custom error handling, structured logging. Flows are higher-overhead and harder to reason about under load.

## Tasks (sketched, not detailed)

| # | Task | Notes |
|---|------|-------|
| 5.1 | Faulty-flag → Redis Stream emit | Hook on `positions` UPDATE where `faulty` changed. Emits a `recompute:requests` message with `{ device_id, ts, action: 'set'|'unset' }` for [[processor]] to consume. |
| 5.2 | `events.discipline` validation hook | Pre-create / pre-update hook on `entries` that validates: discipline=rally → vehicle_id required; discipline=trail-run → vehicle_id null + crew is exactly one runner. Per [[directus-schema-draft]] decision. |
| 5.3 | Stage-open materialization endpoint | Custom endpoint `POST /stages/:id/open`. Reads `start_order_strategy`, queries the strategy's input data, materializes `entry_segment_starts` rows per category. Race-director-permissioned. |
| 5.4 | CP closing-time computation | Hook or scheduled task that, when the last competitor's ideal start is computed, sets `time_control_closed_at` on each CP geofence (Rally Albania §9.19: 60 min after last ideal). |
| 5.5 | "Copy crew from previous entry" endpoint | Custom endpoint `POST /entries/:id/copy-crew-from/:source_entry_id`. Replaces the recipient's `entry_crew` rows with cloned values from the source. Per [[directus-schema-draft]] decision (no `crews` collection; UX shortcut). |
| 5.6 | Penalty review batch publish endpoint | Custom endpoint `POST /stages/:id/publish-results`. Validates `entry_penalties` are all reviewed, writes `stage_results.published_at`, freezes the stage. Race-director-permissioned. |
| 5.7 | Entry registration validation | Pre-create hook on `entries` that checks `race_number` is in the valid range for the vehicle's category (Rally Albania §5.4 number bands). Friendly error if violated. |
| 5.8 | Extension build pipeline | `extensions/` builds via `pnpm build:extensions` to produce the loadable extension files. Wired into the Dockerfile. CI runs the build + extension unit tests. |
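
A hedged sketch of the decision logic at the heart of task 5.1, kept as a pure function so it is unit-testable without a Directus container. The payload shape comes from the task table above; the function name and the wiring into a `filter`/`action` hook are assumptions:

```typescript
type FaultyAction = 'set' | 'unset';

interface RecomputeRequest {
  device_id: string;
  ts: string;          // ISO timestamp of the affected position
  action: FaultyAction;
}

// Pure decision core: given the row before and after an UPDATE on `positions`,
// return the message to emit on recompute:requests, or null when `faulty` did
// not change (updates to other columns must not trigger a recompute).
function faultyChangeMessage(
  before: { faulty: boolean },
  after: { faulty: boolean },
  deviceId: string,
  ts: string,
): RecomputeRequest | null {
  if (before.faulty === after.faulty) return null;
  return { device_id: deviceId, ts, action: after.faulty ? 'set' : 'unset' };
}
```

The hook itself would call this and, on a non-null result, `XADD` the payload to the stream (the Redis client choice is open question 2 below).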

## Open questions blocking task-level detail

1. **Hook framework choice.** Directus's hook system supports `filter` (mutate-allowed, blocking) and `action` (post-event, non-blocking). Most of these tasks fit one or the other; make the choice explicit per task. Validation hooks are `filter`; emit-to-Redis hooks are `action`.
2. **Redis client in extensions.** Use `ioredis` directly from inside the Directus process, or reuse a shared service module? Task 5.1 makes the decision.
3. **Test strategy.** Vitest for unit tests; a dev Directus container + testcontainers Postgres for integration tests. Mirror the [[processor]] split.
4. **Permission interaction.** Custom endpoints check permissions explicitly via `accountability` — verify which Phase 4 policy applies, and gate the endpoint accordingly.

# Phase 6 — Future / optional

**Status:** ❄️ Not committed
**Outcome:** Capability ideas that don't fit Phases 1–5 but are worth tracking so they don't get lost. Work here begins only when a concrete trigger arrives — usually a real event has surfaced a pain point Phases 1–5 don't address.

## Ideas on radar

### Retroactivity preview UI

Counterpart to [[processor]] Phase 2.5. When an operator edits a `geofence` or `speed_limit_zone` polygon with `retroactive = true`, before committing the change Directus shows a diff: "this edit will affect 47 entries, recompute will take ~12 minutes, here are the entries whose `entry_penalties.seconds` change." Implemented as a custom interface or pre-save modal driven by a server-side dry-run extension that calls into [[processor]]'s replay engine.

### Command-routing Flows for Phase 2 commands

Per `wiki/concepts/phase-2-commands.md`. Directus owns the `commands` collection. The SPA inserts a row; a Directus Flow routes it via Redis to the [[tcp-ingestion]] instance holding the device's socket, waits for ACK / nACK / timeout, updates the row's `status` field, and fires a WebSocket subscription so the SPA learns the result. A substantial workstream, blocked on Phase 2 commands being ready in [[tcp-ingestion]].

### Audit trail extension

Out of the box, Directus tracks who changed what, but not in a way that's easy to surface to operators. A custom extension that materializes a per-entry timeline ("registered → confirmed → started → marshal flagged WP3 → race director recomputed penalties → published") would help during protests and post-event reviews. Pure read-side; it consumes Directus's revisions/activity tables and presents them.

### Federation rule import tooling

When the org publishes their Supplementary Regulation as a structured document (or even as a PDF that the operator transcribes), an import tool that creates `penalty_formulas` rows in bulk, validated against the evaluator registry. Saves the org from manually filling in 5–10 bracket rows per event. Could also export the seeded rules back to a human-readable document for review.

### Multi-event leaderboard

Cross-event aggregation: "show me Aldo's results across all rallies he's run in the last 12 months." Requires either a materialized view or a custom endpoint that joins across events. Probably a SPA concern more than a Directus concern; mentioned here in case it's easier to build server-side.

### Data export for federation submission

Many federations require post-event result submissions in specific CSV / XML formats. A custom endpoint that takes an event and a federation profile and emits the formatted file. Keeps formatting logic out of the SPA.

### GDPR / data retention

Participants have rights over their data; the system needs to support deletion-on-request, export-on-request, and configurable retention windows on `positions` data. Probably a Phase 4 (policies) + Phase 5 (extension) combo when actually needed; tracked here so we don't forget it.

## When to promote one of these to a real phase

- A real event surfaced a pain point this idea addresses.
- A scheduled stakeholder commitment depends on it (federation submission deadline, etc.).
- The idea has been refined enough that the task breakdown is clear.

Until then: notes only. Don't pre-design.