Scaffold directus service planning structure

Initial commit. Establishes the .planning/ tree mirroring processor's
shape (ROADMAP.md as nav hub + per-phase folders with READMEs and
granular task files).

Six phases:

1. Slice 1 schema + deploy pipeline — what Rally Albania 2026 needs.
   Org catalog (orgs, users, vehicles, devices) + event participation
   (events, classes, entries, entry_crew, entry_devices). db-init/
   for the positions hypertable + faulty column. snapshot/apply
   tooling. Gitea CI dry-run. Dogfood seed of Rally Albania 2026.
   Nine task files with full Goal / Deliverables / Specification /
   Acceptance criteria / Risks / Done sections.

2. Course definition — stages, segments, geofences, waypoints, SLZs.
   PostGIS extension introduced here.

3. Timing & penalty tables — co-developed with processor Phase 2.
   entry_segment_starts, entry_crossings, entry_penalties,
   stage_results, penalty_formulas.

4. Permissions & policies — Directus 11 dynamic-filter Policies per
   logical role. Deployment-time work, deferred to keep early phases
   focused on the data model.

5. Custom extensions — TypeScript hooks/endpoints implementing the
   cross-plane workflows the schema implies (faulty-flag → Redis
   stream emit, stage-open materializer, etc.).

6. Future / optional — retroactivity preview UI, command-routing
   Flows, audit trails, federation rule import. Not committed.

Non-negotiable design rules captured in ROADMAP.md: schema authority
in Directus + snapshot-as-code + db-init for non-Directus DDL +
sequential idempotent migrations + entrypoint apply order + no
application logic in Flows + permissions deferred to Phase 4.

Architectural anchors point at the wiki at ../docs/wiki/ — the schema
draft, the Rally Albania 2025 source page, plus the existing
processor/postgres-timescaledb/live-channel pages. Each task file
calls out the wiki refs an implementing agent should read first.

README.md mirrors the processor service README structure: quick start,
local Docker test, prod/stage deployment notes, env vars, CI behavior.
# Task 1.1 — Project scaffold
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** None
**Wiki refs:** `docs/wiki/entities/directus.md`, `docs/wiki/synthesis/directus-schema-draft.md`
## Goal
Initialize the `directus/` service folder with the directory layout from the Phase 1 README, the config files needed for local Docker compose dev, and a minimal `compose.dev.yaml` that boots Directus + TimescaleDB so the next tasks have something to iterate against. **No Directus collections are created in this task** — that starts in 1.4.
## Deliverables
- `directus/Dockerfile``FROM directus/directus:11.x`, copies `snapshots/`, `db-init/`, `scripts/`, `entrypoint.sh`, `extensions/` into the image. Sets `ENTRYPOINT ["/directus/entrypoint.sh"]`. (Concrete entrypoint contents land in task 1.7; for now create a placeholder that just `exec`s the upstream entrypoint.)
- `directus/compose.dev.yaml` — two services:
- `db`: `timescale/timescaledb-ha:pg16-latest` (or equivalent), volume-mapped Postgres data dir, healthcheck.
- `directus`: built from local `Dockerfile`, depends on `db` healthy, env vars for DB connection + `KEY` + `SECRET` + admin bootstrap, port `8055` exposed.
- `directus/package.json` — minimal, only for npm scripts (no runtime deps). Scripts:
- `schema:snapshot``bash scripts/schema-snapshot.sh` (script body lands in 1.6)
- `schema:apply``bash scripts/schema-apply.sh`
- `db:init``bash scripts/apply-db-init.sh`
- `dev``docker compose -f compose.dev.yaml up --build`
- `dev:down``docker compose -f compose.dev.yaml down`
- `dev:reset``docker compose -f compose.dev.yaml down -v && docker compose -f compose.dev.yaml up --build`
- `directus/.env.example` — full list of env vars with descriptions and defaults. Required: `DB_HOST`, `DB_PORT`, `DB_DATABASE`, `DB_USER`, `DB_PASSWORD`, `KEY`, `SECRET`, `ADMIN_EMAIL`, `ADMIN_PASSWORD`, `PUBLIC_URL`. Plus optional: `LOG_LEVEL`, `LOG_STYLE`, `CACHE_ENABLED`, `CORS_ENABLED`, `CORS_ORIGIN`, `WEBSOCKETS_ENABLED`.
- `directus/.gitignore``node_modules/`, `.env`, `.env.local`, `*.log`, `directus-data/` (the local Postgres volume mount, if used).
- `directus/.dockerignore` — `.git/`, `.planning/`, `node_modules/`, `.env*`, `*.md` except `README.md` (via a `!README.md` negation), `compose.dev.yaml` (compose isn't part of the image), `directus-data/`.
- Empty placeholder directories with `.gitkeep`:
- `snapshots/` (1.6 fills it)
- `db-init/` (1.3 fills it)
- `scripts/` (1.2, 1.6 fill it)
- `extensions/` (Phase 5)
- `directus/entrypoint.sh` — placeholder that simply `exec /directus/cli.js start` (or whatever the upstream image's start command is). Real wrapper lands in 1.7.
- `directus/README.md` already exists from this scaffold pass — verify it's accurate.
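A possible shape for the placeholder, assuming `/directus/cli.js start` is the upstream start command (task 1.7 verifies this against the pinned image):

```bash
#!/usr/bin/env bash
set -euo pipefail
# Placeholder only; the real db-init → schema apply → start sequence lands in 1.7.
# ASSUMPTION: the upstream image starts via /directus/cli.js (verify for the pinned tag).
exec /directus/cli.js start
```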
## Specification
- **Postgres image choice.** Pin to a TimescaleDB image that includes PostgreSQL 16. PostGIS will be installed via `db-init/` in Phase 2; the base image must support `CREATE EXTENSION postgis` (most TimescaleDB-HA images do). Document the pinned tag in compose.dev.yaml.
- **Volume policy in compose.dev.yaml.** Use a named volume (`directus-pg-data`) so `dev:down` preserves data and `dev:reset` wipes it.
- **No secrets committed.** `.env` is gitignored. `.env.example` carries placeholder values only.
- **No bind mounts of `snapshots/` or `db-init/` in compose.dev.yaml.** The image bakes them in. (Implementer can override with a bind mount during local iteration but the committed file does not.)
- **Entrypoint is a placeholder in this task.** Real flow (db-init → schema apply → start) lands in 1.7. Keep the placeholder simple to unblock 1.4 testing.
## Acceptance criteria
- [ ] `pnpm install` succeeds (no runtime deps; lockfile generated).
- [ ] `docker compose -f compose.dev.yaml up --build` boots Directus successfully against a fresh TimescaleDB container.
- [ ] `http://localhost:8055` serves the Directus admin login.
- [ ] First-time bootstrap with `ADMIN_EMAIL` / `ADMIN_PASSWORD` from `.env` works.
- [ ] `pnpm dev:down` stops the stack, preserves the volume.
- [ ] `pnpm dev:reset` wipes the volume and reboots clean.
- [ ] No collection definitions exist yet — the Directus instance is empty by design.
## Risks / open questions
- **TimescaleDB-HA image PostGIS support.** Verify the chosen tag includes `postgis` extension binaries (or document the alternative — e.g. switching to `postgis/postgis:16-master` with manual TimescaleDB install). Capture the answer in this task's Done section.
- **Directus 11.x patch version.** Pin a specific tag (e.g. `11.5.1`) rather than `11.x` for reproducible builds. Update the pin via PR when bumping.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.2 — db-init runner script
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.1
**Wiki refs:** `docs/wiki/entities/postgres-timescaledb.md`, `docs/wiki/entities/directus.md` (Schema management section)
## Goal
Implement `scripts/apply-db-init.sh` — the boot-time runner that walks `db-init/*.sql` in numeric order, applies each via `psql` against the configured Postgres, and records successful applications in a `migrations_applied` guard table so re-runs are no-ops. This is the foundation Phase 1 (and every later phase) depends on for non-Directus DDL.
## Deliverables
- `scripts/apply-db-init.sh` — POSIX-compatible bash. Does the following, in order:
1. **Wait for Postgres readiness.** Loop calling `pg_isready -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_DATABASE` until success or timeout (configurable, default 60 s). Exit non-zero on timeout with a clear log message.
2. **Bootstrap the guard table.**
```sql
CREATE TABLE IF NOT EXISTS migrations_applied (
filename TEXT PRIMARY KEY,
applied_at TIMESTAMPTZ NOT NULL DEFAULT now(),
checksum TEXT NOT NULL
);
```
3. **Walk `db-init/*.sql` in numeric-prefix order** (sorted lexically; the `NNN_` prefix enforces order). For each file:
- Compute `sha256sum` of the file contents → `checksum`.
- Query `migrations_applied WHERE filename = <basename>`.
- If a row exists and the checksums match → log `skip filename` and continue.
- If a row exists and checksums DON'T match → log error and exit non-zero. (Migrations are append-only; never edit a file once applied.)
- If no row exists → apply the file via `psql -v ON_ERROR_STOP=1 -f <path>`. On success, insert the row. On failure, exit non-zero with the SQL error.
4. **Log a one-line summary** at the end: `db-init complete: <N> applied, <M> skipped`.
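A minimal sketch of that loop, assuming the env vars from the Specification below. The readiness wait and collision lint are elided, and quoting is naive (filenames come from our own `NNN_name.sql` convention):

```bash
#!/usr/bin/env bash
# Sketch of the apply loop only; readiness wait, collision lint, and logging polish are elided.
set -euo pipefail
shopt -s nullglob   # an empty db-init/ dir yields "0 applied, 0 skipped", not a literal glob

export PGPASSWORD="$DB_PASSWORD"
psql_q=(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_DATABASE" -qAt)

applied=0 skipped=0
for f in "${DB_INIT_DIR:-/directus/db-init}"/*.sql; do
  name=$(basename "$f")
  sum=$(sha256sum "$f" | cut -d' ' -f1)
  prev=$("${psql_q[@]}" -c "SELECT checksum FROM migrations_applied WHERE filename = '$name'")
  if [[ "$prev" == "$sum" ]]; then
    echo "skip $name"; skipped=$((skipped + 1)); continue
  elif [[ -n "$prev" ]]; then
    echo "checksum mismatch on $name (migrations are append-only)" >&2; exit 2
  fi
  "${psql_q[@]}" -v ON_ERROR_STOP=1 -1 -f "$f" || exit 3   # -1 wraps the file in BEGIN/COMMIT
  "${psql_q[@]}" -c "INSERT INTO migrations_applied (filename, checksum) VALUES ('$name', '$sum')"
  applied=$((applied + 1))
done
echo "db-init complete: $applied applied, $skipped skipped"
```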
## Specification
- **Environment variables expected:** `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_DATABASE`. Plus `DB_INIT_DIR` (default `/directus/db-init`) and `DB_INIT_TIMEOUT_SECONDS` (default `60`).
- **Use `PGPASSWORD` for psql auth** — exported in the script before `psql` calls, never printed in logs.
- **Each migration runs in a single transaction** by virtue of `psql -v ON_ERROR_STOP=1 -1 -f`. The `-1` flag wraps the whole file in `BEGIN/COMMIT`. (Some statements like `CREATE EXTENSION` or `CREATE INDEX CONCURRENTLY` can't run in a transaction — those go in their own files without `-1` if needed. Document the exception inline.)
- **Numeric-prefix convention.** `001_`, `002_`, …, `999_`. Pad to 3 digits; gives 999 slots which is well beyond what we'll need.
- **Filename uniqueness.** Two files can't share a prefix. Lint check at script start: detect collisions, error out before applying anything.
- **Logging.** One line per file at INFO level. Failure logs include the psql exit code and the offending file. No SQL output to stdout (verbose `psql` output goes to stderr and is suppressed unless `DEBUG=1` is set).
- **Idempotency.** Running the script twice in a row → second run does zero psql work beyond the readiness check + guard-table query.
- **Exit codes.** `0` = success, `1` = readiness timeout, `2` = checksum mismatch, `3` = psql error, `4` = filename collision.
## Acceptance criteria
- [ ] Script is executable (`chmod +x`), shebang is `#!/usr/bin/env bash`.
- [ ] `set -euo pipefail` at the top.
- [ ] Against a fresh Postgres, no `db-init/*.sql` files yet → script creates `migrations_applied` table, prints "0 applied, 0 skipped", exits 0.
- [ ] After 1.3 lands, script applies all three migrations on first run (3 applied, 0 skipped), no-ops on second run (0 applied, 3 skipped).
- [ ] Manually editing an applied file → next run exits 2 with a clear "checksum mismatch" error.
- [ ] Adding two files with the same numeric prefix → script exits 4 before applying anything.
- [ ] Killing Postgres mid-run during file 002 → script exits 3 with the psql error; on next run, file 002 retries cleanly.
## Risks / open questions
- **`CREATE EXTENSION` inside a transaction.** Some Postgres extensions can be created inside a transaction (timescaledb, postgis), some cannot (pg_partman with parallel apply). For Phase 1 the only extension is timescaledb, which is fine. Re-evaluate per phase.
- **Concurrent boots.** If two Directus containers boot against the same DB at the same time (rolling deploy), both will try to apply migrations. The guard table's `PRIMARY KEY` on `filename` makes the insert race-safe, but two containers running the *same* `psql -f` at once is risky. Mitigation for Phase 1: assume single-replica boot during deploy; Phase 3+ revisit if rolling deploy is a goal.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.3 — Initial migrations
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.2
**Wiki refs:** `docs/wiki/entities/postgres-timescaledb.md`, `docs/wiki/concepts/position-record.md`, `docs/wiki/entities/processor.md` (Faulty position handling)
## Goal
Author the three Phase 1 migrations under `db-init/`: the TimescaleDB extension, the `positions` hypertable creation, and the `faulty` boolean column. Each is internally idempotent so that environments where they were applied ad hoc (e.g. existing stage) absorb them as no-ops.
## Deliverables
- `db-init/001_extensions.sql`:
```sql
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
```
- `db-init/002_positions_hypertable.sql`:
```sql
CREATE TABLE IF NOT EXISTS positions (
device_id TEXT NOT NULL,
ts TIMESTAMPTZ NOT NULL,
latitude DOUBLE PRECISION NOT NULL,
longitude DOUBLE PRECISION NOT NULL,
altitude DOUBLE PRECISION,
angle SMALLINT,
speed SMALLINT,
satellites SMALLINT,
priority SMALLINT,
attributes JSONB NOT NULL DEFAULT '{}'::jsonb,
PRIMARY KEY (device_id, ts)
);
-- Idempotent hypertable creation: if_not_exists => true
SELECT create_hypertable(
'positions', 'ts',
chunk_time_interval => INTERVAL '7 days',
if_not_exists => TRUE
);
CREATE INDEX IF NOT EXISTS positions_device_ts_idx
ON positions (device_id, ts DESC);
```
- `db-init/003_faulty_column.sql`:
```sql
ALTER TABLE positions
ADD COLUMN IF NOT EXISTS faulty BOOLEAN NOT NULL DEFAULT FALSE;
CREATE INDEX IF NOT EXISTS positions_faulty_idx
ON positions (device_id, ts DESC) WHERE faulty = FALSE;
```
## Specification
- **Schema must match what `processor` writes.** Cross-check column names, types, nullability against `docs/wiki/concepts/position-record.md` and the actual `processor` writer code (`processor/src/db/migrations/0001_positions.sql`). If any field differs, this task is **blocked** until [[directus-schema-draft]] and the processor's existing migration are reconciled — fix the divergence in the doc first, then this task.
- **`attributes` is `JSONB NOT NULL DEFAULT '{}'`** — never null, always an object. Keeps query plans simple.
- **`(device_id, ts)` primary key** — natural key, idempotent for the processor's `ON CONFLICT DO NOTHING` writer.
- **Chunk interval = 7 days.** Tunable later; 7 days is a reasonable default for hundreds of devices emitting at multi-Hz.
- **Faulty index uses a partial-index `WHERE faulty = FALSE`.** Optimizes the [[processor]] hot-path read which always filters faulty out. Operator queries that select faulty rows specifically use the broader `(device_id, ts DESC)` index.
- **`CASCADE` on `CREATE EXTENSION`** so that any dependent extensions install transparently. TimescaleDB has no required deps so CASCADE is a no-op for now, but harmless and future-proof.
- **No `IF EXISTS` shortcuts that hide schema drift.** The migrations are idempotent at the *DDL* level (`IF NOT EXISTS`), but if a column type already differs from what the file declares, the migration silently passes — leaving stage in an inconsistent state. Add a final `DO $$ ... $$` block per file that asserts the table shape is what the migration intends:
```sql
-- end of 002_positions_hypertable.sql
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'positions' AND column_name = 'attributes' AND data_type = 'jsonb'
) THEN
RAISE EXCEPTION 'positions.attributes is not JSONB — schema drift';
END IF;
END $$;
```
One assertion per critical column shape. Catches the case where stage has the table but with subtly different types.
## Acceptance criteria
- [ ] Against a fresh Postgres + TimescaleDB image, `apply-db-init.sh` runs all three files cleanly.
- [ ] `\d positions` shows the expected columns (including `faulty`).
- [ ] `SELECT * FROM timescaledb_information.hypertables WHERE hypertable_name = 'positions';` returns one row.
- [ ] Both indexes (`positions_device_ts_idx`, `positions_faulty_idx`) exist (`\di+`).
- [ ] Re-running the script is a no-op (verified via `migrations_applied` table contents).
- [ ] Against a Postgres that *already* has `positions` from a prior ad-hoc run, the migration absorbs it as a no-op (provided the existing schema matches; otherwise the assertion blocks deploy).
- [ ] Cross-checked against `processor/src/db/migrations/0001_positions.sql` — column names, types, indexes match.
## Risks / open questions
- **Existing stage Postgres may have a slightly different schema.** Run `pg_dump --schema-only -t positions` on stage before this task lands and compare to the migration above. Reconcile differences in this file (or document them as known-divergent).
- **Hypertable was created before — `create_hypertable` with `if_not_exists` should accept it, but the chunk interval can't be retroactively changed via this call.** If stage's chunk interval differs from `7 days`, that's a non-blocking divergence (functional, just suboptimal). Don't try to migrate it via SQL; leave it as a follow-up.
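One way to run that stage comparison, sketched; `$STAGE_DB_URL` and the scratch database are illustrative, not committed tooling:

```bash
# Dump the live stage table definition (read-only; STAGE_DB_URL is a placeholder).
pg_dump --schema-only -t positions "$STAGE_DB_URL" > /tmp/stage-positions.sql

# Build the same table from the three migrations in a scratch database.
createdb drift_check
psql -d drift_check -v ON_ERROR_STOP=1 -f db-init/001_extensions.sql
psql -d drift_check -v ON_ERROR_STOP=1 -1 -f db-init/002_positions_hypertable.sql
psql -d drift_check -v ON_ERROR_STOP=1 -1 -f db-init/003_faulty_column.sql
pg_dump --schema-only -t positions -d drift_check > /tmp/migration-positions.sql

# An empty diff means no drift; otherwise reconcile per the risk above.
diff -u /tmp/stage-positions.sql /tmp/migration-positions.sql
dropdb drift_check
```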
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.4 — Org-level catalog collections
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.3 (db-init applied so Directus can boot)
**Wiki refs:** `docs/wiki/synthesis/directus-schema-draft.md` (Org-level catalog section), `docs/wiki/sources/rally-albania-regulations-2025.md`
## Goal
Create the durable, org-level collections in the Directus admin UI: `organizations`, `users` (using Directus's built-in users with custom fields), `organization_users`, `vehicles`, `organization_vehicles`, `devices`, `organization_devices`. These are the resources that exist independently of any single event.
This task happens against a locally running Directus instance (from `pnpm dev`). The output is a snapshot YAML that captures the collection definitions; that snapshot lands in git in task 1.6.
## Deliverables
Create the following collections via the admin UI (Settings → Data Model). Field shapes per [[directus-schema-draft]]. Required-field columns marked `*`.
### `organizations`
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | primary key, auto-generated |
| `name` * | string | display name |
| `slug` * | string | URL-friendly identifier, unique |
| `created_at` | timestamp | Directus standard |
| `updated_at` | timestamp | Directus standard |
Singleton: false. Sort: `name asc`.
### `users` (extending Directus built-in `directus_users`)
Use the built-in user collection. Add custom fields (Settings → Data Model → directus_users):
| Field | Type | Notes |
|---|---|---|
| `phone` | string | optional |
| `birth_date` | date | optional, used for age-derived class eligibility (M-5/M-6/M-7) |
| `nationality` | string | ISO 3166-1 alpha-2 country code |
Do NOT add an `organization_id` here — multi-tenancy goes through `organization_users`.
### `organization_users` (junction)
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `user_id` * | M2O → directus_users | |
| `role` * | string (dropdown) | enum: `org-admin`, `race-director`, `marshal`, `timekeeper`, `participant`, `viewer` |
| `joined_at` | timestamp | default `now()` |
Unique constraint: `(organization_id, user_id)` — a user can only have one row per org. Multiple roles per user in the same org → not supported yet (single role per tenant; revisit if needed).
### `vehicles`
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `make` * | string | "Toyota" |
| `model` * | string | "Land Cruiser 70" |
| `year` | integer | |
| `engine_cc` | integer | engine displacement, used for class assignment |
| `vin` | string | optional |
| `plate_number` | string | optional |
| `notes` | text | |
No `owner_user_id` / `owner_team_id` — vehicles are org-scoped only, ownership is not modeled (per [[directus-schema-draft]] decision).
### `organization_vehicles` (junction)
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `vehicle_id` * | M2O → vehicles | |
| `registered_at` | timestamp | default `now()` |
Unique constraint: `(organization_id, vehicle_id)`.
### `devices`
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `imei` * | string | unique, the canonical device identifier |
| `model` * | string | "FMB920", "FMB003", etc. — drives IO mapping in [[processor]] |
| `serial_number` | string | optional |
| `notes` | text | |
`imei` UNIQUE — same IMEI can't be registered twice anywhere in the system.
### `organization_devices` (junction)
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | |
| `device_id` * | M2O → devices | |
| `registered_at` | timestamp | default `now()` |
Unique constraint: `(organization_id, device_id)`.
## Specification
- **Use UUIDs for all primary keys** (Directus offers UUID v4 generation natively). Avoids leaking row counts and simplifies cross-env data sync.
- **All M2O relations have `ON DELETE` set to `RESTRICT`** by default — accidentally deleting an org or vehicle should require the operator to clean up dependents first. Override per-relation only with explicit reason.
- **No permission policies** — Phase 4 territory. Set every collection to "All Access" → none (admin only) for now.
- **No interface customization beyond defaults** — the SPA isn't using these collections directly yet, and admin UI usability for operators happens after Phase 4 (when policies define what they see).
- **Do not commit `.env` or any secrets.** This task only modifies Directus schema, which is captured in the snapshot.
## Acceptance criteria
- [ ] All seven collections exist in the admin UI with the fields listed above.
- [ ] Required fields are flagged required.
- [ ] All unique constraints are enforced (test by trying to create a duplicate row — should error).
- [ ] M2O relations are visible and clickable in the admin UI's relational fields.
- [ ] No permission policies attached (admin-only).
- [ ] Manually create one organization, one user, one organization_user row → the relationships work end-to-end.
- [ ] `pnpm run schema:snapshot` produces a `snapshots/schema.yaml` with all seven collections present (verified by grep).
- [ ] Booting a brand-new Directus instance (fresh DB, fresh containers) and running `directus schema apply --yes snapshots/schema.yaml` recreates the seven collections identically.
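A possible spot-check for the snapshot criterion above (the `collection: <name>` pattern assumes Directus 11's YAML layout; verify against the actual output):

```bash
# Six dedicated collections; the seventh ("users") lives as custom fields on directus_users.
for c in organizations organization_users vehicles organization_vehicles devices organization_devices; do
  grep -q "collection: $c" snapshots/schema.yaml || { echo "missing collection: $c" >&2; exit 1; }
done
# Custom directus_users fields should round-trip too (see the risk below).
grep -q "birth_date" snapshots/schema.yaml || echo "warning: directus_users custom fields absent" >&2
echo "snapshot spot-check passed"
```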
## Risks / open questions
- **`directus_users` field additions** — Directus does allow adding fields to its built-in user collection, but the snapshot/apply behavior for those additions has historically been finicky across versions. Verify on the pinned Directus version that custom user fields round-trip cleanly via `schema snapshot` + `schema apply`. If they don't, fall back to a separate `user_profiles` collection M2O'd to `directus_users`.
- **Slug uniqueness on `organizations`** — Directus enforces this at the field level. Confirm it generates a unique-index DDL in the snapshot.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.5 — Event-participation collections
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.4
**Wiki refs:** `docs/wiki/synthesis/directus-schema-draft.md` (Event-level participation section), `docs/wiki/sources/rally-albania-regulations-2025.md` (§2.2–§2.5 for class taxonomy reference)
## Goal
Create the per-event participation collections in the Directus admin UI: `events`, `classes`, `entries`, `entry_crew`, `entry_devices`. These are scoped to a single event and form the unit of timing.
## Deliverables
Create the following collections via the admin UI. Field shapes per [[directus-schema-draft]].
### `events`
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `organization_id` * | M2O → organizations | event lives in exactly one org |
| `name` * | string | "Rally Albania 2026" |
| `slug` * | string | unique within an org |
| `discipline` * | string (dropdown) | enum: `rally`, `time-trial`, `regatta`, `trail-run`, `hike` — drives validation |
| `starts_at` * | timestamp | event window begin |
| `ends_at` * | timestamp | event window end |
| `regulation_doc_url` | string | external URL to the rulebook PDF/page (e.g. `wiki/sources/rally-albania-regulations-2025.md`) |
| `notes` | text | |
Unique constraint: `(organization_id, slug)`.
### `classes`
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `event_id` * | M2O → events | classes are per-event |
| `code` * | string | "M-1", "C-2", "S-1", … |
| `name` * | string | human-readable |
| `description` | text | eligibility rules in plain text |
| `sort_order` | integer | for display ordering |
Unique constraint: `(event_id, code)`.
### `entries`
The unit of timing. One row per (vehicle or solo participant) registered for an event.
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `event_id` * | M2O → events | |
| `vehicle_id` | M2O → vehicles | nullable — null for foot races (trail-run, hike) |
| `team_id` | M2O → teams | nullable — no `teams` collection exists in Phase 1; leave the field nullable and unwired (per the schema draft, teams are an org-level catalog item, Phase 2 territory if needed) |
| `class_id` * | M2O → classes | required: every entry has a class |
| `race_number` * | integer | per Rally Albania §5: 1–199 moto, 2xx quad, 3xx car, 4xx SSV |
| `status` * | string (dropdown) | enum: `registered`, `confirmed`, `started`, `finished`, `dnf`, `dns`, `dq`, `withdrawn` |
| `registered_at` | timestamp | default `now()` |
| `notes` | text | |
Unique constraint: `(event_id, race_number)` — no two entries share a race number in the same event.
> **Status enum semantics** (from the schema draft):
> - `registered` — paid, not yet confirmed at scrutineering
> - `confirmed` — passed scrutineering, eligible to start
> - `started` — has begun the first stage
> - `finished` — completed all stages within MTA
> - `dnf` — did not finish (started but couldn't complete)
> - `dns` — did not start (confirmed but absent at start)
> - `dq` — disqualified (rule violation, see Rally Albania §12.13)
> - `withdrawn` — voluntary withdraw (Rally Albania §12.15 — MTA penalty for remaining stages)
> **`teams` deferred:** Phase 1 doesn't define a `teams` collection. The `team_id` field on `entries` is nullable and the FK target is intentionally unwired in Phase 1. Drop the field entirely if it complicates the snapshot — re-add in Phase 2 if a real team relationship is needed.
### `entry_crew` (junction)
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `entry_id` * | M2O → entries | |
| `user_id` * | M2O → directus_users | |
| `role` * | string (dropdown) | enum: `pilot`, `co-pilot`, `navigator`, `mechanic`, `rider`, `runner`, `hiker` |
Unique constraint: `(entry_id, user_id)` — a user can't appear twice in the same entry's crew.
### `entry_devices` (junction)
| Field | Type | Notes |
|---|---|---|
| `id` * | UUID | |
| `entry_id` * | M2O → entries | |
| `device_id` * | M2O → devices | |
| `assigned_user_id` | M2O → directus_users | nullable. null = vehicle-mounted; set = body-worn on this crew member |
| `mount_position` | string | optional free text: "panic_button_pilot", "hardwired_dash", "backup_chassis" |
Unique constraint: `(entry_id, device_id)` — a device can't appear twice in the same entry.
## Specification
- **All M2O `ON DELETE`:** `RESTRICT` by default. Cascading from event → entries is appealing but risky for audit/historical purposes — leave `RESTRICT` and require explicit operator action.
- **`status` enum order matters for display.** Set the dropdown's option order to match the lifecycle: `registered``confirmed``started``finished``dnf``dns``dq``withdrawn`.
- **`race_number` is integer**, not string. Plate background color (white/yellow/green/red per Rally Albania §5.5) is derivable from the number range; not a stored field.
- **No permission policies yet** — Phase 4 territory. Admin-only access.
- **No `team_id` field if it adds complexity** — the schema draft leaves teams as an org-level catalog item that's not yet defined. Phase 1 ships entries without team support.
## Acceptance criteria
- [ ] All five collections exist in the admin UI with the fields listed above.
- [ ] Required fields flagged required.
- [ ] Unique constraints enforced.
- [ ] M2O relations work in the admin UI.
- [ ] `entries.status` dropdown shows all eight values in lifecycle order.
- [ ] Manually walk through the registration: create an event → create classes → create one entry referencing a vehicle, class, and race number → add two `entry_crew` rows (pilot + co-pilot) → add three `entry_devices` rows (one with `assigned_user_id` set, two with null). All FKs resolve.
- [ ] Try to create a second entry with the same `race_number` in the same event → error.
- [ ] `pnpm run schema:snapshot` produces a snapshot containing the new collections.
- [ ] Cross-checked against the schema draft: every field that should exist does, every nullable field is nullable, every unique constraint is in place.
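To confirm the unique constraints landed as real DDL and not just Directus metadata, a psql spot-check (a sketch; table names assume Directus's default collection-to-table mapping, and composite uniques may need a manual migration if the UI can't express them):

```bash
psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_DATABASE" -c "
  SELECT tablename, indexname
  FROM pg_indexes
  WHERE tablename IN ('events', 'classes', 'entries', 'entry_crew', 'entry_devices')
    AND indexdef LIKE 'CREATE UNIQUE INDEX%'
  ORDER BY tablename;"
```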
## Risks / open questions
- **`assigned_user_id` on entry_devices** — Directus represents this as an M2O. Verify the snapshot encodes the nullable / non-required nature correctly.
- **Cascading deletes vs RESTRICT** — RESTRICT is the safe default but may make admin UX painful (you can't delete an event without first deleting all its entries, etc.). Phase 4 / Phase 5 may revisit with custom Flows that walk the dependency graph.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.6 — Schema snapshot/apply tooling
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.4, 1.5 (collections must exist before there's anything to snapshot)
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)
## Goal
Wrap Directus's native `schema snapshot` and `schema apply` commands in repo-local scripts and npm aliases so the snapshot/apply lifecycle is one command, ergonomic for daily dev, and reliable in the entrypoint and CI. Commit the first generated `snapshots/schema.yaml` containing the 12 Phase 1 collections.
## Deliverables
- `scripts/schema-snapshot.sh`:
- Runs against a *running* Directus container (the local `directus` service from compose.dev.yaml).
- Invokes `directus schema snapshot --yes /tmp/snapshot.yaml` inside the container.
- Copies the generated snapshot out to `./snapshots/schema.yaml`.
- Exits non-zero if Directus isn't reachable or the snapshot command fails.
- One-line success log: `snapshot written to snapshots/schema.yaml (<size> bytes)`.
- `scripts/schema-apply.sh`:
- Used at boot (entrypoint) and in CI dry-run.
- Invokes `directus schema apply --yes /directus/snapshots/schema.yaml`.
- Logs the diff before applying (`directus schema apply --dry-run` then real apply).
- Exits non-zero on failure.
- `package.json` scripts (already stubbed in 1.1):
- `schema:snapshot` → runs the snapshot script (dev-time only).
- `schema:apply` → runs the apply script (used by entrypoint, also useful for local "apply this committed snapshot to my running dev DB").
- `schema:diff` → wraps `directus schema apply --dry-run` to preview pending changes without applying.
- `snapshots/schema.yaml` — first committed snapshot, containing the 12 Phase 1 collections from tasks 1.4 + 1.5.
- `snapshots/README.md` — short note explaining: this directory is **generated**, edit Directus via the admin UI and re-snapshot, do not hand-edit YAML.
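Possible shapes for the two scripts, assuming the compose stack from 1.1 for the snapshot side (Compose v2's `exec`/`cp` subcommands) and an in-image `directus` binary for the apply side:

```bash
#!/usr/bin/env bash
# scripts/schema-snapshot.sh (sketch): snapshot inside the running container,
# where the CLI sees the same env as the server, then copy the file out.
set -euo pipefail
docker compose -f compose.dev.yaml exec -T directus \
  directus schema snapshot --yes /tmp/snapshot.yaml
docker compose -f compose.dev.yaml cp directus:/tmp/snapshot.yaml snapshots/schema.yaml
echo "snapshot written to snapshots/schema.yaml ($(wc -c < snapshots/schema.yaml) bytes)"
```

And the apply side, environment-agnostic per the Specification below:

```bash
#!/usr/bin/env bash
# scripts/schema-apply.sh (sketch): log the diff, then apply for real.
set -euo pipefail
snapshot="${SNAPSHOT_PATH:-/directus/snapshots/schema.yaml}"
echo "[schema-apply] pending changes:"
directus schema apply --dry-run "$snapshot"
directus schema apply --yes "$snapshot"
```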
## Specification
- **Snapshot script runs against a running container, not via Node.** The `directus` CLI requires the same env (DB connection, KEY, SECRET) the server uses; easiest is to `docker compose exec directus directus schema snapshot ...`. Document this assumption — the script fails clearly if no compose stack is running.
- **Apply script is environment-agnostic.** It runs inside the image at boot (where Directus is in PATH) and in CI (where it runs against a throwaway Postgres). Don't assume compose; the script just calls `directus schema apply` with paths injected via env or arguments.
- **Snapshot format.** Directus 11 snapshots are YAML by default. Pin the format explicitly via the `--format=yaml` flag if available — otherwise rely on the default. Verify the chosen Directus 11 patch version's snapshot format is stable across patch bumps.
- **Diff before apply, always.** The apply script logs `directus schema apply --dry-run` output before the real apply. This makes container boot logs self-explanatory: "applying these changes". On a clean re-deploy, the diff is empty.
- **Snapshot regeneration is a manual, conscious action.** Don't auto-regenerate on file save. The dev edits the schema in admin UI, decides the change is good, then runs `pnpm run schema:snapshot` to capture it.
## Acceptance criteria
- [ ] With Phase 1's 12 collections in the running dev Directus, `pnpm run schema:snapshot` produces a `snapshots/schema.yaml` file.
- [ ] `snapshots/schema.yaml` contains all 12 collections (verified by grep for `collection: organizations`, `collection: events`, etc.).
- [ ] The snapshot is < 200 KB (sanity check — much larger means something is wrong like committed data).
- [ ] `pnpm run schema:diff` against the same running Directus shows "no changes".
- [ ] Wipe Directus DB (`pnpm dev:reset`) → boot fresh → `pnpm run schema:apply` recreates the 12 collections from the committed snapshot.
- [ ] Snapshot a second time after no admin UI changes → result is byte-identical to the first.
- [ ] Make a trivial admin UI change (add a description to a field) → snapshot → diff against committed → exactly that change shows up.
- [ ] `snapshots/schema.yaml` is committed; `snapshots/README.md` warns against hand-editing.
## Risks / open questions
- **Snapshot determinism across runs.** Some Directus versions have re-ordered keys in their snapshot output between identical runs, producing noisy diffs. If this happens on the pinned version, document it as a known issue and consider a post-snapshot `yq sort-keys` normalization step.
- **Permission policies in the snapshot.** Phase 1 has no policies set; verify the snapshot is empty in those sections. When Phase 4 adds policies, re-evaluate whether snapshot/apply round-trips them faithfully.
- **`directus_users` custom-field round-trip.** Already flagged in task 1.4. If those fields don't round-trip, the workaround (separate `user_profiles` collection) needs to be applied before this snapshot lands.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.7 — Image build & entrypoint
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.2, 1.3, 1.6 (need the runner, migrations, and snapshot tooling all in place)
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)
## Goal
Build a production-ready Directus image that bakes in the snapshot, db-init migrations, extensions directory, and entrypoint script. Replace the placeholder entrypoint from 1.1 with the real boot sequence: db-init → schema apply → directus start.
## Deliverables
- `Dockerfile` (replacing the placeholder from 1.1):
```dockerfile
# Pin a specific patch version (see Specification).
FROM directus/directus:11.5.1
USER root
RUN apk add --no-cache postgresql16-client bash coreutils
USER node
# COPY order follows the cache-friendliness layering below: least → most volatile.
COPY --chown=node:node scripts/ /directus/scripts/
COPY --chown=node:node entrypoint.sh /directus/entrypoint.sh
COPY --chown=node:node db-init/ /directus/db-init/
COPY --chown=node:node snapshots/ /directus/snapshots/
COPY --chown=node:node extensions/ /directus/extensions/
RUN chmod +x /directus/entrypoint.sh /directus/scripts/*.sh
ENTRYPOINT ["/directus/entrypoint.sh"]
```
Adjust `apk` / `apt-get` based on the upstream image's distro. `postgresql-client` is required for `psql` in the db-init runner.
- `entrypoint.sh`:
```sh
#!/usr/bin/env bash
set -euo pipefail
echo "[entrypoint] running db-init"
/directus/scripts/apply-db-init.sh
echo "[entrypoint] applying Directus schema snapshot"
/directus/scripts/schema-apply.sh
echo "[entrypoint] starting Directus"
exec /directus/cli.js start
```
(Verify `/directus/cli.js start` is the correct upstream command for the pinned version. Some versions use `node /directus/server.js`.)
- Update `compose.dev.yaml` so the dev image uses the same Dockerfile (no special path in dev). The local image has identical boot semantics to prod — only env vars differ.
## Specification
- **Pin the Directus version exactly** (e.g. `11.5.1`, not `11`). Version bumps land via PR.
- **Layer ordering for cache friendliness.**
1. `FROM` + apk install (rarely changes).
2. `COPY scripts/` (changes occasionally).
3. `COPY entrypoint.sh` (rarely changes).
4. `COPY db-init/` (changes per migration PR).
5. `COPY snapshots/` (changes per schema PR — most volatile).
6. `COPY extensions/` (Phase 5+).
Putting the most-changed layer last maximizes cache reuse for the rest.
- **`USER node`** for runtime (matches upstream image's non-root convention).
- **Health check.** Add a `HEALTHCHECK` instruction calling `wget -qO- http://localhost:8055/server/ping` (or the upstream's health endpoint), with sensible interval/timeout. Useful in compose and Portainer.
- **Entrypoint failure modes.** If db-init fails → exit, container restarts (Docker will retry). If schema apply fails → same. Both failures should produce clear log lines so an operator looking at Portainer logs can diagnose.
- **No `EXPOSE` change** — the upstream image already exposes `8055`.
- **No `ENV` overrides** for Directus runtime config in the Dockerfile — that's the deployer's concern via env vars at runtime.
## Acceptance criteria
- [ ] `docker build -t trm-directus:dev .` succeeds.
- [ ] Image size is reasonable (< 600 MB; upstream image + tooling).
- [ ] Booting against a fresh Postgres: db-init applies all three migrations, schema apply creates 12 collections, Directus starts and serves on `:8055`.
- [ ] Re-booting against the same Postgres (warm DB): db-init reports "0 applied, 3 skipped", schema apply reports "no changes", Directus starts.
- [ ] Killing Postgres mid-db-init → container exits non-zero with clear error in logs.
- [ ] Killing Postgres mid-schema-apply → container exits non-zero with clear error in logs.
- [ ] HEALTHCHECK reports "healthy" once Directus is serving.
- [ ] `compose.dev.yaml` `directus` service uses the local Dockerfile build and works end-to-end (`pnpm dev:reset` → fresh boot → admin UI loads).
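A hedged end-to-end check for the local criteria above, assuming `/server/ping` as the health endpoint and curl on the host:

```bash
#!/usr/bin/env bash
# Fresh boot, then poll the health endpoint until Directus serves (or give up).
set -euo pipefail
docker compose -f compose.dev.yaml up --build -d
for i in $(seq 1 60); do
  if curl -fsS http://localhost:8055/server/ping >/dev/null 2>&1; then
    echo "directus healthy after ~$((i * 2))s"; exit 0
  fi
  sleep 2
done
echo "directus never became healthy; entrypoint logs follow:" >&2
docker compose -f compose.dev.yaml logs directus >&2
exit 1
```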
## Risks / open questions
- **Upstream image distro.** Directus's official image has used both Alpine and Debian-based bases over the years. Verify the current 11.x base and adjust `apk` vs `apt-get` accordingly.
- **`/directus/cli.js start` path.** Confirm against the upstream Dockerfile / docs for the pinned version. Bake the right command into entrypoint.sh.
- **Permissions on `/directus/snapshots/` etc.** If the upstream user is `node` (uid 1000), the `--chown=node:node` flag is right. Verify with `docker run --rm trm-directus:dev id`.
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.8 — Gitea CI dry-run workflow
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.7
**Wiki refs:** `docs/wiki/entities/directus.md` (Schema management section)
## Goal
Build a Gitea Actions workflow that, on push to `main` (when relevant paths change), builds the image, spins up a throwaway Postgres + TimescaleDB in CI, runs the entrypoint flow as a **dry-run** to catch snapshot/migration breakage, and publishes the image to the registry only if the dry-run succeeds. Mirrors the `processor` and `tcp-ingestion` workflow shape.
## Deliverables
- `.gitea/workflows/build.yml`:
```yaml
name: Build directus image
on:
push:
branches: [main]
paths:
- 'snapshots/**'
- 'db-init/**'
- 'extensions/**'
- 'scripts/**'
- 'entrypoint.sh'
- 'Dockerfile'
- '.gitea/workflows/build.yml'
workflow_dispatch:
jobs:
build-and-publish:
runs-on: ubuntu-22.04
services:
postgres:
image: timescale/timescaledb-ha:pg16-latest
env:
POSTGRES_USER: directus
POSTGRES_PASSWORD: directus
POSTGRES_DB: directus
ports: ['5432:5432']
options: >-
--health-cmd "pg_isready -U directus"
--health-interval 5s
--health-timeout 5s
--health-retries 10
steps:
- uses: actions/checkout@v4
- name: Build image
run: docker build -t trm-directus:ci .
- name: Dry-run boot against throwaway Postgres
env:
DB_HOST: localhost # service port is published on the runner host; the dry-run container uses --network host
DB_PORT: 5432
DB_USER: directus
DB_PASSWORD: directus
DB_DATABASE: directus
KEY: ci-key-not-secret
SECRET: ci-secret-not-secret
ADMIN_EMAIL: ci@example.com
ADMIN_PASSWORD: ci-password-not-secret
PUBLIC_URL: http://localhost:8055
run: |
docker run --rm \
-e DB_CLIENT=pg \
-e DB_HOST=$DB_HOST -e DB_PORT=$DB_PORT \
-e DB_USER=$DB_USER -e DB_PASSWORD=$DB_PASSWORD -e DB_DATABASE=$DB_DATABASE \
-e KEY=$KEY -e SECRET=$SECRET \
-e ADMIN_EMAIL=$ADMIN_EMAIL -e ADMIN_PASSWORD=$ADMIN_PASSWORD \
-e PUBLIC_URL=$PUBLIC_URL \
--network host \
--entrypoint bash \
trm-directus:ci \
-c '/directus/scripts/apply-db-init.sh && /directus/scripts/schema-apply.sh && echo "dry-run ok"'
- name: Login to Gitea registry
uses: docker/login-action@v3
with:
registry: git.dev.microservices.al
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Tag and push
run: |
docker tag trm-directus:ci git.dev.microservices.al/trm/directus:main
docker tag trm-directus:ci git.dev.microservices.al/trm/directus:${{ github.sha }}
docker push git.dev.microservices.al/trm/directus:main
docker push git.dev.microservices.al/trm/directus:${{ github.sha }}
- name: Trigger Portainer redeploy (optional)
if: secrets.PORTAINER_WEBHOOK_URL != ''
run: curl -X POST "${{ secrets.PORTAINER_WEBHOOK_URL }}"
```
## Specification
- **Dry-run runs the entrypoint scripts only**, not `directus start`. Starting the server and waiting for it to serve is slow and unnecessary — the goal is to catch DDL / snapshot apply errors. Override the `ENTRYPOINT` and run the two scripts directly.
- **Service container is the throwaway Postgres.** `services:` block in Gitea Actions (compatible syntax with GitHub Actions). Use the pinned TimescaleDB image; mismatch with prod hides bugs.
- **Path filter on `on.push.paths`** keeps CI quiet for unrelated repo changes (docs-only commits, etc.). Mirrors the processor workflow.
- **Two image tags published:** `:main` (always points at latest main) and `:<sha>` (specific commit, immutable). The deploy stack can pin to either.
- **Portainer webhook is optional** (gated by secret presence). If unset, no auto-deploy.
- **No integration tests in CI for Phase 1.** The dry-run boot *is* the integration test — it proves the snapshot+db-init combination works against a fresh Postgres. Phase 5+ adds extension-specific tests as those land.
- **Required Gitea secrets:**
- `REGISTRY_USERNAME`, `REGISTRY_PASSWORD` — for the image push.
- `PORTAINER_WEBHOOK_URL` — optional, for auto-deploy.
## Acceptance criteria
- [ ] Workflow file is committed at `.gitea/workflows/build.yml`.
- [ ] First push to `main` after this lands triggers the workflow.
- [ ] Workflow steps in order: checkout → build → dry-run boot → registry login → tag/push → optional Portainer ping.
- [ ] Dry-run step exits 0 with logs showing "db-init complete: 3 applied, 0 skipped" and a successful schema apply. (CI's Postgres is always fresh, so every run applies the snapshot from scratch; the "no changes" diff only appears on warm re-boots outside CI.)
- [ ] Intentionally break the snapshot (manually edit `snapshots/schema.yaml` to a malformed YAML) → workflow fails at the dry-run step → image is NOT pushed.
- [ ] Intentionally break a migration (introduce SQL syntax error in `db-init/`) → workflow fails at the dry-run step → image is NOT pushed.
- [ ] Push a docs-only change → workflow does NOT trigger.
- [ ] Image pushed to registry under `git.dev.microservices.al/trm/directus:main` and `:<sha>`.
- [ ] Portainer webhook fires if configured.
## Risks / open questions
- **Gitea Actions `services:` syntax compatibility.** Gitea's runner is mostly GitHub-Actions-compatible but has historically had quirks with the `services:` block (especially around image pulls from private registries). If the throwaway Postgres can't be brought up via `services:`, fall back to a `docker run` step that backgrounds the container and a wait-loop on `pg_isready`. Document the chosen approach.
- **Network access between job container and service container.** `--network host` is the simplest solution if Gitea's runner allows it. If not, use the Docker network created by the runner and reference the service by name (`postgres:5432`).
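If `services:` proves unreliable, the fallback sketched in the first risk could look like this (a step `run:` body; the image tag matches the pinned one above):

```bash
# Start the throwaway Postgres explicitly instead of via `services:`.
docker run -d --name ci-postgres --network host \
  -e POSTGRES_USER=directus -e POSTGRES_PASSWORD=directus -e POSTGRES_DB=directus \
  timescale/timescaledb-ha:pg16-latest

# Wait for readiness before the dry-run step runs.
ready=0
for i in $(seq 1 30); do
  if docker exec ci-postgres pg_isready -U directus >/dev/null 2>&1; then ready=1; break; fi
  sleep 2
done
[ "$ready" -eq 1 ] || { echo "postgres never became ready" >&2; docker logs ci-postgres >&2; exit 1; }
```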
## Done
(Fill in commit SHA + one-line note when this lands.)
# Task 1.9 — Rally Albania 2026 dogfood seed
**Phase:** 1 — Slice 1 schema + deploy pipeline
**Status:** ⬜ Not started
**Depends on:** 1.5, 1.7 (need event-participation collections live; need a deployable image to run them on stage)
**Wiki refs:** `docs/wiki/sources/rally-albania-regulations-2025.md` (§2.2–§2.5 class catalog, §1 event metadata), memory `project_rally_albania_2026.md`
## Goal
Seed the stage instance with real data: the "Motorsport Club Albania" organization, the "Rally Albania 2026" event, the full class catalog from the regulations, and at least one fully-registered test entry. Walk the registration workflow end-to-end through the admin UI to confirm the slice-1 schema actually supports a real event registration without surprises. **This is the dogfood gate.**
## Deliverables
Done via the admin UI on the stage Directus instance (no code changes — this task is operational, not a build). Capture screenshots / brief notes in this task's "Done" section.
### 1. Organization
| Field | Value |
|---|---|
| `name` | Motorsport Club Albania |
| `slug` | msc-albania |
### 2. Event
| Field | Value |
|---|---|
| `organization_id` | (the org from step 1) |
| `name` | Rally Albania 2026 |
| `slug` | rally-albania-2026 |
| `discipline` | rally |
| `starts_at` | 2026-06-06T00:00:00Z |
| `ends_at` | 2026-06-13T23:59:59Z |
| `regulation_doc_url` | https://www.rallyalbania.org or the wiki source page URL |
### 3. Class catalog (per Rally Albania §2.2–§2.5)
Create one row per class. `event_id` = the event from step 2.
| code | name | sort_order |
|---|---|---|
| M-1 | MOTO Under 450cc | 10 |
| M-2 | MOTO 450–600cc | 20 |
| M-3 | MOTO over 600cc, single cylinder | 30 |
| M-4 | MOTO over 600cc, bi-cylinder | 40 |
| M-5 | MOTO Senior, under 450cc | 50 |
| M-6 | MOTO Senior, over 450cc | 60 |
| M-7 | MOTO Veteran (any bike) | 70 |
| M-8 | MOTO Female driver | 80 |
| Q-1 | QUAD 2WD | 90 |
| Q-2 | QUAD 4WD | 100 |
| Q-3 | QUAD Female pilot | 110 |
| C-1 | CAR Modified | 120 |
| C-2 | CAR Production | 130 |
| C-A | CAR Standard Automobiles | 140 |
| C-3 | CAR All-female team | 150 |
| S-1 | SSV Single pilot | 160 |
| S-2 | SSV Two-driver team | 170 |
| S-3 | SSV All-female team | 180 |
> **Numbering note:** The regulations doc uses `M-7` for both Veteran and Female driver — apparent typo. This seed renames the Female driver class to **M-8** to disambiguate. Flag this in the post-event review with the organizer; if they confirm M-8 is wrong, rename later.
### 4. Test entry — full registration walkthrough
Pick (or create) a test user in `directus_users`, a test vehicle in `vehicles`, and two test devices in `devices`. Register them all in the event:
1. Add the test user to `organization_users` with role `participant`.
2. Add the test vehicle to `organization_vehicles`.
3. Add the test devices to `organization_devices`.
4. Create an `entries` row: `event_id` = Rally Albania 2026, `vehicle_id` = test vehicle, `class_id` = M-1 (or whatever fits the test vehicle), `race_number` = 1, `status` = `registered`.
5. Create one `entry_crew` row: `entry_id` = the entry, `user_id` = test user, `role` = `pilot`.
6. Create two `entry_devices` rows: one with `assigned_user_id` = test user (panic button), one with `assigned_user_id` = null (vehicle-mounted). `mount_position` field filled in for both.
7. Verify the live map (Phase 1 of [[processor]]) still renders the test devices' positions correctly under the new entry-aware schema. (If the SPA isn't yet wired to look up entries, that's fine — verify in DB / processor logs that the device IDs match what the entry registered.)
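For step 7, a psql spot-check is enough when the SPA isn't entry-aware yet (a sketch; the host var and IMEIs are placeholders, and it assumes the processor writes the device IMEI as `positions.device_id`):

```bash
psql -h "$STAGE_DB_HOST" -U directus -d directus -c "
  SELECT device_id, max(ts) AS last_seen, count(*) AS fixes_last_hour
  FROM positions
  WHERE device_id IN ('<test-device-imei-1>', '<test-device-imei-2>')  -- the two entry_devices rows
    AND ts > now() - interval '1 hour'
  GROUP BY device_id;"
```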
### 5. Post-walkthrough checklist
In this task's "Done" section, capture:
- [ ] Any field that was awkward to enter via admin UI (interface improvements for Phase 5 hooks).
- [ ] Any constraint that fired unexpectedly (data model bugs to fix in a follow-up).
- [ ] Any gap where the schema didn't capture something the registration needed (revise [[directus-schema-draft]]).
- [ ] How long the full registration took. Realistic baseline for "register N entries" planning.
## Specification
- **Stage env, not local.** This task verifies the deploy pipeline end-to-end: image was built by Phase 1.8 CI, pulled by Portainer, booted with snapshot+db-init applied, then operator interacts with the live admin UI.
- **Real-ish data.** Use plausible names / IMEIs / VINs — not "test1", "foo", "bar". The data will be reviewed by the organizer eventually; quality matters.
- **One full crew, not many.** A single pilot entry is enough to dogfood. Save the multi-crew rally car case for a Phase 2 dogfood.
- **No SPA work in this task.** The registration is admin-UI only. SPA-side work (operator-friendly registration UX) is a separate workstream not blocked on Phase 1.
## Acceptance criteria
- [ ] All 18 class rows visible in admin UI under the Rally Albania 2026 event.
- [ ] One complete entry exists with vehicle + class + crew + devices.
- [ ] Live map shows the test devices' positions tagged with their device IDs (existing Phase 1 [[processor]] behavior).
- [ ] Post-walkthrough checklist filled in.
- [ ] Any schema bugs surfaced are tracked as new tasks (or revisions to existing task files).
- [ ] Decision: does the slice-1 schema support Rally Albania 2026 as a test event, or does it need revisions before June? Captured as a one-line verdict in this task's Done section.
## Risks / open questions
- **Phase 4 (permissions) hasn't landed yet.** Operators using admin UI for registration are doing so as Directus admins, which is fine for dogfood but obviously not for production use. Phase 4 is the gate for non-admin users.
- **The "live map" verification step** depends on Phase 1 [[processor]] being deployed and pointed at the same database. Confirm before starting.
## Done
(Fill in commit SHA / dogfood date + one-line verdict when this lands.)
# Phase 1 — Slice 1 schema + deploy pipeline
Stand up a Directus 11 instance with the minimum schema needed to register entries and tie them to devices, plus the schema-as-code pipeline (snapshots + db-init) and Gitea Actions CI. **This is what Rally Albania 2026 needs to run as a test event.**
## Outcome statement
When Phase 1 is done:
- Directus runs locally via `docker compose -f compose.dev.yaml up`, against a Postgres 16 + TimescaleDB + PostGIS container.
- `db-init/` contains three migrations applied at boot: TimescaleDB extension, `positions` hypertable creation, `faulty` boolean column on `positions`. All idempotent, all guarded by a `migrations_applied` table.
- `snapshots/schema.yaml` contains 12 collections: `organizations`, `users`, `organization_users`, `vehicles`, `organization_vehicles`, `devices`, `organization_devices`, `events`, `classes`, `entries`, `entry_crew`, `entry_devices`. Relations and required fields per [[directus-schema-draft]] (the org-level catalog and event-participation sections).
- The image entrypoint runs db-init, then `directus schema apply --yes`, then `directus start`. All three exit 0 against a fresh Postgres.
- Gitea Actions builds the image on push to `main` (when `snapshots/`, `db-init/`, `extensions/`, `Dockerfile`, or workflow file changes), runs the apply pipeline against a throwaway Postgres in CI, and pushes the image to `git.dev.microservices.al/trm/directus:main` only if the dry-run passes.
- "Motorsport Club Albania" exists as an organization, "Rally Albania 2026" exists as an event under it, and the Rally Albania class catalog is seeded (M-1..M-7, Q-1..Q-3, C-1/C-2/C-A/C-3, S-1/S-2/S-3 from `wiki/sources/rally-albania-regulations-2025.md` §2.2–§2.5). At least one test entry registered with vehicle + crew + devices, used to dogfood the registration workflow.
Phase 1 deliberately stops short of:
- Course definition (stages, segments, geofences, SLZs) — Phase 2.
- Penalty system tables and timing tables — Phase 3.
- Permission policies — Phase 4 (collections are admin-only by default).
- Custom extension code — Phase 5.
## Sequencing
```
1.1 Project scaffold
└─→ 1.2 db-init runner script
└─→ 1.3 Initial migrations
├─→ 1.4 Org-level catalog collections (admin UI work)
│ └─→ 1.5 Event-participation collections (admin UI work)
│ └─→ 1.6 Schema snapshot/apply tooling
│ └─→ 1.7 Image build & entrypoint
│ └─→ 1.8 Gitea CI dry-run
│ └─→ 1.9 Rally Albania 2026 seed
```
Tasks 1.1 → 1.3 are pure infrastructure and can land before any Directus admin UI work begins. Tasks 1.4 + 1.5 happen against a locally running Directus instance. Tasks 1.6 → 1.8 wire the artifacts together. Task 1.9 is dogfood verification.
## Files modified
Phase 1 produces this layout in `directus/`:
```
directus/
├── .gitea/workflows/build.yml
├── snapshots/
│ └── schema.yaml # generated; edits via admin UI + pnpm run schema:snapshot
├── db-init/
│ ├── 001_extensions.sql # CREATE EXTENSION timescaledb (postgis added in Phase 2)
│ ├── 002_positions_hypertable.sql
│ └── 003_faulty_column.sql
├── extensions/ # empty — Phase 5 fills this
├── scripts/
│ ├── apply-db-init.sh # numeric-order, guard-table-protected runner
│ ├── schema-snapshot.sh # wraps `directus schema snapshot --yes`
│ └── schema-apply.sh # wraps `directus schema apply --yes`
├── entrypoint.sh # apply-db-init.sh && directus schema apply && directus start
├── Dockerfile # FROM directus/directus:11.x + bundled artifacts
├── compose.dev.yaml # local dev: directus + timescaledb container
├── package.json # only for the snapshot/apply npm scripts and tooling
├── pnpm-lock.yaml
├── .env.example
├── .dockerignore
├── .gitignore
└── README.md
```
## Tech stack (decided)
- **Directus 11.x** (latest stable on the 11.x line at time of build). Pinned in `Dockerfile` `FROM` line.
- **Postgres 16 + TimescaleDB + PostGIS** as the database (PostGIS extension added in Phase 2; Phase 1 only uses TimescaleDB).
- **pnpm** for any local dev scripts (snapshot wrappers, lint).
- **bash** (POSIX-compatible) for `apply-db-init.sh` and `entrypoint.sh`. No Node dependency at runtime — only Directus needs Node, and that's the upstream image's responsibility.
- **psql** (from `postgresql-client` package) inside the image for db-init application.
- **Gitea Actions** for CI, matching the `processor` and `tcp-ingestion` workflow shape.
If an implementer wants to deviate, they must update the relevant task file first.
## Key design decisions inherited from `processor`
- **Image is bundled, not assembled at runtime.** `snapshots/`, `db-init/`, and `extensions/` are baked into the image, not mounted as volumes. Reproducible across envs.
- **Slim Dockerfile.** Multi-stage if extensions need a build step (Phase 5+); for Phase 1 a single stage is enough.
- **CI workflow** — single-job pattern matching `processor/.gitea/workflows/build.yml`. Use `services:` for the throwaway Postgres in the dry-run step.
- **No `.env` in image.** All env vars come from the deploy stack (Portainer / compose) at runtime.
## Open questions blocking task-level detail
None. The schema draft pinned the org-level catalog and event-participation shape; Phase 1 implements exactly that subset.