52524eb72d
Five collections + 10 relations + 5 composite unique constraints,
captured into snapshots/schema.yaml (now 105 KB, up from 53 KB).
Collections:
- events — 11 fields incl. organization_id M2O, discipline enum
(rally / time-trial / regatta / trail-run / hike),
starts_at/ends_at required.
- classes — 8 fields incl. event_id M2O, code unique within event.
- entries — 11 fields incl. event_id/class_id/vehicle_id M2O
  (vehicle_id nullable for foot races), race_number, status enum
  with 8 lifecycle values, archive on `withdrawn`.
  team_id deliberately omitted (Phase 2+).
- entry_crew — junction with role enum
(pilot/co-pilot/navigator/mechanic/rider/runner/hiker).
- entry_devices — junction with optional assigned_user_id (body-worn
  panic button); ON DELETE SET NULL on that field, since removing a
  user shouldn't block the device record.
10 M2O relations wired, all ON DELETE RESTRICT except
entry_devices.assigned_user_id (SET NULL).
db-init/005_event_participation_unique_constraints.sql adds composite
UNIQUE on:
events (organization_id, slug)
classes (event_id, code)
entries (event_id, race_number)
entry_crew (entry_id, user_id)
entry_devices (entry_id, device_id)
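A plausible shape for that migration file (the constraint names below are invented for illustration; the (table, columns) pairs are exactly the ones listed above):

```sql
-- Illustrative sketch of db-init/005_event_participation_unique_constraints.sql.
-- Constraint names are invented; the table/column pairs are from the list above.
ALTER TABLE events        ADD CONSTRAINT events_org_slug_unique         UNIQUE (organization_id, slug);
ALTER TABLE classes       ADD CONSTRAINT classes_event_code_unique      UNIQUE (event_id, code);
ALTER TABLE entries       ADD CONSTRAINT entries_event_number_unique    UNIQUE (event_id, race_number);
ALTER TABLE entry_crew    ADD CONSTRAINT entry_crew_entry_user_unique   UNIQUE (entry_id, user_id);
ALTER TABLE entry_devices ADD CONSTRAINT entry_devices_entry_dev_unique UNIQUE (entry_id, device_id);
```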
---
Destructive-apply incident (recovered):
First attempt at this task hit a real foot-gun. After creating the 5
collections via MCP, we ran `compose build && up -d`. The image rebuild
baked in the snapshot from task 1.4 (only 7 collections). Boot's
schema-apply step ran `directus schema apply --yes` against that stale
snapshot — saw the 5 new collections in the DB but not in the snapshot
— DELETED THEM, taking the constraints with them.
Recovery: re-created the 5 collections + 10 relations via MCP, ran the
ALTER TABLE statements directly via psql to restore the constraints,
ran schema:snapshot BEFORE any further restart so the YAML reflects
the live state. Documented the operator rule (never rebuild with
uncommitted schema changes) inline in the task spec and in the
directus wiki entity page (separate commit in trm/docs).
Phase 3 hardening on the radar: a DIRECTUS_SCHEMA_APPLY_MODE env var
with auto/dry-run/skip modes, so dev environments default to
non-destructive behavior.
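A minimal sketch of what that guard could look like inside schema-apply.sh. The env var name and its auto/dry-run/skip modes are the Phase 3 proposal above, not shipped code; `directus` is stubbed when absent so the sketch runs standalone:

```shell
#!/bin/sh
# Sketch only: DIRECTUS_SCHEMA_APPLY_MODE and its modes are the Phase 3
# proposal, not shipped code. Stub `directus` when it is not on PATH so
# the sketch can be exercised outside a container.
command -v directus >/dev/null 2>&1 || directus() { echo "directus $*"; }

schema_apply() {
  # Unset means dry-run: dev environments default to non-destructive.
  case "${DIRECTUS_SCHEMA_APPLY_MODE:-dry-run}" in
    skip)    echo "schema-apply: skipped (mode=skip)" ;;
    dry-run) directus schema apply --dry-run ./snapshots/schema.yaml ;;
    auto)    directus schema apply --yes ./snapshots/schema.yaml ;;
    *)       echo "schema-apply: unknown mode" >&2; return 1 ;;
  esac
}

schema_apply
```

The point of the dry-run default is that a rebuild with a stale baked-in snapshot (the incident above) would only print the destructive diff instead of applying it.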
ROADMAP marks 1.5 done. Phase 1 progress: 7/9 tasks complete (1.1–1.7);
1.8, 1.9 remain.
snapshots/
This directory holds the Directus schema snapshot for the TRM directus service.
What lives here
- schema.yaml — the authoritative Directus schema: all collections,
  fields, and relations. Committed to git and applied at every
  container boot.
- .gitkeep — present until the first real snapshot lands (task
  1.4/1.5/1.6). Once schema.yaml is committed, .gitkeep is no longer
  needed and can be removed.
Do NOT hand-edit schema.yaml
schema.yaml is generated programmatically. Its format is tightly coupled to
the version of Directus that produced it. Hand-editing produces subtle breakage
(key-order drift, missing internal fields, format violations) that schema apply
will reject or silently misinterpret.
The only supported workflow for schema changes is:
- Edit the schema in the Directus admin UI (local dev stack).
- Run pnpm run schema:snapshot from the directus/ repo root.
- Review the diff in snapshots/schema.yaml.
- Commit and open a PR.
How schema.yaml is applied
entrypoint.sh calls scripts/schema-apply.sh at every container boot.
The apply script:
- Skips silently if schema.yaml does not exist or is empty (safe for
  first boot, before any collections are defined).
- Runs a dry-run preview (directus schema apply --dry-run) and prints
  the diff to the container logs.
- Applies the snapshot (directus schema apply --yes). This is
  idempotent — Directus computes the diff against the live DB and
  applies only what has changed. A clean re-deploy where the DB already
  matches the snapshot is a no-op.
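Those steps, condensed into a sketch (not the real scripts/schema-apply.sh, whose contents aren't shown here; `directus` is stubbed when absent so the sketch runs standalone):

```shell
#!/bin/sh
# Condensed sketch of the apply behavior described above; the real
# scripts/schema-apply.sh may differ. Stub `directus` when not on PATH.
command -v directus >/dev/null 2>&1 || directus() { echo "directus $*"; }

apply_snapshot() {
  # Missing or empty snapshot: skip silently (safe first boot).
  [ -s "$1" ] || return 0
  # Print the pending diff to the container logs, then apply. The apply
  # is idempotent: Directus diffs against the live DB first.
  directus schema apply --dry-run "$1" &&
  directus schema apply --yes "$1"
}

apply_snapshot ./snapshots/schema.yaml
```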
Snapshot/apply lifecycle
edit in admin UI
│
▼
pnpm run schema:snapshot ←── writes snapshots/schema.yaml
│
▼
git commit + PR
│
▼
CI: directus schema apply --dry-run (fails PR if snapshot is broken)
│
▼
container boot: entrypoint.sh → schema-apply.sh → directus start
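The CI step in the diagram could be as small as the following sketch (the actual CI config is not part of this README; `directus` is stubbed when absent so the sketch runs standalone):

```shell
#!/bin/sh
# Sketch of the CI gate from the diagram: fail the PR when the committed
# snapshot does not survive a dry-run apply. The real CI config is not
# shown in this README; `directus` is stubbed when not on PATH.
command -v directus >/dev/null 2>&1 || directus() { echo "directus $*"; }

ci_schema_check() {
  if directus schema apply --dry-run "$1"; then
    echo "schema snapshot OK"
  else
    echo "schema snapshot broken: failing the PR" >&2
    return 1
  fi
}

ci_schema_check ./snapshots/schema.yaml
```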