Task 1.5 — Event-participation collections
Five collections + 10 relations + 5 composite unique constraints,
captured into snapshots/schema.yaml (now 105 KB, up from 53 KB).
Collections:
- events — 11 fields incl. organization_id M2O, discipline enum
(rally / time-trial / regatta / trail-run / hike),
starts_at/ends_at required.
- classes — 8 fields incl. event_id M2O, code unique within event.
- entries — 11 fields incl. event_id/vehicle_id (nullable for foot
  races)/class_id M2O, race_number, status enum with
  8 lifecycle values, archive on `withdrawn`.
  team_id deliberately omitted (Phase 2+).
- entry_crew — junction with role enum
(pilot/co-pilot/navigator/mechanic/rider/runner/hiker).
- entry_devices — junction with optional assigned_user_id (panic button
body-wear); ON DELETE SET NULL on that field since
user removal shouldn't block the device record.
10 M2O relations wired, all ON DELETE RESTRICT except
entry_devices.assigned_user_id (SET NULL).
db-init/005_event_participation_unique_constraints.sql adds composite
UNIQUE on:
events (organization_id, slug)
classes (event_id, code)
entries (event_id, race_number)
entry_crew (entry_id, user_id)
entry_devices (entry_id, device_id)
---
Destructive-apply incident (recovered):
First attempt at this task hit a real foot-gun. After creating the 5
collections via MCP, we ran `docker compose build && docker compose up -d`.
The image rebuild baked in the snapshot from task 1.4 (only 7 collections).
schema-apply step ran `directus schema apply --yes` against that stale
snapshot — saw the 5 new collections in the DB but not in the snapshot
— DELETED THEM, taking the constraints with them.
Recovery: re-created the 5 collections + 10 relations via MCP, ran the
ALTER TABLE statements directly via psql to restore the constraints,
ran schema:snapshot BEFORE any further restart so the YAML reflects
the live state. Documented the operator rule (never rebuild with
uncommitted schema changes) inline in the task spec and in the
directus wiki entity page (separate commit in trm/docs).
Phase 3 hardening on the radar: DIRECTUS_SCHEMA_APPLY_MODE env var
with auto/dry-run/skip modes so dev environments default to non-
destructive behavior.
ROADMAP marks 1.5 done. Phase 1 progress: 7/9 tasks complete (1.1–1.7);
1.8, 1.9 remain.
## Done

(Fill in commit SHA + one-line note when this lands.)

**Implementation landed and live-verified 2026-05-02.** All 5 collections live; the snapshot grew from 53 KB to 105 KB.

**Created (via the directus-local MCP server, same approach as 1.4):**

- `events` — 11 fields incl. organization_id M2O, discipline enum (rally/time-trial/regatta/trail-run/hike), starts_at/ends_at required.

- `classes` — 8 fields incl. event_id M2O, code unique within event.

- `entries` — 11 fields incl. event_id/vehicle_id (nullable)/class_id M2O, race_number, status enum with 8 values, archive on `withdrawn`. **`team_id` deliberately NOT included** per spec note (defer until Phase 2 if a real team relationship is needed).

- `entry_crew` — 6 fields incl. entry_id/user_id M2O, role enum (pilot/co-pilot/navigator/mechanic/rider/runner/hiker).

- `entry_devices` — 7 fields incl. entry_id/device_id M2O, assigned_user_id (nullable, `ON DELETE SET NULL` since user removal shouldn't block the device record).

**10 relations** wired across the 5 collections, all `ON DELETE RESTRICT` except `entry_devices.assigned_user_id` (`SET NULL`, deviation noted above).

**Composite unique constraints landed via `db-init/005_event_participation_unique_constraints.sql`:**

- `events (organization_id, slug)`

- `classes (event_id, code)`

- `entries (event_id, race_number)`

- `entry_crew (entry_id, user_id)`

- `entry_devices (entry_id, device_id)`
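
For reference, the five statements in 005 presumably look something like the sketch below. The column pairs are the ones listed above; the constraint names are illustrative guesses, not the committed file.

```sql
-- Hypothetical reconstruction of db-init/005: constraint names are guesses.
ALTER TABLE events        ADD CONSTRAINT events_org_slug_unique           UNIQUE (organization_id, slug);
ALTER TABLE classes       ADD CONSTRAINT classes_event_code_unique        UNIQUE (event_id, code);
ALTER TABLE entries       ADD CONSTRAINT entries_event_race_number_unique UNIQUE (event_id, race_number);
ALTER TABLE entry_crew    ADD CONSTRAINT entry_crew_entry_user_unique     UNIQUE (entry_id, user_id);
ALTER TABLE entry_devices ADD CONSTRAINT entry_devices_entry_device_unique UNIQUE (entry_id, device_id);
```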

---

**⚠️ Schema-apply destructive deletion incident (2026-05-02):**

This task surfaced a real foot-gun in our boot pipeline. Documenting it in detail so future work avoids it.

**What happened:**

1. We created 5 new collections via MCP against the running Directus.

2. We then ran `docker compose build && docker compose up -d` to make `db-init/005_*.sql` apply.

3. The image rebuild baked in the OLD `snapshots/schema.yaml` (committed in task 1.4 — only had 7 collections).

4. Boot ran the entrypoint chain. db-init applied 005 successfully (constraints landed on the new tables). But step 2/4 (`schema-apply.sh` → `directus schema apply --yes /directus/snapshots/schema.yaml`) compared the running DB against the stale snapshot, saw 5 collections that "shouldn't exist", and **deleted them**, taking the constraints with them.

5. End state: 5 collections gone, the db-init/005 row in `migrations_applied` still recorded as applied (so it wouldn't re-run), production-shape damage in dev.

**Why `directus schema apply --yes` is destructive by design:**

The `--yes` flag tells Directus to enforce the snapshot as the single source of truth — anything in the DB but not in the snapshot is dropped. This is the *correct* behavior for fresh-environment provisioning (task 1.7's entrypoint, task 1.8's CI dry-run, prod boots), where the snapshot IS the canonical state. It is the *wrong* behavior during active schema development, when the snapshot lags behind live changes.
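
For inspecting drift without enforcing anything, the Directus CLI's `--dry-run` flag on `schema apply` prints the planned changes instead of executing them. A sketch of the safer manual sequence (paths as used in our entrypoint):

```shell
# Preview what schema apply WOULD do, without touching the DB.
npx directus schema apply --dry-run /directus/snapshots/schema.yaml

# Only once the printed diff looks right (and the snapshot is current)
# is the destructive, enforcing form safe:
npx directus schema apply --yes /directus/snapshots/schema.yaml
```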

**Recovery performed:**

1. Re-created the 5 collections + 10 relations via MCP (same calls as the original task 1.5 work — repeatable since every call was recorded in the conversation).

2. Re-applied the 5 ALTER TABLE statements from `db-init/005_*.sql` directly via psql (necessary because `migrations_applied` already had 005 recorded, so db-init would not re-run it).

3. Ran `pnpm run schema:snapshot` *before* any further restart. The snapshot now reflects the full 13-collection state.

**Discipline going forward (operator rule):**

> **Never restart or rebuild the Directus container while there are uncommitted schema changes.** The flow is always: change in admin UI / via MCP → `pnpm run schema:snapshot` → commit → only then rebuild/restart.
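
Spelled out as commands (the pnpm script and compose invocation used elsewhere in this log; the commit message is illustrative):

```shell
# Safe sequence after live schema edits: snapshot and commit BEFORE any rebuild.
pnpm run schema:snapshot                       # capture live DB into snapshots/schema.yaml
git add snapshots/schema.yaml
git commit -m "schema: task 1.5 collections"   # message illustrative
docker compose build && docker compose up -d   # only now is a rebuild safe
```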

This rule is now documented in the Schema management section of `wiki/entities/directus.md`.

**Architectural follow-up (not for Phase 1):**

The entrypoint's hard-coded `--yes` is a long-term issue. Phase 3 hardening could introduce a `DIRECTUS_SCHEMA_APPLY_MODE` env var with values `auto` (current behavior, prod default), `dry-run` (log the diff only and halt on drift — dev default), and `skip`. Tracked as a Phase 3 task; non-blocking for the slice-1 ship.
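
A minimal sketch of what that dispatch could look like in `schema-apply.sh`. The env var is the Phase 3 proposal, not shipped code, and the function name is hypothetical; it returns the command string so the caller can log it before executing.

```shell
# Hypothetical DIRECTUS_SCHEMA_APPLY_MODE dispatch for schema-apply.sh.
SNAPSHOT="/directus/snapshots/schema.yaml"

resolve_apply_cmd() {
  case "${1:-auto}" in
    auto)    echo "npx directus schema apply --yes $SNAPSHOT" ;;     # prod default: enforce snapshot
    dry-run) echo "npx directus schema apply --dry-run $SNAPSHOT" ;; # dev default: print diff, drop nothing
    skip)    echo ":" ;;                                             # no-op command
    *)       echo "unknown DIRECTUS_SCHEMA_APPLY_MODE: $1" >&2; return 64 ;;
  esac
}

# The entrypoint would then run something like:
#   eval "$(resolve_apply_cmd "$DIRECTUS_SCHEMA_APPLY_MODE")"
```

The `dry-run` branch as sketched only prints the diff; the "halt on drift" behavior would additionally need to inspect the dry-run output and exit non-zero when it is non-empty.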

---

**Acceptance criteria status:**

- ✅ All 5 collections exist with the fields specified.

- ✅ Required fields flagged (events.organization_id/name/slug/discipline/starts_at/ends_at, classes.event_id/code/name, entries.event_id/class_id/race_number/status, entry_crew.entry_id/user_id/role, entry_devices.entry_id/device_id).

- ✅ Single-column unique constraints — none in this task (all uniqueness is composite).

- ✅ Composite unique constraints (5 of them) enforced via db-init/005.

- ✅ M2O relations wired (10 total).

- ✅ status enum dropdown shows all 8 values in lifecycle order.

- ✅ race_number is an integer.

- ✅ team_id field omitted per spec note.

- ✅ No permission policies attached.

- ✅ `pnpm run schema:snapshot` produces `snapshots/schema.yaml` with all 5 new collections.

- ⏳ End-to-end test (manually create event → class → entry → entry_crew → entry_devices) — pending user verification.