5035bfc11704306891fa080f4597220e993a5276
5 Commits
5035bfc117
Strip ghost-collection entries from snapshot
Third CI dry-run failure: schema-apply tried to "Create
migrations_applied" and "Create positions" as Directus collections —
both already exist as raw tables created by db-init pre-schema. The
conflict halts schema-apply on a fresh CI DB.

Why these end up in the snapshot at all: `directus schema snapshot`
auto-discovers every table in the public schema, including ones owned
by db-init (positions hypertable, migrations_applied guard). It
registers them as ghost entries with no fields and no relations — just
enough metadata to make Directus aware of the table. In local dev this
never tripped because the tables existed BEFORE the snapshot ran, and
any subsequent apply was a no-op against directus_collections which
already had matching ghost rows.

On a fresh CI DB the order is:
1. db-init pre-schema → creates the tables
2. bootstrap → installs Directus system tables (NOT the ghosts)
3. schema-apply → tries to "Create" the ghosts → conflict → fail

Fixes:
- snapshots/schema.yaml: stripped the migrations_applied and positions
  entries (24 lines each) from the collections: section. The user
  collections remain untouched.
- scripts/schema-snapshot.sh: post-process step that filters the same
  ghost names from every future snapshot capture. Awk-based, applied
  after `docker compose cp` writes the file out (sketch below). The
  ghost list is a bash array near the top of the new step — add to it
  when introducing more db-init-only tables.

Snapshot is now 105 KB → ~103 KB. The user collections, fields, and
relations are unchanged. positions and migrations_applied stay as raw
Postgres tables managed by db-init/, never registered in
directus_collections, never shown in the admin UI. That matches the
schema-as-code split: Directus owns user collections; db-init owns the
positions hypertable and the runner's guard table.

Three CI iterations to get the boot pipeline right (port collision →
ordering → ghost entries). The dry-run gate has now caught three
distinct failure modes that would have damaged stage if pushed
unguarded.
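For illustration, a minimal sketch of what that awk post-process step could look like. The commit only confirms an awk filter driven by a bash array of ghost names, run after `docker compose cp`; the variable names, snapshot path, and the assumed two-space list indentation in schema.yaml are guesses:

```bash
# Hypothetical reconstruction of the ghost-stripping step in scripts/schema-snapshot.sh.
SNAPSHOT="./snapshots/schema.yaml"
GHOST_COLLECTIONS=(positions migrations_applied)   # db-init-owned tables to strip

ghost_re="$(IFS='|'; echo "${GHOST_COLLECTIONS[*]}")"

awk -v ghosts="$ghost_re" '
  # A list item directly under "collections:" starts with "  - collection: <name>";
  # start skipping when <name> is one of the ghosts.
  /^  - collection:/ { skip = ($3 ~ "^(" ghosts ")$") }
  # Any new top-level key (version:, fields:, relations:, ...) ends the skip.
  /^[^ ]/            { skip = 0 }
  !skip
' "$SNAPSHOT" > "${SNAPSHOT}.tmp" && mv "${SNAPSHOT}.tmp" "$SNAPSHOT"
```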
52524eb72d
Task 1.5 — Event-participation collections
Five collections + 10 relations + 5 composite unique constraints,
captured into snapshots/schema.yaml (now 105 KB, up from 53 KB).
Collections:
- events — 11 fields incl. organization_id M2O, discipline enum
(rally / time-trial / regatta / trail-run / hike),
starts_at/ends_at required.
- classes — 8 fields incl. event_id M2O, code unique within event.
- entries — 11 fields incl. event_id/vehicle_id/class_id M2O
          (vehicle_id nullable for foot races), race_number, status
          enum with 8 lifecycle values, archive on `withdrawn`.
          team_id deliberately omitted (Phase 2+).
- entry_crew — junction with role enum
(pilot/co-pilot/navigator/mechanic/rider/runner/hiker).
- entry_devices — junction with optional assigned_user_id (panic button
body-wear); ON DELETE SET NULL on that field since
user removal shouldn't block the device record.
10 M2O relations wired, all ON DELETE RESTRICT except
entry_devices.assigned_user_id (SET NULL).
db-init/005_event_participation_unique_constraints.sql adds composite
UNIQUE on:
events (organization_id, slug)
classes (event_id, code)
entries (event_id, race_number)
entry_crew (entry_id, user_id)
entry_devices (entry_id, device_id)
---
Destructive-apply incident (recovered):
First attempt at this task hit a real foot-gun. After creating the 5
collections via MCP, we ran `compose build && up -d`. The image rebuild
baked in the snapshot from task 1.4 (only 7 collections). Boot's
schema-apply step ran `directus schema apply --yes` against that stale
snapshot — saw the 5 new collections in the DB but not in the snapshot
— DELETED THEM, taking the constraints with them.
Recovery: re-created the 5 collections + 10 relations via MCP, ran the
ALTER TABLE statements directly via psql to restore the constraints,
ran schema:snapshot BEFORE any further restart so the YAML reflects
the live state. Documented the operator rule (never rebuild with
uncommitted schema changes) inline in the task spec and in the
directus wiki entity page (separate commit in trm/docs).
Phase 3 hardening on the radar: DIRECTUS_SCHEMA_APPLY_MODE env var
with auto/dry-run/skip modes so dev environments default to
non-destructive behavior.
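A rough sketch of how that guard might sit at the top of scripts/schema-apply.sh. The mode names (auto/dry-run/skip) come from the commit; the default mode, variable handling, and messages are assumptions since the hardening has not landed yet:

```bash
# Hypothetical Phase 3 guard for scripts/schema-apply.sh; not implemented in this commit.
SNAPSHOT_PATH="${SNAPSHOT_PATH:-/directus/snapshots/schema.yaml}"
MODE="${DIRECTUS_SCHEMA_APPLY_MODE:-auto}"

case "$MODE" in
  skip)
    # Never touch the schema; useful for dev environments with uncommitted changes.
    echo "[schema-apply] mode=skip, not applying snapshot"
    exit 0
    ;;
  dry-run)
    # Preview the diff only; never mutate the database.
    node /directus/cli.js schema apply --dry-run "$SNAPSHOT_PATH"
    ;;
  auto)
    # Current behavior: preview, then apply non-interactively.
    node /directus/cli.js schema apply --dry-run "$SNAPSHOT_PATH"
    node /directus/cli.js schema apply --yes "$SNAPSHOT_PATH"
    ;;
  *)
    echo "[schema-apply] unknown DIRECTUS_SCHEMA_APPLY_MODE: $MODE" >&2
    exit 1
    ;;
esac
```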
ROADMAP marks 1.5 done. Phase 1 progress: 7/9 tasks complete (1.1–1.7);
1.8, 1.9 remain.
6f376a479f
Task 1.4 — Org-level catalog collections
Seven collections + 3 directus_users custom fields, captured as
snapshots/schema.yaml (53 KB, 2,159 lines).
Collections:
- organizations — UUID PK, name, slug UNIQUE
- vehicles — UUID PK, make/model required, year/cc/vin/plate optional
- devices — UUID PK, imei UNIQUE, model required
- organization_users — junction with role enum (org-admin, race-director,
marshal, timekeeper, participant, viewer)
- organization_vehicles — junction with registered_at
- organization_devices — junction with registered_at
- directus_users — extended with phone, birth_date, nationality
Six M2O relations on the junctions, all ON DELETE RESTRICT (matching
the schema-draft decision: deletion of an org/vehicle/device/user
requires explicit cleanup of dependents).
db-init/004_junction_unique_constraints.sql adds the composite UNIQUE
constraints on the three junctions:
organization_users (organization_id, user_id)
organization_vehicles (organization_id, vehicle_id)
organization_devices (organization_id, device_id)
Composite uniqueness lives in db-init rather than the Directus snapshot
because Directus's snapshot YAML format only captures single-column
unique constraints (the field-level is_unique flag). The migration file
documents the split inline.
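For concreteness, this is roughly what db-init/004 boils down to when the runner pipes it through psql. The column pairs are from the commit; the constraint names and the connection variable are placeholders, and re-run protection comes from the runner's migrations_applied guard rather than anything in the file itself:

```bash
# Sketch only: the real statements live in db-init/004_junction_unique_constraints.sql
# and are executed by the db-init runner; $DATABASE_URL is a placeholder here.
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE organization_users
  ADD CONSTRAINT organization_users_org_user_uq
  UNIQUE (organization_id, user_id);

ALTER TABLE organization_vehicles
  ADD CONSTRAINT organization_vehicles_org_vehicle_uq
  UNIQUE (organization_id, vehicle_id);

ALTER TABLE organization_devices
  ADD CONSTRAINT organization_devices_org_device_uq
  UNIQUE (organization_id, device_id);
SQL
```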
Driven via the directus-local MCP server rather than admin-UI clicking
— programmatic create-collection/create-field/create-relation calls
against the running Directus instance, then `pnpm run schema:snapshot`
to capture the canonical YAML.
Live-verified: db-init/004 applies cleanly on container restart
(0 rows in the empty junctions, no constraint violations); schema-apply
against a snapshot-empty boot still skips correctly; all seven new
collections show up in the admin UI's data model navigation.
Snapshot includes positions and migrations_applied as auto-discovered
ghost entries (Directus introspects all public-schema tables). Harmless
— db-init creates them before schema-apply runs, so snapshot apply just
finds them already present.
ROADMAP marks 1.4 done. Phase 1 progress: 6/9 tasks complete (1.1, 1.2,
1.3, 1.4, 1.6, 1.7); 1.5, 1.8, 1.9 remain.
e22d9d489a
Tasks 1.6 + 1.7 — schema tooling + real entrypoint flow
Two parallel tasks landing together. The boot pipeline is now wired
end-to-end: db-init → schema apply → directus bootstrap → pm2-runtime.
Live-verified by booting a fresh compose stack to a serving Directus
admin UI on :8055.

Task 1.6 — snapshot tooling:
- scripts/schema-snapshot.sh — host-side, dev-time. Verifies docker is
  on PATH and the directus compose service is running, runs
  `node /directus/cli.js schema snapshot --yes` inside the container,
  copies the YAML out to ./snapshots/schema.yaml. Used after admin-UI
  schema changes to capture the new state for git commit (sketch below).
- scripts/schema-apply.sh — image-side, boot-time. Reads
  /directus/snapshots/schema.yaml, runs a dry-run preview, then
  applies. Gracefully skips when the snapshot is absent or
  whitespace-only (Phase 1 first-boot path before tasks 1.4/1.5
  produce collections). SNAPSHOT_PATH env var override for CI
  flexibility.
- snapshots/README.md — lifecycle doc; warns against hand-editing.

Task 1.7 — real entrypoint flow:
- entrypoint.sh rewritten from Phase 1.1's placeholder to the 4-step
  boot per ROADMAP design rule #3 (sketch below):
    1/4 db-init → /directus/scripts/apply-db-init.sh
    2/4 schema apply → /directus/scripts/schema-apply.sh
    3/4 directus bootstrap → node /directus/cli.js bootstrap
    4/4 directus start → exec pm2-runtime start ecosystem.config.cjs
  set -euo pipefail halts boot on any step's non-zero exit. Each step
  emits a [entrypoint] log marker so an operator reading container
  logs sees which step failed.

Bug found and fixed during live verification:
- Both 1.6 scripts initially called bare `directus schema ...` as if
  the CLI were on PATH. Upstream directus/directus:11.17.4 does NOT
  expose `directus` on PATH — invocation is via `node
  /directus/cli.js`, same pattern as the entrypoint's bootstrap step.
  Both scripts corrected. Also added -T to docker compose exec in
  schema-snapshot.sh so the script works in non-TTY contexts (CI).

Phase 5 follow-up (non-blocking) flagged in 07's Done section:
Directus warns "Collection 'positions' doesn't have a primary key
column and will be ignored". The positions table uses UNIQUE INDEX
(device_id, ts) matching processor's pattern, not a PK constraint.
Means positions is not auto-registered as a Directus collection — fine
for Phase 1, but the operator faulty-flag workflow will need a custom
endpoint or manual collection registration in Phase 5.

ROADMAP marks 1.6 + 1.7 done. Phase 1 progress: 5/9 tasks complete
(1.1, 1.2, 1.3, 1.6, 1.7); 1.4, 1.5, 1.8, 1.9 remain.
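A condensed sketch of the corrected host-side capture flow described above: cli.js instead of a bare `directus` binary, and -T so it runs without a TTY in CI. The compose service name, the in-container snapshot path, and the exact checks are assumptions beyond what the commit states:

```bash
# Hypothetical condensed core of scripts/schema-snapshot.sh after the two fixes.
set -euo pipefail

command -v docker >/dev/null || { echo "docker not on PATH" >&2; exit 1; }

# -T: no TTY allocation, so this also works from CI. The CLI is invoked via cli.js
# because the upstream image does not put a `directus` binary on PATH.
docker compose exec -T directus \
  node /directus/cli.js schema snapshot --yes /directus/snapshots/schema.yaml

# Copy the freshly written snapshot out of the container for git commit.
docker compose cp directus:/directus/snapshots/schema.yaml ./snapshots/schema.yaml
```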
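And a sketch of the 4-step entrypoint itself. Step order, the [entrypoint] markers, set -euo pipefail, and the cli.js/pm2-runtime invocations are all stated in the commit; the shebang and the exact wording of the log lines are assumptions:

```bash
#!/bin/sh
# Hypothetical shape of entrypoint.sh per ROADMAP design rule #3.
set -euo pipefail   # any step exiting non-zero halts boot

echo "[entrypoint] 1/4 db-init"
/directus/scripts/apply-db-init.sh

echo "[entrypoint] 2/4 schema apply"
/directus/scripts/schema-apply.sh

echo "[entrypoint] 3/4 directus bootstrap"
node /directus/cli.js bootstrap

echo "[entrypoint] 4/4 directus start"
exec pm2-runtime start ecosystem.config.cjs
```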
387c3c4cfa
Task 1.1 — Project scaffold
Phase 1 task 1.1 lands. Directus 11.17.4 boots locally end-to-end
against a TimescaleDB+PostGIS container; admin UI serves at :8055,
admin bootstrap from env vars works, named volumes preserve data
across down/up cycles.

Scaffold:
- Dockerfile — FROM directus/directus:11.17.4. Pre-installs
  postgresql16-client (ahead of task 1.2's db-init runner needing
  psql). Bakes in /directus/snapshots, /directus/db-init,
  /directus/scripts, /directus/extensions, /directus/entrypoint.sh.
- compose.dev.yaml — db (timescale/timescaledb-ha:pg16.6-ts2.17.2-all)
  + directus (local build), healthchecks, named volumes
  directus-pg-data + directus-uploads.
- entrypoint.sh — placeholder using upstream's actual flow
  (node cli.js bootstrap && pm2-runtime start ecosystem.config.cjs);
  the real db-init -> schema apply -> start wrapper lands in task 1.7.
- package.json — scripts-only (dev, dev:down, dev:reset,
  schema:snapshot, schema:apply, db:init), no runtime deps.
- .env.example — sectioned, fully documented, KEY/SECRET marked
  required with generation hints.
- .gitignore, .dockerignore — match the processor service conventions.
- snapshots/, db-init/, scripts/, extensions/ — empty with .gitkeep,
  filled by later Phase 1 tasks (1.3, 1.6) and Phase 5.

Lessons locked in (against the empirical pnpm dev boot):
- timescale/timescaledb-ha:pg16-latest does NOT exist on Docker Hub.
  Pin a concrete version (we used pg16.6-ts2.17.2-all).
- This image's data directory is /home/postgres/pgdata/data, not
  /pgdata or /var/lib/postgresql/data. PGDATA env var and the volume
  mount must both target it (see the sketch at the end of this entry).
- The -all variant bundles PostGIS binaries but the extension is not
  auto-created on the directus database; CREATE EXTENSION lands in
  Phase 2 alongside the geofences/SLZs/waypoints collections.
- The upstream image's CMD is bootstrap + pm2-runtime, not a simple
  cli.js start. Bypassing pm2 would lose crash recovery.

These corrections folded into 01-project-scaffold.md (deliverable line
+ Done section), 08-gitea-ci-dryrun.md (CI service tag), and the
inline comments in compose.dev.yaml so future implementers don't
re-discover them.

Status: ROADMAP marks 1.1 done, Phase 1 in progress, 1.2 next.
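To make the image-pin and data-directory lessons concrete, here is a docker run equivalent of what compose.dev.yaml's db service has to get right. It is a sketch only: the real service is defined in compose with healthchecks on top, and the container name, password, and port mapping are placeholders:

```bash
# Sketch: docker run equivalent of the compose db service. The pinned tag and the
# /home/postgres/pgdata/data path (for both PGDATA and the volume mount) are the lesson;
# name, password, and port are placeholders.
docker run -d --name directus-db \
  -e POSTGRES_PASSWORD=change-me \
  -e PGDATA=/home/postgres/pgdata/data \
  -v directus-pg-data:/home/postgres/pgdata/data \
  -p 5432:5432 \
  timescale/timescaledb-ha:pg16.6-ts2.17.2-all
```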