5035bfc117
Third CI dry-run failure: schema-apply tried to "Create migrations_applied" and "Create positions" as Directus collections, but both already exist as raw tables created by db-init pre-schema. The conflict halts schema-apply on a fresh CI DB.

Why these end up in the snapshot at all: `directus schema snapshot` auto-discovers every table in the public schema, including ones owned by db-init (the positions hypertable, the migrations_applied guard). It registers them as ghost entries with no fields and no relations, just enough metadata to make Directus aware of the table. In local dev this never tripped because the tables existed BEFORE the snapshot ran, and any subsequent apply was a no-op against directus_collections, which already had matching ghost rows. On a fresh CI DB the order is:

1. db-init pre-schema → creates the tables
2. bootstrap → installs Directus system tables (NOT the ghosts)
3. schema-apply → tries to "Create" the ghosts → conflict → fail

Fixes:

- snapshots/schema.yaml: stripped the migrations_applied and positions entries (24 lines each) from the collections: section. The user collections remain untouched.
- scripts/schema-snapshot.sh: post-process step that filters the same ghost names from every future snapshot capture. Awk-based, applied after `docker compose cp` writes the file out. The ghost list is a bash array near the top of the new step; add to it when introducing more db-init-only tables.

The snapshot shrinks from 105 KB to ~103 KB. The user collections, fields, and relations are unchanged. positions and migrations_applied stay raw Postgres tables managed by db-init/, never registered in directus_collections, never shown in the admin UI. That matches the schema-as-code split: Directus owns user collections; db-init owns the positions hypertable and the runner's guard table.

Three CI iterations to get the boot pipeline right (port collision → ordering → ghost entries).
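The awk-based post-process step could be sketched roughly as below. This is an illustration, not the shipped scripts/schema-snapshot.sh: the `GHOST_TABLES` array name, the `strip_ghosts` helper, and the assumption that snapshot entries appear as `  - collection: NAME` lines under the `collections:` section are all assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the ghost-collection filter. Assumes each
# collection entry in the snapshot starts with "  - collection: NAME"
# and that top-level keys (fields:, relations:, ...) are unindented.
GHOST_TABLES=(positions migrations_applied)   # db-init-only tables; extend as needed

strip_ghosts() {
  local regex
  regex=$(IFS='|'; echo "${GHOST_TABLES[*]}")   # -> positions|migrations_applied
  awk -v ghosts="$regex" '
    # At each new collection entry, start (or stop) skipping.
    /^  - collection: / { skip = ($3 ~ "^(" ghosts ")$") }
    # Any top-level key ends a skipped block.
    /^[^ ]/ { skip = 0 }
    !skip
  '
}

# Usage, after `docker compose cp` writes the raw snapshot out:
#   strip_ghosts < snapshots/schema.yaml > tmp.yaml && mv tmp.yaml snapshots/schema.yaml
```

The filter is indentation-based on purpose: it drops the whole block of a ghost entry (meta, schema, everything under it) without needing a YAML parser in the container image.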
The dry-run gate has now caught three distinct failure modes that would have damaged stage if pushed unguarded.
snapshots/
This directory holds the Directus schema snapshot for the TRM directus service.
What lives here
- schema.yaml — the authoritative Directus schema: all collections, fields, and relations. Committed to git and applied at every container boot.
- .gitkeep — present until the first real snapshot lands (task 1.4/1.5/1.6). Once schema.yaml is committed, .gitkeep is no longer needed and can be removed.
Do NOT hand-edit schema.yaml
schema.yaml is generated programmatically. Its format is tightly coupled to
the version of Directus that produced it. Hand-editing produces subtle breakage
(key-order drift, missing internal fields, format violations) that schema apply
will reject or silently misinterpret.
The only supported workflow for schema changes is:
- Edit the schema in the Directus admin UI (local dev stack).
- Run pnpm run schema:snapshot from the directus/ repo root.
- Review the diff in snapshots/schema.yaml.
- Commit and open a PR.
How schema.yaml is applied
entrypoint.sh calls scripts/schema-apply.sh at every container boot.
The apply script:
- Skips silently if schema.yaml does not exist or is empty (safe for first boot, before any collections are defined).
- Runs a dry-run preview (directus schema apply --dry-run) and prints the diff to the container logs.
- Applies the snapshot (directus schema apply --yes). This is idempotent: Directus computes the diff against the live DB and applies only what has changed. A clean re-deploy where the DB already matches the snapshot is a no-op.
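The three steps above can be sketched as a minimal script. This is not the actual scripts/schema-apply.sh: the SNAPSHOT path and the DIRECTUS_CMD override are illustrative assumptions, kept overridable so the guard logic is easy to exercise outside a container.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/schema-apply.sh. SNAPSHOT and
# DIRECTUS_CMD defaults are assumptions, not the real paths.
set -euo pipefail

SNAPSHOT="${SNAPSHOT:-/directus/snapshots/schema.yaml}"
DIRECTUS_CMD="${DIRECTUS_CMD:-npx directus}"

apply_schema() {
  # Skip silently on first boot, before any snapshot exists.
  if [ ! -s "$SNAPSHOT" ]; then
    echo "schema-apply: no snapshot, skipping"
    return 0
  fi
  # Print the pending diff to the container logs, then apply.
  # Both calls are no-ops when the live DB already matches.
  $DIRECTUS_CMD schema apply --dry-run "$SNAPSHOT"
  $DIRECTUS_CMD schema apply --yes "$SNAPSHOT"
}
```

Keeping the empty-file check ahead of any directus call is what makes the script safe on a brand-new stack where bootstrap has run but no collections have ever been snapshotted.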
Snapshot/apply lifecycle
edit in admin UI
│
▼
pnpm run schema:snapshot ←── writes snapshots/schema.yaml
│
▼
git commit + PR
│
▼
CI: directus schema apply --dry-run (fails PR if snapshot is broken)
│
▼
container boot: entrypoint.sh → schema-apply.sh → directus start
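The CI gate in the diagram could be wired up roughly as the following GitHub Actions fragment. This is a sketch under assumptions: a compose stack with a directus service, the snapshot available at /directus/snapshots/schema.yaml inside the container, and the step/service names are illustrative.

```yaml
# Hypothetical PR gate: fail fast if the committed snapshot cannot apply cleanly.
- name: Schema apply dry-run
  run: |
    docker compose up -d --wait directus
    docker compose exec -T directus \
      npx directus schema apply --dry-run /directus/snapshots/schema.yaml
```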