Third CI dry-run failure: schema-apply tried to "Create migrations_applied" and "Create positions" as Directus collections — both already exist as raw tables created by db-init pre-schema. The conflict halts schema-apply on a fresh CI DB.

Why these end up in the snapshot at all: `directus schema snapshot` auto-discovers every table in the public schema, including ones owned by db-init (the `positions` hypertable, the `migrations_applied` guard). It registers them as ghost entries with no fields and no relations — just enough metadata to make Directus aware of the table. In local dev this never tripped because the tables existed BEFORE the snapshot ran, and any subsequent apply was a no-op against `directus_collections`, which already had matching ghost rows. On a fresh CI DB the order is:

1. db-init pre-schema → creates the tables
2. bootstrap → installs Directus system tables (NOT the ghosts)
3. schema-apply → tries to "Create" the ghosts → conflict → fail

Fixes:

- `snapshots/schema.yaml`: stripped the `migrations_applied` and `positions` entries (24 lines each) from the `collections:` section. The user collections remain untouched.
- `scripts/schema-snapshot.sh`: a post-process step that filters the same ghost names from every future snapshot capture. Awk-based, applied after `docker compose cp` writes the file out. The ghost list is a bash array near the top of the new step — add to it when introducing more db-init-only tables. The sketch after this note shows the shape of the filter.

Snapshot is now 105 KB → ~103 KB. The user collections, fields, and relations are unchanged. `positions` and `migrations_applied` stay as raw Postgres tables managed by `db-init/`, never registered in `directus_collections`, never shown in the admin UI. That matches the schema-as-code split: Directus owns user collections; db-init owns the `positions` hypertable and the runner's guard table.

Three CI iterations to get the boot pipeline right (port collision → ordering → ghost entries). The dry-run gate has now caught three distinct failure modes that would have damaged stage if pushed unguarded.
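A minimal sketch of the filter's shape (the variable and function names are mine, and the two-space indent is an assumption about how `directus schema snapshot` lays out the `collections:` list; treat this as illustrative rather than a copy of `scripts/schema-snapshot.sh`):

```bash
# Tables owned by db-init that must never appear in the Directus snapshot.
# Extend this array when introducing more db-init-only tables.
GHOST_TABLES=(positions migrations_applied)

strip_ghosts() {
  local file="$1" t
  for t in "${GHOST_TABLES[@]}"; do
    # Drop the "- collection: <name>" block from the collections: section.
    # A block ends at the next collection entry or the next top-level key.
    awk -v name="$t" '
      $0 == "  - collection: " name { skip = 1; next }
      skip && ($0 ~ /^  - collection:/ || $0 ~ /^[^ ]/) { skip = 0 }
      !skip
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
  done
}

# Runs after `docker compose cp` has written the snapshot out of the container.
strip_ghosts snapshots/schema.yaml
```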
# directus
The TRM business plane. Directus 11 instance owning the relational schema (organizations, users, events, entries, course definition, penalty system, timing tables), exposing it through auto-generated REST/GraphQL APIs and the admin UI, and enforcing role-based permissions.
For the architectural specification see `../docs/wiki/entities/directus.md`. For the work plan and task status see `.planning/ROADMAP.md`.
This service is part of the TRM (Time Racing Management) platform.
## Schema management — at a glance
Schema is defined and migrated through Directus, with two artifact directories:
- `snapshots/schema.yaml` — Directus collections, fields, relations. Generated locally via `directus schema snapshot`, applied at container startup via `directus schema apply`.
- `db-init/*.sql` — schema Directus does not manage: the postgres-timescaledb `positions` hypertable, the `faulty` column, PostGIS-specific DDL, etc. Sequential numbered files (`001_`, `002_`, …) applied by `scripts/apply-db-init.sh` with a `migrations_applied` guard table to skip already-run files (see the sketch below).
Apply order at boot: db-init first, then `directus schema apply`, then `directus start`. Any failure halts boot.
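The guard logic in `scripts/apply-db-init.sh` can be pictured like this (a sketch under assumptions: the connection envs match the Directus ones, and the exact guard-table columns are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Assumed: db-init reuses the same connection envs as Directus itself.
run_sql() {
  PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "${DB_PORT:-5432}" \
    -U "$DB_USER" -d "$DB_DATABASE" -v ON_ERROR_STOP=1 "$@"
}

# Guard table: records which numbered files have already run.
run_sql -c "CREATE TABLE IF NOT EXISTS migrations_applied (
  filename   text PRIMARY KEY,
  applied_at timestamptz NOT NULL DEFAULT now()
);"

# Glob expansion is lexical, so 001_, 002_, ... apply in order.
for f in db-init/*.sql; do
  name=$(basename "$f")
  if [ "$(run_sql -tAc "SELECT 1 FROM migrations_applied WHERE filename = '$name'")" = "1" ]; then
    echo "skipping $name (already applied)"
    continue
  fi
  run_sql -f "$f"
  run_sql -c "INSERT INTO migrations_applied (filename) VALUES ('$name');"
done
```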
## Quick start (local)
Prerequisites: Docker, the `directus/directus:11.17.4` image (pulled automatically by compose), a running Postgres 16 + TimescaleDB + PostGIS instance (provided by `compose.dev.yaml`).
```bash
git clone <repo-url>
cd directus
cp .env.example .env
# Edit .env — at minimum set DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE, KEY, SECRET
docker compose -f compose.dev.yaml up --build
```
Admin UI lands at http://localhost:8055. Default admin credentials are read from `ADMIN_EMAIL` / `ADMIN_PASSWORD` in `.env`.
After making schema changes in the admin UI, snapshot before commit:
```bash
pnpm run schema:snapshot
git add snapshots/schema.yaml && git commit
```
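Under the hood the snapshot path is roughly the following (a sketch; the compose service name `directus` and the in-container path are assumptions):

```bash
# Capture the snapshot inside the running container, then copy it out.
docker compose -f compose.dev.yaml exec directus \
  npx directus schema snapshot --yes /tmp/schema.yaml
docker compose -f compose.dev.yaml cp directus:/tmp/schema.yaml snapshots/schema.yaml
# scripts/schema-snapshot.sh then strips the db-init ghost tables (see the note above).
```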
## Test the image locally
`compose.dev.yaml` builds the image from source and runs it next to a TimescaleDB+PostGIS container. Useful for verifying `Dockerfile` changes, db-init migrations, or snapshot apply behavior before pushing.
```bash
docker compose -f compose.dev.yaml down -v # wipe volumes for a fresh run
docker compose -f compose.dev.yaml up --build
```
The entrypoint runs db-init, then `directus schema apply`, then `directus start`. Watch the logs to confirm each step exits 0.
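As a sketch, the sequence looks like this (the `bootstrap` step is taken from the fresh-DB boot order described in the CI notes above; the literal entrypoint script may differ):

```bash
#!/usr/bin/env bash
set -e   # any failing step halts boot

./scripts/apply-db-init.sh                              # 1. raw SQL tables, guarded
npx directus bootstrap                                  # 2. system tables (no-op if present)
npx directus schema apply --yes snapshots/schema.yaml   # 3. user collections
exec npx directus start                                 # 4. serve
```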
## Production / stage deployment
This service is not deployed standalone. It runs as part of the platform stack defined in the `deploy/` repo, which Portainer pulls and runs on the stage and production hosts.
The image itself is published to `git.dev.microservices.al/trm/directus:main` on every push to `main` (see CI behavior below). The `deploy/` repo's `compose.yaml` references that image.
To pin a specific commit in production, set `DIRECTUS_TAG=<sha>` in the deploy stack's environment variables.
Note: the `deploy/compose.yaml` will need a `directus` service entry referencing this image, plus a TimescaleDB+PostGIS service if not already present, before this service can run in stage/production. See `.planning/phase-1-slice-1-schema/07-image-and-dockerfile.md`.
## Environment variables
See `.env.example` for the full list. Required for boot:
| Variable | Description |
|---|---|
| `DB_CLIENT` | `pg` (always) |
| `DB_HOST` / `DB_PORT` / `DB_DATABASE` / `DB_USER` / `DB_PASSWORD` | Postgres connection |
| `KEY` | Directus instance key (random UUID) |
| `SECRET` | Directus JWT signing secret (random) |
| `ADMIN_EMAIL` / `ADMIN_PASSWORD` | Bootstrap admin (only used on first init) |
| `PUBLIC_URL` | External-facing URL of the instance |
All other Directus envs (cache, logging, CORS, etc.) follow upstream defaults unless overridden.
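One way to generate the two random values locally (standard tools, nothing project-specific):

```bash
KEY=$(uuidgen)                   # Directus instance key: a random UUID
SECRET=$(openssl rand -hex 32)   # JWT signing secret: random bytes
```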
## CI behavior
The Gitea Actions workflow lands at `.gitea/workflows/build.yml` in Phase 1 task 1.8 — it is not yet present.
When the workflow exists:
- Push to `main` (only when `snapshots/`, `db-init/`, `extensions/`, `Dockerfile`, or the workflow file itself changes): builds the image, spins up a throwaway Postgres + TimescaleDB + PostGIS via `services:`, runs `apply-db-init.sh` and `directus schema apply --yes` against it as a dry-run, then publishes the image tagged `:main` if the dry-run exits 0. Auto-deploys to stage if a Portainer webhook is configured via `secrets.PORTAINER_WEBHOOK_URL`.
- Manual trigger (`workflow_dispatch`): same flow, run on demand.
The dry-run is non-negotiable — it catches snapshot drift, broken db-init scripts, and incompatible schema changes before they touch any real DB.
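Expressed as shell steps, the gate inside the workflow job would look roughly like this (hypothetical until the workflow lands in task 1.8):

```bash
# Against the throwaway Postgres + TimescaleDB + PostGIS service container:
./scripts/apply-db-init.sh                              # broken db-init scripts fail here
npx directus bootstrap
npx directus schema apply --yes snapshots/schema.yaml   # snapshot drift fails here
# Only if every step exits 0 is the image published as :main.
```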