julian 25a9731070 Task 1.3 — Initial migrations
Three SQL files under db-init/ create the schema that processor writes
against. All three apply cleanly via apply-db-init.sh, are idempotent
on re-run, and end with assertion blocks that catch silent
schema drift.
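
For illustration, an assertion block might look like the sketch below
(a minimal sketch, not the literal contents of any of the files; the
checked column is just an example):

  DO $$
  BEGIN
    -- Abort the apply if the table drifted from the expected shape.
    IF NOT EXISTS (
      SELECT 1
        FROM information_schema.columns
       WHERE table_name = 'positions'
         AND column_name = 'codec'
         AND is_nullable = 'NO'
    ) THEN
      RAISE EXCEPTION 'schema drift: positions.codec missing or nullable';
    END IF;
  END
  $$;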

001_extensions.sql — registers timescaledb on the directus database.
  PostGIS deferred to Phase 2 (per Plan A). The timescaledb-ha image
  pre-creates the extension at DB init, so the IF NOT EXISTS guard
  fires as a NOTICE — expected and harmless.
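
  The core statement, sketched (the actual file may add assertions of
  the kind described above):

    -- Pre-created by the timescaledb-ha image, so on a fresh database this
    -- raises NOTICE: extension "timescaledb" already exists, skipping.
    CREATE EXTENSION IF NOT EXISTS timescaledb;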

002_positions_hypertable.sql — positions hypertable, exact
  column-by-column match against processor/src/db/migrations/0001_positions.sql.

  Cross-checking against processor surfaced 8 divergences from the
  original task spec; processor wins in every case (it is the writer
  and is in production). The corrections:

    - added ingested_at timestamptz NOT NULL DEFAULT now()
    - added codec text NOT NULL
    - altitude/angle/speed: real NOT NULL (not DOUBLE PRECISION nullable)
    - satellites/priority: NOT NULL
    - removed attributes DEFAULT '{}'::jsonb (processor always writes)
    - replaced PRIMARY KEY with UNIQUE INDEX positions_device_ts
      (idiomatic for TimescaleDB hypertables)
    - chunk interval 1 day, not 7 days
    - two indexes (positions_device_ts + positions_ts), not one composite

  Without these corrections every processor INSERT would have failed
  (undefined columns for the fields processor writes, plus type and
  nullability mismatches). The spec's deliverables section was updated to
  reflect the correct shape so future readers see the right schema.
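
  As illustration, the corrected shape might look like the sketch below.
  It is a sketch only: anything not listed above, such as device_id's
  type, the lat/lon columns, and the index column order, is an
  assumption, and processor's 0001_positions.sql stays authoritative.

    CREATE TABLE IF NOT EXISTS positions (
      device_id   text             NOT NULL,  -- type assumed
      ts          timestamptz      NOT NULL,
      ingested_at timestamptz      NOT NULL DEFAULT now(),
      codec       text             NOT NULL,
      lat         double precision NOT NULL,  -- assumed column
      lon         double precision NOT NULL,  -- assumed column
      altitude    real             NOT NULL,
      angle       real             NOT NULL,
      speed       real             NOT NULL,
      satellites  smallint         NOT NULL,  -- width assumed
      priority    smallint         NOT NULL,  -- width assumed
      attributes  jsonb            NOT NULL   -- no DEFAULT; processor always writes
    );

    SELECT create_hypertable('positions', 'ts',
                             chunk_time_interval => INTERVAL '1 day',
                             if_not_exists       => TRUE);

    -- No PRIMARY KEY: on a hypertable, a unique index that includes the
    -- partitioning column is the idiomatic equivalent.
    CREATE UNIQUE INDEX IF NOT EXISTS positions_device_ts
      ON positions (device_id, ts DESC);
    CREATE INDEX IF NOT EXISTS positions_ts ON positions (ts DESC);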

003_faulty_column.sql — adds the operator-controlled faulty boolean
  flag plus the partial index positions_faulty_idx ON (device_id,
  ts DESC) WHERE faulty = FALSE. The column is set only via Directus
  admin (Phase 4 permissions); processor's writer never touches it.
  The partial index optimises the hot-path read pattern (every
  processor evaluator filters faulty = FALSE); operator queries that
  look at faulty rows specifically use the broader positions_device_ts
  index from 002.
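
  Sketched below (the index definition is verbatim from above; faulty's
  default and nullability are assumptions):

    ALTER TABLE positions
      ADD COLUMN IF NOT EXISTS faulty boolean NOT NULL DEFAULT FALSE;

    -- Partial index: keeps the hot path (evaluators always filter on
    -- faulty = FALSE) fast while staying small.
    CREATE INDEX IF NOT EXISTS positions_faulty_idx
      ON positions (device_id, ts DESC)
      WHERE faulty = FALSE;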

Live-verified 2026-05-01:
  - First apply: 3 applied, 0 skipped, exit 0.
  - Re-run: 0 applied, 3 skipped, exit 0.
  - All 13 columns present with correct types/nullability/defaults.
  - Hypertable registered with 1-day chunk interval.
  - Three expected indexes present.
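
One way to check the chunk interval from psql (one possible query, not
necessarily the one used during verification):

  -- time_interval should read '1 day' for the ts dimension
  SELECT column_name, time_interval
    FROM timescaledb_information.dimensions
   WHERE hypertable_name = 'positions';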

Non-blocking observation: TimescaleDB's create_hypertable()
auto-created a fourth index (positions_ts_idx) duplicating our
explicit positions_ts. Processor's migration has the same redundancy
so stage already lives with this. Cleanup path documented in the
task spec for Phase 3 hardening (create_default_indexes => FALSE
in the create_hypertable call).
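
The documented cleanup, sketched (the same call as in 002, with default
index creation suppressed):

  SELECT create_hypertable('positions', 'ts',
                           chunk_time_interval    => INTERVAL '1 day',
                           create_default_indexes => FALSE,
                           if_not_exists          => TRUE);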

ROADMAP marks 1.3 done; 1.4 next.

directus

The TRM business plane. Directus 11 instance owning the relational schema (organizations, users, events, entries, course definition, penalty system, timing tables), exposing it through auto-generated REST/GraphQL APIs and the admin UI, and enforcing role-based permissions.

For the architectural specification see ../docs/wiki/entities/directus.md. For the work plan and task status see .planning/ROADMAP.md.

This service is part of the TRM (Time Racing Management) platform.


Schema management — at a glance

Schema is defined and migrated through Directus, with two artifact directories:

  • snapshots/schema.yaml — Directus collections, fields, relations. Generated locally via directus schema snapshot, applied at container startup via directus schema apply.
  • db-init/*.sql — schema Directus does not manage: the postgres-timescaledb positions hypertable, the faulty column, PostGIS-specific DDL, etc. Sequential numbered files (001_, 002_, …) applied by scripts/apply-db-init.sh with a migrations_applied guard table to skip already-run files.

Apply order at boot: db-init first, then directus schema apply, then directus start. Any failure halts boot.
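
The migrations_applied guard, sketched in SQL (column names here are
assumptions; scripts/apply-db-init.sh is authoritative):

  CREATE TABLE IF NOT EXISTS migrations_applied (
    filename   text        PRIMARY KEY,
    applied_at timestamptz NOT NULL DEFAULT now()
  );

  -- Per db-init file: apply only when no row exists, then record it.
  SELECT 1 FROM migrations_applied
   WHERE filename = '002_positions_hypertable.sql';
  -- ...run the file if that returned no row, then:
  INSERT INTO migrations_applied (filename)
  VALUES ('002_positions_hypertable.sql');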


Quick start (local)

Prerequisites: Docker, the directus/directus:11.17.4 image (pulled automatically by compose), a running Postgres 16 + TimescaleDB + PostGIS instance (provided by compose.dev.yaml).

git clone <repo-url>
cd directus
cp .env.example .env
# Edit .env — at minimum set DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE, KEY, SECRET
docker compose -f compose.dev.yaml up --build

Admin UI lands at http://localhost:8055. Default admin credentials are read from ADMIN_EMAIL / ADMIN_PASSWORD in .env.

After making schema changes in the admin UI, snapshot before commit:

pnpm run schema:snapshot
git add snapshots/schema.yaml && git commit

Test the image locally

compose.dev.yaml builds the image from source and runs it next to a TimescaleDB+PostGIS container. Useful for verifying Dockerfile changes, db-init migrations, or snapshot apply behavior before pushing.

docker compose -f compose.dev.yaml down -v   # wipe volumes for a fresh run
docker compose -f compose.dev.yaml up --build

The entrypoint runs db-init, then directus schema apply, then directus start. Watch the logs to confirm each step exits 0.


Production / stage deployment

This service is not deployed standalone. It runs as part of the platform stack defined in the deploy/ repo, which Portainer pulls and runs on the stage and production hosts.

The image itself is published to git.dev.microservices.al/trm/directus:main on every push to main (see CI behavior below). The deploy/ repo's compose.yaml references that image.

To pin a specific commit in production, set DIRECTUS_TAG=<sha> in the deploy stack's environment variables.

Note: The deploy/compose.yaml will need a directus service entry referencing this image, plus a TimescaleDB+PostGIS service if not already present, before this service can run in stage/production. See .planning/phase-1-slice-1-schema/07-image-and-dockerfile.md.


Environment variables

See .env.example for the full list. Required for boot:

Variable                                                  Description
--------                                                  -----------
DB_CLIENT                                                 pg (always)
DB_HOST / DB_PORT / DB_DATABASE / DB_USER / DB_PASSWORD   Postgres connection
KEY                                                       Directus instance key (random UUID)
SECRET                                                    Directus JWT signing secret (random)
ADMIN_EMAIL / ADMIN_PASSWORD                              Bootstrap admin (only used on first init)
PUBLIC_URL                                                External-facing URL of the instance

All other Directus envs (cache, logging, CORS, etc.) follow upstream defaults unless overridden.


CI behavior

Gitea Actions workflow lands at .gitea/workflows/build.yml in Phase 1 task 1.8 — not yet present.

When the workflow exists:

  • Push to main (only when snapshots/, db-init/, extensions/, Dockerfile,
    or the workflow file itself changes):
      1. Build the image.
      2. Spin up a throwaway Postgres + TimescaleDB + PostGIS via services:.
      3. Dry-run apply-db-init.sh and directus schema apply --yes against it.
      4. Publish the image tagged :main if the dry-run exits 0.
      5. Auto-deploy to stage if a Portainer webhook is configured via
         secrets.PORTAINER_WEBHOOK_URL.
  • Manual trigger (workflow_dispatch): same flow, run on demand.

The dry-run is non-negotiable — it catches snapshot drift, broken db-init scripts, and incompatible schema changes before they touch any real DB.
