julian 57624cb997 Task 1.9 — Rally Albania 2026 dogfood seed (Phase 1 complete)
Pre-seed landed via the directus-local MCP server. Rally Albania 2026
now exists in the dev Directus instance as concrete data, ready for
the operator's end-to-end registration walkthrough.

Seeded:
- Organization "Motorsport Club Albania" (slug msc-albania).
- Event "Rally Albania 2026" — discipline rally, 06-06 to 06-13.
- 18 classes from §2.2–§2.5 of the regs:
    M-1..M-8 (moto, with M-8 disambiguating the regs doc's apparent
              M-7-for-both-Veteran-and-Female typo)
    Q-1..Q-3 (quad)
    C-1, C-2, C-A, C-3 (car)
    S-1..S-3 (SSV)
- Test vehicle: 1998 Toyota Land Cruiser 70, plate AA-001-AA, 4500cc.
- Test devices: FMB920 chassis + FMB920 dash backup + FMB003 panic
  button. Plausible IMEIs (Teltonika TAC range).
- Junction rows: organization_vehicles (1), organization_devices (3).

Deliberately NOT seeded — left for operator's manual admin-UI
walkthrough as the dogfood acceptance test:
- organization_users row (admin in MSC Albania as race-director)
- entry row (Toyota in C-2, race_number 301, status registered)
- entry_crew row (admin as pilot)
- entry_devices rows × 3 (chassis + backup vehicle-mounted, body
  device assigned_user_id = admin)

This split validates the schema two ways: programmatic creation works
(via MCP), and the admin UI exposes the same collections with working
dropdowns / required-field validation / composite-unique enforcement.

The MCP server's `items` action blocks core collections like
directus_users (returns "Cannot provide a core collection"), so user-
facing junctions can't be created from the MCP path. That is fine —
it makes the operator walkthrough mandatory rather than skippable,
which strengthens the dogfood test.

---

Phase 1 complete (8/8 → 9/9). Status flips to 🟩 in ROADMAP.

Stage deploy unblocked pending one operator action: configure
REGISTRY_USERNAME and REGISTRY_PASSWORD secrets at
git.dev.microservices.al/trm/directus → Settings → Secrets. Without
those, task 1.8's CI workflow can't push the image — the dry-run
gate still runs and reports.

Project memory at .claude/projects/.../project_rally_albania_2026.md
updated to reflect Phase 1 completion and the seed state.

directus

The TRM business plane. Directus 11 instance owning the relational schema (organizations, users, events, entries, course definition, penalty system, timing tables), exposing it through auto-generated REST/GraphQL APIs and the admin UI, and enforcing role-based permissions.

For the architectural specification see ../docs/wiki/entities/directus.md. For the work plan and task status see .planning/ROADMAP.md.

This service is part of the TRM (Time Racing Management) platform.


Schema management — at a glance

Schema is defined and migrated through Directus, with two artifact directories:

  • snapshots/schema.yaml — Directus collections, fields, relations. Generated locally via directus schema snapshot, applied at container startup via directus schema apply.
  • db-init/*.sql — schema Directus does not manage: the postgres-timescaledb positions hypertable, the `faulty` column, PostGIS-specific DDL, etc. Sequential numbered files (001_, 002_, …) applied by scripts/apply-db-init.sh with a migrations_applied guard table to skip already-run files.

Apply order at boot: db-init first, then directus schema apply, then directus start. Any failure halts boot.
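The guard logic above can be sketched as follows. This is an illustration, not the real scripts/apply-db-init.sh: a plain text file stands in for the migrations_applied table, and the apply step is a stub where the real script runs psql.

```shell
# Sketch of the db-init guard: apply each numbered .sql once, in order.
# $1 = directory of numbered .sql files, $2 = guard file (stand-in for the
# migrations_applied table; the real script queries Postgres instead).
apply_db_init() {
  dir=$1; guard=$2
  touch "$guard"
  for f in "$dir"/*.sql; do
    name=$(basename "$f")
    if grep -qx "$name" "$guard"; then
      echo "skip $name (already applied)"
    else
      echo "apply $name"        # real script: psql -v ON_ERROR_STOP=1 -f "$f"
      echo "$name" >> "$guard"
    fi
  done
}
```

Run twice against the same directory: the first pass applies everything, the second skips everything, which is the idempotence the guard table buys at container restart.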


Quick start (local)

Prerequisites: Docker, the directus/directus:11.17.4 image (pulled automatically by compose), a running Postgres 16 + TimescaleDB + PostGIS instance (provided by compose.dev.yaml).

git clone <repo-url>
cd directus
cp .env.example .env
# Edit .env — at minimum set DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE, KEY, SECRET
docker compose -f compose.dev.yaml up --build

Admin UI lands at http://localhost:8055. Default admin credentials are read from ADMIN_EMAIL / ADMIN_PASSWORD in .env.

After making schema changes in the admin UI, snapshot before commit:

pnpm run schema:snapshot
git add snapshots/schema.yaml && git commit

Test the image locally

compose.dev.yaml builds the image from source and runs it next to a TimescaleDB+PostGIS container. Useful for verifying Dockerfile changes, db-init migrations, or snapshot apply behavior before pushing.

docker compose -f compose.dev.yaml down -v   # wipe volumes for a fresh run
docker compose -f compose.dev.yaml up --build

The entrypoint runs db-init, then directus schema apply, then directus start. Watch the logs to confirm each step exits 0.
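A non-interactive way to do that log check is a small grep helper. The step markers below are assumptions about what the entrypoint prints, not guaranteed log strings; adjust them to the actual output.

```shell
# Fail if any of the three boot steps is missing from a captured log file.
check_boot_log() {
  for step in "db-init" "schema apply" "directus start"; do
    grep -q "$step" "$1" || { echo "missing step: $step"; return 1; }
  done
  echo "all boot steps present"
}
```

Usage: `docker compose -f compose.dev.yaml logs > boot.log && check_boot_log boot.log`.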


Production / stage deployment

This service is not deployed standalone. It runs as part of the platform stack defined in the deploy/ repo, which Portainer pulls and runs on the stage and production hosts.

The image itself is published to git.dev.microservices.al/trm/directus:main on every push to main (see CI behavior below). The deploy/ repo's compose.yaml references that image.

To pin a specific commit in production, set DIRECTUS_TAG=<sha> in the deploy stack's environment variables.
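The pin works through ordinary compose variable interpolation. Assuming the deploy compose references the image roughly as `image: git.dev.microservices.al/trm/directus:${DIRECTUS_TAG:-main}` (the `:-main` default is an assumption about the deploy repo, not confirmed here):

```shell
# With no pin set, the stack tracks the :main tag.
unset DIRECTUS_TAG
echo "git.dev.microservices.al/trm/directus:${DIRECTUS_TAG:-main}"
# -> git.dev.microservices.al/trm/directus:main

# Pin a specific commit sha:
DIRECTUS_TAG=57624cb
echo "git.dev.microservices.al/trm/directus:${DIRECTUS_TAG:-main}"
# -> git.dev.microservices.al/trm/directus:57624cb
```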

Note: The deploy/compose.yaml will need a directus service entry referencing this image, plus a TimescaleDB+PostGIS service if not already present, before this service can run in stage/production. See .planning/phase-1-slice-1-schema/07-image-and-dockerfile.md.


Environment variables

See .env.example for the full list. Required for boot:

Variable                                                  Description
DB_CLIENT                                                 pg (always)
DB_HOST / DB_PORT / DB_DATABASE / DB_USER / DB_PASSWORD   Postgres connection
KEY                                                       Directus instance key (random UUID)
SECRET                                                    Directus JWT signing secret (random)
ADMIN_EMAIL / ADMIN_PASSWORD                              Bootstrap admin (only used on first init)
PUBLIC_URL                                                External-facing URL of the instance

All other Directus envs (cache, logging, CORS, etc.) follow upstream defaults unless overridden.
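KEY and SECRET only need to be unpredictable. One way to generate them with standard tools (a sketch; uuidgen and openssl are assumed to be installed, with a Linux /proc fallback for the UUID):

```shell
# Random UUID for KEY, 64 hex chars for SECRET.
KEY=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
SECRET=$(openssl rand -hex 32)
printf 'KEY=%s\nSECRET=%s\n' "$KEY" "$SECRET"
```

Paste the two lines into .env before first boot; changing SECRET later invalidates all issued JWTs.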


CI behavior

The Gitea Actions workflow will land at .gitea/workflows/build.yml in Phase 1 task 1.8; it is not yet present.

When the workflow exists:

  • Push to main (only when snapshots/, db-init/, extensions/, Dockerfile, or the workflow file itself changes): builds the image, spins up a throwaway Postgres + TimescaleDB + PostGIS via services:, runs apply-db-init.sh and directus schema apply --yes against it as a dry-run, then publishes the image tagged :main if the dry-run exits 0. Auto-deploys to stage if a Portainer webhook is configured via secrets.PORTAINER_WEBHOOK_URL.
  • Manual trigger (workflow_dispatch): same flow, run on demand.

The dry-run is non-negotiable — it catches snapshot drift, broken db-init scripts, and incompatible schema changes before they touch any real DB.
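The gate reduces to "every step exits 0, or nothing publishes". A hypothetical sketch of that logic (function name and messages invented here; the commands passed in would be the db-init and schema-apply steps described above):

```shell
# Run each dry-run step in order; abort publication on the first failure.
dry_run_gate() {
  for cmd in "$@"; do
    if ! sh -c "$cmd"; then
      echo "dry-run failed at: $cmd" >&2
      return 1
    fi
  done
  echo "dry-run passed; image will be published"
}
```

Usage in the workflow would look like `dry_run_gate "./scripts/apply-db-init.sh" "npx directus schema apply --yes ./snapshots/schema.yaml"`, with the publish step conditioned on its exit code.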
