Split db-init into pre-schema and post-schema phases

CI dry-run revealed an architectural ordering bug: db-init/004 and
db-init/005 run ALTER TABLE against the Directus-managed tables
(organization_users, events, etc.), but db-init runs BEFORE schema-apply
creates those
tables. On a fresh CI Postgres this fails with "relation does not
exist." Local dev never tripped this because we'd created the tables
via MCP first.

Fix: introduce a post-schema migration phase. Two db-init runs in the
entrypoint, with schema-apply in between:

  1. apply-db-init.sh   db-init/        → positions hypertable + faulty
                                          column (tables Directus does
                                          NOT manage)
  2. schema-apply.sh                    → creates Directus-managed tables
                                          from snapshots/schema.yaml
  3. apply-db-init.sh   db-init-post/   → composite UNIQUE constraints on
                                          the Directus-managed tables
  4. directus bootstrap
  5. directus start

Files moved:
  db-init/004_junction_unique_constraints.sql →
    db-init-post/001_junction_unique_constraints.sql
  db-init/005_event_participation_unique_constraints.sql →
    db-init-post/002_event_participation_unique_constraints.sql

Each ALTER TABLE in the post-schema migrations is now wrapped in a
pg_constraint existence guard for idempotency. This handles the dev DB
where the constraints already exist (from the original 004/005 runs +
the manual psql recovery during task 1.5's destructive-apply
incident). Old 004/005 rows in migrations_applied become orphans —
harmless.

Updates:
- Dockerfile: COPY db-init-post into the image
- entrypoint.sh: 4-step → 5-step flow with the post-schema run between
  schema-apply and bootstrap
- .gitea/workflows/build.yml: dry-run chains all three pre-boot scripts
  (pre-schema → schema-apply → post-schema); path filter includes
  db-init-post/**
- Task specs 1.4 and 1.5 Done sections: updated to reference the new
  db-init-post/ path (db-init/004 → db-init-post/001, etc.)

The reusable runner script (apply-db-init.sh) didn't need to change —
it already accepts DB_INIT_DIR and uses just the basename for the
guard-table key. The two phases share migrations_applied; filenames
don't collide because pre-schema and post-schema use distinct
descriptive names.
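
A minimal sketch of how such a directory-agnostic runner can work
(hypothetical: apply-db-init.sh itself is unchanged by this commit and not
shown here, so the variable names, psql flags, and migrations_applied
columns below are assumptions, not the script's actual contents):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a directory-agnostic migration runner.
# The guard key is the file's basename, so the same function serves
# both db-init/ and db-init-post/ via the DB_INIT_DIR override.
set -euo pipefail

apply_db_init() {
  local dir="${DB_INIT_DIR:-/directus/db-init}"
  local f key
  for f in "$dir"/*.sql; do
    key="$(basename "$f")"   # guard-table key: basename only, no directory
    # Skip files already recorded in the shared migrations_applied table.
    if psql -tAc "SELECT 1 FROM migrations_applied WHERE filename = '${key}'" | grep -q 1; then
      continue
    fi
    psql -v ON_ERROR_STOP=1 -f "$f"
    psql -c "INSERT INTO migrations_applied (filename) VALUES ('${key}')"
  done
}
```

Because the full basename is the key, both phases can share one guard table;
files only collide if two directories carry an identically named file, which
the distinct descriptive names rule out.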

Phase 1 is still "done" — this is a Phase 1 architectural correction
exposed by the CI dry-run, not a new task.
2026-05-02 10:47:52 +02:00
parent 82615c0a66
commit e01abfef27
10 changed files with 245 additions and 157 deletions
@@ -6,6 +6,7 @@ on:
paths:
- 'snapshots/**'
- 'db-init/**'
- 'db-init-post/**'
- 'extensions/**'
- 'scripts/**'
- 'entrypoint.sh'
@@ -67,10 +68,12 @@ jobs:
# -------------------------------------------------------------------------
# Dry-run boot — the gate that protects the registry from broken images.
#
# Runs only the two pre-boot scripts (apply-db-init.sh → schema-apply.sh)
# against the throwaway Postgres service above. Intentionally does NOT run
# `directus bootstrap` or `directus start` — that would require waiting for
# the HTTP server to come up, which adds minutes and tests nothing new.
# Runs the pre-boot script chain (apply-db-init.sh → schema-apply.sh →
# apply-db-init.sh against db-init-post) against the throwaway Postgres
# service above. Mirrors the entrypoint's first three steps.
# Intentionally does NOT run `directus bootstrap` or `directus start` —
# that would require waiting for the HTTP server to come up, which adds
# minutes and tests nothing new.
#
# --network host: the service container is mapped on 127.0.0.1:5432; the
# docker run container sees it as localhost:5432 only when host networking
@@ -107,7 +110,7 @@ jobs:
-e ADMIN_PASSWORD=ci-password-not-secret \
-e PUBLIC_URL=http://localhost:8055 \
trm-directus:ci \
-c '/directus/scripts/apply-db-init.sh && /directus/scripts/schema-apply.sh && echo "dry-run ok"'
-c '/directus/scripts/apply-db-init.sh && /directus/scripts/schema-apply.sh && DB_INIT_DIR=/directus/db-init-post /directus/scripts/apply-db-init.sh && echo "dry-run ok"'
# -------------------------------------------------------------------------
# Registry login — runs only if the dry-run succeeded (default: workflow
@@ -140,7 +140,7 @@ Unique constraint: `(organization_id, device_id)`.
- `organization_devices` — 6 fields (id UUID PK, organization_id M2O, device_id M2O, registered_at, date_created, date_updated).
- 6 M2O relations on the junctions, all with `ON DELETE RESTRICT`.
**Composite unique constraints landed via `db-init/004_junction_unique_constraints.sql`** because Directus's snapshot YAML format does not capture composite unique constraints (only single-column ones via `is_unique`). The migration adds:
**Composite unique constraints landed via `db-init-post/001_junction_unique_constraints.sql`** because Directus's snapshot YAML format does not capture composite unique constraints (only single-column ones via `is_unique`). The migration adds:
- `organization_users (organization_id, user_id)`
- `organization_vehicles (organization_id, vehicle_id)`
- `organization_devices (organization_id, device_id)`
@@ -157,7 +157,7 @@ Boot logs confirm: `[db-init] apply 004_junction_unique_constraints.sql` → `[d
- ✅ All seven collections exist with the fields specified.
- ✅ Required fields flagged (organizations.name/slug, devices.imei/model, vehicles.make/model, junction org/target/role).
- ✅ Single-column unique constraints (organizations.slug, devices.imei) enforced.
- ✅ Composite unique constraints on junctions enforced via db-init/004 (assertion block confirms).
- ✅ Composite unique constraints on junctions enforced via db-init-post/001 (assertion block confirms).
- ✅ M2O relations clickable in admin UI (Directus auto-resolves the dropdowns from the relation metadata).
- ✅ No permission policies attached — admin-only by default.
- `pnpm run schema:snapshot` produces snapshots/schema.yaml with all 7 collections present.
@@ -133,7 +133,7 @@ Unique constraint: `(entry_id, device_id)` — a device can't appear twice in th
**10 relations** wired across the 5 collections, all `ON DELETE RESTRICT` except `entry_devices.assigned_user_id` (`SET NULL`, deviation noted above).
**Composite unique constraints landed via `db-init/005_event_participation_unique_constraints.sql`:**
**Composite unique constraints landed via `db-init-post/002_event_participation_unique_constraints.sql`:**
- `events (organization_id, slug)`
- `classes (event_id, code)`
- `entries (event_id, race_number)`
@@ -149,10 +149,10 @@ This task surfaced a real foot-gun in our boot pipeline. Documenting in detail s
**What happened:**
1. We created 5 new collections via MCP against the running Directus.
2. We then ran `docker compose build && up -d` to make `db-init/005_*.sql` apply.
2. We then ran `docker compose build && up -d` to make `db-init-post/002_*.sql` apply.
3. The image rebuild baked in the OLD `snapshots/schema.yaml` (committed in task 1.4 — only had 7 collections).
4. Boot ran the entrypoint chain. db-init applied 005 successfully (constraints landed on the new tables). But step 2/4 (`schema-apply.sh` → `directus schema apply --yes /directus/snapshots/schema.yaml`) compared the running DB against the stale snapshot and saw 5 collections that "shouldn't exist" — so it **deleted them**, taking the constraints with them.
5. End state: 5 collections gone, db-init/005 row in `migrations_applied` still recorded as applied (so it wouldn't re-run), production-shape damage in dev.
5. End state: 5 collections gone, db-init-post/002 row in `migrations_applied` still recorded as applied (so it wouldn't re-run), production-shape damage in dev.
**Why `directus schema apply --yes` is destructive by design:**
@@ -161,7 +161,7 @@ The `--yes` flag tells Directus to enforce the snapshot as the single source of
**Recovery performed:**
1. Re-created the 5 collections + 10 relations via MCP (same calls as the original task 1.5 work — repeatable since the data was source-controlled in the conversation).
2. Re-applied the 5 ALTER TABLE statements from `db-init/005_*.sql` directly via psql (since `migrations_applied` already had 005 recorded).
2. Re-applied the 5 ALTER TABLE statements from `db-init-post/002_*.sql` directly via psql (since `migrations_applied` already had 005 recorded).
3. Ran `pnpm run schema:snapshot` *before* any further restart. Snapshot now reflects the full 13-collection state.
**Discipline going forward (operator rule):**
@@ -181,7 +181,7 @@ The entrypoint's hard-coded `--yes` is a long-term issue. Phase 3 hardening coul
- ✅ All 5 collections exist with the fields specified.
- ✅ Required fields flagged (events.organization_id/name/slug/discipline/starts_at/ends_at, classes.event_id/code/name, entries.event_id/class_id/race_number/status, entry_crew.entry_id/user_id/role, entry_devices.entry_id/device_id).
- ✅ Single-column unique constraints — none in this task (all uniqueness is composite).
- ✅ Composite unique constraints (5 of them) enforced via db-init/005.
- ✅ Composite unique constraints (5 of them) enforced via db-init-post/002.
- ✅ M2O relations wired (10 total).
- ✅ status enum dropdown shows all 8 values in lifecycle order.
- ✅ race_number is integer.
@@ -6,11 +6,14 @@
# extensions) lands in Phase 5 when TypeScript extensions are introduced.
#
# Artifacts baked into the image at build time:
# /directus/snapshots/ — schema.yaml (generated; empty placeholder in Phase 1)
# /directus/db-init/ — numbered SQL migration files (Phase 1 task 1.3 fills these)
# /directus/scripts/ — shell helpers (Phase 1 tasks 1.2, 1.6 fill these)
# /directus/snapshots/ — schema.yaml (generated)
# /directus/db-init/ — pre-schema migrations (positions hypertable etc.)
# /directus/db-init-post/ — post-schema migrations (constraints on Directus
# managed tables; applied AFTER schema-apply)
# /directus/scripts/ — shell helpers (apply-db-init.sh, schema-apply.sh)
# /directus/extensions/ — TypeScript extensions (Phase 5)
# /directus/entrypoint.sh — boot wrapper (real flow lands in Phase 1 task 1.7)
# /directus/entrypoint.sh — boot wrapper (5-step flow: pre-schema db-init →
# schema apply → post-schema db-init → bootstrap → start)
#
# No bind mounts of these directories in compose.dev.yaml — the image is the
# source of truth. Reproducible across local, CI, and production environments.
@@ -36,6 +39,7 @@ RUN apk add --no-cache bash postgresql16-client
# .gitkeep files ensure the directories always exist so COPY never fails.
COPY snapshots/ /directus/snapshots/
COPY db-init/ /directus/db-init/
COPY db-init-post/ /directus/db-init-post/
COPY scripts/ /directus/scripts/
COPY extensions/ /directus/extensions/
COPY entrypoint.sh /directus/entrypoint.sh
@@ -0,0 +1,21 @@
# Post-schema migrations applied AFTER directus schema apply runs.
#
# Pre-schema migrations live in ../db-init/ — they create tables that
# Directus does NOT manage (positions hypertable, faulty column, future
# PostGIS extension). Post-schema migrations live here — they constrain
# tables that Directus DOES manage (organization_*, events, entries,
# entry_*, classes), which are created by `directus schema apply` from
# the snapshot YAML during entrypoint step 2/5.
#
# Order at boot:
# 1. apply-db-init.sh DB_INIT_DIR=/directus/db-init (pre-schema)
# 2. schema-apply.sh (Directus tables created)
# 3. apply-db-init.sh DB_INIT_DIR=/directus/db-init-post (post-schema)
# 4. directus bootstrap
# 5. directus start
#
# Both pre- and post- runs share the same `migrations_applied` guard
# table. Filenames must be unique across both directories (which they
# are by convention — pre-schema files start with descriptive names
# from the table they create; post-schema files start with descriptive
# names from the constraint they add).
@@ -0,0 +1,80 @@
-- 001_junction_unique_constraints.sql (post-schema phase)
-- Composite UNIQUE constraints on the org-junction tables.
--
-- Why post-schema?
-- The tables this migration constrains (organization_users,
-- organization_vehicles, organization_devices) are Directus-managed —
-- created by `directus schema apply` from snapshots/schema.yaml during
-- entrypoint step 2/5. Pre-schema migrations (db-init/) cannot reference
-- them because they don't exist yet at that point. This file lives in
-- db-init-post/ which the runner walks AFTER schema-apply.
--
-- Why composite uniqueness lives here at all (not in the snapshot YAML)?
-- Directus's snapshot format only captures single-column unique
-- constraints (the field-level `is_unique` flag). Composite uniqueness
-- is enforced via raw DDL.
--
-- Idempotency: each ALTER TABLE is wrapped in a `pg_constraint` existence
-- check so the migration is safe to apply against a database where the
-- constraints were created out-of-band (e.g. via psql during the dev
-- recovery from the schema-apply destructive-delete incident in task
-- 1.5). The runner's checksum guard is a separate layer; this guard
-- protects against state drift that the runner can't see.
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'organization_users_org_user_unique'
) THEN
ALTER TABLE organization_users
ADD CONSTRAINT organization_users_org_user_unique
UNIQUE (organization_id, user_id);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'organization_vehicles_org_vehicle_unique'
) THEN
ALTER TABLE organization_vehicles
ADD CONSTRAINT organization_vehicles_org_vehicle_unique
UNIQUE (organization_id, vehicle_id);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'organization_devices_org_device_unique'
) THEN
ALTER TABLE organization_devices
ADD CONSTRAINT organization_devices_org_device_unique
UNIQUE (organization_id, device_id);
END IF;
END $$;
-- -------------------------------------------------------------------------
-- Assertion block: verify all three constraints landed.
-- -------------------------------------------------------------------------
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_users_org_user_unique'
) THEN
RAISE EXCEPTION 'organization_users composite unique constraint missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_vehicles_org_vehicle_unique'
) THEN
RAISE EXCEPTION 'organization_vehicles composite unique constraint missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_devices_org_device_unique'
) THEN
RAISE EXCEPTION 'organization_devices composite unique constraint missing';
END IF;
END $$;
@@ -0,0 +1,97 @@
-- 002_event_participation_unique_constraints.sql (post-schema phase)
-- Composite UNIQUE constraints on the event-participation collections.
--
-- Same rationale as 001 in this dir: tables are Directus-managed (events,
-- classes, entries, entry_crew, entry_devices), created by schema-apply,
-- so the constraints land here in db-init-post/ rather than in db-init/.
--
-- All ALTER TABLE statements are wrapped in pg_constraint existence guards
-- for idempotency against pre-existing constraints (see 001 for full
-- rationale).
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'events_org_slug_unique'
) THEN
ALTER TABLE events
ADD CONSTRAINT events_org_slug_unique
UNIQUE (organization_id, slug);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'classes_event_code_unique'
) THEN
ALTER TABLE classes
ADD CONSTRAINT classes_event_code_unique
UNIQUE (event_id, code);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entries_event_race_number_unique'
) THEN
ALTER TABLE entries
ADD CONSTRAINT entries_event_race_number_unique
UNIQUE (event_id, race_number);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_crew_entry_user_unique'
) THEN
ALTER TABLE entry_crew
ADD CONSTRAINT entry_crew_entry_user_unique
UNIQUE (entry_id, user_id);
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_devices_entry_device_unique'
) THEN
ALTER TABLE entry_devices
ADD CONSTRAINT entry_devices_entry_device_unique
UNIQUE (entry_id, device_id);
END IF;
END $$;
-- -------------------------------------------------------------------------
-- Assertion block: verify all five constraints landed.
-- -------------------------------------------------------------------------
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'events_org_slug_unique'
) THEN
RAISE EXCEPTION 'events composite unique constraint (org, slug) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'classes_event_code_unique'
) THEN
RAISE EXCEPTION 'classes composite unique constraint (event, code) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entries_event_race_number_unique'
) THEN
RAISE EXCEPTION 'entries composite unique constraint (event, race_number) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_crew_entry_user_unique'
) THEN
RAISE EXCEPTION 'entry_crew composite unique constraint (entry, user) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_devices_entry_device_unique'
) THEN
RAISE EXCEPTION 'entry_devices composite unique constraint (entry, device) missing';
END IF;
END $$;
@@ -1,60 +0,0 @@
-- 004_junction_unique_constraints.sql
-- Composite UNIQUE constraints on the three org-junction tables.
--
-- Why this lives in db-init/ rather than being captured by Directus snapshot:
-- Directus's field-level `is_unique` flag only generates single-column
-- unique constraints. Junction tables need composite uniqueness on the
-- pair (org, target) so the same user/vehicle/device cannot be registered
-- twice within the same org. The snapshot YAML format does NOT capture
-- composite unique constraints, so Directus cannot round-trip them.
-- They belong here, in the same place the positions hypertable's DDL lives.
--
-- Owned by: task 1.4 (org catalog collections). The constraints are part of
-- the data model contract, not a separate Phase 1 migration concern.
--
-- Idempotency: ALTER TABLE ... ADD CONSTRAINT is NOT idempotent. The
-- migrations_applied guard table ensures this file runs at most once per
-- environment. If a constraint already exists (e.g. ad-hoc on an existing
-- stage DB), the operator must INSERT INTO migrations_applied (filename,
-- checksum) VALUES ('004_junction_unique_constraints.sql', '<sha256>') to
-- skip this file on next boot.
ALTER TABLE organization_users
ADD CONSTRAINT organization_users_org_user_unique
UNIQUE (organization_id, user_id);
ALTER TABLE organization_vehicles
ADD CONSTRAINT organization_vehicles_org_vehicle_unique
UNIQUE (organization_id, vehicle_id);
ALTER TABLE organization_devices
ADD CONSTRAINT organization_devices_org_device_unique
UNIQUE (organization_id, device_id);
-- -------------------------------------------------------------------------
-- Assertion block: verify all three constraints landed.
-- -------------------------------------------------------------------------
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_users_org_user_unique'
) THEN
RAISE EXCEPTION 'organization_users composite unique constraint missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_vehicles_org_vehicle_unique'
) THEN
RAISE EXCEPTION 'organization_vehicles composite unique constraint missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint
WHERE conname = 'organization_devices_org_device_unique'
) THEN
RAISE EXCEPTION 'organization_devices composite unique constraint missing';
END IF;
END $$;
@@ -1,65 +0,0 @@
-- 005_event_participation_unique_constraints.sql
-- Composite UNIQUE constraints on the event-participation collections.
--
-- Same rationale as 004: Directus's `is_unique` flag is single-column only;
-- composite uniqueness lives in db-init/ because the snapshot YAML format
-- does not capture multi-column unique constraints.
--
-- Owned by: task 1.5 (event-participation collections).
ALTER TABLE events
ADD CONSTRAINT events_org_slug_unique
UNIQUE (organization_id, slug);
ALTER TABLE classes
ADD CONSTRAINT classes_event_code_unique
UNIQUE (event_id, code);
ALTER TABLE entries
ADD CONSTRAINT entries_event_race_number_unique
UNIQUE (event_id, race_number);
ALTER TABLE entry_crew
ADD CONSTRAINT entry_crew_entry_user_unique
UNIQUE (entry_id, user_id);
ALTER TABLE entry_devices
ADD CONSTRAINT entry_devices_entry_device_unique
UNIQUE (entry_id, device_id);
-- -------------------------------------------------------------------------
-- Assertion block: verify all five constraints landed.
-- -------------------------------------------------------------------------
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'events_org_slug_unique'
) THEN
RAISE EXCEPTION 'events composite unique constraint (org, slug) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'classes_event_code_unique'
) THEN
RAISE EXCEPTION 'classes composite unique constraint (event, code) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entries_event_race_number_unique'
) THEN
RAISE EXCEPTION 'entries composite unique constraint (event, race_number) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_crew_entry_user_unique'
) THEN
RAISE EXCEPTION 'entry_crew composite unique constraint (entry, user) missing';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'entry_devices_entry_device_unique'
) THEN
RAISE EXCEPTION 'entry_devices composite unique constraint (entry, device) missing';
END IF;
END $$;
@@ -3,15 +3,20 @@
# entrypoint.sh — TRM directus image boot flow
#
# Apply order (non-negotiable, per ROADMAP design rule #3):
# 1. db-init runner — applies db-init/*.sql migrations against Postgres,
# guarded by the migrations_applied table. Owns DDL Directus does not
# manage (positions hypertable, faulty column).
# 1. db-init runner (PRE-schema) — applies db-init/*.sql migrations against
# Postgres. These are migrations for tables Directus does NOT manage
# (positions hypertable, faulty column, future PostGIS extension).
# 2. Directus schema apply — applies snapshots/schema.yaml so the running
# schema matches what's in git. No-op if schema.yaml doesn't exist
# (Phase 1 task 1.4/1.5 hasn't produced one yet).
# 3. Directus bootstrap — idempotent first-boot setup (admin user, system
# schema matches what's in git. This creates the Directus-managed
# tables (organizations, events, entries, etc.). No-op if schema.yaml
# doesn't exist or is empty.
# 3. db-init runner (POST-schema) — applies db-init-post/*.sql migrations.
# These are constraints/indexes on Directus-managed tables that the
# snapshot YAML format cannot capture (composite UNIQUE constraints).
# Must run AFTER schema-apply because the tables don't exist before then.
# 4. Directus bootstrap — idempotent first-boot setup (admin user, system
# tables). Already-bootstrapped instances treat this as a fast no-op.
# 4. Directus start under pm2-runtime — the upstream image's actual run
# 5. Directus start under pm2-runtime — the upstream image's actual run
# pattern. pm2 provides crash recovery and signal handling inside the
# container.
#
@@ -25,14 +30,17 @@ log() {
printf '[entrypoint] %s\n' "$*"
}
log "step 1/4: db-init"
log "step 1/5: db-init (pre-schema)"
/directus/scripts/apply-db-init.sh
log "step 2/4: directus schema apply"
log "step 2/5: directus schema apply"
/directus/scripts/schema-apply.sh
log "step 3/4: directus bootstrap"
log "step 3/5: db-init (post-schema)"
DB_INIT_DIR=/directus/db-init-post /directus/scripts/apply-db-init.sh
log "step 4/5: directus bootstrap"
node /directus/cli.js bootstrap
log "step 4/4: directus start (pm2-runtime)"
log "step 5/5: directus start (pm2-runtime)"
exec pm2-runtime start /directus/ecosystem.config.cjs