---
title: Directus
type: entity
created: 2026-04-30
updated: 2026-05-01
sources: [gps-tracking-architecture, teltonika-ingestion-architecture]
tags: [service, business-plane, api]
---

# Directus

The **business plane**. Owns the relational schema, exposes it through auto-generated REST/GraphQL APIs, enforces role-based permissions, and provides the admin UI for back-office users.

## What Directus owns

- **Schema management** — collections, fields, relations, migrations.
- **API generation** — REST and GraphQL endpoints, no boilerplate.
- **Authentication and authorization** — users, roles, permissions, JWT issuance.
- **Real-time** — WebSocket subscriptions on collections for live UIs.
- **Workflow automation** — Flows for orchestrating side effects (notifications, integrations).
- **Admin UI** — complete back-office interface for operators.

## What Directus is NOT

Not in the telemetry hot path. Does not accept device connections, run a geofence engine, or hold per-device runtime state. Mixing those responsibilities into the same process would couple deployment lifecycles and contaminate failure domains. See [[plane-separation]].

## Schema ownership vs. write access

Directus is the schema **owner** even though [[processor]] writes directly to the database. New tables, columns, and relations are defined through Directus. Reasons:

- Auto-generated admin UI and APIs are derived from the schema Directus knows about. Tables created outside Directus are invisible to it.
- Permissions are configured per-collection in Directus.
- Audit columns (`created_at`, `updated_at`, `user_created`) follow Directus conventions; bypassing them inconsistently leads to subtle UI bugs.

This is a normal Directus deployment pattern — it does not require sole write access, only schema authority.

|
## Extensions

Used for things that genuinely belong in the business layer:

- **Hooks** that react to data changes (e.g. on event-write, trigger a notification Flow).
- **Custom endpoints** for permission-gated, audited operations that are not throughput-critical.
- **Custom admin UI panels** for back-office workflows (data review, manual overrides, bulk ops).
- **Flows** for declarative orchestration.

**Not** used for long-running listeners, persistent network sockets, or anything in the telemetry hot path.

|
## Real-time delivery

Directus's WebSocket subscriptions push live data to the [[react-spa]] **for writes that go through Directus's own services** (REST, GraphQL, Admin UI, Flows, custom endpoints). The mechanism is action hooks (`action('items.create', ...)`) firing from the `ItemsService`, not Postgres-level change detection.

This means **direct database writes from [[processor]] are not visible** to Directus's subscription system. The platform handles this with two cleanly separated WebSocket channels:

- **[[directus]]'s WebSocket** — broadcasts business-plane events: timing edits, configuration changes, manual entries, anything operators do through the admin UI or via [[directus]]'s API.
- **[[processor]]'s WebSocket** — broadcasts the high-volume telemetry firehose: live position updates fanned out from [[redis-streams]] directly to subscribed [[react-spa]] clients. Authentication uses Directus-issued JWTs; per-subscription authorization delegates to Directus once at subscribe time.

See [[live-channel-architecture]] for the full design, including why this split is preferable to routing telemetry writes through [[directus]]'s API or running a bridging extension inside [[directus]].

## Schema management — snapshot/apply pipeline

Schema changes flow through Directus's native snapshot mechanism, kept under git. Two artifact directories:

- **`snapshots/schema.yaml`** — Directus collections, fields, relations. Generated locally with `directus schema snapshot`. Applied at container startup with `directus schema apply --yes`. Idempotent — applies only the diff against the running DB.
- **`db-init/*.sql`** — schema Directus does not manage: the [[postgres-timescaledb]] positions hypertable, the `faulty` column, indexes that need PostGIS-specific syntax, or any DDL that predates Directus knowing about a collection. Numbered (`001_`, `002_`, …) and applied by a sidecar container or one-shot job ahead of `directus schema apply`. Tracked via a `migrations_applied` guard table to skip already-run files.
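
The sidecar's skip-already-run step can be sketched as a pure shell function. This is a hedged sketch only: it assumes the applied list has already been read out of `migrations_applied` (e.g. via `psql -tAc`), and none of these helper names exist in the repo.

```shell
# Hypothetical sketch of the sidecar's "which db-init files still need
# to run" step. The guard-table lookup is modeled as a plain list here;
# the real job would read migrations_applied via psql (the table and
# column names are this page's convention, not verified code).
pending_migrations() {
  all_files=$1   # newline-separated paths, e.g. output of: ls db-init/*.sql
  applied=$2     # newline-separated filenames already recorded
  printf '%s\n' "$all_files" | sort | while IFS= read -r f; do
    case "$applied" in
      *"$(basename "$f")"*) ;;       # already applied — skip
      *) printf '%s\n' "$f" ;;       # pending — runs in numeric order
    esac
  done
}
```

Each pending file would then be executed with `psql -v ON_ERROR_STOP=1 -f` and recorded in the guard table in the same transaction, so a crashed run never half-counts a file.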

Local dev edits the schema in the admin UI, then snapshots before commit. CI builds the image with both directories baked in, spins up a throwaway Postgres, and dry-runs `apply` to catch breakage before deploy. Production (Portainer) runs the same apply at container start; multi-env separation is a connection string, not different artifacts.
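
The rebuild/restart step of the local workflow can be fronted by a small guard that refuses to proceed while the snapshot has uncommitted changes. A minimal sketch, assuming the snapshot lives at `snapshots/schema.yaml`; the guard itself is hypothetical, not part of the repo:

```shell
# Hypothetical pre-rebuild guard: block a container rebuild while
# snapshots/schema.yaml has uncommitted changes.
snapshot_is_dirty() {
  # $1: output of `git status --porcelain -- snapshots/schema.yaml`.
  # Any non-empty porcelain output means uncommitted changes.
  [ -n "$1" ]
}

# Wrapper usage (not executed in this sketch):
#   if snapshot_is_dirty "$(git status --porcelain -- snapshots/schema.yaml)"; then
#     echo "refusing rebuild: uncommitted schema changes" >&2
#     exit 1
#   fi
#   docker compose up -d --build directus
```

Wiring this into whatever script operators actually use to rebuild makes the operator rule below a mechanical check rather than a memory exercise.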

This treats `schema.yaml` as the source of truth and the admin UI as its editor. Don't hand-edit `schema.yaml`; round-trip through the UI to keep the format consistent.

> **⚠️ Destructive-apply hazard.** `directus schema apply --yes` enforces the snapshot as the single source of truth: anything in the running DB that is *not* in the snapshot gets **deleted** during apply. This is correct for fresh-environment provisioning and prod, but a foot-gun during active schema development. The boot pipeline runs apply on every container start (entrypoint step 2/4 — see [[processor]] for the analogous staged-apply pattern).
>
> **Operator rule:** *Never restart or rebuild the Directus container while there are uncommitted schema changes.* The flow is always: change in admin UI / via MCP → `pnpm run schema:snapshot` → commit → only then rebuild/restart.
>
> A real incident hit this during Phase 1 task 1.5: five newly created collections were destroyed by a rebuild because the baked-in snapshot was stale. Recovery was straightforward in dev (recreate via MCP, snapshot, commit) but would be data loss in prod. CI dry-run (Phase 1 task 1.8) catches snapshot drift before it reaches stage. A long-term mitigation — a `DIRECTUS_SCHEMA_APPLY_MODE` env var with `auto` / `dry-run` / `skip` modes — is on the Phase 3 hardening roadmap.
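The proposed env var could gate the entrypoint's apply step with a small dispatch like the following. This is a sketch of the roadmap item only — the mode names come from this page, the flags (`--yes`, `--dry-run`) from the existing CLI, and none of it is shipped behavior:

```shell
# Hypothetical Phase 3 sketch: map DIRECTUS_SCHEMA_APPLY_MODE to the
# command the entrypoint's apply step (step 2/4) would run.
resolve_apply_cmd() {
  case "${1:-auto}" in
    auto)    echo "directus schema apply --yes ./snapshots/schema.yaml" ;;
    dry-run) echo "directus schema apply --dry-run ./snapshots/schema.yaml" ;;
    skip)    echo ":" ;;  # no-op: leave the running schema untouched
    *)       echo "unknown DIRECTUS_SCHEMA_APPLY_MODE: $1" >&2; return 1 ;;
  esac
}

# Entrypoint step 2/4 would then become:
#   eval "$(resolve_apply_cmd "${DIRECTUS_SCHEMA_APPLY_MODE:-auto}")"
```

Dev environments would default to `dry-run` or `skip` for non-destructive behavior; prod and fresh-environment provisioning keep `auto`.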

## Phase 2 role

Directus owns the `commands` collection and is the **single auth surface** for outbound device commands. The SPA inserts command rows; a Directus Flow routes them via Redis to the Ingestion instance holding the device's socket. See [[phase-2-commands]].

## Failure mode

Crash → telemetry continues to flow into the database; admin UI and SPA are unavailable; no telemetry is lost. See [[failure-domains]].