Correct live-channel architecture; document dual-WebSocket design

Researched Directus's WebSocket subscription mechanism via context7 and
confirmed it only fires events for writes that go through Directus's
own ItemsService. Direct INSERTs from Processor are invisible to
subscribers. The previous claim in entities/directus.md that Directus
broadcasts Processor's writes was wrong.
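The mechanism is worth pinning down with a toy sketch. The `action('items.create', ...)` registration shape below matches Directus's hook extensions API; the in-memory registrar and `itemsServiceCreate` are stand-ins for Directus itself, illustrative only:

```typescript
// Toy model: Directus fires action hooks from its ItemsService after a
// successful write. `action()` mirrors the registrar a hook extension
// receives; everything else here is a stand-in, not Directus code.
type ActionMeta = { collection: string; keys: Array<string | number> };
type ActionHandler = (meta: ActionMeta) => void;

const handlers = new Map<string, ActionHandler[]>();

function action(event: string, handler: ActionHandler): void {
  handlers.set(event, [...(handlers.get(event) ?? []), handler]);
}

// Stand-in for an ItemsService write: REST, GraphQL, Admin UI, and Flows
// all funnel through here, so registered hooks fire.
function itemsServiceCreate(collection: string, keys: Array<string | number>): void {
  for (const h of handlers.get('items.create') ?? []) h({ collection, keys });
}

// What a subscription-broadcast hook registers:
const broadcasts: string[] = [];
action('items.create', ({ collection, keys }) => {
  broadcasts.push(`${collection}:${keys.join(',')}`);
});

itemsServiceCreate('positions', [1]); // Directus-mediated write: hook fires
// A raw `INSERT INTO positions ...` from the Processor never passes through
// ItemsService, so no hook fires and subscribers hear nothing.
```

The absence of any Postgres-level change detection is the whole correction: nothing in Directus watches the table itself.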

New: wiki/concepts/live-channel-architecture.md captures the corrected
design with a three-option comparison table, a chosen-architecture diagram,
authorization flow, failure modes, multi-instance plumbing, scale
considerations, and open questions. Chosen path: Processor exposes its
own WebSocket endpoint for the high-volume telemetry firehose
(authentication via Directus-issued JWT, authorization delegated to
Directus once at subscribe time); Directus's built-in WebSocket covers
business-plane events. Each WebSocket serves the writes its plane
manages — preserves plane-separation and gives the gentlest failure
mode (Directus down only blocks new authorizations).
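A minimal sketch of that subscribe-time flow, under stated assumptions: `verifyJwt` stands in for validating the Directus-issued token, and `askDirectus` for the single permissions query made when the subscription opens. Neither name is a real Directus or Processor API.

```typescript
type Claims = { userId: string };
type Decision = { ok: boolean; reason?: string };

// Authorization happens once, when the client subscribes; records then
// stream from Redis to the socket with no further Directus involvement.
async function authorizeSubscribe(
  token: string,
  collection: string,
  // hypothetical: verify signature/expiry of the Directus-issued JWT
  verifyJwt: (token: string) => Promise<Claims | null>,
  // hypothetical: one read-permission check against Directus
  askDirectus: (userId: string, collection: string) => Promise<boolean>,
): Promise<Decision> {
  const claims = await verifyJwt(token);
  if (!claims) return { ok: false, reason: 'invalid token' };
  if (!(await askDirectus(claims.userId, collection))) {
    return { ok: false, reason: 'forbidden' };
  }
  // Past this point, Directus being down cannot interrupt the firehose;
  // it only prevents *new* subscriptions from being authorized.
  return { ok: true };
}
```

Deciding at subscribe time rather than per record keeps the hot fan-out path free of authorization calls entirely.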

Updated:
- entities/directus.md — corrected the real-time-delivery section,
  added pointer to the new concept page.
- entities/processor.md — added Live broadcast section in
  responsibilities and a section explaining the dual-consumer-group
  plumbing for multi-instance HA.
- index.md — listed the new concept.
- log.md — synthesis entry for 2026-05-01 documenting the correction.
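The dual-consumer-group plumbing can be illustrated with an in-memory stand-in for Redis Streams. The group names (`processor`, `live-broadcast-{instance_id}`) come from the design; real code would use `XGROUP CREATE` and `XREADGROUP`, and `MiniStream` below only mimics the delivery semantic that matters: each record is delivered once per group.

```typescript
class MiniStream {
  private entries: string[] = [];
  private cursors = new Map<string, number>(); // one cursor per consumer group

  xadd(entry: string): void {
    this.entries.push(entry);
  }

  // Simplified XREADGROUP: each entry is delivered exactly once per group,
  // so consumers sharing a group split the work, while separate groups
  // each see the full stream.
  xreadgroup(group: string, count: number): string[] {
    const at = this.cursors.get(group) ?? 0;
    const batch = this.entries.slice(at, at + count);
    this.cursors.set(group, at + batch.length);
    return batch;
  }
}

const stream = new MiniStream();
['p1', 'p2', 'p3', 'p4'].forEach((e) => stream.xadd(e));

// Shared group: two Processor instances split the durable-write work.
const writerA = stream.xreadgroup('processor', 2); // ['p1', 'p2']
const writerB = stream.xreadgroup('processor', 2); // ['p3', 'p4']

// Per-instance groups: every instance sees every record, so each can
// fan positions out to its own connected clients.
const liveA = stream.xreadgroup('live-broadcast-a', 4); // all four records
const liveB = stream.xreadgroup('live-broadcast-b', 4); // all four records
```

One stream, two read disciplines: the shared group gives work-splitting for writes, the per-instance groups give full-stream fan-out, with no duplication of the stream itself.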
2026-05-01 10:40:12 +02:00
parent bf403332d0
commit 9acde675d9
5 changed files with 164 additions and 4 deletions
entities/directus.md (+9 -2)
@@ -2,7 +2,7 @@
 title: Directus
 type: entity
 created: 2026-04-30
-updated: 2026-04-30
+updated: 2026-05-01
 sources: [gps-tracking-architecture, teltonika-ingestion-architecture]
 tags: [service, business-plane, api]
 ---
@@ -47,7 +47,14 @@ Used for things that genuinely belong in the business layer:
 ## Real-time delivery
-Directus's WebSocket subscriptions push live data to the [[react-spa]]. When [[processor]] writes a row, Directus broadcasts the change to subscribed clients. Sufficient for tens to low hundreds of concurrent subscribers. If fan-out becomes a bottleneck, a dedicated WebSocket gateway can read directly from [[redis-streams]] and push to clients, bypassing Directus for the live channel only — REST/GraphQL stays in Directus.
+Directus's WebSocket subscriptions push live data to the [[react-spa]] **for writes that go through Directus's own services** (REST, GraphQL, Admin UI, Flows, custom endpoints). The mechanism is action hooks (`action('items.create', ...)`) firing from the `ItemsService`, not Postgres-level change detection.
+This means **direct database writes from [[processor]] are not visible** to Directus's subscription system. The platform handles this with two cleanly-separated WebSocket channels:
+- **[[directus]]'s WebSocket** — broadcasts business-plane events: timing edits, configuration changes, manual entries, anything operators do through the admin UI or via [[directus]]'s API.
+- **[[processor]]'s WebSocket** — broadcasts the high-volume telemetry firehose: live position updates fanned out from [[redis-streams]] directly to subscribed [[react-spa]] clients. Authentication uses Directus-issued JWTs; per-subscription authorization delegates to Directus once at subscribe time.
+See [[live-channel-architecture]] for the full design, including why this split is preferable to routing telemetry writes through [[directus]]'s API or running a bridging extension inside [[directus]].
 ## Phase 2 role
entities/processor.md (+11 -2)
@@ -2,20 +2,21 @@
 title: Processor
 type: entity
 created: 2026-04-30
-updated: 2026-04-30
+updated: 2026-05-01
 sources: [gps-tracking-architecture]
 tags: [service, telemetry-plane, domain-logic]
 ---
 # Processor
-The service where domain logic lives. Consumes normalized telemetry from [[redis-streams]] and is responsible for per-device runtime state, applying domain rules, and writing durable state to [[postgres-timescaledb]].
+The service where domain logic lives. Consumes normalized telemetry from [[redis-streams]] and is responsible for per-device runtime state, applying domain rules, writing durable state to [[postgres-timescaledb]], and broadcasting live position updates over WebSockets to the [[react-spa]].
 ## Responsibilities
 - Maintain **per-device runtime state** — last position, derived metrics, current zone, accumulators.
 - Apply **domain rules** that turn raw telemetry into meaningful events.
 - Write **durable state** — both raw position history and any derived events.
+- **Broadcast live positions** to subscribed [[react-spa]] clients over a WebSocket endpoint. See [[live-channel-architecture]] for the full design and rationale.
 - Emit events for downstream consumers (Directus Flows, notification services, dashboards).
 Where [[tcp-ingestion]] is about throughput and protocol correctness, the Processor is about correctness of meaning. It is the component most likely to evolve as requirements grow, which is why it is isolated from the sockets on one side and the API surface on the other.
@@ -34,6 +35,14 @@ The database is the source of truth for replay/analysis; in-memory state is the
 - For derived business entities (events, violations, alerts), the Processor writes directly to tables [[directus]] also knows about. Schema is owned by Directus; the Processor inserts rows respecting that schema.
 - This keeps the hot write path off the Directus HTTP stack while still letting Directus expose the data through API and admin UI.
+## Live broadcast
+The Processor exposes a WebSocket endpoint that the [[react-spa]] connects to for live position updates. The endpoint authenticates connections by validating Directus-issued JWTs and authorizes subscriptions by delegating to Directus's permission system once at subscribe time — never per record.
+This decouples the live channel from [[directus]]'s failure domain (Directus down blocks only new authorizations, not the live firehose) and preserves [[plane-separation]] (telemetry stays in the telemetry plane end-to-end). [[directus]]'s built-in WebSocket subscriptions remain the right channel for changes to the business-plane tables it writes to (timing edits, configuration, manual overrides). See [[live-channel-architecture]] for the full design.
+In multi-instance deployments, each Processor reads the [[redis-streams]] stream on two consumer groups: a shared `processor` group for durable writes (work-split across instances) and a per-instance `live-broadcast-{instance_id}` group for fan-out (every instance reads every record for its own connected clients).
 ## IO element interpretation
 Per-model IO mappings live here, not in the Ingestion layer. Example: `{ "FMB920": { "16": "odometer_km", "240": "movement" } }`. This is the boundary set by the [[teltonika]] adapter — Ingestion produces raw IO maps; the Processor names and interprets them.