docs/wiki/entities/processor.md
julian 9acde675d9 Correct live-channel architecture; document dual-WebSocket design
Researched Directus's WebSocket subscription mechanism via context7 and
confirmed it only fires events for writes that go through Directus's
own ItemsService. Direct INSERTs from Processor are invisible to
subscribers. The previous claim in entities/directus.md that Directus
broadcasts Processor's writes was wrong.

New: wiki/concepts/live-channel-architecture.md captures the corrected
design with three options table, chosen-architecture diagram,
authorization flow, failure modes, multi-instance plumbing, scale
considerations, and open questions. Chosen path: Processor exposes its
own WebSocket endpoint for the high-volume telemetry firehose
(authentication via Directus-issued JWT, authorization delegated to
Directus once at subscribe time); Directus's built-in WebSocket covers
business-plane events. Each WebSocket serves the writes its plane
manages — preserves plane-separation and gives the gentlest failure
mode (Directus down only blocks new authorizations).

Updated:
- entities/directus.md — corrected the real-time-delivery section,
  added pointer to the new concept page.
- entities/processor.md — added Live broadcast section in
  responsibilities and a section explaining the dual-consumer-group
  plumbing for multi-instance HA.
- index.md — listed the new concept.
- log.md — synthesis entry for 2026-05-01 documenting the correction.
2026-05-01 10:42:00 +02:00

---
title: Processor
type: entity
created: 2026-04-30
updated: 2026-05-01
sources: [gps-tracking-architecture]
tags: [service, telemetry-plane, domain-logic]
---
# Processor
The service where domain logic lives. Consumes normalized telemetry from [[redis-streams]] and is responsible for maintaining per-device runtime state, applying domain rules, writing durable state to [[postgres-timescaledb]], and broadcasting live position updates over WebSockets to the [[react-spa]].
## Responsibilities
- Maintain **per-device runtime state** — last position, derived metrics, current zone, accumulators.
- Apply **domain rules** that turn raw telemetry into meaningful events.
- Write **durable state** — both raw position history and any derived events.
- **Broadcast live positions** to subscribed [[react-spa]] clients over a WebSocket endpoint. See [[live-channel-architecture]] for the full design and rationale.
- Emit events for downstream consumers (Directus Flows, notification services, dashboards).
Where [[tcp-ingestion]] is about throughput and protocol correctness, the Processor is about correctness of meaning. It is the component most likely to evolve as requirements grow, which is why it is isolated from the sockets on one side and the API surface on the other.
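As a concrete illustration of a domain rule, a zone-transition check might look like the sketch below. This is a minimal sketch with hypothetical names (`Zone`, `zone_events`, axis-aligned bounding boxes standing in for real spatial assets); the actual rule set and geometry handling live in the Processor's domain layer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Zone:
    name: str
    # Axis-aligned bounding box as a stand-in for real zone geometry:
    # (min_lat, min_lon, max_lat, max_lon)
    bbox: tuple[float, float, float, float]

def resolve_zone(lat: float, lon: float, zones: list[Zone]) -> Optional[str]:
    """Return the name of the first zone containing the point, if any."""
    for z in zones:
        min_lat, min_lon, max_lat, max_lon = z.bbox
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            return z.name
    return None

def zone_events(device_id: str, prev_zone: Optional[str], lat: float, lon: float,
                zones: list[Zone]) -> tuple[Optional[str], list[dict]]:
    """Turn one raw position into zero or more domain events plus new state."""
    current = resolve_zone(lat, lon, zones)
    events: list[dict] = []
    if current != prev_zone:
        if prev_zone is not None:
            events.append({"type": "zone_exited", "device_id": device_id, "zone": prev_zone})
        if current is not None:
            events.append({"type": "zone_entered", "device_id": device_id, "zone": current})
    return current, events
```

The rule is a pure function of (prior state, new sample, reference data), which keeps it unit-testable independently of sockets and the database.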
## State management
- **Static reference data** (spatial assets, configurations) loaded at startup; refreshed on a known cadence or via explicit invalidation.
- **Per-device state** held in memory keyed by device identifier (last seen, current segment, accumulators).
- **Durable state** written asynchronously to the database.
The database is the source of truth for replay/analysis; in-memory state is the source of truth for the current decision. On restart, hot state is rehydrated from the DB — this is a recovery path, not a hot path.
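A minimal sketch of the in-memory store and its recovery path, assuming hypothetical names (`DeviceState`, `StateStore`, `rehydrate`) and a simplified set of per-device fields:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    last_lat: float = 0.0
    last_lon: float = 0.0
    last_seen: float = 0.0    # epoch seconds
    odometer_km: float = 0.0  # accumulator

class StateStore:
    """In-memory per-device state; the DB is the recovery path, not the hot path."""

    def __init__(self) -> None:
        self._states: dict[str, DeviceState] = {}

    def update(self, device_id: str, lat: float, lon: float, ts: float) -> DeviceState:
        """Hot path: mutate in-memory state for the current decision."""
        st = self._states.setdefault(device_id, DeviceState())
        st.last_lat, st.last_lon, st.last_seen = lat, lon, ts
        return st

    def rehydrate(self, rows) -> None:
        """Recovery path: rebuild hot state from the latest durable row per
        device after a restart. Rows are (device_id, lat, lon, ts, odometer)."""
        for device_id, lat, lon, ts, odo in rows:
            self._states[device_id] = DeviceState(lat, lon, ts, odo)
```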
## Database writes
- The Processor is the **only writer** for high-volume telemetry tables (e.g. the positions hypertable). [[directus]] does not insert positions; it reads them.
- For derived business entities (events, violations, alerts), the Processor writes directly to tables [[directus]] also knows about. Schema is owned by Directus; the Processor inserts rows respecting that schema.
- This keeps the hot write path off the Directus HTTP stack while still letting Directus expose the data through API and admin UI.
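The hot write path can be kept off the request path by buffering samples and flushing them in batches. The sketch below uses stdlib `sqlite3` purely as a stand-in for the TimescaleDB positions hypertable (table name and columns are illustrative, not the real schema):

```python
import sqlite3

# Stand-in for the positions hypertable. In production this is TimescaleDB,
# written only by the Processor; Directus reads it but never inserts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (device_id TEXT, ts REAL, lat REAL, lon REAL)")

def flush_batch(conn: sqlite3.Connection, batch: list[tuple]) -> None:
    """Write a buffered batch of positions in one statement, off the hot path."""
    conn.executemany("INSERT INTO positions VALUES (?, ?, ?, ?)", batch)
    conn.commit()

flush_batch(conn, [("dev1", 1.0, 52.0, 13.0), ("dev1", 2.0, 52.1, 13.1)])
```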
## Live broadcast
The Processor exposes a WebSocket endpoint that the [[react-spa]] connects to for live position updates. The endpoint authenticates connections by validating Directus-issued JWTs and authorizes subscriptions by delegating to Directus's permission system once at subscribe time — never per record.
This decouples the live channel from [[directus]]'s failure domain (Directus down blocks only new authorizations, not the live firehose) and preserves [[plane-separation]] (telemetry stays in the telemetry plane end-to-end). [[directus]]'s built-in WebSocket subscriptions remain the right channel for changes to the business-plane tables it writes to (timing edits, configuration, manual overrides). See [[live-channel-architecture]] for the full design.
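The subscribe-time authorization can be sketched as follows. Names are hypothetical, and `directus_allowed` stands in for the result of a single permissions query made against Directus when the client subscribes; full JWT signature verification is omitted here.

```python
import time

def authorize_subscription(jwt_claims: dict, requested_devices: set[str],
                           directus_allowed: set[str]) -> set[str]:
    """Authorize once at subscribe time: intersect the requested devices with
    what Directus says this user may see. Raises if the token has expired."""
    if jwt_claims.get("exp", 0) <= time.time():
        raise PermissionError("token expired")
    return requested_devices & directus_allowed

def should_deliver(record_device: str, authorized: set[str]) -> bool:
    """Hot-path check per record is a set lookup, never a Directus round-trip."""
    return record_device in authorized
```

This is what makes the failure mode gentle: once `authorized` is computed, the firehose keeps flowing even if Directus goes down; only new subscriptions are blocked.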
In multi-instance deployments, each Processor reads the [[redis-streams]] stream on two consumer groups: a shared `processor` group for durable writes (work-split across instances) and a per-instance `live-broadcast-{instance_id}` group for fan-out (every instance reads every record for its own connected clients).
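The delivery semantics of the two consumer groups can be illustrated without a live Redis. The sketch below simulates them in plain Python (round-robin stands in for Redis's actual work distribution); the real implementation reads the stream with `XREADGROUP` on the `processor` and `live-broadcast-{instance_id}` groups.

```python
def simulate_delivery(records: list[str],
                      instances: list[str]) -> tuple[dict[str, list[str]], dict[str, list[str]]]:
    """Illustrate the two consumer-group semantics: the shared 'processor'
    group splits records across instances (work-split), while each
    per-instance live-broadcast group sees every record (fan-out)."""
    shared: dict[str, list[str]] = {i: [] for i in instances}
    live: dict[str, list[str]] = {i: [] for i in instances}
    for n, rec in enumerate(records):
        # Shared group: each record is delivered to exactly one instance.
        shared[instances[n % len(instances)]].append(rec)
        # Per-instance groups: every instance receives every record.
        for i in instances:
            live[i].append(rec)
    return shared, live
```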
## IO element interpretation
Per-model IO mappings live here, not in the Ingestion layer. Example: `{ "FMB920": { "16": "odometer_km", "240": "movement" } }`. This is the boundary set by the [[teltonika]] adapter — Ingestion produces raw IO maps; the Processor names and interprets them.
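A minimal interpreter over the example mapping above might look like this. The fallback policy for unmapped IO ids (keeping them under a `io_{id}` key rather than dropping them) is an assumption, not documented behavior:

```python
# Per-model IO mappings, as in the example above.
IO_MAPPINGS: dict[str, dict[str, str]] = {
    "FMB920": {"16": "odometer_km", "240": "movement"},
}

def interpret_io(model: str, raw_io: dict[str, int]) -> dict[str, int]:
    """Translate a raw IO map from Ingestion into named fields using the
    per-model mapping. Unknown IO ids are kept under a raw key (assumption)."""
    mapping = IO_MAPPINGS.get(model, {})
    named: dict[str, int] = {}
    for io_id, value in raw_io.items():
        named[mapping.get(io_id, f"io_{io_id}")] = value
    return named
```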
## Scaling
Multiple Processor instances join a Redis Streams consumer group and split the load across device IDs. Consumer-group offsets ensure a crashed instance's work is picked up by the next one.
## Failure mode
Crash → consumer-group offsets ensure the next instance picks up where the last left off. In-memory state is rehydrated from the database. See [[failure-domains]].