julian 90d036dbf0 Document canonical Redis stream names in wiki
The wiki was silent on the actual stream name used by tcp-ingestion and
processor — anyone reading it to understand the architecture had no way
to find out what stream the services use. This gap contributed to a
stage-side bug where the two services' compiled defaults drifted
(tcp-ingestion: telemetry:teltonika, processor: telemetry:t), causing
~7 hours of silent zero-throughput before symptoms surfaced.

Changes:
- entities/redis-streams.md — added "Stream and key naming" table
  covering the inbound telemetry stream, Phase 2 command streams, and
  registry/heartbeat keys. Documented the telemetry:{vendor} convention
  so a future Queclink/Concox adapter fits predictably.
- entities/processor.md — opening paragraph names the stream and
  consumer group consumed.
- entities/tcp-ingestion.md — opening paragraph names the stream
  produced; defers full naming convention to redis-streams.
- log.md — note entry recording the canonicalization and the stage
  incident that triggered it.
2026-05-01 11:43:59 +02:00

---
title: Processor
type: entity
created: 2026-04-30
updated: 2026-05-01
sources: [gps-tracking-architecture]
tags: [service, telemetry-plane, domain-logic]
---
# Processor
The service where domain logic lives. Consumes normalized telemetry from [[redis-streams]] (default stream `telemetry:teltonika`, consumer group `processor`) and is responsible for per-device runtime state, applying domain rules, writing durable state to [[postgres-timescaledb]], and broadcasting live position updates over WebSockets to the [[react-spa]].
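A minimal consumption sketch, assuming redis-py's client API. The stream and group names match the defaults above; `CONSUMER`, `ensure_group`, and the `handle_record` callback are illustrative, not the real service's code:

```python
STREAM = "telemetry:teltonika"
GROUP = "processor"
CONSUMER = "processor-1"  # hypothetical per-instance consumer name

def ensure_group(r):
    """Create the consumer group if it doesn't exist (idempotent)."""
    try:
        r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
    except Exception as e:
        # Redis answers BUSYGROUP when the group already exists.
        if "BUSYGROUP" not in str(e):
            raise

def consume_batch(r, handle_record, count=100, block_ms=5000):
    """Read one batch of new records for this consumer, ack after handling."""
    batches = r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"},
                           count=count, block=block_ms)
    for _stream, records in batches or []:
        for record_id, fields in records:
            handle_record(fields)
            # Ack only after successful handling: at-least-once delivery.
            r.xack(STREAM, GROUP, record_id)
```

Acking after the handler returns means a crash mid-batch leaves the record pending for reprocessing rather than lost.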
## Responsibilities
- Maintain **per-device runtime state** — last position, derived metrics, current zone, accumulators.
- Apply **domain rules** that turn raw telemetry into meaningful events.
- Write **durable state** — both raw position history and any derived events.
- **Broadcast live positions** to subscribed [[react-spa]] clients over a WebSocket endpoint. See [[live-channel-architecture]] for the full design and rationale.
- Emit events for downstream consumers (Directus Flows, notification services, dashboards).
Where [[tcp-ingestion]] is about throughput and protocol correctness, the Processor is about correctness of meaning. It is the component most likely to evolve as requirements grow, which is why it is isolated from the sockets on one side and the API surface on the other.
## State management
- **Static reference data** (spatial assets, configurations) loaded at startup; refreshed on a known cadence or via explicit invalidation.
- **Per-device state** held in memory keyed by device identifier (last seen, current segment, accumulators).
- **Durable state** written asynchronously to the database.
The database is the source of truth for replay/analysis; in-memory state is the source of truth for the current decision. On restart, hot state is rehydrated from the DB — this is a recovery path, not a hot path.
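The split between in-memory decision state and durable replay state can be sketched as follows. This is a pure-Python sketch; `DeviceState`'s fields and the loader query are assumptions, not the real schema:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Tuple

Position = Tuple[float, float]  # (lat, lon)

@dataclass
class DeviceState:
    last_position: Optional[Position] = None
    accumulators: Dict[str, float] = field(default_factory=dict)

class StateStore:
    """In-memory per-device state; the DB is consulted only on a cold start."""
    def __init__(self, load_last_positions: Callable[[], Dict[str, Position]]):
        # load_last_positions is a hypothetical DB query, e.g. a
        # SELECT DISTINCT ON (device_id) over the positions hypertable.
        self._load = load_last_positions
        self._states: Dict[str, DeviceState] = {}

    def rehydrate(self) -> None:
        # Recovery path: runs once at startup, never per record.
        for device_id, pos in self._load().items():
            self._states[device_id] = DeviceState(last_position=pos)

    def get(self, device_id: str) -> DeviceState:
        # Hot path: dictionary lookup only, no DB round-trip.
        return self._states.setdefault(device_id, DeviceState())
```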
## Database writes
- The Processor is the **only writer** for high-volume telemetry tables (e.g. the positions hypertable). [[directus]] does not insert positions; it reads them.
- For derived business entities (events, violations, alerts), the Processor writes directly to tables [[directus]] also knows about. Schema is owned by Directus; the Processor inserts rows respecting that schema.
- This keeps the hot write path off the Directus HTTP stack while still letting Directus expose the data through API and admin UI.
## Live broadcast
The Processor exposes a WebSocket endpoint that the [[react-spa]] connects to for live position updates. The endpoint authenticates connections by validating Directus-issued JWTs and authorizes subscriptions by delegating to Directus's permission system once at subscribe time — never per record.
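A hedged sketch of the subscribe-time token check, assuming Directus's default HS256-signed JWTs. A real service would use a JWT library and also verify issuer/audience per its Directus configuration; this stdlib version only shows the shape of the once-at-subscribe validation:

```python
import base64, hashlib, hmac, json, time

def _b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_subscribe_token(token: str, secret: str):
    """Validate an HS256 JWT once at subscribe time; return claims or None.

    The header's alg field is deliberately ignored: we always verify with
    HS256, which sidesteps alg-confusion attacks in this sketch.
    """
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part JWT
    expected = hmac.new(secret.encode(),
                        f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None  # signature mismatch
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        return None  # expired
    return claims
```

The claims (and a one-time permission lookup against Directus) then decide what the connection may subscribe to; no further Directus call happens per broadcast record.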
This decouples the live channel from [[directus]]'s failure domain (Directus down blocks only new authorizations, not the live firehose) and preserves [[plane-separation]] (telemetry stays in the telemetry plane end-to-end). [[directus]]'s built-in WebSocket subscriptions remain the right channel for changes to the business-plane tables it writes to (timing edits, configuration, manual overrides). See [[live-channel-architecture]] for the full design.
In multi-instance deployments, each Processor reads the [[redis-streams]] stream on two consumer groups: a shared `processor` group for durable writes (work-split across instances) and a per-instance `live-broadcast-{instance_id}` group for fan-out (every instance reads every record for its own connected clients).
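The two-group layout can be made concrete with a small helper. Group names follow the convention above; the consumer names inside each group are hypothetical:

```python
def stream_subscriptions(instance_id: str):
    """(group, consumer) pairs one Processor instance reads the stream with.

    Redis Streams delivers each record to exactly one consumer per group,
    so a shared group splits work while a per-instance group sees everything.
    """
    return [
        # Work-split: records are divided across instances in this group.
        ("processor", f"processor-{instance_id}"),
        # Fan-out: a group per instance means this instance receives every
        # record, for broadcasting to its own WebSocket clients.
        (f"live-broadcast-{instance_id}", f"live-{instance_id}"),
    ]
```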
## IO element interpretation
Per-model IO mappings live here, not in the Ingestion layer. Example: `{ "FMB920": { "16": "odometer_km", "240": "movement" } }`. This is the boundary set by the [[teltonika]] adapter — Ingestion produces raw IO maps; the Processor names and interprets them.
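A sketch of that boundary, using the FMB920 mapping from the example above. Keeping unknown IDs under a numeric fallback key is an assumption of this sketch, not documented behavior:

```python
# Per-model IO mappings owned by the Processor (mirrors the example above).
IO_MAPPINGS = {"FMB920": {"16": "odometer_km", "240": "movement"}}

def interpret_io(model: str, raw_io: dict) -> dict:
    """Map raw Teltonika IO element IDs to named fields.

    Ingestion hands over raw {id: value} pairs; this is where they get
    meaning. Unknown IDs are kept under an io_<id> key so nothing is
    silently dropped.
    """
    mapping = IO_MAPPINGS.get(model, {})
    return {mapping.get(k, f"io_{k}"): v for k, v in raw_io.items()}
```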
## Scaling
Multiple Processor instances join a Redis Streams consumer group and split the load across device IDs. The group's pending-entries tracking ensures a crashed instance's unacknowledged records can be claimed and reprocessed by a surviving instance (e.g. via `XAUTOCLAIM`).
## Failure mode
Crash → the consumer group's pending-entries list lets a surviving instance claim and reprocess any unacknowledged records, so nothing is lost. In-memory state is rehydrated from the database. See [[failure-domains]].