# Phase 4 — Future / optional
**Status:** ❄️ Not committed

Ideas on the radar that may or may not become real tasks. Captured here so they don't get forgotten, and so we have a place to park scope creep that surfaces during Phases 1–3.
## Candidates
- **Directus Flow trigger emission.** When a domain event fires (timing record written, stage result computed, anomaly detected), publish a structured event that Directus Flows can subscribe to. Lets Directus orchestrate notifications, integrations, and derived workflows without polling the database. A sketch appears under Sketches below.
- **Replay tooling.** Read historical positions for a device and time range from Postgres, then re-emit them through the domain pipeline (geofence engine, timing logic) without touching `positions`. Useful for validating a new geofence layout against past races, regenerating timing records after a rule change, and demoing. A sketch appears under Sketches below.
- **Derived-metric backfill.** When the IO mapping table changes (new model, corrected mapping), backfill `decoded_attributes` for affected devices over a chosen time range without touching `positions`.
- **Alternate consumer for analytics export.** A second consumer group reading the same stream, writing to a parallel destination (Parquet on object storage, ClickHouse, etc.) for offline analytics. The Phase 1 architecture already supports this — it's a separate process joining the same stream with a different group name. No Processor changes needed; just operational scaffolding. A sketch appears under Sketches below.
- **Standalone live-broadcast WebSocket gateway.** Phase 1.5 implements the WS endpoint inside the Processor process per [[live-channel-architecture]]. If sustained throughput exceeds the threshold documented there (~10k WS messages/sec), or if connection-time auth becomes a thundering herd at race start with thousands of viewers, the documented escape hatch is to extract the WS code into a standalone service that subscribes to the same `live-broadcast-*` consumer group. The Redis-stream-in / WebSocket-out contract doesn't change; only the host process does. Promote this to a numbered phase only when measurements justify it.
- **Per-instance sharding hint.** If consumer-group load distribution turns out to be uneven (one instance handles all the chatty devices), introduce hashing-by-device-id with explicit assignment. Probably overkill — Redis Streams' default round-robin works for most workloads. A sketch appears under Sketches below.

None of these are committed. Move them out of Phase 4 and into a numbered phase only when there's a concrete reason to do them.
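## Sketches

Non-committal illustrations of a few of the candidates above, written in an assumed TypeScript/Node setting. Anything named here that doesn't already appear on this wiki — stream keys, table columns, env vars, helper functions — is invented for the example.

**Directus Flow trigger emission.** A minimal sketch assuming a Flow configured with Directus's built-in Webhook trigger (those are invoked at `/flows/trigger/<flow-id>`); the event shape and env var names are placeholders, not an agreed contract.

```ts
// Hypothetical emitter: POSTs a structured domain event to a Directus Flow
// that was created with a Webhook trigger. DIRECTUS_URL and FLOW_ID are
// placeholder env vars, not existing config keys.
interface DomainEvent {
  type: "timing_record_written" | "stage_result_computed" | "anomaly_detected";
  deviceId: string;
  occurredAt: string; // ISO-8601 timestamp
  payload: Record<string, unknown>;
}

export async function emitToDirectusFlow(event: DomainEvent): Promise<void> {
  const url = `${process.env.DIRECTUS_URL}/flows/trigger/${process.env.FLOW_ID}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
  if (!res.ok) {
    // Delivery failure must not break the hot path; log and move on.
    console.error(`Directus Flow trigger failed with status ${res.status}`);
  }
}
```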
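**Replay tooling.** A minimal sketch assuming a `positions` table with `device_id` and `fix_time` columns and a pipeline entry point passed in as `processPosition`; the real schema and pipeline API live in the Processor and aren't specified here.

```ts
// Hypothetical replay runner: reads historical rows for one device and time
// range, then feeds them back through the domain pipeline in order.
// Column names and the processPosition callback are illustrative only.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* env vars

export async function replay(
  deviceId: string,
  from: Date,
  to: Date,
  processPosition: (row: Record<string, unknown>) => Promise<void>,
): Promise<void> {
  const { rows } = await pool.query(
    `SELECT * FROM positions
      WHERE device_id = $1 AND fix_time BETWEEN $2 AND $3
      ORDER BY fix_time ASC`,
    [deviceId, from, to],
  );
  for (const row of rows) {
    // Re-emit through geofence/timing logic only; nothing is written back
    // to the positions table.
    await processPosition(row);
  }
}
```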
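**Alternate consumer for analytics export.** A minimal sketch of a second process joining the same stream under its own group name, assuming ioredis and a stream key called `positions-stream` (the real key is whatever the Processor uses). The Parquet/ClickHouse sink is stubbed out.

```ts
// Hypothetical analytics exporter: same stream, different consumer group, so
// it never competes with the Processor's own consumers for messages.
// STREAM_KEY, GROUP, and the sink are assumptions for illustration.
import Redis from "ioredis";

const redis = new Redis();
const STREAM_KEY = "positions-stream";
const GROUP = "analytics-export";
const CONSUMER = `exporter-${process.pid}`;

type Entry = [id: string, fields: string[]];

async function writeBatch(entries: Entry[]): Promise<void> {
  // Stand-in for the Parquet / ClickHouse writer.
  console.log(`exported ${entries.length} entries`);
}

async function main(): Promise<void> {
  // Create the group once; ignore the error if it already exists.
  await redis
    .xgroup("CREATE", STREAM_KEY, GROUP, "$", "MKSTREAM")
    .catch(() => undefined);

  for (;;) {
    const reply = (await redis.xreadgroup(
      "GROUP", GROUP, CONSUMER,
      "COUNT", 100,
      "BLOCK", 5000,
      "STREAMS", STREAM_KEY, ">",
    )) as [string, Entry[]][] | null;
    if (!reply) continue; // block timed out with no new entries

    for (const [, entries] of reply) {
      if (entries.length === 0) continue;
      await writeBatch(entries);
      await redis.xack(STREAM_KEY, GROUP, ...entries.map(([id]) => id));
    }
  }
}

main();
```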
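**Per-instance sharding hint.** A minimal sketch of what "hashing-by-device-id with explicit assignment" could look like: a stable hash of the device id, taken modulo the instance count, so a chatty device always lands on the same instance. Instance index and count would come from deployment config that doesn't exist yet.

```ts
// Hypothetical shard check: 32-bit FNV-1a hash of the device id modulo the
// number of Processor instances. An instance handles a position only when
// the device hashes to its own index.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept to 32 bits
  }
  return hash >>> 0;
}

export function ownsDevice(
  deviceId: string,
  instanceIndex: number, // e.g. an INSTANCE_INDEX env var (assumption)
  instanceCount: number, // e.g. an INSTANCE_COUNT env var (assumption)
): boolean {
  return fnv1a(deviceId) % instanceCount === instanceIndex;
}
```

If this ever becomes real, the more stream-native shape is probably one stream (or key suffix) per shard rather than every instance filtering a shared group, since a consumer group delivers each entry to exactly one consumer and "not mine" entries would still have to be acked or reshuffled.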