# Phase 2 — Domain logic

**Status:** ⬜ Not started — blocks on Directus schema decisions

The phase that makes the Processor *racing-aware*. Phase 1 produces a generic position firehose into Postgres; Phase 2 layers on the domain rules that turn raw positions into racing events: geofence crossings, timing records, IO interpretation, stage results.

## Outcome statement

When Phase 2 is done:

- Per-model Teltonika IO mappings (e.g. `FMB920 IO 16 → odometer_km`) live in a Directus-managed collection that the Processor reads at startup and refreshes on a known cadence. Decoded attributes are written in a typed shape alongside the raw IO bag.
- The geofence engine evaluates each incoming Position against the active geofences for the device's current event/stage and emits cross-events (entry/exit) when transitions happen.
- A `timing_records` table is written for each cross-event of interest (start gate, finish gate, intermediate splits), tied to the entry's bib/competitor/stage.
- A `stage_results` rollup is maintained per `(entry, stage)` showing total time, position, and split times, updated on each new timing record.

## Why this is a separate phase

- **Throughput correctness is independent of domain correctness.** Phase 1 ships a working firehose; Phase 2 layers logic on top without touching the consumer/writer/state plumbing.
- **The Directus schema gates everything in this phase.** Geofences, entries, classes, device_assignments — all live in Directus collections. Until those are designed and migrated, Phase 2 has no schema to write against.
- **Multiple Phase 1 production milestones can pass before Phase 2 starts.** Real-device pilot, a second tcp-ingestion instance, Redis high availability — none of those need Phase 2.

## Tasks (sketched, not detailed)

These tasks will get full task files once the Directus schema conversation is settled and we know the exact collection shapes. For now, this is the planned shape:

| # | Task | Notes |
|---|------|-------|
| 2.1 | Directus reflection — read-only client for `geofences`, `device_assignments`, `entries`, `events`, `stages` | Cached in memory, refreshed on a cadence; the boundary that lets the Processor know "what is this device currently racing in" |
| 2.2 | IO mapping table & per-model decoder | `device_models` collection in Directus → in-memory map → `decoded_attributes` JSONB column on `positions` (or a separate table) |
| 2.3 | Geofence engine | Per-position, evaluate the active geofences for the device's current entry. Use PostGIS `ST_Contains` for the cross-detection. Emit cross-events |
| 2.4 | Timing record writer | Cross-events of interest → rows in `timing_records` (Directus-owned). Idempotent on `(entry_id, geofence_id, ts)` |
| 2.5 | Stage result aggregator | On each new `timing_records` row, recompute `stage_results.{total_time, position}` for the affected entry. Materialized incrementally to avoid full recomputation |
| 2.6 | Per-device runtime state extension | Phase 1's `DeviceState` extended with current entry, current stage, last geofence membership, accumulators. Note: Phase 3 rehydration becomes important once this state has substance |

## Architectural boundary to maintain

`src/core/` from Phase 1 stays untouched. Phase 2 lives in `src/domain/`. The wire-up point is the `sink` function in `src/main.ts`: after `state.update` and `writer.write`, the sink invokes the domain handlers. Per the ESLint rule from task 1.1, `src/core/` cannot import from `src/domain/` — only `main.ts` glues them. The sketches below illustrate this wiring and the first domain handlers it would feed.
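To make the boundary concrete, here is a minimal sketch of the `sink` wire-up. Only `sink`, `state.update`, and `writer.write` come from the plan above; the `Position` shape, the `DomainHandler` type, and the handler list are illustrative assumptions, declared rather than imported so the sketch stays self-contained.

```ts
// Hypothetical shape of a decoded position; the real type is defined in Phase 1.
interface Position {
  deviceId: string;
  ts: Date;
  lat: number;
  lon: number;
  io: Record<number, number>; // raw Teltonika IO bag
}

// A domain handler reacts to a position after the core plumbing has run.
type DomainHandler = (pos: Position) => Promise<void>;

// Phase 1 core facade and Phase 2 handlers, declared to keep the sketch standalone.
declare const state: { update(pos: Position): Promise<void> };
declare const writer: { write(pos: Position): Promise<void> };
declare const domainHandlers: DomainHandler[]; // geofence engine, timing writer, aggregator

// The sink is the only place core and domain meet; src/core/ never imports
// from src/domain/ (the ESLint rule from task 1.1 enforces this).
async function sink(pos: Position): Promise<void> {
  await state.update(pos); // Phase 1: per-device runtime state
  await writer.write(pos); // Phase 1: batched Postgres write
  for (const handle of domainHandlers) {
    await handle(pos); // Phase 2: domain logic layered on top, core untouched
  }
}
```

Running the handlers sequentially after the write keeps ordering trivial; whether Phase 2 needs batching or fan-out here is a detail for the task files.

For task 2.2, a sketch of what the per-model decoder could look like once the `device_models` collection exists. The `FMB920 IO 16 → odometer_km` example is from the plan; the mapping shape and the scale factor are assumptions about what the collection will hold.

```ts
// Assumed shape of one row of the Directus-managed IO mapping.
type IoMapping = {
  ioId: number;      // Teltonika AVL IO element id
  attribute: string; // canonical attribute name, e.g. "odometer_km"
  scale: number;     // multiplier from raw value to canonical unit (assumed)
};

// In-memory map, device model → mappings, refreshed from Directus on a cadence.
const ioMappings = new Map<string, IoMapping[]>([
  ["FMB920", [{ ioId: 16, attribute: "odometer_km", scale: 0.001 }]],
]);

// Raw bag in, typed attributes out; unmapped IO ids stay in the raw bag only.
function decodeIo(model: string, raw: Record<number, number>): Record<string, number> {
  const decoded: Record<string, number> = {};
  for (const m of ioMappings.get(model) ?? []) {
    const value = raw[m.ioId];
    if (value !== undefined) decoded[m.attribute] = value * m.scale;
  }
  return decoded;
}
```

For tasks 2.3 and 2.4, a sketch of the cross-detection and the idempotent timing write, assuming a `pg` Pool, a `geofences` table with a PostGIS `geom` column in SRID 4326, and the `timing_records` columns shown. The filter to the device's current event/stage (supplied by the task 2.1 reflection) is omitted, and all names beyond those in the plan are illustrative.

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection settings from the usual PG* env vars

type CrossEvent = { geofenceId: string; kind: "entry" | "exit"; ts: Date };

// Task 2.3: ST_Contains runs the point-in-polygon test on the Postgres side,
// so the Processor only has to diff membership sets between positions.
async function detectCrossings(
  lat: number,
  lon: number,
  ts: Date,
  previous: Set<string>, // geofence ids from the device's runtime state (task 2.6)
): Promise<{ current: Set<string>; events: CrossEvent[] }> {
  const { rows } = await pool.query<{ id: string }>(
    `SELECT id
       FROM geofences
      WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint($1, $2), 4326))`,
    [lon, lat], // PostGIS points are (x, y) = (lon, lat)
  );
  const current = new Set(rows.map((r) => r.id));

  const events: CrossEvent[] = [];
  for (const id of current) {
    if (!previous.has(id)) events.push({ geofenceId: id, kind: "entry", ts });
  }
  for (const id of previous) {
    if (!current.has(id)) events.push({ geofenceId: id, kind: "exit", ts });
  }
  return { current, events };
}

// Task 2.4: idempotent write, assuming a unique constraint on
// (entry_id, geofence_id, ts), so a replayed cross-event is a no-op.
async function writeTimingRecord(entryId: string, ev: CrossEvent): Promise<void> {
  await pool.query(
    `INSERT INTO timing_records (entry_id, geofence_id, ts, kind)
     VALUES ($1, $2, $3, $4)
     ON CONFLICT (entry_id, geofence_id, ts) DO NOTHING`,
    [entryId, ev.geofenceId, ev.ts, ev.kind],
  );
}
```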
## Open questions blocking task-level detail

(These get answered in the Directus schema conversation.)

1. Are `geofences` org-scoped, event-scoped, or both?
2. Is `device_assignments` time-bounded (`start_at` + `end_at`) or just event-bounded?
3. Where does the IO mapping table live — a Directus collection, hardcoded in the Processor, or a config file?
4. What's the canonical name for the sub-event unit — `stage`, `session`, `run`, or `leg`?
5. Is there a live leaderboard requirement, or is timing reviewed post-event?