Realign processor stream-name default to telemetry:teltonika

Staging surfaced the wrong default at runtime: tcp-ingestion's compiled
default for REDIS_TELEMETRY_STREAM is 'telemetry:teltonika', but the
processor's was 'telemetry:t', so the two services were talking past each
other, with tcp-ingestion publishing to one stream while the processor read
from another, empty one. The deploy stack now pins both services to the same
value via a shared env var, but the processor's compiled default should match
as well, so that local development and the integration test stay aligned
with reality.

Changes:
- src/config/load.ts — default changed to 'telemetry:teltonika'
- .env.example — same
- test/config.test.ts — default-value assertion updated
- planning docs (ROADMAP, phase-1 README, tasks 03/08/10, phase-3 README) —
  occurrences of 'telemetry:t' replaced with 'telemetry:teltonika'

The deploy stack remains the single source of truth via the shared
REDIS_TELEMETRY_STREAM env var. Compiled defaults are belt-and-braces.
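As a sketch of the belt-and-braces fallback (the function name here is illustrative, not the repo's actual API; the real default lives in the zod-based loader in `src/config/load.ts`): the shared env var always wins, and the compiled default only applies when nothing sets it.

```typescript
// Illustrative sketch only (function name assumed): env var wins; the
// compiled default is the belt-and-braces fallback behind it.
function resolveTelemetryStream(
  env: Record<string, string | undefined> = process.env,
): string {
  return env.REDIS_TELEMETRY_STREAM ?? "telemetry:teltonika";
}
```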
2026-05-01 11:38:25 +02:00
parent d758c211ae
commit e1c6f59948
9 changed files with 15 additions and 13 deletions
@@ -30,7 +30,7 @@ Validate environment variables on startup with `zod`, build the pino root logger
| `LOG_LEVEL` | no | `info` | `trace` / `debug` / `info` / `warn` / `error` |
| `REDIS_URL` | yes | — | e.g. `redis://redis:6379` |
| `POSTGRES_URL` | yes | — | e.g. `postgres://user:pass@db:5432/trm` |
-| `REDIS_TELEMETRY_STREAM` | no | `telemetry:t` | Must match `tcp-ingestion`'s `REDIS_TELEMETRY_STREAM` |
+| `REDIS_TELEMETRY_STREAM` | no | `telemetry:teltonika` | Must match `tcp-ingestion`'s `REDIS_TELEMETRY_STREAM`. Pinned via the deploy-stack shared env var so the two services cannot drift from each other. |
| `REDIS_CONSUMER_GROUP` | no | `processor` | All Processor instances join this group |
| `REDIS_CONSUMER_NAME` | no | `${INSTANCE_ID}` | Unique per instance — defaults to instance id |
| `METRICS_PORT` | no | `9090` | HTTP server port for `/metrics`, `/healthz`, `/readyz` |
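The table rows above can be read as a loader contract. A minimal plain-TypeScript sketch of that contract follows; the repo's real loader in `src/config/load.ts` uses zod, and the field names and shapes here are assumptions for illustration only.

```typescript
// Sketch of the env contract in the table above (the real loader uses zod;
// names and shapes here are assumed, not the repo's actual types).
interface ProcessorConfig {
  redisUrl: string;
  telemetryStream: string; // must match tcp-ingestion's REDIS_TELEMETRY_STREAM
  consumerGroup: string;
  metricsPort: number;
}

function loadConfig(env: Record<string, string | undefined>): ProcessorConfig {
  if (!env.REDIS_URL) throw new Error("REDIS_URL is required");
  return {
    redisUrl: env.REDIS_URL,
    telemetryStream: env.REDIS_TELEMETRY_STREAM ?? "telemetry:teltonika",
    consumerGroup: env.REDIS_CONSUMER_GROUP ?? "processor",
    metricsPort: Number(env.METRICS_PORT ?? "9090"),
  };
}
```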
@@ -80,13 +80,13 @@ Match `tcp-ingestion`'s convention:
- `debug` for per-batch: `batch consumed n=42`, `batch written inserted=40 duplicates=2 failed=0`.
- `warn` / `error` for the obvious.
-After this task lands you should be able to run `pnpm dev` against a local Redis + Postgres, publish a synthetic `Position` to `telemetry:t`, and watch a row appear in `positions` while seeing the lifecycle logs above.
+After this task lands you should be able to run `pnpm dev` against a local Redis + Postgres, publish a synthetic `Position` to `telemetry:teltonika`, and watch a row appear in `positions` while seeing the lifecycle logs above.
## Acceptance criteria
- [ ] `pnpm typecheck`, `pnpm lint`, `pnpm test` clean.
- [ ] `pnpm dev` (with local Redis + Postgres reachable) shows the lifecycle log sequence and `processor ready`.
-- [ ] Manually publishing a `Position` to `telemetry:t` results in a row in `positions` within seconds.
+- [ ] Manually publishing a `Position` to `telemetry:teltonika` results in a row in `positions` within seconds.
- [ ] SIGTERM during idle exits cleanly (no error, no force-exit warning).
- [ ] SIGTERM with in-flight writes waits for them to complete before exiting.
@@ -7,7 +7,7 @@
## Goal
-End-to-end pipeline test: spin up Redis 7 and TimescaleDB via testcontainers, boot the Processor against them, publish a synthetic `Position` to `telemetry:t`, verify the row appears in `positions` with byte-equivalent attribute decoding (bigint, Buffer included).
+End-to-end pipeline test: spin up Redis 7 and TimescaleDB via testcontainers, boot the Processor against them, publish a synthetic `Position` to `telemetry:teltonika`, verify the row appears in `positions` with byte-equivalent attribute decoding (bigint, Buffer included).
This is the integration test that proves the upstream contract from `tcp-ingestion` flows through end-to-end. Mirror `tcp-ingestion/test/publish.integration.test.ts`'s structure and skip-on-no-Docker pattern.
@@ -35,7 +35,7 @@ Copy `tcp-ingestion/test/publish.integration.test.ts`'s pattern verbatim:
### Synthetic Position publishing
-Reuse `serializePosition` from `tcp-ingestion`'s `publish.ts` if it can be imported (likely not — separate repos). Otherwise inline the encoding: a Position object → JSON.stringify with the bigint/Buffer replacer → `XADD telemetry:t * ts <iso> device_id <imei> codec 8E payload <json>`.
+Reuse `serializePosition` from `tcp-ingestion`'s `publish.ts` if it can be imported (likely not — separate repos). Otherwise inline the encoding: a Position object → JSON.stringify with the bigint/Buffer replacer → `XADD telemetry:teltonika * ts <iso> device_id <imei> codec 8E payload <json>`.
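The inline encoding described above can be sketched as follows. The sentinel shapes (`$bigint`, `$bytes`) are assumed stand-ins for the spec in task 1.2, not the repo's actual wire format, and the `Position` shape is illustrative.

```typescript
// Sketch of a bigint/Buffer-aware JSON.stringify replacer. Note that
// Buffer#toJSON runs *before* the replacer, so we must inspect the holder
// (`this[key]`) to see the original Buffer. Sentinel key names are assumed.
function telemetryReplacer(this: any, key: string, value: unknown): unknown {
  const original = this[key];
  if (typeof original === "bigint") return { $bigint: original.toString() };
  if (Buffer.isBuffer(original)) return { $bytes: original.toString("base64") };
  return value;
}

// Field list for: XADD telemetry:teltonika * ts <iso> device_id <imei> codec 8E payload <json>
function toXaddFields(imei: string, ts: Date, position: object): string[] {
  return [
    "ts", ts.toISOString(),
    "device_id", imei,
    "codec", "8E",
    "payload", JSON.stringify(position, telemetryReplacer),
  ];
}
```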
### Why test 4 (writer failure → retry)
@@ -6,7 +6,7 @@ Implement a Node.js worker that joins a Redis Streams consumer group, decodes `P
When Phase 1 is done:
-- The Processor connects to Redis and joins consumer group `processor` on stream `telemetry:t` (configurable). On startup it creates the group with `MKSTREAM` if missing.
+- The Processor connects to Redis and joins consumer group `processor` on stream `telemetry:teltonika` (configurable; must match tcp-ingestion's compiled default). On startup it creates the group with `MKSTREAM` if missing.
- Every `Position` record published by `tcp-ingestion` lands as exactly one row in the `positions` hypertable, with `device_id`, `ts`, GPS fields, and the IO `attributes` bag preserved as `JSONB` (sentinel-decoded — bigint values become `numeric`, Buffer values become `bytea` or `text` per the spec in task 1.2).
- Per-device in-memory state (`last_position`, `last_seen`, `position_count_session`) is updated on every record and bounded by an LRU cap.
- `XACK` is sent only after the Postgres write succeeds. A crashed instance leaves work pending; on its next start it picks up via consumer-group resumption, and any other instance can claim its pending entries (full `XAUTOCLAIM` polish lives in Phase 3, but the basic resumption works in Phase 1).
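The XACK-after-write invariant in the last bullet can be sketched independently of any Redis client. Here `writeRow` and `ack` stand in for the Postgres insert and the Redis `XACK` call (names and shapes assumed); the real loop consumes via `XREADGROUP`.

```typescript
// Sketch of the ack-after-write invariant from the bullet above. If writeRow
// rejects, ack never runs, so the entry stays pending in the consumer group
// and is redelivered on restart (or claimed by another instance).
async function processEntry(
  entryId: string,
  payload: string,
  writeRow: (payload: string) => Promise<void>, // stand-in for the Postgres insert
  ack: (id: string) => Promise<void>,           // stand-in for XACK
): Promise<void> {
  await writeRow(payload); // the Postgres write must succeed first...
  await ack(entryId);      // ...only then is the stream entry acknowledged
}
```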