# processor
Node.js worker that consumes Position records from a Redis Stream (produced by tcp-ingestion), maintains per-device runtime state, applies racing-domain rules, and writes durable state to Postgres / TimescaleDB.
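The per-device state handling described above can be pictured as a pure update step. The following is an illustrative sketch only — the `Position` and `DeviceState` shapes and the drop-stale rule are assumptions for the example, not the processor's actual code:

```typescript
// Illustrative sketch — types and rules here are assumptions, not the
// processor's real implementation.
interface Position {
  deviceId: string;
  ts: number;        // epoch ms
  lat: number;
  lon: number;
  faulty?: boolean;
}

interface DeviceState {
  lastTs: number;
  lat: number;
  lon: number;
  positionsSeen: number;
}

// Apply one consumed Position to a device's runtime state.
// Faulty and out-of-order records are dropped so state only moves forward.
export function applyPosition(
  state: DeviceState | undefined,
  p: Position,
): DeviceState | undefined {
  if (p.faulty) return state;                      // drop faulty fixes
  if (state && p.ts <= state.lastTs) return state; // drop stale/duplicate
  return {
    lastTs: p.ts,
    lat: p.lat,
    lon: p.lon,
    positionsSeen: (state?.positionsSeen ?? 0) + 1,
  };
}
```

A consumer loop would fold each record from the stream into a `Map<string, DeviceState>` keyed by device id before applying domain rules and persisting.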
For the architectural specification, see `../docs/wiki/entities/processor.md`. For the work plan and task status, see `.planning/ROADMAP.md`.
This service is part of the TRM (Time Racing Management) platform.
## Quick start (local)
Prerequisites: Node.js 22+, pnpm, a local Redis instance, and a TimescaleDB instance.
```shell
git clone <repo-url>
cd processor
pnpm install
cp .env.example .env
# Edit .env — at minimum set REDIS_URL and POSTGRES_URL
pnpm dev
```
`pnpm dev` uses `tsx watch` for hot reload during development. The metrics server listens on `METRICS_PORT` (default 9090). The service connects to Redis and Postgres on startup; both must be reachable for the process to start.
## Test the Docker build locally
`compose.dev.yaml` builds the image from source and runs it alongside Redis and TimescaleDB containers. Useful for verifying Dockerfile changes before pushing:
```shell
docker compose -f compose.dev.yaml up --build
```
Once running, the readiness endpoint confirms everything is wired:
```shell
curl http://localhost:9090/readyz
# {"status":"ok"}
```
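The readiness check can be sketched as a tiny HTTP handler. This is an assumed shape for illustration — the real handler lives in the service's metrics server and may differ; `createReadyzServer` and the `isReady` probe are invented names:

```typescript
// Sketch only — an assumed shape of a /readyz endpoint, not the
// service's actual metrics server code.
import { createServer, type Server } from "node:http";

export function createReadyzServer(isReady: () => boolean): Server {
  return createServer((req, res) => {
    if (req.url === "/readyz") {
      // Report ok only once dependencies (Redis, Postgres) are wired up.
      const ok = isReady();
      res.writeHead(ok ? 200 : 503, { "content-type": "application/json" });
      res.end(JSON.stringify({ status: ok ? "ok" : "unavailable" }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```

Returning 503 while dependencies are down lets an orchestrator hold traffic until the service is actually ready.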
For day-to-day development, prefer `pnpm dev` directly — it has hot reload and faster iteration.
## Production / stage deployment
This service is not deployed standalone. It runs as part of the platform stack defined in the `deploy/` repo, which Portainer pulls and runs on the stage and production hosts.
The image itself is published to `git.dev.microservices.al/trm/processor:main` on every push to `main` (see CI behavior below). The `deploy/` repo's `compose.yaml` references that image; updates flow through there, not through this repo.
To pin a specific commit in production, set `PROCESSOR_TAG=<sha>` in the deploy stack's environment variables.
Note: `deploy/compose.yaml` will need a `processor` service entry and a TimescaleDB service added before this service can run in stage/production. See `.planning/phase-1-throughput/11-dockerfile-and-ci.md` for the expected service block shape. That is a deploy-side change for the user to make.
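For orientation only, a `processor` service entry might look roughly like the sketch below. This is a hypothetical shape — the authoritative block lives in the planning doc referenced above, and the service names, variables, and options here are assumptions:

```yaml
# Hypothetical sketch — not the actual deploy/compose.yaml entry.
services:
  processor:
    image: git.dev.microservices.al/trm/processor:${PROCESSOR_TAG:-main}
    restart: unless-stopped
    environment:
      REDIS_URL: redis://redis:6379          # assumed in-stack service name
      POSTGRES_URL: postgres://trm:${POSTGRES_PASSWORD}@timescaledb:5432/trm
    depends_on:
      - redis
      - timescaledb
```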
## Environment variables
See `.env.example` for all variables with descriptions and defaults. Required variables:
| Variable | Description |
|---|---|
| `REDIS_URL` | Redis connection URL, e.g. `redis://localhost:6379` |
| `POSTGRES_URL` | TimescaleDB connection URL, e.g. `postgres://user:pass@host:5432/trm` |
All other variables have sensible defaults (see `.env.example`).
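Putting the two required variables together, a minimal local `.env` might look like this — values are illustrative placeholders; consult `.env.example` for the full list:

```shell
# Minimal local .env — values are placeholders, not real credentials.
REDIS_URL=postgres://...            # no — see below; kept simple:
```

Correction aside, the two lines needed are simply:

```shell
# Minimal local .env — values are placeholders, not real credentials.
REDIS_URL=redis://localhost:6379
POSTGRES_URL=postgres://trm:trm@localhost:5432/trm

# Optional — shown with its documented default
METRICS_PORT=9090
```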
## Tests
- `pnpm test` — unit tests only. Fast (~1–2 s), no external dependencies. This is what CI runs.
- `pnpm test:integration` — integration tests that need Docker (testcontainers spins up real Redis 7 and TimescaleDB containers). Opt-in. Run locally before changes to the consumer, writer, or migration.
Integration tests live in `test/**/*.integration.test.ts` and are excluded from the default run by `vitest.config.ts`.
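The exclusion can be sketched as follows — this is an assumed shape of `vitest.config.ts`, not the repo's actual file:

```typescript
// Assumed sketch of vitest.config.ts — the real file may differ.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Default run: unit tests only. Integration tests are opt-in
    // (e.g. via a separate config passed to pnpm test:integration).
    exclude: ["**/node_modules/**", "test/**/*.integration.test.ts"],
  },
});
```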
### Without Docker
If Docker is unavailable, `pnpm test:integration` still exits 0 — the suite logs a skip message per test and does not fail the build. This is the correct behavior for CI runners that lack Docker access.
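One common way to get that fail-open behavior — a sketch under assumptions, as the repo's actual skip mechanism may differ — is to probe the Docker daemon once and gate each suite on the result:

```typescript
// Sketch only: probe for a usable Docker daemon. A suite can then be
// gated with e.g. vitest's describe.skipIf(!dockerAvailable()), which
// records tests as skipped instead of failing (assumed usage).
import { execSync } from "node:child_process";

export function dockerAvailable(): boolean {
  try {
    // "docker info" exits non-zero when the daemon is unreachable
    // or the CLI is not installed.
    execSync("docker info", { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
}
```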
## CI behavior
The Gitea Actions workflow is at `.gitea/workflows/build.yml`.
- Push to `main` (only when `src/`, `test/`, build config, the Dockerfile, or the workflow file itself changes): runs `typecheck`, `lint`, and `test` (unit tests only), then builds and pushes the Docker image tagged `:main`. Auto-deploys to stage if a Portainer webhook is configured via `secrets.PORTAINER_WEBHOOK_URL`.
- Manual trigger (`workflow_dispatch`): same flow, run on demand.
Integration tests are not run in CI — they need Docker access on the runner, which is not currently configured. Run them locally as needed.
The workflow uses `secrets.REGISTRY_USERNAME` and `secrets.REGISTRY_PASSWORD` for the Gitea registry login — these must be configured in the repo's (or org's) Actions secrets.
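For orientation, the registry login step typically looks something like the sketch below. This is not the repo's exact workflow; it assumes the GitHub-compatible `docker/login-action` is used (Gitea Actions generally supports it):

```yaml
# Sketch of a registry login step — assumes docker/login-action.
- name: Log in to Gitea registry
  uses: docker/login-action@v3
  with:
    registry: git.dev.microservices.al
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}
```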