processor/.planning/phase-1-throughput/10-integration-test.md
julian c314ba0902 Add planning documents for Phase 1 (throughput pipeline) and stub Phases 2-4
ROADMAP.md establishes status legend, architectural anchors pointing at the
wiki, and seven non-negotiable design rules — most importantly the
core/domain boundary that protects Phase 1 from Phase 2 churn, the
schema-authority split (positions hypertable owned here; everything else
owned by Directus), and idempotent-writes via (device_id, ts) ON CONFLICT.

Phase 1 (throughput pipeline) is fully detailed across 11 task files:
scaffold, core types + sentinel decoder, config + logging, Postgres
hypertable, Redis Stream consumer, per-device LRU state, batched writer,
main wiring, observability, integration test, Dockerfile + Gitea CI.
Observability is in Phase 1 (not deferred) — lesson learned from
tcp-ingestion task 1.10.

Phases 2-4 are stub READMEs. Phase 2 (domain logic) blocks on Directus
schema decisions and lists those open questions explicitly. Phase 3
(production hardening) and Phase 4 (future) sketch the task shape.
2026-04-30 21:16:59 +02:00


Task 1.10 — Integration test (testcontainers Redis + Postgres)

Phase: 1 — Throughput pipeline
Status: Not started
Depends on: 1.5, 1.7, 1.8, 1.9
Wiki refs:

Goal

End-to-end pipeline test: spin up Redis 7 and TimescaleDB via testcontainers, boot the Processor against them, publish a synthetic Position to telemetry:t, verify the row appears in positions with byte-equivalent attribute decoding (bigint, Buffer included).

This is the integration test that proves the upstream contract from tcp-ingestion flows through end-to-end. Mirror tcp-ingestion/test/publish.integration.test.ts's structure and skip-on-no-Docker pattern.

Deliverables

  • test/pipeline.integration.test.ts:

    • beforeAll: start the Redis container, start the TimescaleDB container, run migrations, and build a Processor instance pointed at both. If Docker is unavailable, log a clear skip message and set a flag so every it block early-returns without failing.
    • afterAll: stop the Processor, stop containers.
    • Test 1: publish a Position with bigint and Buffer attributes via XADD; wait for the row in positions (poll, timeout 10s); assert device_id, ts, GPS fields, and a JSON round-trip of attributes matches the original (bigint as string, Buffer as base64).
    • Test 2: publish two records with the same (device_id, ts); verify only one row in positions (idempotency check).
    • Test 3: publish a malformed payload (broken JSON) on the stream; verify processor_decode_errors_total increments and the bad entry stays in PEL (not ACKed).
    • Test 4: simulate the writer failing once (e.g. by temporarily shutting Postgres mid-test, then bringing it back); verify the record gets retried and eventually lands.
  • Use the TimescaleDB image, not stock postgres:16-alpine. Suggested: timescale/timescaledb:latest-pg16. Confirm that the migration's CREATE EXTENSION IF NOT EXISTS timescaledb is a no-op (the extension is already loaded in that image).

  • Use the same Vitest config split as tcp-ingestion: vitest.integration.config.ts with hookTimeout: 120_000, testTimeout: 60_000. Default pnpm test excludes *.integration.test.ts; opt-in via pnpm test:integration.
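A minimal sketch of that config split, assuming the same layout as tcp-ingestion (the include glob and comments are assumptions, not copied from that repo; the timeout options are from Vitest's documented config):

```typescript
// vitest.integration.config.ts — sketch only; mirrors the tcp-ingestion split.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Only the opt-in integration suite runs under this config.
    include: ['test/**/*.integration.test.ts'],
    hookTimeout: 120_000, // container pull + startup can be slow on first run
    testTimeout: 60_000,
  },
});
```

The default config would then exclude `**/*.integration.test.ts`, and package.json would wire `test:integration` to `vitest run -c vitest.integration.config.ts`.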

Specification

Skip-on-no-Docker pattern

Copy tcp-ingestion/test/publish.integration.test.ts's pattern verbatim:

  • Try to start the first container in beforeAll. On error, set dockerAvailable = false, log a warning, and return.
  • Each it block early-returns with a console.warn if !dockerAvailable.
  • This pattern was the fix for the CI test failure on the runner without Docker — keep it.
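The guard can be factored as below (a sketch; tryStartContainers and the variable names are ours, not lifted from tcp-ingestion):

```typescript
// Flag consulted by every it block.
let dockerAvailable = false;

// Wrap the first container start; any failure (typically: Docker daemon
// not running) flips the flag off instead of failing the suite.
async function tryStartContainers(start: () => Promise<void>): Promise<boolean> {
  try {
    await start();
    return true;
  } catch (err) {
    console.warn(`Docker unavailable — skipping integration tests: ${err}`);
    return false;
  }
}

// In beforeAll:
//   dockerAvailable = await tryStartContainers(async () => {
//     redis = await new GenericContainer('redis:7').withExposedPorts(6379).start();
//     // ...TimescaleDB container, migrations, Processor boot...
//   });
//
// In each it block:
//   if (!dockerAvailable) { console.warn('skipped: no Docker'); return; }
```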

Synthetic Position publishing

Reuse serializePosition from tcp-ingestion's publish.ts if it can be imported (likely not — separate repos). Otherwise inline the encoding: a Position object → JSON.stringify with the bigint/Buffer replacer → XADD telemetry:t * ts <iso> device_id <imei> codec 8E payload <json>.
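The inline replacer could look like this (a sketch of the semantics the task describes — bigint as decimal string, Buffer as base64; tcp-ingestion's actual serializePosition may differ, and the sample field names are hypothetical):

```typescript
// JSON.stringify replacer: bigint -> decimal string, Buffer -> base64.
// Note: stringify invokes Buffer.prototype.toJSON *before* the replacer
// runs, so check the raw value on the holder (`this[key]`), not `value`.
function telemetryReplacer(this: any, key: string, value: unknown): unknown {
  const raw = this[key];
  if (typeof raw === 'bigint') return raw.toString();
  if (Buffer.isBuffer(raw)) return raw.toString('base64');
  return value;
}

// Hypothetical attribute names, for illustration only:
const position = { odometer: 123456789012345n, raw: Buffer.from('hi') };
const payload = JSON.stringify(position, telemetryReplacer);

// Then, e.g. with ioredis (field layout taken from the task text):
//   await redis.xadd('telemetry:t', '*',
//     'ts', ts.toISOString(), 'device_id', imei, 'codec', '8E', 'payload', payload);
```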

Why test 4 (writer failure → retry)

This validates the core ACK semantics: if a write fails, the record stays pending, and re-delivery brings it back. Without this test we have unit tests showing each piece behaves correctly, but no proof the pieces compose. Fallback: if simulating a Postgres failure mid-test proves too flaky under testcontainers, weaken the scenario to: stop Postgres before publishing, publish, restart Postgres, verify the row appears.

Acceptance criteria

  • pnpm test:integration runs all four scenarios green when Docker is available.
  • Without Docker, the suite logs skip messages and exits 0 (does not fail).
  • CI (pnpm test, unit only) does not run these — they are opt-in.
  • First-run container pull may take minutes; subsequent runs are fast (Docker caches the image locally).

Risks / open questions

  • Image pull on first CI run. The TimescaleDB image is large (~700MB). If we ever wire integration tests into CI (separate job with Docker), pre-pulling may be required. Document but defer.
  • Test flakiness from polling. Polling for "row appears in positions" uses a 10s timeout. If CI is slow, raise it. Don't replace polling with await sleep(2000) — that's reliably wrong.
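The polling discipline above can be captured in one helper (a sketch; waitFor, its defaults, and the query in the usage comment are assumptions):

```typescript
// Poll a probe until it yields a value or the deadline passes.
// Used instead of a fixed sleep: returns as soon as the row is visible,
// and the timeout is a single knob to raise on slow CI.
async function waitFor<T>(
  probe: () => Promise<T | undefined>,
  { timeoutMs = 10_000, intervalMs = 100 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = await probe();
    if (result !== undefined) return result;
    if (Date.now() >= deadline) {
      throw new Error(`waitFor: timed out after ${timeoutMs}ms`);
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}

// Usage (query shape is an assumption about the positions schema):
//   const row = await waitFor(async () => {
//     const res = await pg.query(
//       'SELECT * FROM positions WHERE device_id = $1 AND ts = $2', [imei, ts]);
//     return res.rows[0];
//   });
```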

Done

(Fill in once complete: commit SHA, brief notes.)