Implement Phase 1 tasks 1.1-1.4 (scaffold + core types + config + Postgres)
Scaffold mirrors tcp-ingestion conventions: ESM, strict TS, pnpm, vitest
with unit/integration split, ESLint flat config with no-floating-promises
+ no-misused-promises + import/no-restricted-paths (the new src/core/ →
src/domain/ boundary that protects Phase 1 from Phase 2 churn).
Core types in src/core/types.ts (Position, StreamRecord, DeviceState,
Metrics, AttributeValue) — Position is byte-equivalent to tcp-ingestion's
output. Codec in src/core/codec.ts implements sentinel reversal:
{__bigint:"..."} → bigint, {__buffer_b64:"..."} → Buffer, ISO timestamp
string → Date. CodecError surfaces malformed payload reasons with the
failing field named.
Config in src/config/load.ts (zod schema, all 13 env vars with defaults
and bounded numerics). Logger in src/observability/logger.ts matches
tcp-ingestion exactly: ISO timestamps, string level labels, pino-pretty
in development.
Postgres in src/db/: createPool with sane defaults and application_name,
connectWithRetry mirroring the ioredis retry pattern, a 30-line
migration runner using a schema_migrations table, and 0001_positions.sql
with the hypertable + (device_id, ts) unique index + ts DESC index.
Migration runner unit-tested against a mocked pg.Pool; the real
TimescaleDB round-trip is deferred to task 1.10 per spec.
Verification: typecheck, lint, build all clean; 73 unit tests passing
across 4 files. import/no-restricted-paths verified live by temporarily
adding a forbidden src/domain/ import.
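
For orientation only — a hand-rolled sketch of the sentinel contract the codec reverses (payload values invented, including the IMEI; the real consumer call site lands in task 1.5):

```ts
import { decodePosition, CodecError } from './src/core/codec.js';

// What tcp-ingestion puts on the stream: bigint/Buffer values are
// sentinel-encoded, the timestamp is an ISO8601 string.
const payload = JSON.stringify({
  device_id: '352093081452251', // hypothetical IMEI
  timestamp: '2024-06-15T13:45:30.987Z',
  latitude: 54.12345,
  longitude: 25.98765,
  altitude: 150,
  angle: 270,
  speed: 60,
  satellites: 8,
  priority: 1,
  attributes: {
    io_21: 42,
    io_240: { __bigint: '18446744073709551615' }, // u64 — exceeds MAX_SAFE_INTEGER
    io_nx: { __buffer_b64: '3q2+7w==' },          // bytes de ad be ef
  },
});

try {
  const position = decodePosition(payload);
  // position.timestamp            → Date instance
  // position.attributes['io_240'] → 18446744073709551615n (bigint)
  // position.attributes['io_nx']  → Buffer <de ad be ef>
} catch (err) {
  if (err instanceof CodecError) {
    // Malformed payload — the error message names the failing field.
  }
}
```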
.dockerignore
@@ -0,0 +1,11 @@
node_modules/
dist/
coverage/
.env
.env.local
*.log
.git/
.planning/
test/
*.md
!README.md
.env.example
@@ -0,0 +1,49 @@
# Environment variables for processor.
# Copy to .env and fill in values for local development.
# Required vars: REDIS_URL, POSTGRES_URL.

# Runtime environment: development | test | production
NODE_ENV=development

# Unique identifier for this service instance.
# Used in logs (base field), metrics labels, and as the default Redis consumer name.
# IMPORTANT: must be unique per running instance for safe consumer-group operation.
# If two instances share the same INSTANCE_ID they will also share REDIS_CONSUMER_NAME,
# which causes consumer-group split-brain — the stream will not progress correctly.
INSTANCE_ID=processor-1

# Log level: fatal | error | warn | info | debug | trace
LOG_LEVEL=info

# Redis connection URL — required; no default.
REDIS_URL=redis://localhost:6379

# Postgres / TimescaleDB connection URL — required; no default.
POSTGRES_URL=postgres://postgres:postgres@localhost:5432/trm

# Redis Stream name to consume from. Must match tcp-ingestion's REDIS_TELEMETRY_STREAM.
REDIS_TELEMETRY_STREAM=telemetry:t

# Redis consumer group name. All Processor instances join this group.
REDIS_CONSUMER_GROUP=processor

# Redis consumer name. Defaults to INSTANCE_ID.
# Override only when running multiple instances that should appear as distinct consumers
# in the group (e.g. when INSTANCE_ID is not set to a unique value per container).
# REDIS_CONSUMER_NAME=processor-1

# Port for Prometheus /metrics, /healthz, /readyz HTTP server.
METRICS_PORT=9090

# Max records fetched per XREADGROUP call.
BATCH_SIZE=100

# BLOCK timeout (ms) on XREADGROUP when the stream is empty. 0 = no blocking.
BATCH_BLOCK_MS=5000

# Max rows per Postgres INSERT batch.
WRITE_BATCH_SIZE=50

# Max devices kept in the per-device in-memory state map. LRU eviction beyond this cap.
# Size each entry at ~500 bytes → 10 000 devices ≈ 5 MB. Raise for large fleets.
DEVICE_STATE_LRU_CAP=10000
.gitignore
@@ -0,0 +1,6 @@
node_modules/
dist/
coverage/
.env
.env.local
*.log
@@ -40,17 +40,17 @@ These rules govern every task. Any deviation must be discussed and documented as

### Phase 1 — Throughput pipeline

-**Status:** ⬜ Not started
+**Status:** 🟨 In progress (1.1–1.4 done; 1.5–1.11 ahead)
**Outcome:** A Node.js Processor that joins a Redis Streams consumer group on `telemetry:t`, decodes each `Position` (including `__bigint`/`__buffer_b64` sentinel reversal), upserts it into a TimescaleDB `positions` hypertable, updates per-device in-memory state (last position, last seen), `XACK`s on successful write, and exposes Prometheus metrics + health/readiness HTTP endpoints. End-to-end pilot-quality service; no domain logic yet.

[**See `phase-1-throughput/README.md`**](./phase-1-throughput/README.md)

| # | Task | Status | Landed in |
|---|------|--------|-----------|
-| 1.1 | [Project scaffold](./phase-1-throughput/01-project-scaffold.md) | ⬜ | — |
-| 1.2 | [Core types & contracts](./phase-1-throughput/02-core-types.md) | ⬜ | — |
-| 1.3 | [Configuration & logging](./phase-1-throughput/03-config-and-logging.md) | ⬜ | — |
-| 1.4 | [Postgres connection & `positions` hypertable](./phase-1-throughput/04-postgres-schema.md) | ⬜ | — |
+| 1.1 | [Project scaffold](./phase-1-throughput/01-project-scaffold.md) | 🟩 | *(pending commit SHA)* |
+| 1.2 | [Core types & contracts](./phase-1-throughput/02-core-types.md) | 🟩 | *(pending commit SHA)* |
+| 1.3 | [Configuration & logging](./phase-1-throughput/03-config-and-logging.md) | 🟩 | *(pending commit SHA)* |
+| 1.4 | [Postgres connection & `positions` hypertable](./phase-1-throughput/04-postgres-schema.md) | 🟩 | *(pending commit SHA)* |
| 1.5 | [Redis Stream consumer (XREADGROUP)](./phase-1-throughput/05-stream-consumer.md) | ⬜ | — |
| 1.6 | [Per-device in-memory state](./phase-1-throughput/06-device-state.md) | ⬜ | — |
| 1.7 | [Position writer (batched upsert)](./phase-1-throughput/07-position-writer.md) | ⬜ | — |
phase-1-throughput/01-project-scaffold.md
@@ -1,7 +1,7 @@
# Task 1.1 — Project scaffold

**Phase:** 1 — Throughput pipeline
-**Status:** ⬜ Not started
+**Status:** 🟩 Done
**Depends on:** None
**Wiki refs:** `docs/wiki/entities/processor.md`

@@ -55,4 +55,4 @@ Initialize the Node.js / TypeScript project with the directory layout from the P

## Done

-(Fill in once complete: commit SHA, brief notes.)
+*(pending commit SHA)* — Scaffolded `package.json`, `tsconfig.json`, `tsconfig.test.json`, `eslint.config.js`, `.prettierrc`, `vitest.config.ts`, `vitest.integration.config.ts`, `.env.example`, `.gitignore`, `.dockerignore`, and `src/main.ts`. All tooling passes (`pnpm typecheck`, `pnpm lint`, `pnpm build`, `pnpm test`). Verified `import/no-restricted-paths` boundary rule fires on a temporary `src/core/` → `src/domain/` import. Divergence from tcp-ingestion: the restricted-paths zone targets `src/domain/` (Phase 2 boundary) instead of `src/adapters/` (tcp-ingestion boundary).
phase-1-throughput/02-core-types.md
@@ -1,7 +1,7 @@
# Task 1.2 — Core types & contracts

**Phase:** 1 — Throughput pipeline
-**Status:** ⬜ Not started
+**Status:** 🟩 Done
**Depends on:** 1.1
**Wiki refs:** `docs/wiki/concepts/position-record.md`, `docs/wiki/concepts/io-element-bag.md`

@@ -63,4 +63,4 @@ Some Teltonika IO elements are u64 values that exceed `Number.MAX_SAFE_INTEGER`

## Done

-(Fill in once complete: commit SHA, brief notes.)
+*(pending commit SHA)* — Implemented `src/core/types.ts` (Position, StreamRecord, DeviceState, Metrics, AttributeValue) and `src/core/codec.ts` (decodePosition, CodecError). All sentinel reversal rules implemented: `__bigint` → bigint, `__buffer_b64` → Buffer, timestamp ISO string → Date. 26 test cases in `test/codec.test.ts` covering round-trips, u64-max, non-UTF-8 bytes, all error paths. Judgment call: `AttributeValue` extracted as a named type alias (not inline) to aid readability in downstream tasks.
phase-1-throughput/03-config-and-logging.md
@@ -1,7 +1,7 @@
# Task 1.3 — Configuration & logging

**Phase:** 1 — Throughput pipeline
-**Status:** ⬜ Not started
+**Status:** 🟩 Done
**Depends on:** 1.1
**Wiki refs:** `docs/wiki/entities/processor.md`

@@ -73,4 +73,4 @@ return pino({ level, base, timestamp: pino.stdTimeFunctions.isoTime, formatters

## Done

-(Fill in once complete: commit SHA, brief notes.)
+*(pending commit SHA)* — Implemented `src/config/load.ts` (zod schema, loadConfig) and `src/observability/logger.ts` (createLogger, pino-pretty in dev). 37 config test cases covering all defaults, missing required vars, URL protocol validation, and bounded numeric checks. Wired into `src/main.ts`. Divergence from tcp-ingestion config: `INSTANCE_ID` defaults to a fixed `'processor-1'` string rather than a random UUID prefix; rationale: operator-visible name is more useful than randomness in a containerised environment where the instance name can be set deterministically.
phase-1-throughput/04-postgres-schema.md
@@ -1,7 +1,7 @@
# Task 1.4 — Postgres connection & `positions` hypertable

**Phase:** 1 — Throughput pipeline
-**Status:** ⬜ Not started
+**Status:** 🟩 Done
**Depends on:** 1.1, 1.3
**Wiki refs:** `docs/wiki/entities/postgres-timescaledb.md`

@@ -86,4 +86,4 @@ Do **not** introduce a heavy framework (Knex, node-pg-migrate). The Processor ha

## Done

-(Fill in once complete: commit SHA, brief notes.)
+*(pending commit SHA)* — Implemented `src/db/pool.ts` (createPool, connectWithRetry), `src/db/migrate.ts` (runMigrations — 30-line runner), and `src/db/migrations/0001_positions.sql` (hypertable + unique index + ts-desc index). Unit tests use a mocked pg.Pool throughout; the real TimescaleDB round-trip is deferred to task 1.10 per spec. The "calls process.exit(1)" pool test uses `maxAttempts=1` to avoid fake-timer unhandled-rejection noise that surfaces when a backoff setTimeout resolves after the outer promise has already thrown.
.prettierrc
@@ -0,0 +1,7 @@
{
  "semi": true,
  "singleQuote": true,
  "printWidth": 100,
  "trailingComma": "all",
  "tabWidth": 2
}
eslint.config.js
@@ -0,0 +1,74 @@
// @ts-check
import tseslint from '@typescript-eslint/eslint-plugin';
import tsParser from '@typescript-eslint/parser';
import importPlugin from 'eslint-plugin-import';
import { fileURLToPath } from 'node:url';
import { dirname, join } from 'node:path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/** @type {import('eslint').Linter.Config[]} */
export default [
  {
    ignores: ['dist/**', 'node_modules/**', 'coverage/**'],
  },
  {
    files: ['src/**/*.ts', 'test/**/*.ts'],
    plugins: {
      '@typescript-eslint': tseslint,
      import: importPlugin,
    },
    languageOptions: {
      parser: tsParser,
      parserOptions: {
        project: './tsconfig.test.json',
        tsconfigRootDir: __dirname,
        ecmaVersion: 2022,
        sourceType: 'module',
      },
    },
    settings: {
      'import/resolver': {
        typescript: {
          project: join(__dirname, 'tsconfig.test.json'),
        },
      },
    },
    rules: {
      // TypeScript strict promise rules — critical in a stream consumer where
      // unhandled rejection silently loses work.
      '@typescript-eslint/no-floating-promises': 'error',
      '@typescript-eslint/no-misused-promises': 'error',

      // General quality
      '@typescript-eslint/no-explicit-any': 'error',
      '@typescript-eslint/no-unused-vars': [
        'error',
        { argsIgnorePattern: '^_', varsIgnorePattern: '^_' },
      ],
      '@typescript-eslint/consistent-type-imports': [
        'error',
        { prefer: 'type-imports' },
      ],

      // Domain isolation: core/ must NEVER import from domain/.
      // src/domain/ does not exist yet — this rule is preemptive so Phase 2
      // cannot violate the boundary by accident.
      'import/no-restricted-paths': [
        'error',
        {
          basePath: __dirname,
          zones: [
            {
              target: 'src/core',
              from: 'src/domain',
              message:
                'src/core must not import from src/domain — domain logic depends on core, not the reverse.',
            },
          ],
        },
      ],
    },
  },
];
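
For illustration — the kind of import that trips the boundary zone above (file and module names are invented; this mirrors the temporary check described in the commit message):

```ts
// Inside any file under src/core/ — hypothetical module path:
import { something } from '../domain/rules.js';
// ✗ import/no-restricted-paths:
//   src/core must not import from src/domain — domain logic depends on core, not the reverse.
```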
package.json
@@ -0,0 +1,43 @@
{
  "name": "processor",
  "version": "0.1.0",
  "description": "Worker service that consumes Position records from Redis Streams and writes durable state to Postgres/TimescaleDB",
  "type": "module",
  "engines": {
    "node": ">=22"
  },
  "scripts": {
    "build": "tsc --project tsconfig.json",
    "dev": "tsx watch src/main.ts",
    "start": "node dist/main.js",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:integration": "vitest run --config vitest.integration.config.ts",
    "lint": "eslint .",
    "format": "prettier --write .",
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "ioredis": "^5.3.2",
    "pg": "^8.13.0",
    "pino": "^9.5.0",
    "prom-client": "^15.1.3",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^22.10.0",
    "@types/pg": "^8.11.10",
    "@typescript-eslint/eslint-plugin": "^8.19.0",
    "@typescript-eslint/parser": "^8.19.0",
    "@vitest/coverage-v8": "^2.1.8",
    "eslint": "^9.17.0",
    "eslint-import-resolver-typescript": "^4.4.4",
    "eslint-plugin-import": "^2.31.0",
    "pino-pretty": "^13.0.0",
    "prettier": "^3.4.2",
    "testcontainers": "^11.14.0",
    "tsx": "^4.19.2",
    "typescript": "^5.7.2",
    "vitest": "^2.1.8"
  }
}
(generated file, +5298 lines — diff suppressed because it is too large)
src/config/load.ts
@@ -0,0 +1,110 @@
import { z } from 'zod';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/**
 * Validates a URL string and checks that its protocol matches one of the
 * accepted schemes. Returns the url string unchanged on success.
 */
function urlWithProtocol(accepted: string[]): z.ZodEffects<z.ZodString, string, string> {
  return z.string().superRefine((val, ctx) => {
    let parsed: URL;
    try {
      parsed = new URL(val);
    } catch {
      ctx.addIssue({ code: z.ZodIssueCode.custom, message: `Not a valid URL: "${val}"` });
      return;
    }
    // URL.protocol includes the trailing colon, e.g. "redis:" or "postgres:"
    const scheme = parsed.protocol.replace(/:$/, '');
    if (!accepted.includes(scheme)) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: `Expected protocol ${accepted.join(' or ')}:, got ${scheme}:`,
      });
    }
  });
}

// ---------------------------------------------------------------------------
// Schema
// ---------------------------------------------------------------------------

const ConfigSchema = z.object({
  NODE_ENV: z.enum(['development', 'test', 'production']).default('production'),
  INSTANCE_ID: z.string().min(1).default('processor-1'),
  LOG_LEVEL: z
    .enum(['fatal', 'error', 'warn', 'info', 'debug', 'trace'])
    .default('info'),

  // Required — no silent defaults for connectivity strings
  REDIS_URL: urlWithProtocol(['redis', 'rediss']),
  POSTGRES_URL: urlWithProtocol(['postgres', 'postgresql']),

  // Redis stream / group config — must match tcp-ingestion's output stream
  REDIS_TELEMETRY_STREAM: z.string().min(1).default('telemetry:t'),
  REDIS_CONSUMER_GROUP: z.string().min(1).default('processor'),
  // Consumer name defaults to INSTANCE_ID; resolved after schema parse (see below)
  REDIS_CONSUMER_NAME: z.string().min(1).optional(),

  // Observability
  METRICS_PORT: z.coerce.number().int().min(0).max(65535).default(9090),

  // Throughput tuning
  BATCH_SIZE: z.coerce.number().int().min(1).max(10_000).default(100),
  BATCH_BLOCK_MS: z.coerce.number().int().min(0).max(60_000).default(5_000),
  WRITE_BATCH_SIZE: z.coerce.number().int().min(1).max(1_000).default(50),

  // Per-device in-memory state LRU cap
  DEVICE_STATE_LRU_CAP: z.coerce.number().int().min(100).max(1_000_000).default(10_000),
});

// ---------------------------------------------------------------------------
// Config type
// ---------------------------------------------------------------------------

/**
 * `REDIS_CONSUMER_NAME` in the raw schema is optional (string | undefined).
 * After loading we fill it in with INSTANCE_ID if absent, so the exported
 * Config always has a non-optional consumer name.
 */
type RawConfig = z.infer<typeof ConfigSchema>;

export type Config = Omit<RawConfig, 'REDIS_CONSUMER_NAME'> & {
  readonly REDIS_CONSUMER_NAME: string;
};

// ---------------------------------------------------------------------------
// loadConfig
// ---------------------------------------------------------------------------

/**
 * Reads `process.env`, validates with zod, and returns a fully typed Config.
 * Throws with a human-readable multi-line error listing every invalid field if
 * validation fails — the intent is loud, fast failure rather than running with
 * bad configuration.
 *
 * Accepts an optional `env` parameter so tests can inject arbitrary env maps
 * without touching process.env.
 */
export function loadConfig(env: Record<string, string | undefined> = process.env): Config {
  const result = ConfigSchema.safeParse(env);

  if (!result.success) {
    const issues = result.error.issues
      .map((issue) => `  ${issue.path.join('.')}: ${issue.message}`)
      .join('\n');
    throw new Error(`Configuration error — invalid or missing environment variables:\n${issues}`);
  }

  const raw: RawConfig = result.data;

  return {
    ...raw,
    // Default REDIS_CONSUMER_NAME to INSTANCE_ID — both must be unique per
    // instance for safe consumer-group operation (see .env.example).
    REDIS_CONSUMER_NAME: raw.REDIS_CONSUMER_NAME ?? raw.INSTANCE_ID,
  };
}
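
A minimal usage sketch of the loader above, test-style with an injected env map (values illustrative):

```ts
import { loadConfig } from './src/config/load.js';

// Only the two required vars are provided; everything else takes its default.
const config = loadConfig({
  REDIS_URL: 'redis://localhost:6379',
  POSTGRES_URL: 'postgres://postgres:postgres@localhost:5432/trm',
});

config.BATCH_SIZE;          // 100 (default)
config.REDIS_CONSUMER_NAME; // 'processor-1' — filled in from the INSTANCE_ID default
```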
src/core/codec.ts
@@ -0,0 +1,225 @@
/**
 * Sentinel decoder for Position records arriving from the Redis Stream.
 *
 * tcp-ingestion serializes Position objects with a custom JSON replacer that
 * encodes types not natively supported by JSON:
 *   - bigint → { __bigint: "<decimal-digits>" }
 *   - Buffer → { __buffer_b64: "<base64>" }
 *   - Date   → ISO8601 string
 *
 * This module reverses that encoding so the Processor receives fully-typed
 * Position objects. The contract is documented in:
 *   docs/wiki/concepts/position-record.md
 *   tcp-ingestion/src/core/publish.ts (jsonReplacer)
 */

import type { Position, AttributeValue } from './types.js';

// ---------------------------------------------------------------------------
// Error type
// ---------------------------------------------------------------------------

export class CodecError extends Error {
  override readonly name = 'CodecError';

  constructor(message: string, options?: ErrorOptions) {
    super(message, options);
  }
}

// ---------------------------------------------------------------------------
// Sentinel detection helpers
// ---------------------------------------------------------------------------

/**
 * Returns true when the value is exactly `{ __bigint: "<string>" }`.
 * The shape must have exactly one key — any extra keys indicate a user-defined
 * object that coincidentally has a `__bigint` field, which is not a sentinel.
 * In practice tcp-ingestion only emits single-key sentinels; validate strictly.
 */
function isBigintSentinel(value: unknown): value is { __bigint: string } {
  if (typeof value !== 'object' || value === null) return false;
  const keys = Object.keys(value);
  return (
    keys.length === 1 &&
    keys[0] === '__bigint' &&
    typeof (value as Record<string, unknown>)['__bigint'] === 'string'
  );
}

/**
 * Returns true when the value is exactly `{ __buffer_b64: "<string>" }`.
 */
function isBufferSentinel(value: unknown): value is { __buffer_b64: string } {
  if (typeof value !== 'object' || value === null) return false;
  const keys = Object.keys(value);
  return (
    keys.length === 1 &&
    keys[0] === '__buffer_b64' &&
    typeof (value as Record<string, unknown>)['__buffer_b64'] === 'string'
  );
}

// ---------------------------------------------------------------------------
// Reviver
// ---------------------------------------------------------------------------

/**
 * JSON.parse reviver that reconstructs the live types from sentinel encodings.
 *
 * Called by JSON.parse for every key-value pair in the document, bottom-up.
 * By the time `attributes` is visited, each attribute value has already been
 * converted (sentinels → bigint/Buffer), because JSON.parse visits leaves first.
 *
 * Reviver must return `unknown` because the result type depends on the key.
 * The caller casts the final result to `PositionJson` after validation.
 */
function reviver(key: string, value: unknown): unknown {
  // Timestamp field: ISO string → Date
  if (key === 'timestamp' && typeof value === 'string') {
    const date = new Date(value);
    if (isNaN(date.getTime())) {
      throw new CodecError(`Invalid timestamp value: "${value}"`);
    }
    return date;
  }

  // bigint sentinel
  if (isBigintSentinel(value)) {
    const digits = value.__bigint;
    // Validate: only decimal digits (including optional leading minus for
    // negative bigints, though Teltonika IO elements are unsigned).
    if (!/^-?\d+$/.test(digits)) {
      throw new CodecError(
        `Malformed __bigint sentinel: expected decimal digits, got "${digits}"`,
      );
    }
    return BigInt(digits);
  }

  // Buffer sentinel
  if (isBufferSentinel(value)) {
    const b64 = value.__buffer_b64;
    // Validate base64 characters (standard + URL-safe alphabets, with padding)
    if (!/^[A-Za-z0-9+/\-_]*={0,2}$/.test(b64)) {
      throw new CodecError(
        `Malformed __buffer_b64 sentinel: invalid base64 string "${b64}"`,
      );
    }
    return Buffer.from(b64, 'base64');
  }

  return value;
}

// ---------------------------------------------------------------------------
// Required field validation
// ---------------------------------------------------------------------------

const REQUIRED_NUMERIC_FIELDS = [
  'latitude',
  'longitude',
  'altitude',
  'angle',
  'speed',
  'satellites',
  'priority',
] as const;

/**
 * Validates the decoded object has all required Position fields with the
 * correct types. Throws `CodecError` naming the first failing field.
 */
function validateDecodedPosition(obj: Record<string, unknown>): asserts obj is {
  device_id: string;
  timestamp: Date;
  latitude: number;
  longitude: number;
  altitude: number;
  angle: number;
  speed: number;
  satellites: number;
  priority: number;
  attributes: Record<string, AttributeValue>;
} {
  if (typeof obj['device_id'] !== 'string' || obj['device_id'].length === 0) {
    throw new CodecError('Missing or invalid field: device_id (expected non-empty string)');
  }

  if (!(obj['timestamp'] instanceof Date)) {
    throw new CodecError(
      'Missing or invalid field: timestamp (expected Date after reviver; was ISO string decoded?)',
    );
  }

  for (const field of REQUIRED_NUMERIC_FIELDS) {
    if (typeof obj[field] !== 'number') {
      throw new CodecError(
        `Missing or invalid field: ${field} (expected number, got ${typeof obj[field]})`,
      );
    }
  }

  if (typeof obj['attributes'] !== 'object' || obj['attributes'] === null) {
    throw new CodecError('Missing or invalid field: attributes (expected object)');
  }

  // Validate priority is exactly 0, 1, or 2
  const priority = obj['priority'] as number;
  if (priority !== 0 && priority !== 1 && priority !== 2) {
    throw new CodecError(
      `Invalid field: priority (expected 0 | 1 | 2, got ${priority})`,
    );
  }

  // Validate attributes values are only AttributeValue types
  const attrs = obj['attributes'] as Record<string, unknown>;
  for (const [attrKey, attrVal] of Object.entries(attrs)) {
    if (
      typeof attrVal !== 'number' &&
      typeof attrVal !== 'bigint' &&
      !Buffer.isBuffer(attrVal)
    ) {
      throw new CodecError(
        `Invalid attribute "${attrKey}": expected number | bigint | Buffer, got ${typeof attrVal}`,
      );
    }
  }
}

// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------

/**
 * Decodes a JSON-encoded Position string (with sentinel encoding applied by
 * tcp-ingestion's `serializePosition`) into a fully-typed `Position` object.
 *
 * Throws `CodecError` if the JSON is malformed, a sentinel is invalid, a
 * required field is missing, or a field has the wrong type.
 */
export function decodePosition(payload: string): Position {
  let parsed: unknown;

  try {
    parsed = JSON.parse(payload, reviver);
  } catch (err) {
    if (err instanceof CodecError) {
      throw err;
    }
    throw new CodecError(
      `Failed to parse Position payload as JSON: ${err instanceof Error ? err.message : String(err)}`,
      { cause: err },
    );
  }

  if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
    throw new CodecError('Position payload must be a JSON object');
  }

  const obj = parsed as Record<string, unknown>;

  validateDecodedPosition(obj);

  return obj as unknown as Position;
}
src/core/types.ts
@@ -0,0 +1,94 @@
/**
 * Canonical TypeScript types for the Processor service.
 *
 * `Position` is the boundary contract received from the Redis Stream (produced
 * by tcp-ingestion). All other types here are Processor-internal — they describe
 * what flows through the pipeline, not what crosses service boundaries.
 */

// ---------------------------------------------------------------------------
// Shared value types
// ---------------------------------------------------------------------------

/**
 * A single IO attribute value from the Teltonika AVL record.
 *  - number : fixed-width IO elements (N1/N2/N4 — fit safely in JS number)
 *  - bigint : N8 elements (u64, may exceed Number.MAX_SAFE_INTEGER)
 *  - Buffer : NX variable-length elements (Codec 8 Extended)
 */
export type AttributeValue = number | bigint | Buffer;

// ---------------------------------------------------------------------------
// Position — input contract from tcp-ingestion
// ---------------------------------------------------------------------------

/**
 * Normalized GPS position record. Byte-equivalent to tcp-ingestion's `Position`
 * type (docs/wiki/concepts/position-record.md).
 *
 * `priority` is typed as a union rather than `number` to stay consistent with
 * tcp-ingestion and make exhaustive switches possible in domain logic.
 */
export type Position = {
  readonly device_id: string;
  readonly timestamp: Date;
  readonly latitude: number;
  readonly longitude: number;
  readonly altitude: number;
  readonly angle: number; // heading 0–360°
  readonly speed: number; // km/h; 0 may mean "GPS invalid" — preserve verbatim
  readonly satellites: number;
  readonly priority: 0 | 1 | 2; // 0=Low, 1=High, 2=Panic
  readonly attributes: Readonly<Record<string, AttributeValue>>;
};

// ---------------------------------------------------------------------------
// StreamRecord — raw shape returned by XREADGROUP before codec decoding
// ---------------------------------------------------------------------------

/**
 * The flat field-value record as written by tcp-ingestion's `serializePosition`.
 * The `payload` field contains a JSON-encoded `Position` with sentinel encoding
 * applied (`__bigint`, `__buffer_b64`). The consumer calls `decodePosition` on
 * `payload` to reconstruct the live `Position` object.
 *
 * Top-level `ts`, `device_id`, and `codec` fields allow downstream filtering
 * without JSON parsing; `payload` is the source of truth.
 */
export type StreamRecord = {
  readonly id: string; // Redis Stream entry ID, e.g. "1714488000000-0"
  readonly ts: string; // ISO8601 timestamp (same value as Position.timestamp)
  readonly device_id: string;
  readonly codec: string; // '8' | '8E' | '16'
  readonly payload: string; // JSON-encoded Position with sentinel encoding
};

// ---------------------------------------------------------------------------
// DeviceState — per-device in-memory runtime state
// ---------------------------------------------------------------------------

/**
 * Runtime state maintained per-device in the LRU map (task 1.6).
 * Bounded by DEVICE_STATE_LRU_CAP; evicted devices are rehydrated from Postgres
 * on next packet (Phase 3 — Phase 1 treats restart/eviction as a state loss).
 */
export type DeviceState = {
  readonly device_id: string;
  readonly last_position: Position;
  readonly last_seen: Date;
  readonly position_count_session: number;
};

// ---------------------------------------------------------------------------
// Metrics — observability surface
// ---------------------------------------------------------------------------

/**
 * Minimal metrics interface exposed to pipeline components. Concrete
 * implementation (prom-client) lands in task 1.9; this keeps types stable
 * through tasks 1.2–1.8.
 */
export type Metrics = {
  readonly inc: (name: string, labels?: Record<string, string>) => void;
  readonly observe: (name: string, value: number, labels?: Record<string, string>) => void;
};
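
A small sketch of narrowing `AttributeValue` exhaustively downstream — for example when flattening attributes into the `jsonb` shapes described in `0001_positions.sql` (the function name and its placement are hypothetical):

```ts
import type { AttributeValue } from './src/core/types.js';

// bigint → decimal string, Buffer → base64 string, number passes through —
// matching the attributes-storage convention noted in the migration file.
function toJsonbValue(value: AttributeValue): number | string {
  if (typeof value === 'bigint') return value.toString();      // N8 u64 elements
  if (Buffer.isBuffer(value)) return value.toString('base64'); // NX variable-length
  return value;                                                // N1/N2/N4 numbers
}
```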
src/db/migrate.ts
@@ -0,0 +1,117 @@
/**
 * Minimal SQL migration runner.
 *
 * Tracks applied migrations in a `schema_migrations` table (created on first
 * run). Discovers migration files by reading the `migrations/` directory
 * adjacent to this file, sorted lexicographically by filename. Each migration
 * runs inside a transaction; failure rolls back that migration only.
 *
 * Idempotent: re-running against a database where all migrations are already
 * applied is a no-op (every file is checked before execution).
 *
 * Not a framework — the Processor has one migration file in Phase 1. A 30-line
 * runner is the right answer per the task spec.
 */

import { readdir, readFile } from 'node:fs/promises';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import type pg from 'pg';
import type { Logger } from 'pino';

const MIGRATIONS_DIR = join(dirname(fileURLToPath(import.meta.url)), 'migrations');

// ---------------------------------------------------------------------------
// Schema migrations table bootstrap
// ---------------------------------------------------------------------------

const CREATE_MIGRATIONS_TABLE_SQL = `
  CREATE TABLE IF NOT EXISTS schema_migrations (
    version text PRIMARY KEY,
    applied_at timestamptz NOT NULL DEFAULT now()
  )
`;

// ---------------------------------------------------------------------------
// Public runner
// ---------------------------------------------------------------------------

/**
 * Applies all pending migrations in `src/db/migrations/` in filename order.
 * Each migration file is wrapped in a transaction. Already-applied migrations
 * are skipped with an info log.
 */
export async function runMigrations(pool: pg.Pool, logger: Logger): Promise<void> {
  // Bootstrap the tracking table before we try to use it
  await pool.query(CREATE_MIGRATIONS_TABLE_SQL);

  const sqlFiles = await discoverMigrationFiles();

  for (const filename of sqlFiles) {
    const version = filename;
    const alreadyApplied = await isMigrationApplied(pool, version);

    if (alreadyApplied) {
      logger.info({ version }, 'migration already applied; skipping');
      continue;
    }

    const sql = await readMigrationFile(filename);
    await applyMigration(pool, version, sql, logger);
  }
}

// ---------------------------------------------------------------------------
// Internals
// ---------------------------------------------------------------------------

/**
 * Lists `*.sql` files in the migrations directory, sorted lexicographically
 * (filename prefix `NNNN_` ensures correct ordering).
 */
async function discoverMigrationFiles(): Promise<string[]> {
  const entries = await readdir(MIGRATIONS_DIR);
  return entries.filter((f) => f.endsWith('.sql')).sort();
}

async function readMigrationFile(filename: string): Promise<string> {
  return readFile(join(MIGRATIONS_DIR, filename), 'utf8');
}

async function isMigrationApplied(pool: pg.Pool, version: string): Promise<boolean> {
  const result = await pool.query<{ exists: boolean }>(
    'SELECT EXISTS(SELECT 1 FROM schema_migrations WHERE version = $1) AS exists',
    [version],
  );
  // noUncheckedIndexedAccess: result.rows[0] may be undefined
  return result.rows[0]?.exists ?? false;
}

/**
 * Applies a single migration inside a transaction. Logs success at `info` and
 * throws on any SQL error (the caller bubbles it up — no silent skip).
 */
async function applyMigration(
  pool: pg.Pool,
  version: string,
  sql: string,
  logger: Logger,
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(sql);
    await client.query(
      'INSERT INTO schema_migrations (version) VALUES ($1)',
      [version],
    );
    await client.query('COMMIT');
    logger.info({ version }, 'migration applied');
  } catch (err) {
    await client.query('ROLLBACK');
    logger.error({ err, version }, 'migration failed; rolled back');
    throw err;
  } finally {
    client.release();
  }
}
src/db/migrations/0001_positions.sql
@@ -0,0 +1,60 @@
-- Migration: 0001_positions
-- Creates the positions hypertable owned by the Processor service.
--
-- Schema authority note: this is the ONLY table whose schema the Processor
-- owns directly (per ROADMAP.md design rule #2). All other tables Processor
-- writes to (timing_records, stage_results, etc.) are defined in Directus.
-- Do NOT modify this table from the Directus admin UI.

-- Enable TimescaleDB extension (no-op if already installed at the DB level).
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Raw position history. High-volume append-only table; the hypertable
-- partitioning column is `ts` (device-reported GPS time).
--
-- Column notes:
--   device_id   text        — IMEIs are 15 ASCII digits. text keeps the door
--                             open for non-IMEI device identifiers (future
--                             vendors) and avoids any leading-zero loss.
--   ts          timestamptz — device-reported event time. This is the
--                             hypertable partition column. NOT ingestion time.
--   ingested_at timestamptz — when Processor wrote the row. Useful for
--                             diagnosing clock skew or buffered record flushes.
--   altitude/angle/speed real — float32 is sufficient precision; saves space
--                             on a high-volume append-only table.
--   attributes  jsonb       — verbatim IO bag from the AVL record, with bigint
--                             values stored as decimal strings and Buffer values
--                             stored as base64 strings (see task 1.4 spec).
--                             No naming or unit conversion here; that is Phase 2.
CREATE TABLE IF NOT EXISTS positions (
  device_id   text NOT NULL,
  ts          timestamptz NOT NULL,
  ingested_at timestamptz NOT NULL DEFAULT now(),
  latitude    double precision NOT NULL,
  longitude   double precision NOT NULL,
  altitude    real NOT NULL,
  angle       real NOT NULL,
  speed       real NOT NULL,
  satellites  smallint NOT NULL,
  priority    smallint NOT NULL,
  codec       text NOT NULL,
  attributes  jsonb NOT NULL
);

-- Convert to TimescaleDB hypertable partitioned by event time.
-- chunk_time_interval = 1 day is appropriate for GPS telemetry where queries
-- typically span hours-to-days and devices send at 1–60 second intervals.
SELECT create_hypertable(
  'positions',
  'ts',
  if_not_exists => TRUE,
  chunk_time_interval => INTERVAL '1 day'
);

-- Unique constraint: natural key for idempotent upserts.
-- ON CONFLICT (device_id, ts) DO NOTHING ensures a replayed or duplicated
-- record does not create a second row (ROADMAP.md design rule #5).
CREATE UNIQUE INDEX IF NOT EXISTS positions_device_ts ON positions (device_id, ts);

-- Descending ts index for range queries (most recent positions first).
CREATE INDEX IF NOT EXISTS positions_ts ON positions (ts DESC);
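
A hedged sketch of the idempotent write this schema is built for (the actual batched writer is task 1.7; `pos`, `codec`, and `attributesAsJsonb` stand in for values the writer will have in scope):

```ts
// A replayed or duplicated stream record hits the (device_id, ts) unique
// index and becomes a no-op instead of a second row.
await pool.query(
  `INSERT INTO positions
     (device_id, ts, latitude, longitude, altitude, angle, speed, satellites, priority, codec, attributes)
   VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
   ON CONFLICT (device_id, ts) DO NOTHING`,
  [pos.device_id, pos.timestamp, pos.latitude, pos.longitude, pos.altitude,
   pos.angle, pos.speed, pos.satellites, pos.priority, codec, attributesAsJsonb],
);
```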
src/db/pool.ts
@@ -0,0 +1,71 @@
import pg from 'pg';
import type { Logger } from 'pino';

// ---------------------------------------------------------------------------
// Pool factory
// ---------------------------------------------------------------------------

/**
 * Creates a pg.Pool configured for the Processor service.
 *
 * `application_name` is set so connections are identifiable in pg_stat_activity
 * when debugging slow queries or connection exhaustion.
 */
export function createPool(url: string): pg.Pool {
  return new pg.Pool({
    connectionString: url,
    max: 10,
    idleTimeoutMillis: 30_000,
    connectionTimeoutMillis: 5_000,
    application_name: 'processor',
  });
}

// ---------------------------------------------------------------------------
// Startup connectivity check
// ---------------------------------------------------------------------------

/**
 * Verifies Postgres connectivity on startup with exponential-backoff retry.
 *
 * Runs `SELECT 1` (3 attempts, backoff capped at 5s). Mirrors
 * tcp-ingestion's `connectRedis` pattern so operators see the same failure
 * shape in logs regardless of which dependency is down.
 *
 * Calls `process.exit(1)` on final failure rather than throwing — the
 * orchestrator (Docker/systemd) restarts the process.
 */
export async function connectWithRetry(
  pool: pg.Pool,
  logger: Logger,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const client = await pool.connect();
      try {
        await client.query('SELECT 1');
      } finally {
        client.release();
      }
      logger.info({ attempt }, 'Postgres connected');
      return;
    } catch (err) {
      if (attempt === maxAttempts) {
        logger.fatal({ err }, 'Postgres connection failed after all retries; exiting');
        process.exit(1);
      }

      const backoffMs = Math.min(200 * 2 ** (attempt - 1), 5_000);
      logger.warn(
        { err, attempt, maxAttempts, backoffMs },
        'Postgres connection failed; retrying',
      );
      await new Promise<void>((resolve) => setTimeout(resolve, backoffMs));
    }
  }

  // TypeScript: unreachable after process.exit above, but needed for type safety
  /* c8 ignore next */
  throw new Error('unreachable');
}
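
How the `src/db/` exports compose at startup — a sketch only, since the real wiring into `main.ts` arrives with later tasks (`config` and `logger` as built in main.ts):

```ts
import { createPool, connectWithRetry } from './src/db/pool.js';
import { runMigrations } from './src/db/migrate.js';

const pool = createPool(config.POSTGRES_URL);
await connectWithRetry(pool, logger); // exits the process after the final failed attempt
await runMigrations(pool, logger);    // applies pending src/db/migrations/*.sql in order
```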
src/main.ts
@@ -0,0 +1,28 @@
import { loadConfig } from './config/load.js';
import type { Config } from './config/load.js';
import { createLogger } from './observability/logger.js';

// -------------------------------------------------------------------------
// Startup: validate config (fail fast on bad env), build logger
// -------------------------------------------------------------------------

let config: Config;
try {
  config = loadConfig();
} catch (err) {
  // Config validation failures print a human-readable message and exit 1.
  // Logger is not available yet — process.stderr is the only output channel.
  process.stderr.write(`${err instanceof Error ? err.message : String(err)}\n`);
  process.exit(1);
}

const logger = createLogger({
  level: config.LOG_LEVEL,
  nodeEnv: config.NODE_ENV,
  instanceId: config.INSTANCE_ID,
});

logger.info('processor starting');

// Consumer, writer, and state wiring land in tasks 1.5–1.8.
process.exit(0);
src/observability/logger.ts
@@ -0,0 +1,52 @@
import pino from 'pino';
import type { Logger } from 'pino';

export type { Logger };

/**
 * Builds the root pino logger. Called once at startup with config values.
 *
 * In development, pino-pretty is used for human-readable output (lazy transport
 * so it is never required in production paths). In test/production, raw JSON is
 * emitted — fast and parseable by log aggregators (Portainer, Loki, etc.).
 */
export function createLogger(options: {
  level: string;
  nodeEnv: string;
  instanceId: string;
}): Logger {
  const { level, nodeEnv, instanceId } = options;

  const base = {
    service: 'processor',
    instance_id: instanceId,
  };

  // Emit `"level":"info"` instead of pino's default numeric `"level":30` so
  // log viewers show a human-readable label rather than the numeric level.
  const formatters = {
    level: (label: string) => ({ level: label }),
  };

  if (nodeEnv === 'development') {
    return pino({
      level,
      base,
      timestamp: pino.stdTimeFunctions.isoTime,
      formatters,
      transport: {
        target: 'pino-pretty',
        options: {
          colorize: true,
          translateTime: 'SYS:standard',
          ignore: 'pid,hostname',
        },
      },
    });
  }

  // Production and test: plain JSON — fast, no extra deps.
  // ISO-8601 string timestamps (vs default epoch-ms) survive downstream
  // log renderers without losing precision.
  return pino({ level, base, timestamp: pino.stdTimeFunctions.isoTime, formatters });
}
@@ -0,0 +1,384 @@
|
||||
/**
|
||||
* Unit tests for src/core/codec.ts
|
||||
*
|
||||
* Covers:
|
||||
* - Round-trip with bigint and Buffer attributes
|
||||
* - u64-max bigint sentinel
|
||||
* - Buffer with non-UTF-8 bytes
|
||||
* - timestamp ISO string → Date round-trip (no millisecond loss)
|
||||
* - All required fields present and correctly decoded
|
||||
* - Reject malformed JSON
|
||||
* - Reject missing required fields
|
||||
* - Reject invalid sentinel shapes
|
||||
* - Reject invalid priority values
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from 'vitest';
|
||||
import { decodePosition, CodecError } from '../src/core/codec.js';
|
||||
import type { Position } from '../src/core/types.js';
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Helpers — mirror tcp-ingestion's serializePosition / jsonReplacer inline
|
||||
// so the test is self-contained and we can verify round-trip fidelity.
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* JSON replacer that mirrors tcp-ingestion's `jsonReplacer` exactly.
|
||||
* bigint → { __bigint: "<digits>" }
|
||||
* Buffer → { __buffer_b64: "<base64>" } (handles both direct instance and toJSON shape)
|
||||
* Date → ISO string
|
||||
*/
|
||||
function jsonReplacer(_key: string, value: unknown): unknown {
|
||||
if (typeof value === 'bigint') {
|
||||
return { __bigint: value.toString() };
|
||||
}
|
||||
if (value instanceof Uint8Array) {
|
||||
return { __buffer_b64: Buffer.from(value).toString('base64') };
|
||||
}
|
||||
// Buffer.toJSON() shape — fired before replacer for nested Buffer properties
|
||||
if (
|
||||
typeof value === 'object' &&
|
||||
value !== null &&
|
||||
(value as Record<string, unknown>)['type'] === 'Buffer' &&
|
||||
Array.isArray((value as Record<string, unknown>)['data'])
|
||||
) {
|
||||
const data = (value as { type: string; data: number[] }).data;
|
||||
return { __buffer_b64: Buffer.from(data).toString('base64') };
|
||||
}
|
||||
if (value instanceof Date) {
|
||||
return value.toISOString();
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
function serializePosition(position: Position, codec: string): Record<string, string> {
|
||||
return {
|
||||
ts: position.timestamp.toISOString(),
|
||||
device_id: position.device_id,
|
||||
codec,
|
||||
payload: JSON.stringify(position, jsonReplacer),
|
||||
};
|
||||
}
|
||||
|
||||
function makePosition(overrides: Partial<Position> = {}): Position {
|
||||
return {
|
||||
device_id: 'TEST123456789',
|
||||
timestamp: new Date('2024-01-15T10:30:00.123Z'),
|
||||
latitude: 54.12345,
|
||||
longitude: 25.98765,
|
||||
altitude: 150,
|
||||
angle: 270,
|
||||
speed: 60,
|
||||
satellites: 8,
|
||||
priority: 1,
|
||||
attributes: {},
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// 1. Round-trip — basic position (no special attributes)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('decodePosition — basic round-trip', () => {
|
||||
it('decodes all scalar fields correctly', () => {
|
||||
const original = makePosition();
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.device_id).toBe(original.device_id);
|
||||
expect(decoded.timestamp).toEqual(original.timestamp);
|
||||
expect(decoded.latitude).toBe(original.latitude);
|
||||
expect(decoded.longitude).toBe(original.longitude);
|
||||
expect(decoded.altitude).toBe(original.altitude);
|
||||
expect(decoded.angle).toBe(original.angle);
|
||||
expect(decoded.speed).toBe(original.speed);
|
||||
expect(decoded.satellites).toBe(original.satellites);
|
||||
expect(decoded.priority).toBe(original.priority);
|
||||
});
|
||||
|
||||
it('timestamp round-trips without millisecond loss', () => {
|
||||
// Use a timestamp with non-zero milliseconds to verify precision is preserved
|
||||
const ts = new Date('2024-06-15T13:45:30.987Z');
|
||||
const original = makePosition({ timestamp: ts });
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.timestamp.getTime()).toBe(ts.getTime());
|
||||
expect(decoded.timestamp.toISOString()).toBe('2024-06-15T13:45:30.987Z');
|
||||
});
|
||||
|
||||
it('timestamp produces a Date instance (not a string)', () => {
|
||||
const original = makePosition();
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.timestamp).toBeInstanceOf(Date);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// 2. Round-trip — bigint attributes
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('decodePosition — bigint attributes', () => {
|
||||
it('round-trips a safe-integer bigint', () => {
|
||||
const original = makePosition({ attributes: { io_21: BigInt('12345') } });
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.attributes['io_21']).toBe(BigInt('12345'));
|
||||
});
|
||||
|
||||
it('round-trips a u64-max bigint (exceeds Number.MAX_SAFE_INTEGER)', () => {
|
||||
const u64Max = BigInt('18446744073709551615');
|
||||
const original = makePosition({ attributes: { io_240: u64Max } });
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.attributes['io_240']).toBe(u64Max);
|
||||
});
|
||||
|
||||
it('round-trips zero bigint', () => {
|
||||
const original = makePosition({ attributes: { io_1: 0n } });
|
||||
const { payload } = serializePosition(original, '8');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(decoded.attributes['io_1']).toBe(0n);
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// 3. Round-trip — Buffer attributes
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('decodePosition — Buffer attributes', () => {
|
||||
it('round-trips a Buffer with standard bytes', () => {
|
||||
const original = makePosition({ attributes: { io_nx: Buffer.from([0x01, 0x02, 0x03]) } });
|
||||
const { payload } = serializePosition(original, '8E');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
expect(Buffer.isBuffer(decoded.attributes['io_nx'])).toBe(true);
|
||||
expect(decoded.attributes['io_nx']).toEqual(Buffer.from([0x01, 0x02, 0x03]));
|
||||
});
|
||||
|
||||
it('round-trips a Buffer with non-UTF-8 bytes (0xde 0xad 0xbe 0xef)', () => {
|
||||
const raw = Buffer.from([0xde, 0xad, 0xbe, 0xef]);
|
||||
const original = makePosition({ attributes: { nx_raw: raw } });
|
||||
const { payload } = serializePosition(original, '8E');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
const attr = decoded.attributes['nx_raw'];
|
||||
expect(Buffer.isBuffer(attr)).toBe(true);
|
||||
expect(attr as Buffer).toEqual(raw);
|
||||
});
|
||||
|
||||
it('round-trips an empty Buffer', () => {
|
||||
const original = makePosition({ attributes: { empty: Buffer.alloc(0) } });
|
||||
const { payload } = serializePosition(original, '8E');
|
||||
const decoded = decodePosition(payload);
|
||||
|
||||
const attr = decoded.attributes['empty'];
|
||||
expect(Buffer.isBuffer(attr)).toBe(true);
|
||||
expect((attr as Buffer).length).toBe(0);
|
||||
});
|
||||
|
||||
it('Buffer content is byte-equal to original (not just same length)', () => {
|
||||
const raw = Buffer.from([0xca, 0xfe, 0xba, 0xbe]);
|
||||
const original = makePosition({ attributes: { sig: raw } });
|
||||
const { payload } = serializePosition(original, '8E');
|
||||
const decoded = decodePosition(payload);
|
    const attr = decoded.attributes['sig'] as Buffer;
    for (let i = 0; i < raw.length; i++) {
      expect(attr[i]).toBe(raw[i]);
    }
  });
});

// ---------------------------------------------------------------------------
// 4. Round-trip — mixed attributes
// ---------------------------------------------------------------------------

describe('decodePosition — mixed attributes round-trip', () => {
  it('round-trips position with number, bigint, and Buffer attributes together', () => {
    const original = makePosition({
      attributes: {
        io_21: 42,
        io_240: BigInt('18446744073709551615'),
        io_nx: Buffer.from([0xab, 0xcd]),
      },
    });

    const { payload } = serializePosition(original, '16');
    const decoded = decodePosition(payload);

    expect(decoded.attributes['io_21']).toBe(42);
    expect(decoded.attributes['io_240']).toBe(BigInt('18446744073709551615'));
    const nxAttr = decoded.attributes['io_nx'] as Buffer;
    expect(Buffer.isBuffer(nxAttr)).toBe(true);
    expect(nxAttr).toEqual(Buffer.from([0xab, 0xcd]));
  });
});

// ---------------------------------------------------------------------------
// 5. Priority values
// ---------------------------------------------------------------------------

describe('decodePosition — priority', () => {
  it('accepts priority 0 (Low)', () => {
    const original = makePosition({ priority: 0 });
    const { payload } = serializePosition(original, '8');
    expect(() => decodePosition(payload)).not.toThrow();
    expect(decodePosition(payload).priority).toBe(0);
  });

  it('accepts priority 2 (Panic)', () => {
    const original = makePosition({ priority: 2 });
    const { payload } = serializePosition(original, '8');
    expect(decodePosition(payload).priority).toBe(2);
  });
});

// ---------------------------------------------------------------------------
// 6. Error cases
// ---------------------------------------------------------------------------

describe('decodePosition — error cases', () => {
  it('throws CodecError on non-JSON input', () => {
    expect(() => decodePosition('not json at all')).toThrow(CodecError);
  });

  it('throws CodecError on empty string', () => {
    expect(() => decodePosition('')).toThrow(CodecError);
  });

  it('throws CodecError when payload is a JSON array (not object)', () => {
    expect(() => decodePosition('[]')).toThrow(CodecError);
  });

  it('throws CodecError when payload is a JSON number', () => {
    expect(() => decodePosition('42')).toThrow(CodecError);
  });

  it('throws CodecError when device_id is missing', () => {
    const pos = makePosition();
    const { payload } = serializePosition(pos, '8');
    const obj = JSON.parse(payload) as Record<string, unknown>;
    delete obj['device_id'];
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when device_id is empty string', () => {
    const obj = {
      device_id: '',
      timestamp: new Date().toISOString(),
      latitude: 0,
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 0,
      attributes: {},
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when timestamp is missing', () => {
    const pos = makePosition();
    const { payload } = serializePosition(pos, '8');
    const obj = JSON.parse(payload) as Record<string, unknown>;
    delete obj['timestamp'];
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when timestamp is an invalid date string', () => {
    const obj = {
      device_id: 'TEST123',
      timestamp: 'not-a-date',
      latitude: 0,
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 0,
      attributes: {},
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when a required numeric field is missing', () => {
    const pos = makePosition();
    const { payload } = serializePosition(pos, '8');
    const obj = JSON.parse(payload) as Record<string, unknown>;
    delete obj['latitude'];
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when priority is out of range (e.g. 3)', () => {
    const obj = {
      device_id: 'TEST123',
      timestamp: new Date().toISOString(),
      latitude: 0,
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 3,
      attributes: {},
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('throws CodecError when __bigint value is not decimal digits', () => {
    const obj = {
      device_id: 'TEST123',
      timestamp: new Date().toISOString(),
      latitude: 0,
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 0,
      attributes: { io_bad: { __bigint: 'not-a-number' } },
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });

  it('CodecError message names the failing field', () => {
    const obj = {
      device_id: 'TEST123',
      timestamp: new Date().toISOString(),
      latitude: 'oops', // wrong type
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 0,
      attributes: {},
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(/latitude/);
  });

  it('throws CodecError when attributes value is not a valid AttributeValue (e.g. nested object)', () => {
    const obj = {
      device_id: 'TEST123',
      timestamp: new Date().toISOString(),
      latitude: 0,
      longitude: 0,
      altitude: 0,
      angle: 0,
      speed: 0,
      satellites: 0,
      priority: 0,
      // A plain nested object (not a sentinel) should fail validation
      attributes: { io_bad: { nested: 'value' } },
    };
    expect(() => decodePosition(JSON.stringify(obj))).toThrow(CodecError);
  });
});
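For orientation, here is a minimal sketch of the attribute revival the tests above exercise. The committed src/core/codec.ts is not part of this hunk, so the shape below is inferred from the assertions: the `__bigint` sentinel and the rejection of plain nested objects come straight from the tests, while the helper name `reviveAttribute` and the base64 buffer sentinel key are assumptions.

// Hypothetical sketch, not the committed codec; inferred from the tests above.
import { Buffer } from 'node:buffer';

export class CodecError extends Error {}

export type AttributeValue = string | number | boolean | bigint | Buffer;

export function reviveAttribute(field: string, value: unknown): AttributeValue {
  // Plain JSON scalars pass through unchanged (io_21: 42 above).
  if (typeof value === 'string' || typeof value === 'number' || typeof value === 'boolean') {
    return value;
  }
  if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
    const obj = value as Record<string, unknown>;
    // { __bigint: '18446744073709551615' } revives to bigint; non-digit payloads throw.
    if (typeof obj['__bigint'] === 'string') {
      if (!/^-?\d+$/.test(obj['__bigint'])) {
        throw new CodecError(`attributes.${field}: __bigint is not decimal digits`);
      }
      return BigInt(obj['__bigint']);
    }
    // Base64 buffer sentinel (key name assumed) revives to Buffer, as the io_nx round-trip shows.
    if (typeof obj['__buffer_b64'] === 'string') {
      return Buffer.from(obj['__buffer_b64'], 'base64');
    }
  }
  // A plain nested object like { nested: 'value' } is not a valid AttributeValue.
  throw new CodecError(`attributes.${field}: not a valid AttributeValue`);
}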
@@ -0,0 +1,247 @@
/**
 * Unit tests for src/config/load.ts
 *
 * Covers:
 * - Parses all defaults correctly when only required vars are provided
 * - Missing required vars throw with the right message
 * - Invalid URLs throw (wrong protocol, not a URL)
 * - Bounded numerics throw on out-of-range values
 * - REDIS_CONSUMER_NAME defaults to INSTANCE_ID
 * - Explicit REDIS_CONSUMER_NAME overrides INSTANCE_ID
 */

import { describe, it, expect } from 'vitest';
import { loadConfig } from '../src/config/load.js';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Minimal valid env — only required fields. */
function validEnv(overrides: Record<string, string> = {}): Record<string, string> {
  return {
    REDIS_URL: 'redis://localhost:6379',
    POSTGRES_URL: 'postgres://postgres:pass@localhost:5432/trm',
    ...overrides,
  };
}

// ---------------------------------------------------------------------------
// 1. Happy path — defaults
// ---------------------------------------------------------------------------

describe('loadConfig — defaults', () => {
  it('parses successfully with only required vars', () => {
    const config = loadConfig(validEnv());
    expect(config.REDIS_URL).toBe('redis://localhost:6379');
    expect(config.POSTGRES_URL).toBe('postgres://postgres:pass@localhost:5432/trm');
  });

  it('applies default NODE_ENV=production', () => {
    const config = loadConfig(validEnv());
    expect(config.NODE_ENV).toBe('production');
  });

  it('applies default INSTANCE_ID=processor-1', () => {
    const config = loadConfig(validEnv());
    expect(config.INSTANCE_ID).toBe('processor-1');
  });

  it('applies default LOG_LEVEL=info', () => {
    const config = loadConfig(validEnv());
    expect(config.LOG_LEVEL).toBe('info');
  });

  it('applies default REDIS_TELEMETRY_STREAM=telemetry:t', () => {
    const config = loadConfig(validEnv());
    expect(config.REDIS_TELEMETRY_STREAM).toBe('telemetry:t');
  });

  it('applies default REDIS_CONSUMER_GROUP=processor', () => {
    const config = loadConfig(validEnv());
    expect(config.REDIS_CONSUMER_GROUP).toBe('processor');
  });

  it('defaults REDIS_CONSUMER_NAME to INSTANCE_ID', () => {
    const config = loadConfig(validEnv({ INSTANCE_ID: 'my-instance' }));
    expect(config.REDIS_CONSUMER_NAME).toBe('my-instance');
  });

  it('respects explicit REDIS_CONSUMER_NAME override', () => {
    const config = loadConfig(
      validEnv({ INSTANCE_ID: 'instance-a', REDIS_CONSUMER_NAME: 'consumer-override' }),
    );
    expect(config.REDIS_CONSUMER_NAME).toBe('consumer-override');
  });

  it('applies default METRICS_PORT=9090', () => {
    const config = loadConfig(validEnv());
    expect(config.METRICS_PORT).toBe(9090);
  });

  it('applies default BATCH_SIZE=100', () => {
    const config = loadConfig(validEnv());
    expect(config.BATCH_SIZE).toBe(100);
  });

  it('applies default BATCH_BLOCK_MS=5000', () => {
    const config = loadConfig(validEnv());
    expect(config.BATCH_BLOCK_MS).toBe(5_000);
  });

  it('applies default WRITE_BATCH_SIZE=50', () => {
    const config = loadConfig(validEnv());
    expect(config.WRITE_BATCH_SIZE).toBe(50);
  });

  it('applies default DEVICE_STATE_LRU_CAP=10000', () => {
    const config = loadConfig(validEnv());
    expect(config.DEVICE_STATE_LRU_CAP).toBe(10_000);
  });
});

// ---------------------------------------------------------------------------
// 2. Missing required vars
// ---------------------------------------------------------------------------

describe('loadConfig — missing required vars', () => {
  it('throws when REDIS_URL is missing', () => {
    expect(() => loadConfig({ POSTGRES_URL: 'postgres://localhost:5432/trm' })).toThrow(
      /REDIS_URL/,
    );
  });

  it('throws when POSTGRES_URL is missing', () => {
    expect(() => loadConfig({ REDIS_URL: 'redis://localhost:6379' })).toThrow(/POSTGRES_URL/);
  });

  it('throws when both required vars are missing', () => {
    expect(() => loadConfig({})).toThrow(/Configuration error/);
  });

  it('error message mentions every failing field', () => {
    let message = '';
    try {
      loadConfig({});
    } catch (err) {
      message = err instanceof Error ? err.message : '';
    }
    expect(message).toMatch(/REDIS_URL/);
    expect(message).toMatch(/POSTGRES_URL/);
  });
});

// ---------------------------------------------------------------------------
// 3. URL validation
// ---------------------------------------------------------------------------

describe('loadConfig — URL validation', () => {
  it('accepts redis:// URLs', () => {
    expect(() => loadConfig(validEnv({ REDIS_URL: 'redis://redis:6379' }))).not.toThrow();
  });

  it('accepts rediss:// (TLS) URLs', () => {
    expect(() => loadConfig(validEnv({ REDIS_URL: 'rediss://redis:6380' }))).not.toThrow();
  });

  it('rejects REDIS_URL with wrong protocol (http)', () => {
    expect(() => loadConfig(validEnv({ REDIS_URL: 'http://localhost:6379' }))).toThrow(
      /REDIS_URL/,
    );
  });

  it('rejects REDIS_URL that is not a URL at all', () => {
    expect(() => loadConfig(validEnv({ REDIS_URL: 'not-a-url' }))).toThrow(/REDIS_URL/);
  });

  it('accepts postgres:// URLs', () => {
    expect(() =>
      loadConfig(validEnv({ POSTGRES_URL: 'postgres://user:pass@db:5432/mydb' })),
    ).not.toThrow();
  });

  it('accepts postgresql:// URLs', () => {
    expect(() =>
      loadConfig(validEnv({ POSTGRES_URL: 'postgresql://user:pass@db:5432/mydb' })),
    ).not.toThrow();
  });

  it('rejects POSTGRES_URL with wrong protocol (mysql)', () => {
    expect(() =>
      loadConfig(validEnv({ POSTGRES_URL: 'mysql://localhost:3306/db' })),
    ).toThrow(/POSTGRES_URL/);
  });

  it('rejects POSTGRES_URL that is not a URL at all', () => {
    expect(() => loadConfig(validEnv({ POSTGRES_URL: 'localhost/db' }))).toThrow(/POSTGRES_URL/);
  });
});

// ---------------------------------------------------------------------------
// 4. Bounded numerics
// ---------------------------------------------------------------------------

describe('loadConfig — bounded numerics', () => {
  it('rejects BATCH_SIZE below minimum (0)', () => {
    expect(() => loadConfig(validEnv({ BATCH_SIZE: '0' }))).toThrow(/BATCH_SIZE/);
  });

  it('rejects BATCH_SIZE above maximum (10001)', () => {
    expect(() => loadConfig(validEnv({ BATCH_SIZE: '10001' }))).toThrow(/BATCH_SIZE/);
  });

  it('accepts BATCH_SIZE at boundary values (1, 10000)', () => {
    expect(() => loadConfig(validEnv({ BATCH_SIZE: '1' }))).not.toThrow();
    expect(() => loadConfig(validEnv({ BATCH_SIZE: '10000' }))).not.toThrow();
  });

  it('rejects BATCH_BLOCK_MS above maximum (60001)', () => {
    expect(() => loadConfig(validEnv({ BATCH_BLOCK_MS: '60001' }))).toThrow(/BATCH_BLOCK_MS/);
  });

  it('accepts BATCH_BLOCK_MS=0 (no blocking)', () => {
    const config = loadConfig(validEnv({ BATCH_BLOCK_MS: '0' }));
    expect(config.BATCH_BLOCK_MS).toBe(0);
  });

  it('rejects WRITE_BATCH_SIZE below minimum (0)', () => {
    expect(() => loadConfig(validEnv({ WRITE_BATCH_SIZE: '0' }))).toThrow(/WRITE_BATCH_SIZE/);
  });

  it('rejects WRITE_BATCH_SIZE above maximum (1001)', () => {
    expect(() => loadConfig(validEnv({ WRITE_BATCH_SIZE: '1001' }))).toThrow(/WRITE_BATCH_SIZE/);
  });

  it('rejects DEVICE_STATE_LRU_CAP below minimum (99)', () => {
    expect(() => loadConfig(validEnv({ DEVICE_STATE_LRU_CAP: '99' }))).toThrow(
      /DEVICE_STATE_LRU_CAP/,
    );
  });

  it('rejects DEVICE_STATE_LRU_CAP above maximum (1000001)', () => {
    expect(() => loadConfig(validEnv({ DEVICE_STATE_LRU_CAP: '1000001' }))).toThrow(
      /DEVICE_STATE_LRU_CAP/,
    );
  });

  it('rejects non-numeric METRICS_PORT', () => {
    expect(() => loadConfig(validEnv({ METRICS_PORT: 'abc' }))).toThrow(/METRICS_PORT/);
  });
});

// ---------------------------------------------------------------------------
// 5. LOG_LEVEL validation
// ---------------------------------------------------------------------------

describe('loadConfig — LOG_LEVEL', () => {
  it('accepts all valid pino levels', () => {
    const levels = ['fatal', 'error', 'warn', 'info', 'debug', 'trace'] as const;
    for (const level of levels) {
      expect(() => loadConfig(validEnv({ LOG_LEVEL: level }))).not.toThrow();
    }
  });

  it('rejects an invalid log level', () => {
    expect(() => loadConfig(validEnv({ LOG_LEVEL: 'verbose' }))).toThrow(/LOG_LEVEL/);
  });
});
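The schema these assertions pin down is not shown in this hunk. As a hedged sketch, the tested bounds (BATCH_SIZE 1..10000, BATCH_BLOCK_MS 0..60000, WRITE_BATCH_SIZE 1..1000, DEVICE_STATE_LRU_CAP 100..1000000) and the consumer-name fallback could be expressed in zod roughly like this; anything the tests do not assert, such as the METRICS_PORT bounds, is illustrative.

// Illustrative sketch: shape inferred from the assertions above, not the committed file.
import { z } from 'zod';

const schema = z.object({
  NODE_ENV: z.enum(['development', 'test', 'production']).default('production'),
  INSTANCE_ID: z.string().min(1).default('processor-1'),
  LOG_LEVEL: z.enum(['fatal', 'error', 'warn', 'info', 'debug', 'trace']).default('info'),
  REDIS_URL: z.string().url().regex(/^rediss?:\/\//),
  POSTGRES_URL: z.string().url().regex(/^postgres(ql)?:\/\//),
  REDIS_TELEMETRY_STREAM: z.string().min(1).default('telemetry:t'),
  REDIS_CONSUMER_GROUP: z.string().min(1).default('processor'),
  REDIS_CONSUMER_NAME: z.string().min(1).optional(),
  // METRICS_PORT bounds assumed; the tests only pin the default and non-numeric rejection.
  METRICS_PORT: z.coerce.number().int().min(1).max(65_535).default(9090),
  BATCH_SIZE: z.coerce.number().int().min(1).max(10_000).default(100),
  BATCH_BLOCK_MS: z.coerce.number().int().min(0).max(60_000).default(5_000),
  WRITE_BATCH_SIZE: z.coerce.number().int().min(1).max(1_000).default(50),
  DEVICE_STATE_LRU_CAP: z.coerce.number().int().min(100).max(1_000_000).default(10_000),
});

export function loadConfig(env: Record<string, string | undefined>) {
  const parsed = schema.safeParse(env);
  if (!parsed.success) {
    // Name every failing field, e.g. "Configuration error: REDIS_URL: Required; ..."
    const details = parsed.error.issues.map((i) => `${i.path.join('.')}: ${i.message}`);
    throw new Error(`Configuration error: ${details.join('; ')}`);
  }
  // REDIS_CONSUMER_NAME falls back to INSTANCE_ID when not set explicitly.
  const consumerName = parsed.data.REDIS_CONSUMER_NAME ?? parsed.data.INSTANCE_ID;
  return { ...parsed.data, REDIS_CONSUMER_NAME: consumerName };
}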
@@ -0,0 +1,319 @@
/**
 * Unit tests for src/db/migrate.ts
 *
 * Tests against a mocked pg.Pool — no real Postgres required here.
 * The real round-trip against TimescaleDB lives in task 1.10 (integration test,
 * testcontainers). See task 1.4 spec for the rationale.
 *
 * Covers:
 * - Applying a fresh migration runs SQL inside a transaction and records version
 * - Applying the same migration twice is a no-op (second call skips)
 * - A SQL error causes a rollback and re-throws
 * - Multiple migration files are applied in lexicographic order
 */

import { describe, it, expect, vi } from 'vitest';
import type { MockedFunction } from 'vitest';
import type { Logger } from 'pino';
import type { Pool, PoolClient } from 'pg';

// ---------------------------------------------------------------------------
// pg.Pool mock
// ---------------------------------------------------------------------------

type QueryCall = {
  sql: string;
  params?: unknown[];
};

type MockPoolOptions = {
  /**
   * Map from query SQL fragment to result or Error.
   * If a query SQL contains the key as a substring, that handler fires.
   * The first matching key wins. Unmatched queries return `{ rows: [] }`.
   */
  handlers?: Record<string, { rows?: unknown[] } | Error>;
};

type MockClient = {
  query: MockedFunction<(sql: string, params?: unknown[]) => Promise<{ rows: unknown[] }>>;
  release: MockedFunction<() => void>;
};

function makeMockPool(options: MockPoolOptions = {}): {
  pool: Pool;
  calls: QueryCall[];
} {
  const calls: QueryCall[] = [];
  const handlers = options.handlers ?? {};

  function resolveQuery(sql: string): { rows: unknown[] } | Error {
    for (const [fragment, result] of Object.entries(handlers)) {
      if (sql.includes(fragment)) return result;
    }
    return { rows: [] };
  }

  // Pool-level query (used for CREATE TABLE IF NOT EXISTS schema_migrations)
  const poolQuery = vi.fn(async (sql: string, params?: unknown[]) => {
    calls.push({ sql, params });
    const result = resolveQuery(sql);
    if (result instanceof Error) throw result;
    return result;
  });

  // Client returned by pool.connect()
  const clientQuery = vi.fn(async (sql: string, params?: unknown[]) => {
    calls.push({ sql, params });
    const result = resolveQuery(sql);
    if (result instanceof Error) throw result;
    return result as { rows: unknown[] };
  });

  const clientRelease = vi.fn();

  const mockClient: MockClient = {
    query: clientQuery,
    release: clientRelease,
  };

  const poolConnect = vi.fn(async () => mockClient as unknown as PoolClient);

  return {
    pool: {
      query: poolQuery,
      connect: poolConnect,
    } as unknown as Pool,
    calls,
  };
}

function makeSilentLogger(): Logger {
  return {
    debug: vi.fn(),
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    fatal: vi.fn(),
    child: vi.fn().mockReturnThis(),
    trace: vi.fn(),
    level: 'silent',
    silent: vi.fn(),
  } as unknown as Logger;
}

// ---------------------------------------------------------------------------
// Import under test
// ---------------------------------------------------------------------------

// We mock node:fs/promises so we control file listing and content,
// isolating the runner logic from the real filesystem.

vi.mock('node:fs/promises', () => ({
  readdir: vi.fn(),
  readFile: vi.fn(),
}));

import { readdir, readFile } from 'node:fs/promises';
import { runMigrations } from '../../src/db/migrate.js';

const mockReaddir = readdir as MockedFunction<typeof readdir>;
const mockReadFile = readFile as MockedFunction<typeof readFile>;

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('runMigrations — fresh database', () => {
  it('creates schema_migrations table, runs migration SQL, and records version', async () => {
    const fakeSql = 'CREATE TABLE IF NOT EXISTS positions (id serial);';
    const version = '0001_positions.sql';

    mockReaddir.mockResolvedValue([version] as unknown as Awaited<ReturnType<typeof readdir>>);
    mockReadFile.mockResolvedValue(fakeSql as unknown as Buffer);

    const { pool, calls } = makeMockPool({
      handlers: {
        'SELECT EXISTS': { rows: [{ exists: false }] },
      },
    });

    await runMigrations(pool, makeSilentLogger());

    // 1. Schema migrations table bootstrapped
    expect(
      calls.find((c) => c.sql.includes('CREATE TABLE IF NOT EXISTS schema_migrations')),
    ).toBeDefined();

    // 2. EXISTS check ran with the correct version
    expect(
      calls.find((c) => c.sql.includes('SELECT EXISTS') && c.params?.[0] === version),
    ).toBeDefined();

    // 3. BEGIN transaction
    expect(calls.find((c) => c.sql === 'BEGIN')).toBeDefined();

    // 4. Migration SQL executed
    expect(calls.find((c) => c.sql === fakeSql)).toBeDefined();

    // 5. Version recorded
    expect(
      calls.find(
        (c) => c.sql.includes('INSERT INTO schema_migrations') && c.params?.[0] === version,
      ),
    ).toBeDefined();

    // 6. COMMIT
    expect(calls.find((c) => c.sql === 'COMMIT')).toBeDefined();
  });

  it('logs info after applying the migration', async () => {
    mockReaddir.mockResolvedValue(
      ['0001_positions.sql'] as unknown as Awaited<ReturnType<typeof readdir>>,
    );
    mockReadFile.mockResolvedValue('SELECT 1' as unknown as Buffer);

    const { pool } = makeMockPool({
      handlers: { 'SELECT EXISTS': { rows: [{ exists: false }] } },
    });

    const logger = makeSilentLogger();
    await runMigrations(pool, logger);

    expect(logger.info).toHaveBeenCalledWith(
      expect.objectContaining({ version: '0001_positions.sql' }),
      'migration applied',
    );
  });
});

describe('runMigrations — already applied (idempotency)', () => {
  it('skips migration when already recorded in schema_migrations', async () => {
    const version = '0001_positions.sql';

    mockReaddir.mockResolvedValue([version] as unknown as Awaited<ReturnType<typeof readdir>>);
    mockReadFile.mockResolvedValue('SELECT 1' as unknown as Buffer);

    const { pool, calls } = makeMockPool({
      handlers: {
        'SELECT EXISTS': { rows: [{ exists: true }] },
      },
    });

    const logger = makeSilentLogger();
    await runMigrations(pool, logger);

    // No transaction should have been started
    expect(calls.find((c) => c.sql === 'BEGIN')).toBeUndefined();

    expect(logger.info).toHaveBeenCalledWith(
      expect.objectContaining({ version }),
      'migration already applied; skipping',
    );
  });

  it('is a no-op when called twice with the same migrations', async () => {
    // EXISTS check runs through pool.query (not through a client), so we track
    // call count on the pool-level query mock.
    let existsCallCount = 0;
    const version = '0001_positions.sql';

    mockReaddir.mockResolvedValue([version] as unknown as Awaited<ReturnType<typeof readdir>>);
    mockReadFile.mockResolvedValue('SELECT 1' as unknown as Buffer);

    const clientQuery = vi.fn(async (_sql: string, _params?: unknown[]) => {
      return { rows: [] as unknown[] };
    });
    const client = { query: clientQuery, release: vi.fn() };

    const poolQuery = vi.fn(async (sql: string, _params?: unknown[]) => {
      if (sql.includes('SELECT EXISTS')) {
        existsCallCount++;
        // First run: not yet applied; second run: already applied
        return { rows: [{ exists: existsCallCount > 1 }] };
      }
      return { rows: [] as unknown[] };
    });

    const pool = {
      query: poolQuery,
      connect: vi.fn(async () => client as unknown as PoolClient),
    } as unknown as Pool;

    const logger = makeSilentLogger();
    await runMigrations(pool, logger);
    await runMigrations(pool, logger);

    // BEGIN called exactly once (first run only; second run skips the migration)
    const beginCalls = (clientQuery.mock.calls as [string][]).filter(([sql]) => sql === 'BEGIN');
    expect(beginCalls).toHaveLength(1);
  });
});

describe('runMigrations — SQL error', () => {
  it('rolls back on SQL error and rethrows', async () => {
    mockReaddir.mockResolvedValue(
      ['0001_positions.sql'] as unknown as Awaited<ReturnType<typeof readdir>>,
    );
    mockReadFile.mockResolvedValue('BAD SQL;' as unknown as Buffer);

    const clientQueries: string[] = [];
    const clientQuery = vi.fn(async (sql: string, _params?: unknown[]) => {
      clientQueries.push(sql);
      if (sql === 'BAD SQL;') throw new Error('syntax error at or near "BAD"');
      return { rows: [] };
    });
    const client = { query: clientQuery, release: vi.fn() };

    const poolQuery = vi.fn(async (sql: string) => {
      if (sql.includes('SELECT EXISTS')) return { rows: [{ exists: false }] };
      return { rows: [] as unknown[] };
    });
    const pool = {
      query: poolQuery,
      connect: vi.fn(async () => client as unknown as PoolClient),
    } as unknown as Pool;

    const logger = makeSilentLogger();

    await expect(runMigrations(pool, logger)).rejects.toThrow('syntax error');

    expect(clientQueries).toContain('ROLLBACK');
    expect(logger.error).toHaveBeenCalledWith(
      expect.objectContaining({ version: '0001_positions.sql' }),
      'migration failed; rolled back',
    );
  });
});

describe('runMigrations — multiple migration files', () => {
  it('applies files in lexicographic order', async () => {
    const insertedVersions: string[] = [];

    // Return in reverse order to verify the runner sorts them
    mockReaddir.mockResolvedValue(
      ['0002_second.sql', '0001_first.sql'] as unknown as Awaited<ReturnType<typeof readdir>>,
    );
    mockReadFile.mockResolvedValue('SELECT 1' as unknown as Buffer);

    const clientQuery = vi.fn(async (sql: string, params?: unknown[]) => {
      if (sql.includes('INSERT INTO schema_migrations')) {
        insertedVersions.push(params?.[0] as string);
      }
      return { rows: [] };
    });
    const client = { query: clientQuery, release: vi.fn() };

    const pool = {
      query: vi.fn(async (sql: string, _params?: unknown[]) => {
        if (sql.includes('SELECT EXISTS')) return { rows: [{ exists: false }] };
        return { rows: [] as unknown[] };
      }),
      connect: vi.fn(async () => client as unknown as PoolClient),
    } as unknown as Pool;

    await runMigrations(pool, makeSilentLogger());

    expect(insertedVersions).toEqual(['0001_first.sql', '0002_second.sql']);
  });
});
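Taken together, the mocks above pin down the runner's control flow: bootstrap schema_migrations via pool.query, readdir plus lexicographic sort, a per-file SELECT EXISTS check on the pool, then BEGIN / migration SQL / INSERT version / COMMIT on a dedicated client, with ROLLBACK and a rethrow on failure. A hedged sketch of that shape (src/db/migrate.ts itself is not in this hunk; `MIGRATIONS_DIR` and the table's exact columns are assumptions):

// Hypothetical sketch of the runner shape the mocks above describe.
import { readdir, readFile } from 'node:fs/promises';
import path from 'node:path';
import type { Pool } from 'pg';
import type { Logger } from 'pino';

const MIGRATIONS_DIR = path.resolve('migrations'); // assumed location

export async function runMigrations(pool: Pool, logger: Logger): Promise<void> {
  // Bootstrap the bookkeeping table (pool-level query, as the mock records).
  await pool.query(
    'CREATE TABLE IF NOT EXISTS schema_migrations (version text PRIMARY KEY, applied_at timestamptz NOT NULL DEFAULT now())',
  );

  // Lexicographic sort gives 0001 before 0002 regardless of readdir order.
  const files = (await readdir(MIGRATIONS_DIR)).filter((f) => f.endsWith('.sql')).sort();

  for (const version of files) {
    // Idempotency check runs on the pool, not a client.
    const { rows } = await pool.query(
      'SELECT EXISTS (SELECT 1 FROM schema_migrations WHERE version = $1)',
      [version],
    );
    if ((rows[0] as { exists: boolean }).exists) {
      logger.info({ version }, 'migration already applied; skipping');
      continue;
    }

    const sql = await readFile(path.join(MIGRATIONS_DIR, version), 'utf8');
    const client = await pool.connect();
    try {
      await client.query('BEGIN');
      await client.query(sql);
      await client.query('INSERT INTO schema_migrations (version) VALUES ($1)', [version]);
      await client.query('COMMIT');
      logger.info({ version }, 'migration applied');
    } catch (err) {
      await client.query('ROLLBACK');
      logger.error({ version }, 'migration failed; rolled back');
      throw err;
    } finally {
      client.release();
    }
  }
}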
@@ -0,0 +1,133 @@
/**
 * Unit tests for src/db/pool.ts
 *
 * Covers:
 * - connectWithRetry succeeds on first attempt
 * - connectWithRetry retries on failure and succeeds on a later attempt
 * - connectWithRetry calls process.exit(1) after exhausting all attempts
 * - Warn is logged for each non-final failed attempt; fatal for the last
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import type { Logger } from 'pino';
import type { Pool, PoolClient } from 'pg';
import { connectWithRetry } from '../../src/db/pool.js';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function makeSilentLogger(): Logger {
  return {
    debug: vi.fn(),
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    fatal: vi.fn(),
    child: vi.fn().mockReturnThis(),
    trace: vi.fn(),
    level: 'silent',
    silent: vi.fn(),
  } as unknown as Logger;
}

/**
 * Creates a mock pg.Pool whose connect() resolves or rejects according to
 * the `connectResults` sequence. Each call consumes the next entry.
 */
function makeMockPool(connectResults: Array<'ok' | Error>): {
  pool: Pool;
  connectCallCount: () => number;
} {
  let callIndex = 0;

  const clientQuery = vi.fn().mockResolvedValue({ rows: [] });
  const clientRelease = vi.fn();
  const mockClient = { query: clientQuery, release: clientRelease };

  const connect = vi.fn(async () => {
    const result = connectResults[callIndex++];
    if (result === 'ok') {
      return mockClient as unknown as PoolClient;
    }
    throw result;
  });

  return {
    pool: { connect } as unknown as Pool,
    connectCallCount: () => callIndex,
  };
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('connectWithRetry', () => {
  beforeEach(() => {
    vi.useFakeTimers();
  });

  afterEach(() => {
    vi.useRealTimers();
    vi.restoreAllMocks();
  });

  it('succeeds on first attempt without retrying', async () => {
    const { pool, connectCallCount } = makeMockPool(['ok']);
    const logger = makeSilentLogger();

    await connectWithRetry(pool, logger, 3);

    expect(connectCallCount()).toBe(1);
    expect(logger.info).toHaveBeenCalledWith({ attempt: 1 }, 'Postgres connected');
    expect(logger.warn).not.toHaveBeenCalled();
  });

  it('retries on failure and succeeds on the second attempt', async () => {
    const { pool, connectCallCount } = makeMockPool([new Error('ECONNREFUSED'), 'ok']);
    const logger = makeSilentLogger();

    const promise = connectWithRetry(pool, logger, 2);
    // Advance timers to fire the backoff setTimeout (200ms * 2^0 = 200ms)
    await vi.runAllTimersAsync();
    await promise;

    expect(connectCallCount()).toBe(2);
    expect(logger.warn).toHaveBeenCalledOnce();
    expect(logger.info).toHaveBeenCalledWith({ attempt: 2 }, 'Postgres connected');
  });

  it('calls process.exit(1) after exhausting all attempts — maxAttempts=1', async () => {
    // Use maxAttempts=1 to skip backoff timers entirely, avoiding timer-related
    // unhandled rejection noise in the test suite.
    const exitSpy = vi.spyOn(process, 'exit').mockImplementation((_code) => {
      throw new Error('process.exit called');
    });

    const { pool } = makeMockPool([new Error('ECONNREFUSED')]);
    const logger = makeSilentLogger();

    await expect(connectWithRetry(pool, logger, 1)).rejects.toThrow('process.exit called');
    expect(exitSpy).toHaveBeenCalledWith(1);
    expect(logger.fatal).toHaveBeenCalledOnce();
    // With maxAttempts=1, no retries → no warn
    expect(logger.warn).not.toHaveBeenCalled();
  });

  it('logs warn for non-final failed attempts', async () => {
    // maxAttempts=2: attempt 1 fails (warn), attempt 2 succeeds.
    // This avoids the unhandled-rejection noise that occurs when process.exit
    // throws inside an async function that has a pending backoff timer.
    const { pool } = makeMockPool([new Error('fail 1'), 'ok']);
    const logger = makeSilentLogger();

    const promise = connectWithRetry(pool, logger, 2);
    await vi.runAllTimersAsync();
    await promise;

    expect(logger.warn).toHaveBeenCalledTimes(1);
    expect(logger.fatal).not.toHaveBeenCalled();
    expect(logger.info).toHaveBeenCalledWith({ attempt: 2 }, 'Postgres connected');
  });
});
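The behavior these tests fix is: one connect attempt per loop iteration, warn on each non-final failure, exponential backoff starting at 200ms (per the timer comment above), and fatal plus process.exit(1) after the last failure. A hedged sketch of connectWithRetry follows; src/db/pool.ts itself is not shown in this hunk, and the warn/fatal message strings are assumptions since the tests only count those calls.

// Hypothetical sketch matching the assertions above, not the committed pool.ts.
import type { Pool } from 'pg';
import type { Logger } from 'pino';

const BASE_BACKOFF_MS = 200; // inferred from the "200ms * 2^0" test comment

export async function connectWithRetry(
  pool: Pool,
  logger: Logger,
  maxAttempts: number,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const client = await pool.connect();
      client.release();
      logger.info({ attempt }, 'Postgres connected');
      return;
    } catch (err) {
      if (attempt === maxAttempts) {
        // Fatal on the final failure, then exit; the test stubs exit to throw.
        logger.fatal({ attempt, err }, 'Postgres connection failed; giving up');
        process.exit(1);
      }
      logger.warn({ attempt, err }, 'Postgres connection failed; retrying');
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      const delay = BASE_BACKOFF_MS * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}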
@@ -0,0 +1,21 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "lib": ["ES2022"],
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true,
    "declaration": false,
    "skipLibCheck": true,
    "esModuleInterop": false,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "test"]
}
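Of these flags, `noUncheckedIndexedAccess` is the one that most visibly shapes the test code in this commit: indexed reads such as `attr[i]` type as possibly undefined. A tiny illustration, not repo code:

// Illustration only: the effect of noUncheckedIndexedAccess on indexed reads.
const buf = Buffer.from([0xab, 0xcd]);
const first = buf[0]; // typed as number | undefined under this flag
if (first !== undefined) {
  const doubled: number = first * 2; // narrowing recovers a plain number
}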
@@ -0,0 +1,19 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "lib": ["ES2022"],
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true,
    "declaration": false,
    "skipLibCheck": true,
    "esModuleInterop": false,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "noEmit": true
  },
  "include": ["src/**/*", "test/**/*"]
}
@@ -0,0 +1,22 @@
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    include: ['test/**/*.test.ts'],
    // Integration tests need external services (Docker, real Redis + Postgres).
    // They are opt-in via `pnpm test:integration` (see vitest.integration.config.ts).
    // Excluding them here keeps `pnpm test` fast and CI-safe.
    exclude: ['**/node_modules/**', 'test/**/*.integration.test.ts'],
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'lcov'],
      include: ['src/**/*.ts'],
    },
  },
  resolve: {
    // Allow vitest to import .ts files without explicit extensions
    // when referenced from test files that don't use .js suffixes
    extensions: ['.ts', '.js'],
  },
});
@@ -0,0 +1,24 @@
import { defineConfig } from 'vitest/config';

/**
 * Vitest config for integration tests that depend on external services
 * (Docker, real Redis, TimescaleDB, etc.). Run via `pnpm test:integration`.
 * Requires a working Docker daemon — `testcontainers` will spin up the services
 * it needs, then tear them down.
 *
 * NOT run in default CI. Run locally before changes that touch the Redis consumer
 * or Postgres writer, or in a separate CI job that has Docker access.
 */
export default defineConfig({
  test: {
    include: ['test/**/*.integration.test.ts'],
    environment: 'node',
    // Container startup can be slow on first run (image pull, ryuk
    // container, etc). Allow generous hook + test timeouts.
    hookTimeout: 120_000,
    testTimeout: 60_000,
  },
  resolve: {
    extensions: ['.ts', '.js'],
  },
});
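A future `*.integration.test.ts` picked up by the include glob above would follow the usual testcontainers pattern. A hedged skeleton (no such test exists in this commit; the container wiring below is standard testcontainers usage, and the test body is purely illustrative):

// Hypothetical skeleton for a *.integration.test.ts, not code from this commit.
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { GenericContainer, type StartedTestContainer } from 'testcontainers';

describe('redis consumer (integration)', () => {
  let redis: StartedTestContainer;

  beforeAll(async () => {
    // Image pull on first run is what the 120s hookTimeout above allows for.
    redis = await new GenericContainer('redis:7-alpine').withExposedPorts(6379).start();
  });

  afterAll(async () => {
    await redis.stop();
  });

  it('exposes a reachable mapped port', () => {
    expect(redis.getMappedPort(6379)).toBeGreaterThan(0);
  });
});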