--- url: 'https://kravetsone.github.io/taskora/taskora/skills/taskora-nestjs/SKILL.md' description: > @taskora/nestjs — first-class NestJS integration for taskora. Use when building task queues in a Nest backend: TaskoraModule.forRoot, @TaskConsumer classes with constructor DI, @OnTaskEvent method bindings, TaskoraRef.for() zero-decorator dispatching, class middleware via @TaskMiddleware, @InjectInspector / @InjectDeadLetters / @InjectSchedules observability accessors, TaskoraBoardModule for the admin UI mount, producer/worker split deployment, multi-app routing, and the TaskoraTestHarness end-to-end test helper. Not for @nestjs/bull, @bull-board/nestjs, or bullmq-nestjs — those are different libraries with different APIs. --- # @taskora/nestjs — NestJS Integration for taskora First-class Nest integration. Everything taskora exposes (dispatching, handlers, events, middleware, inspector, DLQ, schedules, admin dashboard, testing) is injectable through Nest's DI graph. Constructor injection just works — including for `@TaskConsumer` classes that run as workers. ## When to use * Building a task queue inside an existing NestJS backend * Want DI-managed `@TaskConsumer` classes instead of bare handler functions * Need class-based middleware with injectable dependencies (loggers, tracers) * Need `@InjectInspector` / `@InjectDeadLetters` / `@InjectSchedules` to build custom admin endpoints * Splitting producer/worker into separate Nest processes with shared contracts * Unit-testing Nest modules that dispatch to the queue without Redis/Docker **Do NOT** use this skill for `@nestjs/bull`, `@bull-board/nestjs`, or any bullmq-based integration — those are different libraries with different APIs. 
## Installation

```bash
npm install @taskora/nestjs taskora reflect-metadata
npm install ioredis # for the Redis adapter
npm install @taskora/board hono @hono/node-server # optional: admin dashboard
```

`main.ts` **must** import `reflect-metadata` as the first line — above every other import — or Nest's constructor DI silently injects undefined:

```ts
import "reflect-metadata"
import { NestFactory } from "@nestjs/core"
// ...
```

`tsconfig.json` needs:

```jsonc
{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```

## Module registration

Single app:

```ts
import { Module } from "@nestjs/common"
import { TaskoraModule } from "@taskora/nestjs"
import { redisAdapter } from "taskora/redis"
import { Redis } from "ioredis"

@Module({
  imports: [
    TaskoraModule.forRoot({
      adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }),
      defaults: {
        retry: { attempts: 3, backoff: "exponential" },
        timeout: 30_000,
      },
    }),
  ],
})
export class AppModule {}
```

Async (ConfigService pattern):

```ts
import { ConfigModule, ConfigService } from "@nestjs/config"

TaskoraModule.forRootAsync({
  imports: [ConfigModule],
  useFactory: (cfg: ConfigService) => ({
    adapter: redisAdapter({
      client: new Redis(cfg.getOrThrow("REDIS_URL")),
    }),
  }),
  inject: [ConfigService],
})
```

`TaskoraCoreModule` is `@Global`, so providers in any feature module can inject `TaskoraRef`, `Inspector`, `DeadLetterManager`, etc. without re-importing `TaskoraModule`.
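The `defaults.retry` block above configures three attempts with exponential backoff. As a rough illustration of what an exponential policy means, here is a sketch of a delay calculator; the formula (base delay, doubling, cap) is an assumption for illustration, not taskora's documented algorithm:

```typescript
// Illustrative only: a common exponential-backoff policy doubles a base
// delay on each attempt and caps it at a maximum.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30_000): number {
  // attempt 1 -> baseMs, attempt 2 -> 2x, attempt 3 -> 4x, capped at maxMs
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs)
}

const delays = [1, 2, 3].map((a) => backoffDelay(a))
// delays: [1000, 2000, 4000]
```

Whatever the exact curve, the point of `defaults` is that every contract inherits this policy unless it overrides `retry` or `timeout` itself.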
### Options `TaskoraModule.forRoot(options)` accepts everything taskora's `createTaskora()` takes plus four Nest-specific fields: * `name?: string` — named slot for multi-app setups (default: `"default"`) * `autoStart?: boolean` — whether `app.start()` runs on `onApplicationBootstrap` (default: `true`; set to `false` for producer-only processes) * `middleware?: Type[]` — class middleware resolved via DI * Everything else (`adapter`, `defaults`, `retention`, `stall`, `scheduler`, `serializer`, `validateOnDispatch`) passes through to taskora unchanged ## Recommended file layout ``` src/ ├── main.ts ├── app.module.ts ├── tasks/ ← all contracts live here │ ├── index.ts ← barrel: `export * from './email.contracts'` │ └── email.contracts.ts ├── email/ │ ├── email.module.ts │ ├── email.service.ts ← PRODUCER — dispatches via TaskoraRef │ ├── email.consumer.ts ← CONSUMER — @TaskConsumer class │ └── mailer.service.ts ← DI dependency of the consumer └── common/ └── middleware/ └── logging.middleware.ts ``` **Rules:** * **Contracts go in `src/tasks/`**, not in feature folders. Even in a monolith, services dispatch tasks they don't own — keeping contracts central avoids cross-folder imports. * **Consumers live next to their dependencies** in the feature module. A feature module's `providers: []` array is the full registration surface. * **Dispatchers go in service classes**, not controllers. Controllers call `userService.onSignup()` which internally dispatches — same layering as SQL calls. * **`TaskoraModule.forRoot` only in `AppModule`** — `@Global` handles the rest. Feature modules that need per-contract DI tokens use `TaskoraModule.forFeature([...])`, but most of the time `TaskoraRef.for(contract)` removes that need. * **Don't register a consumer in the same module as the service that dispatches its contract** — they're structurally independent. `BillingModule` can dispatch `sendEmailTask` via `TaskoraRef` without importing anything from `EmailModule`. 
## Contracts Always use `defineTask` from `taskora` to declare contracts. The contract is the shared surface between producer and consumer — even if they're in the same process today, you want the option to split tomorrow. ```ts // src/tasks/email.contracts.ts import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string(), body: z.string().optional(), }), output: z.object({ messageId: z.string(), }), retry: { attempts: 5, backoff: "exponential" }, timeout: "30s", }) ``` ```ts // src/tasks/index.ts export * from "./email.contracts" export * from "./image.contracts" // ... ``` Producers and consumers both import from `@/tasks` — they only share the contract, never the handler. ## Dispatching — TaskoraRef.for() (primary path) Zero-decorator, zero-annotation DI. This is the default path — use it unless you have a concrete reason not to. ```ts import { Injectable } from "@nestjs/common" import { TaskoraRef } from "@taskora/nestjs" import { sendEmailTask } from "@/tasks" @Injectable() export class EmailService { constructor(private readonly tasks: TaskoraRef) {} async notifySignup(user: { email: string; name: string }) { const handle = this.tasks.for(sendEmailTask).dispatch({ to: user.email, subject: `Welcome, ${user.name}`, }) const { messageId } = await handle.result return messageId } } ``` * `TaskoraRef` is **auto-provided** by `TaskoraModule.forRoot` — inject it anywhere, no decorator required. * `.for()` is a generic method: `for(contract: TaskContract): BoundTask`. TypeScript propagates the contract's input/output types to `.dispatch()` and `handle.result` with zero annotations. * `.for()` is cheap — `app.register(contract)` is idempotent, backed by a Map lookup. Call it inline in every method or cache in a getter, same cost either way. 
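The idempotency claim can be made concrete with a small sketch: a Map-backed registry keyed by the contract object returns the same bound wrapper every time, so calling `.for()` inline on every dispatch costs one lookup. All names below are hypothetical, not taskora internals:

```typescript
// Hypothetical sketch of idempotent, Map-backed registration.
interface TaskContract {
  name: string
}

class ContractRegistry {
  private bound = new Map<TaskContract, { contract: TaskContract }>()

  // Creates the wrapper once, then always returns the same object.
  register(contract: TaskContract) {
    let entry = this.bound.get(contract)
    if (!entry) {
      entry = { contract }
      this.bound.set(contract, entry)
    }
    return entry
  }
}

const registry = new ContractRegistry()
const sendEmailTask: TaskContract = { name: "send-email" }

const a = registry.register(sendEmailTask)
const b = registry.register(sendEmailTask)
// a === b: repeated registration hands back the cached wrapper
```

This is why caching the result of `.for()` in a getter buys nothing: the second call already is the cache hit.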
## Dispatching — @InjectTask escape hatch

If you prefer property-style injection, use `@InjectTask` + `InferBoundTask`:

```ts
import { Injectable } from "@nestjs/common"
import { InjectTask, type InferBoundTask, TaskoraModule } from "@taskora/nestjs"
import { sendEmailTask } from "@/tasks"

@Injectable()
export class EmailService {
  constructor(
    @InjectTask(sendEmailTask)
    private readonly sendEmail: InferBoundTask<typeof sendEmailTask>,
  ) {}

  async notifySignup(user: User) {
    await this.sendEmail.dispatch({ to: user.email, subject: "Welcome" })
  }
}
```

**Requires `forFeature`** — the per-contract DI token only exists if you register it:

```ts
@Module({
  imports: [TaskoraModule.forFeature([sendEmailTask, processImageTask])],
  providers: [EmailService],
})
export class EmailModule {}
```

**Why `InferBoundTask` and NOT `BoundTask`:** TypeScript parameter decorators cannot propagate generics into the decorated property's type, so without the helper you'd have to manually spell `BoundTask<{to: string}, {messageId: string}>` and it drifts every time the schema changes. `InferBoundTask` reads the types directly from the contract value — rename a field and it tracks.

For new code, prefer `TaskoraRef.for()`. `@InjectTask` is an escape hatch for callers who want property-level injection.

## Consumers — @TaskConsumer

Mark any Nest provider as a worker handler for a specific contract. Full constructor DI.
```ts
import { TaskConsumer, OnTaskEvent } from "@taskora/nestjs"
import type { InferInput, InferOutput, Taskora } from "taskora"
import { MailerService } from "./mailer.service"
import { sendEmailTask } from "@/tasks"

@TaskConsumer(sendEmailTask, {
  concurrency: 10,
  timeout: "30s",
  retry: { attempts: 5, backoff: "exponential" },
})
export class SendEmailConsumer {
  constructor(private readonly mailer: MailerService) {}

  async process(
    data: InferInput<typeof sendEmailTask>,
    ctx: Taskora.Context,
  ): Promise<InferOutput<typeof sendEmailTask>> {
    ctx.log.info("sending", { to: data.to })
    return this.mailer.send(data)
  }

  @OnTaskEvent("completed")
  onDone(evt: Taskora.TaskEventMap<InferOutput<typeof sendEmailTask>>["completed"]) {
    // metrics, logs — DI deps are live here
  }

  @OnTaskEvent("failed")
  onFail(evt: Taskora.TaskEventMap["failed"]) {
    // alerts, dead-letter analysis
  }
}
```

Register as a normal provider:

```ts
@Module({
  providers: [SendEmailConsumer, MailerService],
})
export class EmailModule {}
```

**The process() method is the handler.** It receives the deserialized+validated input and taskora's `Context`. Throwing goes through taskora's retry machinery automatically.

The consumer instance is a **singleton** — the same DI-managed instance handles every job, injected dependencies stay live across runs.

### @TaskConsumer options

Accepts a subset of `ImplementOptions`:

```ts
interface TaskConsumerOptions {
  app?: string // multi-app routing — default: DEFAULT_APP_NAME
  concurrency?: number
  timeout?: number | Duration
  retry?: Taskora.RetryConfig
  stall?: Taskora.StallConfig
  singleton?: boolean
  concurrencyLimit?: number
  ttl?: Taskora.TtlConfig
  version?: number // payload version
  since?: number // oldest supported version
}
```

### @OnTaskEvent

Method-level binding to per-task events (`completed`, `failed`, `retrying`, `progress`, `active`, `stalled`, `cancelled`). The method runs on the same consumer instance as `process()`, with DI dependencies intact.
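A tiny sketch of why the shared instance matters: the event method and `process()` see the same object, so counters, caches, and injected services persist across jobs. Every name here is made up for illustration; it is not how `@taskora/nestjs` wires events internally:

```typescript
// Hypothetical sketch: event handlers bound to the SAME singleton as process().
class FakeConsumer {
  completedCount = 0

  process(data: { to: string }) {
    return { messageId: `m-${data.to}` }
  }

  onDone() {
    this.completedCount++ // instance state survives across jobs
  }
}

const consumer = new FakeConsumer()
const listeners = new Map<string, () => void>()
// @OnTaskEvent("completed") amounts to binding the method to the singleton:
listeners.set("completed", () => consumer.onDone())

// Simulate two completed jobs flowing through the same instance.
const r1 = consumer.process({ to: "a@x" })
listeners.get("completed")!()
const r2 = consumer.process({ to: "b@x" })
listeners.get("completed")!()
// consumer.completedCount is now 2
```

Because nothing is re-instantiated per job, an `@OnTaskEvent` method can safely accumulate metrics or talk to the same injected service that `process()` uses.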
For cross-task / app-level events (`worker:ready`, `worker:error`), inject the raw App and use `.on()`:

```ts
import { Injectable, type OnModuleInit } from "@nestjs/common"
import { InjectApp } from "@taskora/nestjs"
import type { App } from "taskora"

@Injectable()
export class WorkerHealthService implements OnModuleInit {
  constructor(@InjectApp() private readonly app: App) {}

  onModuleInit() {
    this.app.on("worker:ready", () => { /* ... */ })
  }
}
```

### Lifecycle

`TaskoraExplorer` runs the discovery pass inside `onApplicationBootstrap`:

1. Walk `DiscoveryService.getProviders()` → find every `@TaskConsumer`.
2. Filter by `options.app` matching this explorer's app name.
3. Call `app.implement(contract, handler)` where `handler` is a closure over the DI instance's `process` method.
4. Wire every `@OnTaskEvent` method via `task.on(event, boundHandler)`.
5. Call `app.start()` unless `autoStart: false`.

**All handlers are attached BEFORE the worker starts** — no race where a worker picks up a job for an unimplemented contract.

On `onApplicationShutdown` the explorer awaits `app.close()`. Call `app.enableShutdownHooks()` in `main.ts` so SIGTERM triggers the drain.

## Class middleware

Taskora's middleware is a Koa-style onion chain. `@taskora/nestjs` lets you write each middleware as a DI-managed class.
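The onion model itself is independent of Nest. A minimal synchronous sketch of how such a chain composes (illustrative only; taskora's real chain is async and this is not its code):

```typescript
type Ctx = { trace: string[] }
type Middleware = (ctx: Ctx, next: () => void) => void

// middleware: [A, B] means A wraps B wraps the handler.
function compose(middlewares: Middleware[], handler: (ctx: Ctx) => void) {
  return (ctx: Ctx) =>
    middlewares.reduceRight<() => void>(
      (next, mw) => () => mw(ctx, next), // each layer closes over the next
      () => handler(ctx),                // innermost: the task handler
    )()
}

const tag = (name: string): Middleware => (ctx, next) => {
  ctx.trace.push(`${name}:before`)
  next()
  ctx.trace.push(`${name}:after`)
}

const ctx: Ctx = { trace: [] }
compose([tag("A"), tag("B")], (c) => c.trace.push("handler"))(ctx)
// ctx.trace: ["A:before", "B:before", "handler", "B:after", "A:after"]
```

The class form shown next adds only one thing to this picture: each layer is resolved from Nest's container, so it can take constructor dependencies.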
```ts
import { TaskMiddleware, type TaskoraMiddleware } from "@taskora/nestjs"
import { Logger } from "@nestjs/common"
import type { Taskora } from "taskora"

@TaskMiddleware()
export class LoggingMiddleware implements TaskoraMiddleware {
  private readonly logger = new Logger("Taskora")

  async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) {
    const start = Date.now()
    try {
      await next()
      this.logger.log(`✓ ${ctx.task.name} (${Date.now() - start}ms)`)
    } catch (err) {
      this.logger.error(`✗ ${ctx.task.name} (${Date.now() - start}ms)`, err as Error)
      throw err
    }
  }
}
```

Wire it in `forRoot` **and** register as a provider:

```ts
@Module({
  imports: [
    TaskoraModule.forRoot({
      adapter: …,
      middleware: [LoggingMiddleware], // ← reference the class
    }),
  ],
  providers: [LoggingMiddleware], // ← MUST also be in providers
})
export class AppModule {}
```

If you forget the `providers` entry, the explorer throws at init with a clear error.

Composition order: list order is **outermost to innermost**. `middleware: [A, B, C]` → A wraps B wraps C wraps the handler.

Multiple middlewares can share DI singletons (all resolve from the same container).

## Observability — Inspector, DeadLetters, Schedules

All three are injectable per app.
### Inspector (default slot — zero decorator) ```ts import { Injectable } from "@nestjs/common" import { Inspector } from "taskora" @Injectable() export class QueueStatsService { constructor(private readonly inspector: Inspector) {} async dashboardSnapshot() { const stats = await this.inspector.stats() const recentlyFailed = await this.inspector.failed({ limit: 20 }) return { stats, recentlyFailed } } async findJob(jobId: string) { return this.inspector.find(jobId) // cross-task search } } ``` ### DeadLetterManager (default slot — zero decorator) ```ts import { Injectable } from "@nestjs/common" import { DeadLetterManager } from "taskora" @Injectable() export class DlqService { constructor(private readonly dlq: DeadLetterManager) {} async retry(jobId: string) { return this.dlq.retry(jobId) } async retryAllForTask(task: string) { return this.dlq.retryAll({ task }) } } ``` ### Schedules (always via decorator — ScheduleManager not in taskora's public exports) ```ts import { Injectable } from "@nestjs/common" import { InjectSchedules } from "@taskora/nestjs" import type { App } from "taskora" @Injectable() export class SchedulesService { constructor(@InjectSchedules() private readonly schedules: App["schedules"]) {} async listAll() { return this.schedules.list() } async pause(name: string) { return this.schedules.pause(name) } async trigger(name: string) { return this.schedules.trigger(name) } } ``` Type annotation uses `App["schedules"]` because `ScheduleManager` isn't in taskora's public class exports. 
### Multi-app accessors

```ts
import {
  InjectDeadLetters,
  InjectInspector,
  InjectSchedules,
} from "@taskora/nestjs"
import { DeadLetterManager, Inspector } from "taskora"
import type { App } from "taskora"

@Injectable()
class AdminService {
  constructor(
    // Default app — class tokens, no decorator
    readonly primaryInspector: Inspector,
    readonly primaryDlq: DeadLetterManager,
    @InjectSchedules() readonly primarySchedules: App["schedules"],

    // Named "secondary" app — string tokens via decorators
    @InjectInspector("secondary") readonly secondaryInspector: Inspector,
    @InjectDeadLetters("secondary") readonly secondaryDlq: DeadLetterManager,
    @InjectSchedules("secondary") readonly secondarySchedules: App["schedules"],
  ) {}
}
```

Only the default slot uses class tokens — only the single `forRoot` registered without a `name` can own them. Every named slot uses string-token decorators.

## Admin dashboard — TaskoraBoardModule

`@taskora/board` is an optional peer dep. Install it only when you want the dashboard. `TaskoraBoardModule.forRoot` dynamically imports it inside an async factory, so unused → zero cost.

### Register

```ts
import { TaskoraBoardModule, TaskoraModule } from "@taskora/nestjs"

@Module({
  imports: [
    TaskoraModule.forRoot({ adapter: … }),
    TaskoraBoardModule.forRoot({
      basePath: "/board",
      readOnly: process.env.NODE_ENV === "production", // lock down mutations in prod
      auth: {
        cookiePassword: process.env.BOARD_COOKIE_PASSWORD!, // 32+ chars
        authenticate: async ({ username, password }) => {
          // Call a standalone helper: `this` is not available inside
          // @Module metadata.
          return await validateAdmin(username, password)
        },
      },
    }),
  ],
})
export class AppModule {}
```

### Mount in main.ts

The board is a Hono app, NOT an Express router.
Mount it explicitly in `main.ts` using `@hono/node-server`:

```ts
// src/main.ts
import "reflect-metadata"
import { NestFactory } from "@nestjs/core"
import { getRequestListener } from "@hono/node-server"
import type { Board } from "@taskora/board"
import { getBoardToken } from "@taskora/nestjs"
import { AppModule } from "./app.module"

async function bootstrap() {
  const app = await NestFactory.create(AppModule)
  app.enableShutdownHooks()

  const board = app.get<Board>(getBoardToken())
  app.use("/board", getRequestListener(board.fetch))

  await app.listen(3000)
}
bootstrap()
```

Three lines to wire. The `Board` type comes from `@taskora/board`, not from `@taskora/nestjs` — `@taskora/nestjs` deliberately doesn't re-export it so its type graph stays independent.

### Fastify

```ts
app.getHttpAdapter().getInstance().all("/board/*", (req, reply) => {
  getRequestListener(board.fetch)(req.raw, reply.raw)
})
```

### @InjectBoard

Inject the Board into services that need runtime access (e.g. to attach custom Hono routes):

```ts
import { InjectBoard } from "@taskora/nestjs"
import type { Board } from "@taskora/board"

@Injectable()
class BoardIntegrationService {
  constructor(@InjectBoard() private readonly board: Board) {}
}
```

## Multi-app

One Nest container hosting multiple independent taskora apps.

```ts
@Module({
  imports: [
    TaskoraModule.forRoot({ adapter: criticalAdapter }),
    TaskoraModule.forRoot({ name: "batch", adapter: batchAdapter }),
  ],
})
export class AppModule {}
```

```ts
@TaskConsumer(placeOrderTask) // → default app
class OrderConsumer {}

@TaskConsumer(analyticsRollupTask, { app: "batch" }) // → "batch" app
class RollupConsumer {}

@Injectable()
class OrderService {
  constructor(
    readonly critical: TaskoraRef, // default
    @InjectTaskoraRef("batch") readonly batch: TaskoraRef, // named
  ) {}
}
```

Each app has its own worker loop, subscribe stream, Inspector, DLQ, schedules. Fully isolated.
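Under the hood, each named slot resolves through its own DI token: the default slot by class token, named slots by a derived string token. The sketch below shows the idea; the string format is invented for illustration, so in real code always go through `getInspectorToken()` and its siblings rather than hand-building tokens:

```typescript
// Hypothetical token scheme, NOT @taskora/nestjs's actual format.
class FakeInspector {}

function inspectorToken(name = "default"): string | typeof FakeInspector {
  // Default slot: the class itself is the injection token, so plain
  // constructor typing resolves it. Named slots: one string token per app.
  return name === "default" ? FakeInspector : `taskora:inspector:${name}`
}

const defaultToken = inspectorToken()
const batchToken = inspectorToken("batch")
```

This is why the default app's `Inspector` injects with no decorator while `"batch"` needs `@InjectInspector("batch")`: a string token can only be requested explicitly.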
## Producer/worker split ### Shared contracts package ``` services/ ├── api/ ← producer ├── worker/ ← consumer └── packages/tasks/ ← @tasks/contracts — zero deps, just defineTask calls ``` ### API (producer-only) ```ts TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), autoStart: false, // explicit: no workers in this process }) ``` Only import producer services that dispatch via `TaskoraRef`. Don't register any `@TaskConsumer` classes. Even with `autoStart: true`, taskora's contract-only short-circuit skips worker loops when no task has a handler — but `autoStart: false` makes the intent unmistakable. ### Worker (consumer-only, no HTTP) ```ts // services/worker/src/main.ts import "reflect-metadata" import { NestFactory } from "@nestjs/core" import { WorkerModule } from "./worker.module" async function bootstrap() { const app = await NestFactory.createApplicationContext(WorkerModule) app.enableShutdownHooks() // NO app.listen() — the App IS the server, processing jobs forever. } bootstrap() ``` Use `createApplicationContext` instead of `create` — skips the Express/Fastify HTTP adapter entirely. Pure DI container + lifecycle hooks. ## Testing — TaskoraTestHarness Import from `@taskora/nestjs/testing` (separate subpath). ### Vitest config (prerequisite) Vitest's default esbuild transform does NOT emit decorator metadata. 
Install SWC: ```bash npm install -D unplugin-swc @swc/core ``` ```ts // vitest.config.ts import swc from "unplugin-swc" import { defineConfig } from "vitest/config" export default defineConfig({ plugins: [ swc.vite({ jsc: { parser: { syntax: "typescript", decorators: true }, transform: { legacyDecorator: true, decoratorMetadata: true }, }, }), ], test: { setupFiles: ["./tests/setup.ts"], }, }) ``` ```ts // tests/setup.ts import "reflect-metadata" ``` ### createTaskoraTestHarness — end-to-end unit testing ```ts import { createTaskoraTestHarness } from "@taskora/nestjs/testing" it("sends the welcome email", async () => { const harness = await createTaskoraTestHarness({ providers: [SendEmailConsumer, MailerService], }) const result = await harness.execute(sendEmailTask, { to: "alice@example.com", subject: "Welcome", }) expect(result.state).toBe("completed") expect(result.result?.messageId).toBeDefined() // DI deps are the real singletons — assert on their state: const mailer = harness.moduleRef.get(MailerService) expect(mailer.sent).toEqual(["alice@example.com"]) await harness.close() }) ``` **What it does under the hood:** 1. Compiles a Nest `TestingModule` with `TaskoraTestingModule.forRoot({ autoStart: true })` pre-imported (memory adapter). 2. Runs `moduleRef.init()` → explorer runs → every `@TaskConsumer` gets registered → `app.start()` spins up the worker loop. 3. Returns a harness that routes `dispatch` / `execute` through the running App. 
**Events fire naturally** — the harness runs the real subscribe stream, so `@OnTaskEvent` bindings are exercised end-to-end:

```ts
it("fires @OnTaskEvent on completion", async () => {
  const harness = await createTaskoraTestHarness({
    providers: [SendEmailConsumer, MailerService],
  })

  await harness.execute(sendEmailTask, { to: "bob@x", subject: "Hi" })

  const consumer = harness.moduleRef.get(SendEmailConsumer)
  expect(consumer.completedCount).toBeGreaterThanOrEqual(1)

  await harness.close()
})
```

### Harness API

* `harness.dispatch(contract, data, options?)` → `ResultHandle`
* `harness.execute(contract, data, options?)` → `Promise<ExecuteResult<TOutput>>` — dispatch + await + compact summary. **Errors don't throw** — `state: "failed"` + `error: string`.
* `harness.inspect(contract, jobId)` — resolves with the current job snapshot
* `harness.moduleRef`, `harness.app`, `harness.tasks` — escape hatches
* `harness.close()` — tears down module, drains jobs, closes adapter

### ExecuteResult

```ts
interface ExecuteResult<TOutput> {
  id: string
  state: "completed" | "failed" | "cancelled" | "expired"
  result: TOutput | undefined
  error: string | undefined
  attempts: number
  logs: Taskora.LogEntry[]
  progress: number | Record<string, unknown> | undefined
  timeline: { dispatched: number; processed?: number; finished?: number }
}
```

### TaskoraTestingModule (lower-level primitive)

If you don't need the harness and just want a Nest testing module without Redis:

```ts
import { Test } from "@nestjs/testing"
import { TaskoraTestingModule } from "@taskora/nestjs/testing"

const moduleRef = await Test.createTestingModule({
  imports: [TaskoraTestingModule.forRoot()], // memory + autoStart: false
  providers: [EmailService],
}).compile()

await moduleRef.init()
// ...
await moduleRef.close()
```

### Virtual time

The harness uses **real time**. For virtual-time tests (fast-forwarding delayed jobs, testing schedules, deterministic retries), drop down to `taskora/test`'s `createTestRunner()` directly — construct a fresh App and wrap it.
The harness deliberately doesn't merge both worlds because dual-backend setups are fragile. ## main.ts — production bootstrap checklist ```ts // src/main.ts import "reflect-metadata" // 1. FIRST LINE import { NestFactory } from "@nestjs/core" import { AppModule } from "./app.module" async function bootstrap() { const app = await NestFactory.create(AppModule) app.enableShutdownHooks() // 2. drain jobs on SIGTERM // 3. Optional: mount the board // const board = app.get(getBoardToken()) // app.use("/board", getRequestListener(board.fetch)) await app.listen(3000) // 4. bind HTTP } bootstrap() ``` ## Common gotchas * **`cannot read property 'for' of undefined` inside a service**: Vitest is running without `unplugin-swc`, so decorator metadata isn't emitted. Install SWC per the Testing section. * **Job disappeared before inspector could read it**: you're calling `inspector.find(taskName, jobId)` which resolves to the wrong overload. Use `inspector.find(jobId)` (cross-task search) or `inspector.find(task, jobId)` with a `Task` object. * **Middleware class listed but not found**: forgot to add the class to `providers: []` in the module. Explorer throws a clear message pointing at it. * **SIGTERM kills jobs immediately**: `app.enableShutdownHooks()` wasn't called in `main.ts`. Without it, Nest doesn't propagate shutdown to `OnApplicationShutdown` providers. * **Board options missing auth**: `cookiePassword` must be ≥ 32 chars. Taskora throws at `createBoard` time if shorter. * **Two `forRoot` calls in different modules**: only call `forRoot` in `AppModule` (or a shared `CoreModule`). `@Global` exposes the App to every feature module. Feature modules that need per-contract providers use `forFeature`. * **`@TaskConsumer` class has no `process()` method**: explorer throws at init. The method must be named exactly `process` and take `(data, ctx)`. * **`@TaskConsumer` for the same contract registered twice**: taskora's `app.implement` throws on double-implement. 
If you need both a default and a named consumer for the same contract, use `@TaskConsumer(contract, { app: 'secondary' })` on the second one.

## Public API reference

### From `@taskora/nestjs`

```ts
// Modules
TaskoraModule        // forRoot, forRootAsync, forFeature
TaskoraCoreModule    // @Global, internal — usually don't use directly
TaskoraBoardModule   // forRoot

// Injectable classes
TaskoraRef           // .for(contract), .raw
TaskoraExplorer      // internal but exported

// Decorators — producer side
@InjectApp(name?)                // raw App
@InjectTaskoraRef(name?)         // named TaskoraRef
@InjectTask(contract, appName?)  // per-contract BoundTask (requires forFeature)

// Decorators — consumer side
@TaskConsumer(contract, options?) // class-level marker
@OnTaskEvent(event)               // method-level event binding
@TaskMiddleware()                 // class-level, applies @Injectable

// Decorators — observability / admin
@InjectInspector(name?)
@InjectDeadLetters(name?)
@InjectSchedules(name?)
@InjectBoard(name?)

// Tokens
DEFAULT_APP_NAME
getAppToken(name?)
getOptionsToken(name?)
getExplorerToken(name?)
getTaskoraRefToken(name?)
getTaskToken(contract, appName?)
getInspectorToken(name?)
getDeadLettersToken(name?)
getSchedulesToken(name?)
getBoardToken(name?)

// Types
TaskoraModuleOptions
TaskoraModuleAsyncOptions
TaskoraModuleOptionsFactory
TaskoraMiddleware
TaskConsumerOptions
TaskConsumerMetadata
TaskEventBinding
InferBoundTask // convenience for BoundTask<InferInput<T>, InferOutput<T>>

// Re-exports from taskora (for convenience)
InferInput
InferOutput
```

### From `@taskora/nestjs/testing`

```ts
TaskoraTestingModule // forRoot(options?)
— memory defaults
createTaskoraTestHarness(options) // builder → TaskoraTestHarness
TaskoraTestHarness // dispatch, execute, inspect, close
TaskoraTestHarnessOptions
TaskoraTestingModuleOptions
ExecuteResult
```

### From `taskora` (for type annotations)

```ts
import { defineTask, createTaskora, App, BoundTask } from "taskora"
import type { Taskora, TaskContract, InferInput, InferOutput } from "taskora"
import { Inspector, DeadLetterManager } from "taskora"
```

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/JobOptions.md
---

# ~~Type Alias: JobOptions~~

> **JobOptions** = [`DispatchOptions`](../interfaces/DispatchOptions)

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:836

## Deprecated

Use DispatchOptions instead

---

---
url: 'https://kravetsone.github.io/taskora/taskora/guide/adapters.md'
---

# Adapters

Taskora separates the core engine from storage backends via the `Adapter` interface. Your task handlers never import `ioredis` directly — the adapter is the only thing that knows about your database.

## Redis Adapter

The production adapter. Requires Redis 7.0+.

You can pick one of two drivers depending on your runtime — both implement the same `Adapter` interface and share 100% of taskora's logic (Lua scripts, key layout, state machines).

| Entry | Runtime | Peer dep | Use when |
|---|---|---|---|
| `taskora/redis` | Any | `ioredis` | The default. Re-exports `taskora/redis/ioredis`. |
| `taskora/redis/ioredis` | Any (Node, Bun, Deno) | `ioredis` | Explicit ioredis. Required for Redis Cluster or Sentinel. |
| `taskora/redis/bun` | **Bun only** | none | Bun deployments that want to skip the ioredis peer dep.
| ### ioredis driver (default) ```ts import { redisAdapter } from "taskora/redis" // or, equivalently: import { redisAdapter } from "taskora/redis/ioredis" // URL string const adapter = redisAdapter("redis://localhost:6379") // Options object const adapter = redisAdapter({ host: "redis.internal", port: 6379, password: "secret" }) // Existing ioredis instance — adapter will NOT close it on disconnect import Redis from "ioredis" const redis = new Redis("redis://localhost:6379") const adapter = redisAdapter(redis) ``` ::: tip Migrating from BullMQ? You don't need `maxRetriesPerRequest: null` BullMQ requires `maxRetriesPerRequest: null` and `enableReadyCheck: false` on the ioredis client because its worker treats ioredis's `MaxRetriesPerRequestError` as fatal — a long-running `BRPOPLPUSH` in the offline queue during a Redis reconnect would trip the default retry counter of 20 and crash the worker loop. **Taskora does not have this constraint.** The worker poll loop, event reader, and job waiter all wrap their blocking commands (`BZPOPMIN`, `XREAD BLOCK`) in try/catch retry loops. A `MaxRetriesPerRequestError` during a Redis blip is swallowed and the next iteration reissues the command on a healthy socket. ioredis defaults are safe to use. User-facing `task.dispatch()` calls are the one place a long Redis outage can surface as an error to your application code — they're not in a retry loop by design, so your app decides how to handle unavailability. Catch the error and retry at the call site if you want fail-open behavior. 
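The "swallow the blip and reissue" loop described above can be sketched like this. It is synchronous and heavily simplified for illustration (the real loop awaits blocking Redis commands); `pollLoop` and its helpers are invented names, not taskora's code:

```typescript
// Sketch: transient errors from a (stand-in for a) blocking command are
// swallowed, and the next iteration simply reissues the command.
function pollLoop(
  pop: () => string | null,
  isTransient: (err: unknown) => boolean,
  iterations: number,
): string[] {
  const jobs: string[] = []
  for (let i = 0; i < iterations; i++) {
    try {
      const job = pop()
      if (job) jobs.push(job)
    } catch (err) {
      if (!isTransient(err)) throw err // real bugs still surface
      // transient blip (e.g. MaxRetriesPerRequestError): loop reissues
    }
  }
  return jobs
}

// Simulate: job, transient Redis blip, job.
let tick = 0
const results = pollLoop(
  () => {
    tick++
    if (tick === 2) throw new Error("MaxRetriesPerRequestError")
    return `job-${tick}`
  },
  (err) => err instanceof Error && err.message.includes("MaxRetriesPerRequest"),
  3,
)
// results: ["job-1", "job-3"]; the iteration-2 blip was absorbed
```

The contrast with `task.dispatch()` is exactly the `isTransient` branch: dispatch sits outside any such loop, so the error reaches your code instead.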
::: ### Bun driver ```ts import { redisAdapter } from "taskora/redis/bun" // URL string const adapter = redisAdapter("redis://localhost:6379") // Options const adapter = redisAdapter({ host: "localhost", port: 6379 }) // Existing Bun.RedisClient — adapter will NOT close it on disconnect const client = new Bun.RedisClient("redis://localhost:6379") const adapter = redisAdapter(client) ``` The Bun driver routes commands through `Bun.RedisClient.send()` (the generic RESP escape hatch). Bun's auto-pipelining batches same-tick calls into a single round trip, so pipeline performance matches ioredis. The driver issues `HELLO 2` at connect time to force RESP2 response shapes — this keeps `HGETALL`, Lua return values, and stream entries identical across drivers. ::: warning Bun driver limitations * **No Redis Cluster.** `Bun.RedisClient` does not support cluster mode. Cluster users must stay on the ioredis driver. * **No Redis Sentinel.** Same constraint. * **Bun runtime only.** The Bun driver throws a clear error if loaded under Node. ::: ### Key Prefix All Redis keys use the pattern `taskora:{taskName}:{key}`. You can add a custom prefix to namespace multiple apps on the same Redis instance: ```ts const adapter = redisAdapter("redis://localhost:6379", { prefix: "myapp", }) // Keys become: myapp:taskora:{taskName}:{key} ``` ### Redis Cluster All keys for a single job use a `{hash tag}` for cluster compatibility. Jobs are guaranteed to land on the same shard. Cluster routing requires the **ioredis driver** — Bun's built-in client does not support cluster mode. ## Memory Adapter Zero-dependency in-memory adapter. Ideal for development, testing, and single-process use cases. ```ts import { memoryAdapter } from "taskora/memory" const taskora = createTaskora({ adapter: memoryAdapter(), }) ``` The memory adapter implements the full `Adapter` interface — including delayed jobs, sorted sets, and concurrency tracking — using plain JavaScript data structures. 
::: warning
The memory adapter does not persist data. All jobs are lost when the process exits. Use it for development and testing only.
:::

## Adapter Interface

If you want to build a custom adapter (e.g., PostgreSQL, SQLite), implement the `Taskora.Adapter` interface:

```ts
interface Adapter {
  connect(): Promise<void>
  disconnect(): Promise<void>

  enqueue(task, jobId, data, options): Promise<void>
  dequeue(task, lockTtl, token, options?): Promise<Job | null>
  blockingDequeue(task, lockTtl, token, timeoutMs, options?): Promise<Job | null>

  ack(task, jobId, token, result): Promise<void>
  fail(task, jobId, token, error, retry?): Promise<void>
  nack(task, jobId, token): Promise<void>

  extendLock(task, jobId, token, ttl): Promise<"extended" | "lost" | "cancelled">
  cancel(task, jobId, reason?): Promise<"cancelled" | "flagged" | "not_cancellable">

  // ... and more (subscribe, inspect, schedule, etc.)
}
```

The full interface has ~40 methods covering enqueue, dequeue, acknowledge, inspect, schedule, and dead letter queue operations. See `src/types.ts` for the complete definition.

---

---
url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs/board.md'
---

# Admin Dashboard

`@taskora/board` is the pre-built React SPA + Hono API that taskora ships as a separate package. `@taskora/nestjs` exposes it as an injectable provider via `TaskoraBoardModule`, with the actual HTTP mounting left to `main.ts` — because the board is a Hono app, not an Express or Fastify router, and Nest's platform adapters don't speak Hono natively.

## Installation

The board is an **optional** peer dependency.
Install it only when you want the dashboard: ::: pm-add @taskora/board hono @hono/node-server ::: * `@taskora/board` — the board itself (Hono backend + compiled SPA) * `hono` — peer dep of `@taskora/board` * `@hono/node-server` — bridge that turns `board.fetch` into a Node http listener Nest can mount If you skip `TaskoraBoardModule.forRoot` entirely, none of these packages are loaded — the module uses a dynamic `import("@taskora/board")` inside an async factory, so consumers who don't use the board pay zero bundle or startup cost. ## Registering the module ```ts // src/app.module.ts import { Module } from "@nestjs/common" import { TaskoraBoardModule, TaskoraModule } from "@taskora/nestjs" import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), }), TaskoraBoardModule.forRoot({ basePath: "/board", readOnly: false, auth: { cookiePassword: process.env.BOARD_COOKIE_PASSWORD!, authenticate: async ({ username, password }) => { // Return a user object on success, null on failure. if (username === "admin" && password === process.env.BOARD_PASSWORD) { return { username: "admin", role: "admin" } } return null }, }, }), ], }) export class AppModule {} ``` Every option except `name` and `board` passes straight through to `@taskora/board`'s `createBoard(app, options)`. See the [Board operations guide](/operations/board) for the full `BoardOptions` shape — `title`, `logo`, `favicon`, `redact`, `theme`, `refreshInterval`, `cors`, `formatters`, etc. ## Mounting in `main.ts` The module provides the `Board` instance under `getBoardToken()`, but it's up to `main.ts` to attach that instance to the HTTP server. 
Three lines for Express: ```ts // src/main.ts import "reflect-metadata" import { NestFactory } from "@nestjs/core" import { getRequestListener } from "@hono/node-server" import type { Board } from "@taskora/board" import { getBoardToken } from "@taskora/nestjs" import { AppModule } from "./app.module" async function bootstrap() { const app = await NestFactory.create(AppModule) app.enableShutdownHooks() // Pull the Board out of the DI graph and mount it. const board = app.get<Board>(getBoardToken()) app.use("/board", getRequestListener(board.fetch)) await app.listen(3000) } bootstrap() ``` That's it. Open `http://localhost:3000/board/` and you get the full SPA — overview, per-task job tables, job detail with logs/progress/timeline, workflow DAG viewer, schedules, DLQ, migrations, throughput charts, live SSE updates. ### Fastify Swap `NestFactory.create` for the Fastify adapter and use Fastify's raw-request fall-through: ```ts import { NestFactory } from "@nestjs/core" import { FastifyAdapter, NestFastifyApplication } from "@nestjs/platform-fastify" import { getRequestListener } from "@hono/node-server" import type { Board } from "@taskora/board" import { getBoardToken } from "@taskora/nestjs" async function bootstrap() { const app = await NestFactory.create<NestFastifyApplication>( AppModule, new FastifyAdapter(), ) const board = app.get<Board>(getBoardToken()) const listener = getRequestListener(board.fetch) // Fastify raw handler fall-through for the /board prefix.
app.getHttpAdapter().getInstance().all("/board/*", (req, reply) => { listener(req.raw, reply.raw) }) await app.listen(3000) } ``` ### Bun's native HTTP If you're running Nest on Bun (via `@nestjs/platform-express` or a Bun-native adapter), you can skip `@hono/node-server` entirely and call `board.fetch` directly: ```ts const board = app.get(getBoardToken()) Bun.serve({ port: 3000, async fetch(req) { const url = new URL(req.url) if (url.pathname.startsWith("/board")) return board.fetch(req) return app.getHttpAdapter().getInstance()(req) }, }) ``` ## Using the Board from services You can inject the Board into any provider via `@InjectBoard()` — useful if you want to read its state from a custom admin endpoint, or register listeners on its underlying Hono app: ```ts import { Injectable } from "@nestjs/common" import { InjectBoard } from "@taskora/nestjs" import type { Board } from "@taskora/board" @Injectable() export class BoardIntegrationService { constructor(@InjectBoard() private readonly board: Board) {} // Add a custom Hono route alongside the board's own routes. registerExtraRoute() { this.board.app.get("/extra", (c) => c.json({ ok: true })) } } ``` `Board` is imported from `@taskora/board`, not from `@taskora/nestjs` — we deliberately don't re-export the type to keep `@taskora/nestjs`'s own type graph independent of the optional peer dep. ## Multi-app / multi-board Each `TaskoraBoardModule.forRoot` call is bound to a specific named app via the `name` option. 
You can mount two separate boards for two different taskora apps: ```ts @Module({ imports: [ TaskoraModule.forRoot({ adapter: primaryAdapter }), TaskoraModule.forRoot({ name: "background", adapter: backgroundAdapter }), TaskoraBoardModule.forRoot({ basePath: "/board" }), TaskoraBoardModule.forRoot({ name: "background", basePath: "/board-bg" }), ], }) export class AppModule {} ``` ```ts // main.ts const primaryBoard = app.get(getBoardToken()) const bgBoard = app.get(getBoardToken("background")) app.use("/board", getRequestListener(primaryBoard.fetch)) app.use("/board-bg", getRequestListener(bgBoard.fetch)) ``` Each board shows only jobs from its own app. They don't share state, auth, or UI — they're fully isolated. ## Auth patterns The board supports two auth modes: ### Built-in cookie auth Pass a `BoardAuthConfig` object to `auth`: ```ts TaskoraBoardModule.forRoot({ basePath: "/board", auth: { cookiePassword: process.env.BOARD_COOKIE_PASSWORD!, // 32+ chars authenticate: async ({ username, password }) => { // Validate however you want — call out to your UserService, check // a password hash, etc. Return a user object (shape is yours to // define; ends up in the signed cookie) or null. return await this.validateAdmin(username, password) }, // Optional: loginPath: "/login", // where unauthenticated GETs redirect to logoutPath: "/logout", // GET clears the cookie sessionTTL: 60 * 60, // seconds; default 2h }, }) ``` The board renders its own login page at `/board/login` and sets an encrypted cookie on success. You don't write any HTML. `cookiePassword` **must** be ≥ 32 characters — taskora throws at createBoard time if it's shorter. Use `openssl rand -hex 32` or a secrets manager. 
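Since `authenticate` owns credential checking entirely, it's worth avoiding a plain `===` on the password. A sketch of one way to write the callback — `safeEqual` is a hypothetical helper, not part of `@taskora/nestjs` — using `crypto.timingSafeEqual` over fixed-length digests so the comparison doesn't leak timing information:

```ts
import { createHash, timingSafeEqual } from "node:crypto"

// Hash both sides to equal-length buffers, then compare in constant time.
function safeEqual(a: string, b: string): boolean {
  const da = createHash("sha256").update(a).digest()
  const db = createHash("sha256").update(b).digest()
  return timingSafeEqual(da, db)
}

// Shape matches the `authenticate` callback shown above.
const authenticate = async ({
  username,
  password,
}: { username: string; password: string }) => {
  const expected = process.env.BOARD_PASSWORD ?? ""
  if (username === "admin" && expected && safeEqual(password, expected)) {
    return { username: "admin", role: "admin" }
  }
  return null
}
```

In a real deployment you would compare against a stored password hash (bcrypt/argon2) rather than an env-var plaintext; the constant-time comparison pattern is the same.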
### Legacy callback auth If you have an existing auth story and just want a guard function: ```ts TaskoraBoardModule.forRoot({ basePath: "/board", auth: async (req) => { // Return truthy → allow, falsy → 401 return req.headers["x-admin-token"] === process.env.ADMIN_TOKEN }, }) ``` This path skips the login page entirely — the board assumes auth was handled upstream (e.g. by a reverse proxy or a Nest guard on a parent route). Useful when your API gateway already enforces admin auth. ### `readOnly: true` Regardless of auth mode, set `readOnly: true` in staging environments to block every mutating action (retry, cancel, trim, DLQ retry-all) while still allowing inspection: ```ts TaskoraBoardModule.forRoot({ basePath: "/board", readOnly: process.env.NODE_ENV !== "production", auth: { /* ... */ }, }) ``` ## Environment checklist Before production: * **`BOARD_COOKIE_PASSWORD`** set to 32+ chars (use `crypto.randomBytes(32).toString("hex")` to generate). * **`auth`** configured — never ship a board without auth on a public endpoint. * **`readOnly: true`** in non-prod if you want inspectors to browse without breaking anything. * **Reverse proxy** (nginx/cloudflare) with HTTPS enforcement — board sets `Secure` cookies only over HTTPS. * **`redact: ['password', 'token', 'secret', 'apiKey']`** (or your own keys) so PII in job payloads doesn't leak into the UI. See the full [Board operations guide](/operations/board) for production hardening — CORS, reverse proxy headers, session management, CSP. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/ai-skills.md' --- # AI Skills Taskora ships a public **Agent Skill** that gives AI coding assistants deep knowledge of the library — every API, pattern, flow, and best practice. Instead of guessing from training data (which is frozen, and often leads to hallucinated APIs), your AI gets a structured reference tuned to taskora's conventions.
Agent Skills are a **shared specification** — a single `SKILL.md` file with YAML frontmatter — that works across 45+ AI coding tools including Claude Code, Cursor, Windsurf, Cline, Continue, GitHub Copilot, Codex, Gemini CLI, Zed, Aider, Goose, OpenCode, Kilo Code, and many more. ## What's included The `/taskora` skill provides: * **Full API reference** — `createTaskora()`, `app.task()`, `dispatch()`, `chain`/`group`/`chord`, events, inspector, DLQ, board * **Internal flows** — job lifecycle state machine, worker processing pipeline, retry decision tree, workflow DAG execution, scheduling loop, cancellation and stall detection flows * **Best practices** — production checklist, idempotent handlers, timeout/signal propagation, flow control selection guide, retry anti-patterns, testing strategy, graceful shutdown * **Type system** — `Taskora` namespace, all public interfaces, adapter abstraction ## Installation ### Universal install (any supported agent) The fastest way — works with any of the 45+ supported agents via the [`skills` CLI](https://github.com/vercel-labs/skills): ::: code-group ```bash [Quick install] npx skills add kravetsone/taskora/documentation/skills ``` ```bash [Global (all projects)] npx skills add kravetsone/taskora/documentation/skills --global ``` ```bash [Target specific agent] npx skills add kravetsone/taskora/documentation/skills --agent cursor # or: --agent claude-code, --agent windsurf, --agent cline, --agent codex, ... ``` ```bash [Install without prompts] npx skills add kravetsone/taskora/documentation/skills --all ``` ::: The CLI detects which agents you have installed and syncs the skill into the right directory for each. For a full list of supported targets, run `npx skills add --help`. ### Agent-specific install paths Agent Skills live in a well-known directory per tool. 
The `skills` CLI handles this automatically, but if you prefer manual installation, here's where to put `SKILL.md`: | Agent | Project-local path | Global path | |---|---|---| | Claude Code | `.claude/skills/using-taskora/` | `~/.claude/skills/using-taskora/` | | Cursor | `.cursor/skills/using-taskora/` | `~/.cursor/skills/using-taskora/` | | Windsurf | `.windsurf/skills/using-taskora/` | `~/.windsurf/skills/using-taskora/` | | Cline | `.cline/skills/using-taskora/` | `~/.cline/skills/using-taskora/` | | Continue | `.continue/skills/using-taskora/` | `~/.continue/skills/using-taskora/` | | Codex / GitHub Copilot / Gemini CLI / Zed / Aider / Goose / ... | see [vercel-labs/skills](https://github.com/vercel-labs/skills) | same | ### Manual install (copy one file) ```bash mkdir -p .claude/skills/using-taskora # or your agent's path curl -o .claude/skills/using-taskora/SKILL.md \ https://raw.githubusercontent.com/kravetsone/taskora/main/documentation/skills/using-taskora/SKILL.md ``` That's it — `SKILL.md` is a single self-contained file. No dependencies, no build step. ## Alternative: LLM-friendly docs If your AI tool doesn't yet support Agent Skills (or you're using ChatGPT / Claude Desktop / a custom RAG pipeline), taskora's documentation site publishes machine-readable variants following the [llmstxt.org](https://llmstxt.org) standard. These are generated at build time by [`vitepress-plugin-llms`](https://github.com/okineadev/vitepress-plugin-llms) — the plugin scans the VitePress source tree and produces the files automatically on every docs build, so they always match the current taskora version. | File | Description | |---|---| | [`/llms.txt`](/llms.txt) | Index file — table of contents with a link + frontmatter description for every documentation page. Small, low-token, ideal for RAG systems to pick what to fetch. | | [`/llms-full.txt`](/llms-full.txt) | Full site concatenated into one plain-text file. 
Paste the whole thing into an LLM's context window for comprehensive grounding. | | `/<page>.md` | Per-page raw markdown. Append `.md` to any documentation URL (e.g. [`/features/workflows.md`](/features/workflows.md)) to fetch the source markdown without the VitePress chrome. | **How to use them:** ::: code-group ```bash [curl — download once] # Get the entire documentation as one file curl -O https://kravetsone.github.io/taskora/llms-full.txt # Or just the index to decide what pages to fetch curl -O https://kravetsone.github.io/taskora/llms.txt ``` ```ts [Programmatic fetch] // Load taskora docs into your own AI tool's context const docs = await fetch("https://kravetsone.github.io/taskora/llms-full.txt") .then(r => r.text()) // Or fetch a single page as markdown const workflowsDoc = await fetch( "https://kravetsone.github.io/taskora/features/workflows.md" ).then(r => r.text()) ``` ```text [ChatGPT / Claude Desktop] Paste this into your prompt: "Here is the complete taskora documentation: [paste llms-full.txt contents] Now help me build a task that..." ``` ::: **What gets included:** The plugin walks every `.md` file under `documentation/` and respects frontmatter. Each page entry in `llms.txt` uses the page's title and `description` frontmatter field. Content inside `<llm-only>` tags appears **only** in the generated LLM files; content in `<llm-exclude>` is stripped from LLM output but still shown to human readers. **Skill vs llms-full.txt — which to use?** | | `/taskora` skill | `llms-full.txt` | |---|---|---| | Size | ~30 KB (curated) | ~200+ KB (full docs) | | Activation | Automatic on relevant prompts | Manual paste / explicit fetch | | Content | Quick reference + flows + best practices | Every page verbatim | | Use when | You use a supported agent (Claude Code, Cursor, Windsurf, ...)
| You use ChatGPT, custom RAG, or want exhaustive detail | | Updates | Edit `documentation/skills/using-taskora/SKILL.md` | Auto-regenerated from all docs | Use the skill for day-to-day coding — it's designed to fit in context without bloat. Fall back to `llms-full.txt` when you need deeper detail on a specific subsystem. ## What the skill covers | Area | Topics | |---|---| | **Setup** | `createTaskora()`, `redisAdapter()` (ioredis / Bun variants), `memoryAdapter()`, adapter pattern | | **Tasks** | `app.task()`, handler signature, options, `Task` | | **Dispatching** | `dispatch()`, `dispatchMany()`, `ResultHandle`, dispatch options | | **Context** | `ctx.id`, `ctx.signal`, `ctx.progress()`, `ctx.log`, `ctx.heartbeat()`, `ctx.retry()` | | **Retry** | `RetryConfig`, backoff strategies, `retryOn`/`noRetryOn`, `RetryError`, `TimeoutError` | | **Schemas** | Standard Schema validation, input/output, Zod/Valibot/ArkType | | **Versioning** | `version`, `since`, `migrate` (tuple + record), `into()` helper, inspector | | **Scheduling** | `app.schedule()`, inline schedules, cron, duration type, missed policy, leader election | | **Workflows** | `chain()`, `group()`, `chord()`, `.pipe()`, `.map()`, `.chunk()`, `WorkflowHandle`, DAG model | | **Events** | Task events, app events, `subscribe()`, default error logging | | **Middleware** | `app.use()`, per-task middleware, Koa-style onion, `MiddlewareContext` | | **Flow control** | Debounce, throttle, deduplicate, TTL, singleton, concurrency key, collect | | **Cancellation** | `handle.cancel()`, `onCancel` hook, `CancelledError`, pub/sub detection | | **Inspector** | `active()`, `waiting()`, `delayed()`, `stats()`, `find()`, typed variants | | **DLQ** | `deadLetters.list()`, `.retry()`, `.retryAll()` | | **Testing** | `createTestRunner()`, `run()` vs `execute()`, from-instance mode, workflow testing | | **Board** | `createBoard()`, REST API, SSE, framework integration | | **Flows** | Job lifecycle, worker pipeline, retry decision 
tree, workflow DAG, scheduling, stall detection | ## How it works When the skill is installed, your AI agent automatically activates it when you: * Ask about taskora APIs or patterns * Write or modify task handlers * Set up scheduling, workflows, or flow control * Debug retry behavior or job state issues * Write tests using `taskora/test` The skill is a single markdown file (`SKILL.md`) with structured YAML frontmatter. The agent reads it on activation and uses the reference to generate accurate, convention-following code. ## Example prompts Once the skill is installed, try: * *"Create a task that processes images with retry on network errors and progress reporting"* * *"Set up a workflow: fetch data from 3 APIs in parallel, then merge results"* * *"Add debounced reindexing that batches updates within 5s"* * *"Write tests for my order processing task including retry scenarios"* * *"Configure scheduling with cron for daily cleanup at 2am UTC"* --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/batch-processing.md' --- # Batch Processing Collect tasks accumulate individual items into batches before processing. The handler receives an array of items. 
## Configuration ```ts const batchInsertTask = taskora.task("batch-insert", { collect: { key: "db-inserts", delay: "2s", // debounce: flush 2s after last item maxSize: 100, // flush immediately at 100 items maxWait: "10s", // hard deadline: flush 10s after first item }, handler: async (items: { table: string; row: Record<string, unknown> }[], ctx) => { ctx.log.info(`Batch inserting ${items.length} rows`) await db.batchInsert(items) return { inserted: items.length } }, }) ``` ## Dispatching Items Dispatch individual items as usual — they accumulate automatically: ```ts batchInsertTask.dispatch({ table: "events", row: { type: "click", ts: Date.now() } }) batchInsertTask.dispatch({ table: "events", row: { type: "view", ts: Date.now() } }) batchInsertTask.dispatch({ table: "events", row: { type: "scroll", ts: Date.now() } }) // → handler receives all 3 items as an array (or more, depending on timing) ``` ## Flush Triggers Three triggers compete — whichever fires first causes the flush: | Trigger | Behavior | Resets on new item? | |---|---|---| | `delay` (debounce) | Flush after N ms of inactivity | Yes | | `maxSize` | Flush immediately when buffer reaches this size | N/A | | `maxWait` | Hard deadline since first item after last flush | No | ### Debounce (`delay`) The timer resets every time a new item arrives. If items keep coming, the flush is delayed until there's a gap. ### Max Size Immediate flush when the buffer reaches `maxSize`. This is the "backpressure" trigger — it prevents unbounded memory growth. ### Max Wait Hard deadline since the first item arrived after the last flush. Guarantees a maximum latency regardless of incoming rate.
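The race between the three triggers can be modeled in a few lines. This is an illustrative sketch, not taskora's implementation — `firstFlush` is a hypothetical helper that, given item arrival timestamps, predicts when the first flush fires and which trigger wins:

```ts
type Flush = { at: number; reason: "delay" | "maxSize" | "maxWait" }

function firstFlush(
  arrivals: number[], // sorted arrival times in ms
  opts: { delay: number; maxSize: number; maxWait: number },
): Flush {
  const first = arrivals[0]
  if (opts.maxSize === 1) return { at: first, reason: "maxSize" }
  const hardDeadline = first + opts.maxWait // maxWait: never resets
  let debounceAt = first + opts.delay       // delay: resets on every item
  for (let i = 1; i < arrivals.length; i++) {
    const t = arrivals[i]
    // A flush already fired before this item arrived — stop simulating.
    if (t >= debounceAt || t > hardDeadline) break
    // Buffer reaches maxSize with this item → immediate flush.
    if (i + 1 >= opts.maxSize) return { at: t, reason: "maxSize" }
    debounceAt = t + opts.delay
  }
  return debounceAt <= hardDeadline
    ? { at: debounceAt, reason: "delay" }
    : { at: hardDeadline, reason: "maxWait" }
}

firstFlush([0, 1000, 2000], { delay: 2000, maxSize: 100, maxWait: 10_000 })
// → { at: 4000, reason: "delay" } — debounce reset by each item

firstFlush([0, 10, 20], { delay: 2000, maxSize: 3, maxWait: 10_000 })
// → { at: 20, reason: "maxSize" } — third item fills the buffer
```

With a steady stream arriving faster than `delay`, neither debounce nor `maxSize` fires, and `maxWait` caps the latency — which is exactly why all three knobs exist.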
## Dynamic Collect Key The `key` can be a function for per-item routing: ```ts const batchByRegionTask = taskora.task("batch-by-region", { collect: { key: (data: { region: string }) => `region:${data.region}`, delay: "5s", maxSize: 50, }, handler: async (items, ctx) => { // items are grouped by region await processRegionBatch(items) }, }) ``` ## Peeking the Buffer Sometimes you need to read what's sitting in the buffer *without* draining it — for example, to surface unflushed data alongside already-processed data in a live query path. ```ts // Read-only snapshot of the current buffer — returns items in dispatch // order (oldest → newest). Does not drain, does not reset the debounce // timer, does not change when the flush fires. const pending = await ingestMessagesTask.peekCollect(`chat:${chatId}`) // Stats-only view — cheaper than peekCollect because it doesn't read // payloads. Returns null when no buffer exists for the key. const info = await ingestMessagesTask.inspectCollect(`chat:${chatId}`) // → { count: 12, oldestAt: 1712678400000, newestAt: 1712678520000 } | null ``` ### Live-context use case A chat ingestion pipeline buffers group messages via `collect` and batches them into an LLM extraction job that writes decisions, risks, and todos into long-term project memory. The same pipeline also answers user questions in the same chat ("what did Kolya just say about auth?") and needs to include the most recent minutes of chat in the prompt. The challenge: messages from the current collect cycle haven't been extracted yet — they're not in long-term memory. But they're sitting in the collect buffer. 
`peekCollect` gives the Q\&A path read-only access to that unflushed window without double-writing to a parallel Redis list, without a separate TTL to keep in sync, and without any risk of disturbing the pending flush: ```ts async function answerWithLiveContext(chatId: string, question: string) { const [longTerm, pending] = await Promise.all([ memory.search(chatId, question), ingestMessagesTask.peekCollect(`chat:${chatId}`), ]) return llm.complete(buildPrompt({ longTerm, pending, question })) } ``` ### Semantics * **Non-destructive.** Peek never alters buffer state — no `LPOP`, no TTL reset, no side effects that would change when the flush fires. * **Snapshot consistency.** The underlying read is a single atomic command (Redis `LRANGE` / memory `slice`), so the returned array always reflects a coherent point in time, even under concurrent dispatches or a flush running in parallel. * **Ownership boundary.** Once the handler has claimed the batch — i.e. `moveToActive` has drained the items list into the job — `peekCollect` returns `[]` and `inspectCollect` returns `null`. This preserves the invariant that items belong to *either* the buffer *or* the handler, never both. * **Empty array on any "no buffer" state.** Never dispatched to, already flushed, or the buffer was just drained — callers don't need to distinguish these cases. * **Throws on non-collect tasks.** Calling `peekCollect` / `inspectCollect` on a task without a `collect` config throws a `TaskoraError` — silently returning `[]` would mask a config bug. * **Deserialized.** Items come back as `TInput[]` (via the task's serializer), not raw serialized blobs. Individual deserialization failures are skipped so one bad item can't hide the rest of the snapshot. * **Dynamic `collect.key`.** When `key` is a function, pass the already-resolved string to `peekCollect` — same as what you'd compute for logging or tracing. ### What about retaining flushed history? 
A tempting-looking knob would be `collect: { retain: { size, ttl } }` — keep the last N drained items around so callers can peek them after the flush. **Taskora deliberately doesn't offer this.** Once a batch has been drained, those items are the handler's responsibility — they've been processed, extracted, and written to whatever downstream storage the handler owns. A retain list would duplicate that data with a looser TTL, creating overlap where the same items appear in both "raw retained" and "persisted derivative" forms, forcing consumers to de-duplicate. The right boundary is: **`collect` holds items that haven't been processed yet; once processed, they belong to the handler's output storage.** `peekCollect` preserves that boundary. `retain` would violate it. ## Limitations * Collect tasks are **mutually exclusive** with debounce, throttle, and deduplicate dispatch options * `dispatch()` returns a lightweight `ResultHandle` (push confirmation only — no per-item result tracking) * Items accumulate in Redis as a list per key ## Redis Implementation Under the hood: 1. Each `dispatch()` pushes the item to a Redis list (`collect:{key}:items`) 2. A sentinel delayed job tracks the debounce timer (replaced on each dispatch) 3. `maxSize` triggers an immediate flush inline in the Lua script 4. At dequeue time (`moveToActive.lua`), the worker drains the buffer into the job's `:data` key --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/board.md' description: >- @taskora/board — batteries-included admin UI for inspecting tasks, jobs, workflows, schedules, and DLQ in real time. Hono-based, framework-agnostic, with SSE live updates. Shipped as a separate package so it never bloats the core bundle. --- # Board — Admin Dashboard `@taskora/board` is a full-featured, batteries-included admin dashboard for taskora. 
It ships as a pre-built React SPA served by a [Hono](https://hono.dev) backend, so there is **no build step** on your side — import it, mount it, done. It's a **separate package** — install it alongside `taskora` only if you want the dashboard. No Hono dependency, no static assets, no board code lands in your bundle if you don't. Unlike bull-board (which focuses on queues) or Flower (Celery-only), the taskora board is **task-centric** and exposes everything taskora actually does: workflow DAGs, schedules, migrations, flow control, retention, DLQ. ![taskora board per-task view — stat cards, hourly throughput chart, state tabs, and a jobs table with a real retry error surfaced inline](/board/task-detail.jpg) ## Why a dashboard? `taskora.inspect()` gives you programmatic access to every job, but during development and incident response you want eyes on the system. The board covers: * **Observe** — real-time task counts, throughput charts, job timelines * **Debug** — drill into individual jobs (data, result, error, logs, progress, retry history) * **Visualize** — workflow DAG rendering with per-node state colors * **Manage** — retry failed jobs, cancel active jobs, pause/resume schedules, clean queues, retry entire DLQ * **Evolve** — version distribution per task, `canBumpSince` indicator for safe migration pruning ## Installation `@taskora/board` is a separate package with `taskora` and `hono` as peer dependencies. Install both together: ::: pm-add @taskora/board hono ::: The pre-built React SPA and its dependencies (Recharts, @xyflow/react, Tailwind) are **bundled inside the `@taskora/board` package** — you do not install them yourself. If you never install `@taskora/board`, none of that code reaches your bundle. ## Quick start ```ts import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" import { createBoard } from "@taskora/board" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), }) // ... 
// define tasks, workers ... ``` const board = createBoard(taskora) board.listen(3000) // → taskora board listening on http://localhost:3000/board ``` Open `http://localhost:3000/board` in your browser. That's it. ## The `Board` interface `createBoard(app, options?)` returns a `Board` object with four ways to serve the UI: ```ts interface Board { app: Hono // the raw Hono instance fetch: (req: Request) => Response | Promise<Response> // Web standard fetch handler handler: (req, res) => void // Node.js-style handler (requires @hono/node-server) listen: (port: number) => void // standalone server (Bun / Deno only) } ``` Pick whichever fits your host: ::: code-group ```ts [Standalone (Bun / Deno)] const board = createBoard(taskora) board.listen(3000) ``` ```ts [Bun.serve] Bun.serve({ port: 3000, fetch: board.fetch, }) ``` ```ts [Deno.serve] Deno.serve({ port: 3000 }, board.fetch) ``` ```ts [Hono (mount under route)] import { Hono } from "hono" const app = new Hono() app.route("/admin/taskora", board.app) ``` ```ts [Express / Fastify / Koa (Node.js)] import { serve } from "@hono/node-server" // Standalone Node.js server serve({ fetch: board.fetch, port: 3000 }) // Or as Express middleware (via @hono/node-server helpers) import { createAdaptorServer } from "@hono/node-server" const server = createAdaptorServer({ fetch: board.fetch }) expressApp.use("/admin/taskora", (req, res) => server.emit("request", req, res)) ``` ::: ::: tip Framework parity Anything that speaks the Web `Request`/`Response` standard (Bun, Deno, Cloudflare Workers, Hono, Vercel Edge, …) can mount `board.fetch` directly. For Node.js-native frameworks (Express, Fastify, Koa, Next.js API routes), wrap it with [`@hono/node-server`](https://github.com/honojs/node-server). ::: ## Options ```ts createBoard(taskora, { basePath: "/admin/taskora", // default: "/board" readOnly: false, // hide mutation buttons + reject POST/PUT/DELETE auth: async (req) => { /* ...
*/ }, // per-request auth middleware title: "My Queue", // browser tab title logo: "/custom-logo.svg", // header logo URL favicon: "/favicon.ico", theme: "auto", // "light" | "dark" | "auto" refreshInterval: 2000, // stats polling fallback (ms) — SSE is primary redact: ["password", "apiKey", "ssn"], // deep field redaction for job data/result cors: { origin: "*" }, formatters: { data: (data, task) => { /* custom render pre-processing */ return data }, result: (result, task) => result, }, }) ``` ### Authentication The board accepts two `auth` shapes. Pick one; they cannot be combined. #### Session auth (recommended) Drop in a single config object and the board mounts a login page, signs a session cookie, and guards the entire dashboard — SPA HTML, API, and SSE. Inspired by AdminJS: one provider, username + password, nothing else to set up. ```ts createBoard(taskora, { auth: { // HMAC signing secret, min 32 chars — generate with: // openssl rand -base64 48 cookiePassword: process.env.BOARD_COOKIE_SECRET!, // Called on each login attempt. Return a truthy user object to accept, // or null to reject. Your own code owns credential storage (env, DB, LDAP, …). authenticate: async ({ username, password }) => { if (username === "admin" && password === process.env.BOARD_PASSWORD) { return { id: "admin" } } return null }, // Optional cookieName: "taskora_board_session", // default // sessionTtl defaults to `false` — sessions do not expire server-side // and the cookie is a browser-session cookie (cleared on browser close). // Pass a Duration ("30s" | "5m" | "2h" | "1d" | ms number) to opt into rolling expiry. // sessionTtl: "7d", }, }) ``` What you get: * **`GET /board/login`** — server-rendered login form (no SPA rebuild required). * **`POST /board/auth/login`** — verifies creds via `authenticate`, sets a signed, `HttpOnly`, `SameSite=Lax` cookie, then redirects. * **`POST /board/auth/logout`** — clears the cookie and redirects back to the login page. 
The sidebar shows a `[ logout ]` button automatically when session auth is enabled. * Unauthenticated requests to any SPA path (`/board/*`) are redirected to the login page with a `?redirect=` parameter so users land back where they started. * Unauthenticated requests to the JSON API (`/board/api/*`) receive `401 {"error":"Unauthorized"}`. Security notes: * The session cookie is **stateless** — the user object (and optionally an expiry) is HMAC-signed into the cookie itself. No Redis state, works with every adapter. The trade-off: logout only clears the client cookie; a stolen cookie stays valid until its `exp` (or forever, if `sessionTtl` is disabled). Set `sessionTtl` to a tight Duration if that matters to you. * `SameSite=Lax` + `HttpOnly` defeats cross-site POST and prevents JS access. No separate CSRF token is required. * The `Secure` flag is set automatically when the request reaches the board over HTTPS. * Password hashing, rate limiting, and lockout live inside your `authenticate` function — the board has no opinion about how credentials are stored. #### Custom auth hook If you already ship JWT, OAuth, or your framework's own session middleware, pass a function instead. It runs on every `/board/api/*` request. Return `undefined` to allow, or a `Response` to short-circuit: ```ts createBoard(taskora, { auth: async (req) => { const token = req.headers.get("authorization")?.replace("Bearer ", "") if (!token || !(await verifyJwt(token))) { return new Response("Unauthorized", { status: 401 }) } // return nothing → request proceeds }, }) ``` The function form guards **only the API** — the SPA HTML and static assets stay public, so your own front door (reverse proxy, gateway, framework middleware) is expected to enforce access to the dashboard shell. This matches the pre-session-auth behavior and is preserved for backward compatibility. For public demos or local dev, pair `readOnly: true` with no auth. ### Field redaction Job payloads often contain secrets. 
The `redact` option walks every field in `data`, `result`, `error`, and `logs.meta`, masking any key whose name matches (case-insensitive): ```ts createBoard(taskora, { redact: ["password", "secret", "token", "apiKey", "ssn", "creditCard"], }) ``` Nested objects and arrays are walked recursively — `user.auth.password` and `payments[0].creditCard` are both redacted. Redaction happens **on the server before the response leaves the process**, so secrets never touch the browser. ### Read-only mode ```ts createBoard(taskora, { readOnly: true }) ``` Mutation buttons are hidden in the UI and all `POST`/`PUT`/`DELETE` endpoints reject with `403`. Safe to expose behind a read-only employee SSO without risking accidental retries. ## What you get ### Overview dashboard Global stat cards (waiting / active / delayed / failed / completed / cancelled / expired), a 24-hour throughput chart powered by Recharts, a task table with per-task counts and health indicators, and Redis server info (version, memory, uptime, connected state). Throughput is backed by per-minute `INCR` counters stamped in `ack.lua` / `fail.lua` with a 24h TTL, so the chart is accurate without any external time-series database. ### Task detail Per-task view with state tabs (waiting / active / delayed / retrying / failed / completed / cancelled / expired), a paginated job table, and bulk actions — retry-all and clean-by-age. 
![Task detail view for send-email — stat cards, throughput chart, state tabs, and a job table showing completed jobs and retry errors](/board/task-detail.jpg)

### Job detail

Everything about a single job on one screen:

* **Timeline** — reconstructed from `ts` → `processedOn` → `finishedOn` (state transitions with absolute + relative timestamps)
* **Data / Result / Error / Logs** tabs
* **Progress bar** — renders numeric progress or arbitrary progress objects
* **Attempt history** — current attempt number vs `maxAttempts`
* **Actions** — retry, cancel (`handle.cancel()` equivalent)
* **Workflow link** — jumps to the parent workflow DAG if the job was part of one

### Workflow DAG visualization

Built on [`@xyflow/react`](https://reactflow.dev) with auto-layout (BFS layering). Nodes are colored by state, edges animate when the downstream node is active, and clicking a node opens its job detail view. Chains, groups, and chords all render correctly — including nested compositions. Cancel an entire workflow from the header and watch the cascade propagate.

### Schedule management

List all registered schedules with their cron/interval, next-run countdown (relative time display), last run status, and job ID. Actions: pause, resume, trigger-now, delete. The trigger-now button dispatches the task immediately without disturbing the scheduled cadence.

### DLQ view

Failed jobs grouped by error-message frequency so you can spot the top failure modes at a glance. Per-job retry and a global "retry all" button backed by the atomic `retryAllDLQ.lua` script (batched 100 at a time).

### Migrations view

Version distribution bar chart per task, showing how many jobs are queued or delayed at each `_v` version. The `canBumpSince` indicator tells you whether it is safe to raise `since` and drop old migrations — taskora checks queue / delayed / retrying sets for any job still stamped with a version below the proposed floor.
See [Versioning & Migrations](/features/versioning) for the underlying mechanics.

### Real-time updates (SSE)

The `/api/events` endpoint streams Server-Sent Events from `adapter.subscribe()` — every `active`, `completed`, `failed`, `retrying`, `cancelled`, `stalled`, and `progress` event flows to the browser in real time. A periodic `stats:update` frame refreshes stat cards every few seconds even when no jobs are moving.

SSE is the **primary** transport — `refreshInterval` polling only kicks in as a fallback if the EventSource disconnects.

### UX niceties

* **Dark / light / auto theme** via CSS custom properties (follows `prefers-color-scheme`)
* **Keyboard shortcuts** — `1`–`5` for top-level navigation, `/` for the global job-ID search bar
* **Global job search** — paste a job ID from a log line, land on the detail view

## REST API

Under the hood, the SPA talks to a plain REST API mounted at `${basePath}/api`. You can call these endpoints directly from your own tooling — they are not considered internal.
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/overview` | Global stat cards, tasks, Redis info |
| `GET` | `/api/tasks/:task/jobs?state=&limit=&offset=` | Paginated jobs for a task |
| `GET` | `/api/tasks/:task/stats` | Queue counts for a task |
| `GET` | `/api/tasks/:task/migrations` | Version distribution + `canBumpSince` |
| `POST` | `/api/tasks/:task/retry-all` | Retry every failed job for a task |
| `POST` | `/api/tasks/:task/clean` | Trim completed/failed sets |
| `GET` | `/api/jobs/:jobId` | Full `JobDetailResponse` with timeline, logs, workflow link |
| `POST` | `/api/jobs/:jobId/retry` | Retry a single job (must be in failed/cancelled state) |
| `POST` | `/api/jobs/:jobId/cancel` | Cancel an active or waiting job |
| `GET` | `/api/schedules` | List all schedules |
| `POST` | `/api/schedules/:name/pause` | Pause a schedule |
| `POST` | `/api/schedules/:name/resume` | Resume a schedule |
| `POST` | `/api/schedules/:name/trigger` | Dispatch immediately without advancing next-run |
| `PUT` | `/api/schedules/:name` | Update schedule config |
| `DELETE` | `/api/schedules/:name` | Remove schedule |
| `GET` | `/api/workflows` | List active/recent workflows |
| `GET` | `/api/workflows/:workflowId` | Workflow DAG graph + per-node state |
| `POST` | `/api/workflows/:workflowId/cancel` | Cascade cancel workflow |
| `GET` | `/api/dlq` | Dead-letter queue jobs with grouping |
| `POST` | `/api/dlq/:jobId/retry` | Retry single DLQ entry |
| `POST` | `/api/dlq/retry-all` | Retry entire DLQ (batched) |
| `GET` | `/api/throughput` | 24h per-minute throughput buckets |
| `GET` | `/api/events` | Server-Sent Events stream |
| `GET` | `/api/config` | Static config (title, logo, theme, readOnly flag) |

All mutation endpoints respect `readOnly` and `auth`.
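The SSE stream at `/api/events` is plain `text/event-stream`, so scripts can consume it just as well as the browser's `EventSource`. Below is a minimal sketch that tails the stream with `fetch` and a hand-rolled frame parser — the base URL is an assumption; adjust it to your `basePath` and port:

```typescript
// Minimal SSE client for the board's /api/events endpoint.
// Frames are separated by a blank line; each frame carries an
// optional "event:" line and one or more "data:" lines.
export interface SseFrame {
  event: string
  data: string
}

// Parse complete frames out of a text buffer; return the leftover
// so a partial frame survives until the next network chunk arrives.
export function parseSseChunk(buffer: string): { frames: SseFrame[]; rest: string } {
  const frames: SseFrame[] = []
  const parts = buffer.split("\n\n")
  const rest = parts.pop() ?? "" // last part may be an incomplete frame
  for (const part of parts) {
    let event = "message"
    const data: string[] = []
    for (const line of part.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim()
      else if (line.startsWith("data:")) data.push(line.slice(5).trim())
    }
    if (data.length > 0) frames.push({ event, data: data.join("\n") })
  }
  return { frames, rest }
}

// Tail job events from a running board (URL is an assumption).
export async function tailBoardEvents(base = "http://localhost:3000/board") {
  const res = await fetch(`${base}/api/events`, {
    headers: { accept: "text/event-stream" },
  })
  let buffer = ""
  for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
    buffer += new TextDecoder().decode(chunk)
    const parsed = parseSseChunk(buffer)
    buffer = parsed.rest
    for (const frame of parsed.frames) console.log(frame.event, frame.data)
  }
}
```

If the board sits behind a proxy, remember that the SSE connection needs response buffering disabled on the proxy.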
## Recipes

### Mount behind nginx / reverse proxy

Set `basePath` to match your upstream route so generated asset URLs are correct:

```ts
const board = createBoard(taskora, { basePath: "/internal/taskora" })
Bun.serve({ fetch: board.fetch, port: 3000 })
```

```nginx
location /internal/taskora/ {
  proxy_pass http://taskora-host:3000;
  proxy_set_header Host $host;
  proxy_buffering off; # important for SSE
}
```

### Share a port with your API

If you already have a Hono app serving your public API, mount the board on a sub-route instead of running a second port:

```ts
import { Hono } from "hono"

const api = new Hono()
api.get("/health", (c) => c.json({ ok: true }))
// ... more routes ...

api.route("/admin/taskora", createBoard(taskora, {
  basePath: "/admin/taskora",
  auth: requireAdmin,
}).app)

Bun.serve({ fetch: api.fetch, port: 8080 })
```

### Redact secrets for a specific task only

`redact` is global, but `formatters` gives you per-task control:

```ts
createBoard(taskora, {
  formatters: {
    data: (data, task) => {
      if (task === "send-email") {
        const { body, ...rest } = data as any
        return { ...rest, body: "[REDACTED]" }
      }
      return data
    },
  },
})
```

### Expose only to internal network

```ts
createBoard(taskora, {
  readOnly: true,
  auth: async (req) => {
    const ip = req.headers.get("x-forwarded-for") ??
"" if (!ip.startsWith("10.") && !ip.startsWith("192.168.")) { return new Response("Forbidden", { status: 403 }) } }, }) ``` ## Production checklist * \[ ] Set `auth` — never expose the board publicly without it * \[ ] Configure `redact` for any sensitive fields in job payloads * \[ ] Consider `readOnly: true` for broad internal visibility with narrow write access * \[ ] Mount behind HTTPS (board has no TLS — that's your proxy's job) * \[ ] If using nginx, disable `proxy_buffering` on the board location so SSE works * \[ ] Pin a stable `basePath` — changing it invalidates cached asset URLs in browsers ## See also * [Inspector](/operations/inspector) — the programmatic API the board is built on * [Retention & DLQ](/operations/dead-letter-queue) — what the DLQ view actually operates on * [Monitoring](/operations/monitoring) — metrics pipelines for long-term observability (the board is not a replacement for Grafana) * [Workflows](/features/workflows) — the DAG model the workflow view visualizes * [Versioning & Migrations](/features/versioning) — what the migrations view reports on --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/cancellation.md' --- # Cancellation Taskora supports graceful job cancellation with instant notification via Redis pub/sub. ## Cancelling a Job ```ts const handle = longRunningTask.dispatch(data) // Later... await handle.cancel({ reason: "User requested cancellation" }) ``` ## How It Works Cancellation depends on the job's current state: | State | Behavior | |---|---| | `waiting` / `delayed` / `retrying` | Moved to `cancelled` immediately (atomic Lua) | | `active` | Sets `cancelledAt` flag + PUBLISH to cancel channel. Worker picks it up. | | `completed` / `failed` | Not cancellable (`"not_cancellable"`) | ### Instant Cancel for Active Jobs Active jobs are cancelled via Redis pub/sub — no polling delay: 1. `cancel.lua` sets a `cancelledAt` flag on the job hash and PUBLISHes the job ID 2. 
The worker subscribes to the cancel channel on startup
3. When the message arrives, `controller.abort("cancelled")` fires immediately
4. `ctx.signal` is aborted — any `fetch()`, `setTimeout`, or signal-aware operation stops

Fallback: if the pub/sub message is missed, `extendLock()` detects the flag and returns `"cancelled"`.

## Handling Cancellation in Tasks

### Using `ctx.signal`

The simplest approach — pass the signal to APIs that support it:

```ts
taskora.task("download-file", async (data: { url: string }, ctx) => {
  const res = await fetch(data.url, { signal: ctx.signal })
  return await res.arrayBuffer()
})
```

### Using `onCancel` Hook

For custom cleanup logic, define an `onCancel` hook:

```ts
taskora.task("transcode-video", {
  onCancel: async (data, ctx) => {
    // Clean up temporary files
    await fs.unlink(`/tmp/${ctx.id}.mp4`)
    ctx.log.info("Cleaned up temporary files")
  },
  handler: async (data: { videoUrl: string }, ctx) => {
    const tmpPath = `/tmp/${ctx.id}.mp4`
    await downloadVideo(data.videoUrl, tmpPath, { signal: ctx.signal })
    return await transcode(tmpPath)
  },
})
```

The `onCancel` hook runs **after** the handler is aborted but **before** the job is finalized. The context's signal is already aborted at this point.
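Note that an aborted signal only stops operations that actually accept one. For CPU-bound loops that never call a signal-aware API, check the flag at a checkpoint between units of work. A sketch using a plain `AbortSignal` — the doubling step is a hypothetical stand-in for real work, and inside a taskora handler the signal would be `ctx.signal`:

```typescript
// Cooperative cancellation for CPU-bound work: poll the abort flag
// at a checkpoint between work units instead of relying on
// signal-aware APIs like fetch(). Illustrative sketch only.
export function processAll(items: number[], signal: AbortSignal): number[] {
  const results: number[] = []
  for (const item of items) {
    // Checkpoint: stop as soon as cancellation is observed.
    if (signal.aborted) throw new Error(`cancelled: ${String(signal.reason)}`)
    results.push(item * 2) // stand-in for expensive work
  }
  return results
}
```

`signal.throwIfAborted()` is an equivalent shorthand that rethrows the abort reason directly.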
## Cancelled State

`"cancelled"` is a distinct terminal state — separate from `"failed"`:

```ts
try {
  await handle.result
} catch (err) {
  if (err instanceof CancelledError) {
    console.log(err.reason) // "User requested cancellation"
  }
}
```

## Events

```ts
// Per-task
task.on("cancelled", ({ id, reason }) => {
  console.log(`Job ${id} cancelled: ${reason}`)
})

// App-wide
taskora.on("task:cancelled", ({ id, task, reason }) => {
  console.log(`${task}:${id} cancelled`)
})
```

---

---
url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/App.md'
---

[Taskora API Reference](../../index) / [index](../index) / App

# Class: App

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:183

## Constructors

### Constructor

> **new App**(`options`): `App`

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:213

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options` | [`TaskoraOptions`](../interfaces/TaskoraOptions) |

#### Returns

`App`

## Properties

| Property | Modifier | Type | Description | Defined in | | ------ | ------ | ------ | ------ | ------ | | `adapter` | `readonly` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) | - | packages/taskora/dist/app-BkYbyud9.d.mts:184 | | `deadLetters` | `readonly` | [`DeadLetterManager`](DeadLetterManager) | - | packages/taskora/dist/app-BkYbyud9.d.mts:187 | | `retention` | `readonly` | `object` | **`Internal`** — used by Worker for retention trim | packages/taskora/dist/app-BkYbyud9.d.mts:192 | | `retention.completed` | `public` | `object` | - | packages/taskora/dist/app-BkYbyud9.d.mts:193 | | `retention.completed.maxAgeMs` | `public` | `number` | - | packages/taskora/dist/app-BkYbyud9.d.mts:194 | | `retention.completed.maxItems` | `public` | `number` | - | packages/taskora/dist/app-BkYbyud9.d.mts:195 | | `retention.failed` | `public` | `object` | - | packages/taskora/dist/app-BkYbyud9.d.mts:197 | |
`retention.failed.maxItems` | `public` | `number` | - | packages/taskora/dist/app-BkYbyud9.d.mts:199 | | `schedules` | `readonly` | `ScheduleManager` | - | packages/taskora/dist/app-BkYbyud9.d.mts:186 | | `serializer` | `readonly` | [`Serializer`](../namespaces/Taskora/interfaces/Serializer) | - | packages/taskora/dist/app-BkYbyud9.d.mts:185 | ## Methods ### close() > **close**(`options?`): `Promise`<`void`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:288 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | { `timeout?`: `number`; } | | `options.timeout?` | `number` | #### Returns `Promise`<`void`> *** ### ensureConnected() > **ensureConnected**(): `Promise`<`void`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:286 #### Returns `Promise`<`void`> *** ### getRegisteredTasks() > **getRegisteredTasks**(): `IterableIterator`<[`Task`](Task)<`unknown`, `unknown`>> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:283 **`Internal`** — used by TestRunner({ from }) #### Returns `IterableIterator`<[`Task`](Task)<`unknown`, `unknown`>> *** ### getScheduler() > **getScheduler**(): `Scheduler` Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:285 **`Internal`** — used by ScheduleManager #### Returns `Scheduler` *** ### getTaskByName() > **getTaskByName**(`name`): [`Task`](Task)<`unknown`, `unknown`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:279 **`Internal`** — used by ScheduleManager #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | #### Returns [`Task`](Task)<`unknown`, `unknown`> *** ### implement() #### Call Signature > **implement**<`TInput`, `TOutput`>(`contract`, `handler`): [`BoundTask`](BoundTask)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:271 Attach a handler to a task contract. Returns a [BoundTask](BoundTask) ready to dispatch and to run inside this process's worker loop. Three call forms, pick whichever fits: ```ts // 1. 
Bare handler (most common) taskora.implement(sendEmail, async (data, ctx) => { return { messageId: await mailer.send(data) } }) // 2. Handler + worker-side options taskora.implement( processImage, async (data, ctx) => { ... }, { concurrency: 4, middleware: [withTracing()] }, ) // 3. Object form — required for collect tasks and preferred when // onCancel is heavy or you want all config in one place taskora.implement(batchEmail, { collect: { key: "user-emails", delay: "5s" }, handler: async (items, ctx) => { ... }, }) ``` Throws if the same contract is implemented twice in the same process. If [register](#register) was called first, the contract's existing `BoundTask` is reused and its handler is upgraded in place — existing `BoundTask` references continue to work. ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `contract` | [`TaskContract`](../interfaces/TaskContract)<`TInput`, `TOutput`> | | `handler` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | ##### Returns [`BoundTask`](BoundTask)<`TInput`, `TOutput`> #### Call Signature > **implement**<`TInput`, `TOutput`>(`contract`, `handler`, `options`): [`BoundTask`](BoundTask)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:272 Attach a handler to a task contract. Returns a [BoundTask](BoundTask) ready to dispatch and to run inside this process's worker loop. Three call forms, pick whichever fits: ```ts // 1. Bare handler (most common) taskora.implement(sendEmail, async (data, ctx) => { return { messageId: await mailer.send(data) } }) // 2. Handler + worker-side options taskora.implement( processImage, async (data, ctx) => { ... }, { concurrency: 4, middleware: [withTracing()] }, ) // 3. 
Object form — required for collect tasks and preferred when // onCancel is heavy or you want all config in one place taskora.implement(batchEmail, { collect: { key: "user-emails", delay: "5s" }, handler: async (items, ctx) => { ... }, }) ``` Throws if the same contract is implemented twice in the same process. If [register](#register) was called first, the contract's existing `BoundTask` is reused and its handler is upgraded in place — existing `BoundTask` references continue to work. ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `contract` | [`TaskContract`](../interfaces/TaskContract)<`TInput`, `TOutput`> | | `handler` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | | `options` | `ImplementOptions` | ##### Returns [`BoundTask`](BoundTask)<`TInput`, `TOutput`> #### Call Signature > **implement**<`TInput`, `TOutput`>(`contract`, `options`): [`BoundTask`](BoundTask)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:273 Attach a handler to a task contract. Returns a [BoundTask](BoundTask) ready to dispatch and to run inside this process's worker loop. Three call forms, pick whichever fits: ```ts // 1. Bare handler (most common) taskora.implement(sendEmail, async (data, ctx) => { return { messageId: await mailer.send(data) } }) // 2. Handler + worker-side options taskora.implement( processImage, async (data, ctx) => { ... }, { concurrency: 4, middleware: [withTracing()] }, ) // 3. Object form — required for collect tasks and preferred when // onCancel is heavy or you want all config in one place taskora.implement(batchEmail, { collect: { key: "user-emails", delay: "5s" }, handler: async (items, ctx) => { ... }, }) ``` Throws if the same contract is implemented twice in the same process. 
If [register](#register) was called first, the contract's existing `BoundTask` is reused and its handler is upgraded in place — existing `BoundTask` references continue to work. ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `contract` | [`TaskContract`](../interfaces/TaskContract)<`TInput`, `TOutput`> | | `options` | `ImplementOptionsWithHandler`<`TInput`, `TOutput`> | `ImplementCollectOptions`<`TInput`, `TOutput`> | ##### Returns [`BoundTask`](BoundTask)<`TInput`, `TOutput`> *** ### inspect() > **inspect**(): [`Inspector`](Inspector) Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:277 #### Returns [`Inspector`](Inspector) *** ### on() > **on**<`K`>(`event`, `handler`): () => `void` Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:214 #### Type Parameters | Type Parameter | | ------ | | `K` *extends* `"task:completed"` | `"task:failed"` | `"task:active"` | `"task:stalled"` | `"task:cancelled"` | `"worker:ready"` | `"worker:error"` | `"worker:closing"` | #### Parameters | Parameter | Type | | ------ | ------ | | `event` | `K` | | `handler` | [`AppEventMap`](../namespaces/Taskora/interfaces/AppEventMap)\[`K`] *extends* `undefined` ? () => `void` : (`data`) => `void` | #### Returns () => `void` *** ### register() > **register**<`TInput`, `TOutput`>(`contract`): [`BoundTask`](BoundTask)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:239 Register a task contract without a handler. Returns a [BoundTask](BoundTask) that can dispatch jobs — intended for producer-only processes where the handler lives in a separate worker process. Idempotent: calling `register()` twice for the same task name returns the same underlying task. If a task with that name already exists (via `app.task()` or a prior `app.implement()`), the existing task is wrapped and its handler is left untouched. 
#### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | #### Parameters | Parameter | Type | | ------ | ------ | | `contract` | [`TaskContract`](../interfaces/TaskContract)<`TInput`, `TOutput`> | #### Returns [`BoundTask`](BoundTask)<`TInput`, `TOutput`> #### Example ```ts // producer.ts import { sendEmailContract } from "./contracts.js" const sendEmail = taskora.register(sendEmailContract) await sendEmail.dispatch({ to: "a@b.c", subject: "Welcome" }) ``` *** ### registerExternalTask() > **registerExternalTask**(`task`): `void` Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:281 **`Internal`** — used by TestRunner.importTask() #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](Task)<`unknown`, `unknown`> | #### Returns `void` *** ### schedule() > **schedule**(`name`, `config`): `void` Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:276 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `config` | [`ScheduleConfig`](../namespaces/Taskora/interfaces/ScheduleConfig) | #### Returns `void` *** ### start() > **start**(): `Promise`<`void`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:287 #### Returns `Promise`<`void`> *** ### task() #### Call Signature > **task**<`TInput`, `TOutput`>(`name`, `handler`): [`Task`](Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:216 ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `handler` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | ##### Returns [`Task`](Task)<`TInput`, `TOutput`> #### Call Signature > **task**<`TInput`, `TOutput`>(`name`, `options`): [`Task`](Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:217 ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `name` | 
`string` | | `options` | `CollectTaskOptions`<`TInput`, `TOutput`> | ##### Returns [`Task`](Task)<`TInput`, `TOutput`> #### Call Signature > **task**<`TInput`, `TOutput`>(`name`, `options`): [`Task`](Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:218 ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `options` | `TaskOptionsWithSchema`<`TInput`, `TOutput`> | ##### Returns [`Task`](Task)<`TInput`, `TOutput`> #### Call Signature > **task**<`TInput`, `TOutput`>(`name`, `options`): [`Task`](Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:219 ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `options` | `TaskOptionsWithOutputSchema`<`TInput`, `TOutput`> | ##### Returns [`Task`](Task)<`TInput`, `TOutput`> #### Call Signature > **task**<`TInput`, `TOutput`>(`name`, `options`): [`Task`](Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:220 ##### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ##### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `options` | `TaskOptionsNoSchema`<`TInput`, `TOutput`> | ##### Returns [`Task`](Task)<`TInput`, `TOutput`> *** ### use() > **use**(`middleware`): `this` Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:215 #### Parameters | Parameter | Type | | ------ | ------ | | `middleware` | [`Middleware`](../namespaces/Taskora/type-aliases/Middleware) | #### Returns `this` --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/BoundTask.md' --- [Taskora API Reference](../../index) / [index](../index) / BoundTask # Class: BoundTask\ Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:545 A dispatchable view of a task that was declared via a 
[TaskContract](../interfaces/TaskContract). Returned by `taskora.register(contract)` and `taskora.implement(contract, ...)`. Thin wrapper over the internal `Task` — all dispatch semantics (retries, delays, debounce, throttle, deduplicate, TTL, concurrency) work identically. Producer processes use `BoundTask` to dispatch without importing handler code. Worker processes get the same `BoundTask` back from `implement()` so they can still dispatch from within handlers (e.g. chaining). ## Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ## Constructors ### Constructor > **new BoundTask**<`TInput`, `TOutput`>(`task`): `BoundTask`<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:548 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](Task)<`TInput`, `TOutput`> | #### Returns `BoundTask`<`TInput`, `TOutput`> ## Properties | Property | Modifier | Type | Description | Defined in | | ------ | ------ | ------ | ------ | ------ | | `_task` | `readonly` | [`Task`](Task)<`TInput`, `TOutput`> | **`Internal`** — used by App to share the underlying Task between register/implement | packages/taskora/dist/types-KwmL9wkN.d.mts:547 | ## Accessors ### name #### Get Signature > **get** **name**(): `string` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:550 The task's canonical name (kebab-case, e.g. `"send-email"`). ##### Returns `string` ## Methods ### chunk() > **chunk**(`items`, `options`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:595 Split items into chunks. Each chunk runs in parallel; chunks run sequentially. Useful for rate-limited batch processing. 
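The ordering can be illustrated with plain promises. This is a sketch of the semantics only — the real `chunk()` enqueues jobs and returns a [WorkflowHandle](WorkflowHandle), and the `handler` parameter here is a hypothetical stand-in for the task's implementation:

```typescript
// Semantics sketch for chunk(): items within a chunk run in
// parallel, and each chunk waits for the previous one to finish.
// Not taskora internals — purely illustrative.
export async function runChunked<TIn, TOut>(
  items: TIn[],
  size: number,
  handler: (item: TIn) => Promise<TOut>,
): Promise<TOut[]> {
  const results: TOut[] = []
  for (let i = 0; i < items.length; i += size) {
    const chunk = items.slice(i, i + size) // next sequential batch
    results.push(...(await Promise.all(chunk.map(handler)))) // parallel within the batch
  }
  return results
}
```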
#### Parameters | Parameter | Type | | ------ | ------ | | `items` | `TInput`\[] | | `options` | { `size`: `number`; } | | `options.size` | `number` | #### Returns [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> *** ### dispatch() > **dispatch**(`data`, `options?`): [`ResultHandle`](ResultHandle)<`TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:555 Dispatch a job. Returns a [ResultHandle](ResultHandle) for awaiting the result, querying state, or cancelling. Identical semantics to `Task.dispatch`. #### Parameters | Parameter | Type | | ------ | ------ | | `data` | `TInput` | | `options?` | [`DispatchOptions`](../namespaces/Taskora/interfaces/DispatchOptions) | #### Returns [`ResultHandle`](ResultHandle)<`TOutput`> *** ### dispatchMany() > **dispatchMany**(`jobs`): [`ResultHandle`](ResultHandle)<`TOutput`>\[] Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:557 Dispatch multiple jobs in one call. Returns one [ResultHandle](ResultHandle) per job. #### Parameters | Parameter | Type | | ------ | ------ | | `jobs` | `object`\[] | #### Returns [`ResultHandle`](ResultHandle)<`TOutput`>\[] *** ### inspectCollect() > **inspectCollect**(`collectKey`): `Promise`<[`CollectBufferInfo`](../namespaces/Taskora/interfaces/CollectBufferInfo)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:609 Stats-only view of the current collect buffer for `collectKey`. Returns `null` if no active buffer exists. Throws if the underlying task was not declared with `collect`. See [Task.inspectCollect](Task#inspectcollect). #### Parameters | Parameter | Type | | ------ | ------ | | `collectKey` | `string` | #### Returns `Promise`<[`CollectBufferInfo`](../namespaces/Taskora/interfaces/CollectBufferInfo)> *** ### map() > **map**(`items`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:590 Dispatch one job per item in parallel. Sugar for `group(...items.map(i => this.s(i)))`. 
Returns a [WorkflowHandle](WorkflowHandle) that resolves when all jobs complete. #### Parameters | Parameter | Type | | ------ | ------ | | `items` | `TInput`\[] | #### Returns [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> *** ### on() > **on**<`K`>(`event`, `handler`): () => `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:566 Subscribe to task lifecycle events. Works in producer-only processes — events are delivered via `adapter.subscribe()` on an XREAD connection, independent of whether a worker loop runs locally. #### Type Parameters | Type Parameter | | ------ | | `K` *extends* `"cancelled"` | `"active"` | `"completed"` | `"failed"` | `"retrying"` | `"progress"` | `"stalled"` | #### Parameters | Parameter | Type | | ------ | ------ | | `event` | `K` | | `handler` | (`data`) => `void` | #### Returns () => `void` *** ### peekCollect() > **peekCollect**(`collectKey`): `Promise`<`TInput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:603 Non-destructive read of the current collect buffer for `collectKey`. Returns deserialized input items in dispatch order. Throws if the underlying task was not declared with `collect`. See [Task.peekCollect](Task#peekcollect). #### Parameters | Parameter | Type | | ------ | ------ | | `collectKey` | `string` | #### Returns `Promise`<`TInput`\[]> *** ### s() > **s**(`data?`): [`Signature`](Signature)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:584 Create a [Signature](Signature) — a composable snapshot of this task invocation for use in `chain()`, `group()`, or `chord()` workflows. Contracts must be `register()`ed before composition: the returned signature carries a reference back to the underlying `Task`, which is how the workflow dispatcher resolves adapter and serializer at dispatch time. 
#### Parameters | Parameter | Type | | ------ | ------ | | `data?` | `TInput` | #### Returns [`Signature`](Signature)<`TInput`, `TOutput`> #### Example ```ts import { chain } from "taskora" const fetchUser = taskora.register(fetchUserContract) const sendEmail = taskora.register(sendEmailContract) await chain(fetchUser.s({ id: "42" }), sendEmail.s()).dispatch() ``` --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/CancelledError.md --- [Taskora API Reference](../../index) / [index](../index) / CancelledError # Class: CancelledError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:222 ## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new CancelledError**(`jobId`, `reason?`): `CancelledError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:225 #### Parameters | Parameter | Type | | ------ | ------ | | `jobId` | `string` | | `reason?` | `string` | #### Returns `CancelledError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `jobId` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:223 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `reason?` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:224 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | 
documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. 
const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ChainSignature.md --- [Taskora API Reference](../../index) / [index](../index) / ChainSignature # Class: ChainSignature\ Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:317 A chain of signatures — sequential pipeline. Result of `chain()` or `.pipe()`. 
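The data-flow — each step's output becomes the next step's input — can be sketched with plain async functions. Illustrative only: the real `ChainSignature` enqueues every step as a queued job and resolves through a [WorkflowHandle](WorkflowHandle):

```typescript
// Data-flow sketch for chain(): thread a value through the steps in
// sequence, awaiting each before starting the next. Not taskora
// internals — purely illustrative.
type Step = (input: unknown) => Promise<unknown>

export async function runChain(input: unknown, steps: Step[]): Promise<unknown> {
  let value = input
  for (const step of steps) {
    value = await step(value) // step n's output feeds step n+1
  }
  return value
}
```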
## Type Parameters

| Type Parameter |
| ------ |
| `TInput` |
| `TOutput` |

## Constructors

### Constructor

> **new ChainSignature**<`TInput`, `TOutput`>(`steps`): `ChainSignature`<`TInput`, `TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:322

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `steps` | `AnySignature`\[] |

#### Returns

`ChainSignature`<`TInput`, `TOutput`>

## Properties

| Property | Modifier | Type | Description | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `_tag` | `readonly` | `"chain"` | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:319 |
| `steps` | `readonly` | `AnySignature`\[] | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:321 |

## Methods

### dispatch()

> **dispatch**(`options?`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:324

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | [`WorkflowDispatchOptions`](../interfaces/WorkflowDispatchOptions) |

#### Returns

[`WorkflowHandle`](WorkflowHandle)<`TOutput`>

***

### pipe()

> **pipe**<`TNext`>(`next`): `ChainSignature`<`TInput`, `TNext`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:323

#### Type Parameters

| Type Parameter |
| ------ |
| `TNext` |

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `next` | [`Signature`](Signature)<`TOutput`, `TNext`> |

#### Returns

`ChainSignature`<`TInput`, `TNext`>

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ChordSignature.md
---

[Taskora API Reference](../../index) / [index](../index) / ChordSignature

# Class: ChordSignature\<TOutput\>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:340

A chord — group + callback. Parallel execution, then merge.
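The header-then-callback shape can be sketched without taskora itself; `runChord` below is an illustrative stand-in, not a taskora API:

```typescript
// A chord runs its header signatures in parallel, then feeds the
// collected results into a single callback.
async function runChord<O, R>(
  header: Array<() => Promise<O>>,
  callback: (results: O[]) => Promise<R>,
): Promise<R> {
  const results = await Promise.all(header.map((job) => job()));
  return callback(results);
}

// Three parallel jobs, then one merge step.
const total = runChord(
  [async () => 1, async () => 2, async () => 3],
  async (nums) => nums.reduce((sum, n) => sum + n, 0),
);
// total resolves to 6
```

In the real class, `header` and `callback` are `AnySignature` values and `dispatch()` hands back a `WorkflowHandle` instead of a bare promise.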
## Type Parameters

| Type Parameter |
| ------ |
| `TOutput` |

## Constructors

### Constructor

> **new ChordSignature**<`TOutput`>(`header`, `callback`): `ChordSignature`<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:347

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `header` | `AnySignature`\[] |
| `callback` | `AnySignature` |

#### Returns

`ChordSignature`<`TOutput`>

## Properties

| Property | Modifier | Type | Description | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `_tag` | `readonly` | `"chord"` | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:342 |
| `callback` | `readonly` | `AnySignature` | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:346 |
| `header` | `readonly` | `AnySignature`\[] | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:344 |

## Methods

### dispatch()

> **dispatch**(`options?`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:348

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | [`WorkflowDispatchOptions`](../interfaces/WorkflowDispatchOptions) |

#### Returns

[`WorkflowHandle`](WorkflowHandle)<`TOutput`>

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/DeadLetterManager.md
---

[Taskora API Reference](../../index) / [index](../index) / DeadLetterManager

# Class: DeadLetterManager

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:26

## Constructors

### Constructor

> **new DeadLetterManager**(`adapter`, `tasks`, `getInspector`): `DeadLetterManager`

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:30

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `adapter` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) |
| `tasks` | `Map`<`string`, [`Task`](Task)<`unknown`, `unknown`>> |
| `getInspector` | () => [`Inspector`](Inspector) |

#### Returns

`DeadLetterManager`

## Methods

### list()

> **list**(`options?`): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>\[]>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:31

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | [`InspectorListOptions`](../namespaces/Taskora/interfaces/InspectorListOptions) |

#### Returns

`Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>\[]>

***

### retry()

#### Call Signature

> **retry**(`jobId`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:32

##### Parameters

| Parameter | Type |
| ------ | ------ |
| `jobId` | `string` |

##### Returns

`Promise`<`boolean`>

#### Call Signature

> **retry**(`task`, `jobId`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:33

##### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |

##### Returns

`Promise`<`boolean`>

***

### retryAll()

> **retryAll**(`options?`: { `task?`: `string`; }): `Promise`<`number`>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:34

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/DuplicateJobError.md
---

[Taskora API Reference](../../index) / [index](../index) / DuplicateJobError

# Class: DuplicateJobError

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:212

## Extends

* [`TaskoraError`](TaskoraError)

## Constructors

### Constructor

> **new DuplicateJobError**(`jobId`, `key`, `existingId`): `DuplicateJobError`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:216

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `jobId` | `string` |
| `key` | `string` |
| `existingId` | `string` |

#### Returns

`DuplicateJobError`

#### Overrides

[`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor)

## Properties

| Property | Modifier | Type | Inherited from | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `cause?` | `public` | `unknown` | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 |
| `existingId` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:215 |
| `jobId` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:213 |
| `key` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:214 |
| `message` | `public` | `string` | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 |
| `name` | `public` | `string` | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 |
| `stack?` | `public` | `string` | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 |
| `stackTraceLimit` | `static` | `number` | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 |

## Methods

The static `Error` helpers [`captureStackTrace`](TaskoraError#capturestacktrace) and [`prepareStackTrace`](TaskoraError#preparestacktrace) are inherited unchanged from [`TaskoraError`](TaskoraError); see that page for their full documentation.

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ExpiredError.md
---

[Taskora API Reference](../../index) / [index](../index) / ExpiredError

# Class: ExpiredError

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:218

## Extends

* [`TaskoraError`](TaskoraError)

## Constructors

### Constructor

> **new ExpiredError**(`jobId`): `ExpiredError`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:220

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `jobId` | `string` |

#### Returns

`ExpiredError`

#### Overrides

[`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor)

## Properties

| Property | Modifier | Type | Inherited from | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `cause?` | `public` | `unknown` | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 |
| `jobId` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:219 |
| `message` | `public` | `string` | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 |
| `name` | `public` | `string` | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 |
| `stack?` | `public` | `string` | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 |
| `stackTraceLimit` | `static` | `number` | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 |

## Methods

The static `Error` helpers [`captureStackTrace`](TaskoraError#capturestacktrace) and [`prepareStackTrace`](TaskoraError#preparestacktrace) are inherited unchanged from [`TaskoraError`](TaskoraError); see that page for their full documentation.

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/GroupSignature.md
---

[Taskora API Reference](../../index) / [index](../index) / GroupSignature

# Class: GroupSignature\<TOutput\>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:329

A group of signatures — parallel execution, tuple result.
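The parallel-with-tuple behaviour can be sketched without taskora itself; `runGroup` below is an illustrative stand-in, not a taskora API:

```typescript
// A group runs members in parallel and yields a tuple: one slot per
// member, in declaration order, preserving each member's result type.
async function runGroup<A, B>(
  a: () => Promise<A>,
  b: () => Promise<B>,
): Promise<[A, B]> {
  return Promise.all([a(), b()]);
}

const pair = runGroup(
  async () => "thumb.png", // first member: string result
  async () => 42,          // second member: number result
);
// pair resolves to ["thumb.png", 42]
```

The real `GroupSignature` takes `AnySignature` members and returns a `WorkflowHandle<TOutput>` from `dispatch()`, but the tuple-shaped result is the same idea.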
## Type Parameters

| Type Parameter |
| ------ |
| `TOutput` |

## Constructors

### Constructor

> **new GroupSignature**<`TOutput`>(`members`): `GroupSignature`<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:334

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `members` | `AnySignature`\[] |

#### Returns

`GroupSignature`<`TOutput`>

## Properties

| Property | Modifier | Type | Description | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `_tag` | `readonly` | `"group"` | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:331 |
| `members` | `readonly` | `AnySignature`\[] | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:333 |

## Methods

### dispatch()

> **dispatch**(`options?`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:335

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | [`WorkflowDispatchOptions`](../interfaces/WorkflowDispatchOptions) |

#### Returns

[`WorkflowHandle`](WorkflowHandle)<`TOutput`>

---

---
url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/Inspector.md'
---

[Taskora API Reference](../../index) / [index](../index) / Inspector

# Class: Inspector

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:3

## Constructors

### Constructor

> **new Inspector**(`adapter`, `serializer`, `tasks`): `Inspector`

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:7

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `adapter` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) |
| `serializer` | [`Serializer`](../namespaces/Taskora/interfaces/Serializer) |
| `tasks` | `Map`<`string`, [`Task`](Task)<`unknown`, `unknown`>> |

#### Returns

`Inspector`

## Methods

### active(), cancelled(), completed(), delayed(), expired()

These state listings share one signature:

> **active**(`options?`: [`InspectorListOptions`](../namespaces/Taskora/interfaces/InspectorListOptions)): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>\[]>

Defined in packages/taskora/dist/app-BkYbyud9.d.mts (`active`: 8, `delayed`: 10, `completed`: 12, `expired`: 13, `cancelled`: 14).
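The state listings compose naturally into summaries, e.g. for a health endpoint. A hedged sketch against a minimal Inspector-shaped interface (`JobInfoLite`, `InspectorLite`, and `queueHealth` below are illustrative stand-ins, not the real classes):

```typescript
// Minimal stand-ins for JobInfo and the two listing methods used here.
interface JobInfoLite {
  id: string;
}
interface InspectorLite {
  active(): Promise<JobInfoLite[]>;
  failed(): Promise<JobInfoLite[]>;
}

// Fetch two state listings concurrently and reduce them to counts.
async function queueHealth(inspector: InspectorLite) {
  const [active, failed] = await Promise.all([
    inspector.active(),
    inspector.failed(),
  ]);
  return { active: active.length, failed: failed.length };
}
```

Because every listing resolves to `JobInfo[]`, the same pattern extends to `waiting()`, `delayed()`, and the rest.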
***

### failed()

> **failed**(`options?`: [`InspectorListOptions`](../namespaces/Taskora/interfaces/InspectorListOptions)): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>\[]>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:11

***

### find()

#### Call Signature

> **find**(`jobId`): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:18

##### Parameters

| Parameter | Type |
| ------ | ------ |
| `jobId` | `string` |

##### Returns

`Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>>

#### Call Signature

> **find**<`TInput`, `TOutput`>(`task`, `jobId`): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`TInput`, `TOutput`>>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:19

##### Type Parameters

| Type Parameter |
| ------ |
| `TInput` |
| `TOutput` |

##### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | [`Task`](Task)<`TInput`, `TOutput`> |
| `jobId` | `string` |

##### Returns

`Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`TInput`, `TOutput`>>

***

### migrations()

> **migrations**(`taskName`: `string`): `Promise`<[`MigrationStatus`](../namespaces/Taskora/interfaces/MigrationStatus)>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:20

***

### stats()

> **stats**(`options?`: { `task?`: `string`; }): `Promise`<[`QueueStats`](../namespaces/Taskora/interfaces/QueueStats)>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:15

***

### waiting()

> **waiting**(`options?`: [`InspectorListOptions`](../namespaces/Taskora/interfaces/InspectorListOptions)): `Promise`<[`JobInfo`](../namespaces/Taskora/interfaces/JobInfo)<`unknown`, `unknown`>\[]>

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:9

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/JobFailedError.md
---

[Taskora API Reference](../../index) / [index](../index) / JobFailedError

# Class: JobFailedError

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:197

## Extends

* [`TaskoraError`](TaskoraError)

## Constructors

### Constructor

> **new JobFailedError**(`jobId`, `taskName`, `message`): `JobFailedError`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:200

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `jobId` | `string` |
| `taskName` | `string` |
| `message` | `string` |

#### Returns

`JobFailedError`

#### Overrides

[`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor)

## Properties

| Property | Modifier | Type | Inherited from | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `cause?` | `public` | `unknown` | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 |
| `jobId` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:198 |
| `message` | `public` | `string` | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 |
| `name` | `public` | `string` | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 |
| `stack?` | `public` | `string` | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 |
| `taskName` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:199 |
| `stackTraceLimit` | `static` | `number` | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 |

## Methods

The static `Error` helpers [`captureStackTrace`](TaskoraError#capturestacktrace) and [`prepareStackTrace`](TaskoraError#preparestacktrace) are inherited unchanged from [`TaskoraError`](TaskoraError); see that page for their full documentation.

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/memory/classes/MemoryBackend.md
---

[Taskora API Reference](../../index) / [memory](../index) / MemoryBackend

# Class: MemoryBackend

Defined in: packages/taskora/dist/memory/index.d.mts:3

## Implements

* [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter)

## Constructors

### Constructor

> **new MemoryBackend**(`options?`: { `clock?`: () => `number`; }): `MemoryBackend`

Defined in: packages/taskora/dist/memory/index.d.mts:23

#### Returns

`MemoryBackend`

## Properties

| Property | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `_clock` | () => `number` | **`Internal`** | packages/taskora/dist/memory/index.d.mts:22 |

## Methods

### \_recordThroughput()

> **\_recordThroughput**(`type`: `"completed"` \| `"failed"`): `void`

Defined in: packages/taskora/dist/memory/index.d.mts:197

**`Internal`** — called by ack/fail to record throughput

***

### ack()

> **ack**(`task`: `string`, `jobId`: `string`, `token`: `string`, `result`: `string`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:126

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`ack`](../../taskora/namespaces/Taskora/interfaces/Adapter#ack)

***

### ackAndDequeue()

> **ackAndDequeue**(`task`, `jobId`, `token`, `result`, `newToken`, `newLockTtl`, `options?`): `Promise`<[`AckAndDequeueResult`](../../taskora/namespaces/Taskora/interfaces/AckAndDequeueResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:130

Atomically ack the current job and dequeue the next one in a single roundtrip. Optional — adapters that don't implement this fall back to separate `ack()` + `dequeue()` calls in the worker. The Redis adapter implements it as a fused Lua script (ACK\_AND\_MOVE\_TO\_ACTIVE), which is critical for high-concurrency throughput: each slot self-feeds without funneling through the worker's poll loop. The returned `AckAndDequeueResult` also carries the workflow binding of the acked job, so the worker can advance/cancel the workflow without a follow-up `getWorkflowMeta` HMGET.
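The optional-method fallback described above can be sketched as worker-side logic. The `AdapterLite` shape and `ackThenNext` helper are illustrative stand-ins that follow the parameter order documented here, not the library's actual worker internals:

```typescript
// Adapter subset: ackAndDequeue is optional, per the description above.
interface AdapterLite {
  ack(task: string, jobId: string, token: string, result: string): Promise<void>;
  dequeue(task: string, lockTtl: number, token: string): Promise<unknown>;
  ackAndDequeue?(
    task: string,
    jobId: string,
    token: string,
    result: string,
    newToken: string,
    newLockTtl: number,
  ): Promise<unknown>;
}

async function ackThenNext(
  adapter: AdapterLite,
  task: string,
  jobId: string,
  token: string,
  result: string,
  newToken: string,
  newLockTtl: number,
): Promise<unknown> {
  if (adapter.ackAndDequeue) {
    // Fused path: one roundtrip acks the job and hands back the next one.
    return adapter.ackAndDequeue(task, jobId, token, result, newToken, newLockTtl);
  }
  // Fallback path: two separate roundtrips, for adapters without ackAndDequeue.
  await adapter.ack(task, jobId, token, result);
  return adapter.dequeue(task, newLockTtl, newToken);
}
```

The fused path matters under high concurrency because each worker slot can feed itself without waiting for the next poll cycle.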
#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `token` | `string` |
| `result` | `string` |
| `newToken` | `string` |
| `newLockTtl` | `number` |
| `options?` | [`DequeueOptions`](../../taskora/namespaces/Taskora/interfaces/DequeueOptions) |

#### Returns

`Promise`<[`AckAndDequeueResult`](../../taskora/namespaces/Taskora/interfaces/AckAndDequeueResult)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`ackAndDequeue`](../../taskora/namespaces/Taskora/interfaces/Adapter#ackanddequeue)

***

### acquireSchedulerLock()

> **acquireSchedulerLock**(`token`: `string`, `ttl`: `number`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/memory/index.d.mts:184

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`acquireSchedulerLock`](../../taskora/namespaces/Taskora/interfaces/Adapter#acquireschedulerlock)

***

### addLog()

> **addLog**(`_task`: `string`, `jobId`: `string`, `entry`: `string`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:145

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`addLog`](../../taskora/namespaces/Taskora/interfaces/Adapter#addlog)

***

### addSchedule()

> **addSchedule**(`name`: `string`, `config`: `string`, `nextRun`: `number`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:169

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`addSchedule`](../../taskora/namespaces/Taskora/interfaces/Adapter#addschedule)

***

### advanceWorkflow()

> **advanceWorkflow**(`workflowId`: `string`, `nodeIndex`: `number`, `result`: `string`): `Promise`<[`WorkflowAdvanceResult`](../../taskora/namespaces/Taskora/interfaces/WorkflowAdvanceResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:187

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`advanceWorkflow`](../../taskora/namespaces/Taskora/interfaces/Adapter#advanceworkflow)

***

### awaitJob()

> **awaitJob**(`_task`: `string`, `jobId`: `string`, `timeoutMs?`: `number`): `Promise`<[`AwaitJobResult`](../../taskora/namespaces/Taskora/interfaces/AwaitJobResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:152

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`awaitJob`](../../taskora/namespaces/Taskora/interfaces/Adapter#awaitjob)

***

### blockingDequeue()

> **blockingDequeue**(`task`: `string`, `lockTtl`: `number`, `token`: `string`, `timeoutMs`: `number`, `options?`: [`DequeueOptions`](../../taskora/namespaces/Taskora/interfaces/DequeueOptions)): `Promise`<[`DequeueResult`](../../taskora/namespaces/Taskora/interfaces/DequeueResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:125

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`blockingDequeue`](../../taskora/namespaces/Taskora/interfaces/Adapter#blockingdequeue)

***

### cancel()

> **cancel**(`task`: `string`, `jobId`: `string`, `reason?`: `string`): `Promise`<`"cancelled"` \| `"flagged"` \| `"not_cancellable"`>

Defined in: packages/taskora/dist/memory/index.d.mts:137

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`cancel`](../../taskora/namespaces/Taskora/interfaces/Adapter#cancel)

***

### cancelWorkflow()

> **cancelWorkflow**(`workflowId`: `string`, `reason?`: `string`): `Promise`<[`WorkflowCancelResult`](../../taskora/namespaces/Taskora/interfaces/WorkflowCancelResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:190

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`cancelWorkflow`](../../taskora/namespaces/Taskora/interfaces/Adapter#cancelworkflow)

***

### cleanJobs()

> **cleanJobs**(`task`: `string`, `state`: [`JobState`](../../taskora/namespaces/Taskora/type-aliases/JobState), `before`: `number`, `limit`: `number`): `Promise`<`number`>

Defined in: packages/taskora/dist/memory/index.d.mts:202

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`cleanJobs`](../../taskora/namespaces/Taskora/interfaces/Adapter#cleanjobs)

***

### clear()

> **clear**(): `void`

Defined in: packages/taskora/dist/memory/index.d.mts:47

**`Internal`**

***

### collectPush()

> **collectPush**(`task`: `string`, `jobId`: `string`, `item`: `string`, `options`: { `_v`: `number`; `collectKey`: `string`; `delayMs`: `number`; `maxAttempts?`: `number`; `maxSize`: `number`; `maxWaitMs`: `number`; }): `Promise`<{ `count`: `number`; `flushed`: `boolean`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:111

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`collectPush`](../../taskora/namespaces/Taskora/interfaces/Adapter#collectpush)

***

### connect()

> **connect**(): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:55

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`connect`](../../taskora/namespaces/Taskora/interfaces/Adapter#connect)

***

### createWorkflow()

> **createWorkflow**(`workflowId`: `string`, `graph`: `string`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:186

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`createWorkflow`](../../taskora/namespaces/Taskora/interfaces/Adapter#createworkflow)

***

### debounceEnqueue()

> **debounceEnqueue**(`task`: `string`, `jobId`: `string`, `data`: `string`, `options`: { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; `expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; }, `debounceKey`: `string`, `delayMs`: `number`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:80

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`debounceEnqueue`](../../taskora/namespaces/Taskora/interfaces/Adapter#debounceenqueue)

***

### deduplicateEnqueue()

> **deduplicateEnqueue**(`task`: `string`, `jobId`: `string`, `data`: `string`, `options`: { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; `delay?`: `number`; `expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; }, `dedupKey`: `string`, `states`: `string`\[]): `Promise`<{ `created`: `true`; } \| { `created`: `false`; `existingId`: `string`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:97

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`deduplicateEnqueue`](../../taskora/namespaces/Taskora/interfaces/Adapter#deduplicateenqueue)

***

### dequeue()

> **dequeue**(`task`, `lockTtl`, `token`, `options?`): `Promise`<[`DequeueResult`](../../taskora/namespaces/Taskora/interfaces/DequeueResult)>

Defined in:
packages/taskora/dist/memory/index.d.mts:124 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `lockTtl` | `number` | | `token` | `string` | | `options?` | [`DequeueOptions`](../../taskora/namespaces/Taskora/interfaces/DequeueOptions) | #### Returns `Promise`<[`DequeueResult`](../../taskora/namespaces/Taskora/interfaces/DequeueResult)> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`dequeue`](../../taskora/namespaces/Taskora/interfaces/Adapter#dequeue) *** ### disconnect() > **disconnect**(): `Promise`<`void`> Defined in: packages/taskora/dist/memory/index.d.mts:56 #### Returns `Promise`<`void`> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`disconnect`](../../taskora/namespaces/Taskora/interfaces/Adapter#disconnect) *** ### enqueue() > **enqueue**(`task`, `jobId`, `data`, `options`): `Promise`<`void`> Defined in: packages/taskora/dist/memory/index.d.mts:58 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `data` | `string` | | `options` | `object` & [`DispatchOptions`](../../taskora/namespaces/Taskora/interfaces/DispatchOptions) | #### Returns `Promise`<`void`> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`enqueue`](../../taskora/namespaces/Taskora/interfaces/Adapter#enqueue) *** ### enqueueBulk() > **enqueueBulk**(`task`, `jobs`): `Promise`<`void`> Defined in: packages/taskora/dist/memory/index.d.mts:67 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobs` | `object`\[] | #### Returns `Promise`<`void`> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`enqueueBulk`](../../taskora/namespaces/Taskora/interfaces/Adapter#enqueuebulk) *** ### extendLock() > **extendLock**(`task`, `jobId`, `token`, `ttl`): `Promise`<`"extended"` | `"lost"` | `"cancelled"`> Defined in: 
packages/taskora/dist/memory/index.d.mts:136 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `ttl` | `number` | #### Returns `Promise`<`"extended"` | `"lost"` | `"cancelled"`> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`extendLock`](../../taskora/namespaces/Taskora/interfaces/Adapter#extendlock) *** ### fail() > **fail**(`task`, `jobId`, `token`, `error`, `retry?`): `Promise`<`void`> Defined in: packages/taskora/dist/memory/index.d.mts:127 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `error` | `string` | | `retry?` | { `delay`: `number`; } | | `retry.delay?` | `number` | #### Returns `Promise`<`void`> #### Implementation of [`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`fail`](../../taskora/namespaces/Taskora/interfaces/Adapter#fail) *** ### failAndDequeue() > **failAndDequeue**(`task`, `jobId`, `token`, `error`, `retry`, `newToken`, `newLockTtl`, `options?`): `Promise`<[`AckAndDequeueResult`](../../taskora/namespaces/Taskora/interfaces/AckAndDequeueResult)> Defined in: packages/taskora/dist/memory/index.d.mts:131 Atomically fail the current job and dequeue the next one in a single roundtrip. Optional — see `ackAndDequeue` for rationale. The workflow binding is only populated on *permanent* failures (retries don't cascade). 
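#### Example

A minimal toy sketch of the fail-and-dequeue semantics described above. Everything here (`ToyQueue`, the job shape) is hypothetical illustration, not taskora's API: a transient failure re-queues the job for retry, a permanent one dead-letters it, and the same call hands back the next waiting job.

```typescript
// Hypothetical toy queue — illustrates the contract only, not taskora's implementation.
type Job = { id: string; data: string; attempts: number; maxAttempts: number };

class ToyQueue {
  private waiting: Job[] = [];
  private active = new Map<string, Job>();
  readonly dead: Job[] = [];

  enqueue(id: string, data: string, maxAttempts = 3): void {
    this.waiting.push({ id, data, attempts: 0, maxAttempts });
  }

  dequeue(): Job | undefined {
    const job = this.waiting.shift();
    if (job) this.active.set(job.id, job);
    return job;
  }

  // One "roundtrip": fail the active job (retry or dead-letter it), then
  // immediately dequeue and return the next waiting job.
  failAndDequeue(jobId: string): Job | undefined {
    const job = this.active.get(jobId);
    if (job) {
      this.active.delete(jobId);
      job.attempts += 1;
      if (job.attempts < job.maxAttempts) this.waiting.push(job); // transient → retry
      else this.dead.push(job); // permanent → DLQ
    }
    return this.dequeue();
  }
}

const q = new ToyQueue();
q.enqueue("a", "first");
q.enqueue("b", "second");
const current = q.dequeue()!;              // "a" becomes active
const next = q.failAndDequeue(current.id); // "a" re-queued for retry; "b" dequeued
```

The single combined call is the point: a worker loop avoids a second roundtrip between acknowledging the outcome of one job and claiming the next.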
#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `token` | `string` |
| `error` | `string` |
| `retry` | { `delay`: `number`; } |
| `retry.delay` | `number` |
| `newToken` | `string` |
| `newLockTtl` | `number` |
| `options?` | [`DequeueOptions`](../../taskora/namespaces/Taskora/interfaces/DequeueOptions) |

#### Returns

`Promise`<[`AckAndDequeueResult`](../../taskora/namespaces/Taskora/interfaces/AckAndDequeueResult)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`failAndDequeue`](../../taskora/namespaces/Taskora/interfaces/Adapter#failanddequeue)

***

### failWorkflow()

> **failWorkflow**(`workflowId`, `nodeIndex`, `error`): `Promise`<[`WorkflowFailResult`](../../taskora/namespaces/Taskora/interfaces/WorkflowFailResult)>

Defined in: packages/taskora/dist/memory/index.d.mts:188

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `workflowId` | `string` |
| `nodeIndex` | `number` |
| `error` | `string` |

#### Returns

`Promise`<[`WorkflowFailResult`](../../taskora/namespaces/Taskora/interfaces/WorkflowFailResult)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`failWorkflow`](../../taskora/namespaces/Taskora/interfaces/Adapter#failworkflow)

***

### finishCancel()

> **finishCancel**(`task`, `jobId`, `token`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:138

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `token` | `string` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`finishCancel`](../../taskora/namespaces/Taskora/interfaces/Adapter#finishcancel)

***

### forceFlushCollect()

> **forceFlushCollect**(`task`, `key?`): `void`

Defined in: packages/taskora/dist/memory/index.d.mts:45

**`Internal`**

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `key?` | `string` |

#### Returns

`void`

***

### getAllJobs()

> **getAllJobs**(): `object`\[]

Defined in: packages/taskora/dist/memory/index.d.mts:49

**`Internal`**

#### Returns

`object`\[]

***

### getEarliestDelayedScore()

> **getEarliestDelayedScore**(): `number`

Defined in: packages/taskora/dist/memory/index.d.mts:43

**`Internal`** — earliest delayed job score across all tasks

#### Returns

`number`

***

### getError()

> **getError**(`_task`, `jobId`): `Promise`<`string`>

Defined in: packages/taskora/dist/memory/index.d.mts:148

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<`string`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getError`](../../taskora/namespaces/Taskora/interfaces/Adapter#geterror)

***

### getJobDetails()

> **getJobDetails**(`_task`, `jobId`): `Promise`<[`RawJobDetails`](../../taskora/namespaces/Taskora/interfaces/RawJobDetails)>

Defined in: packages/taskora/dist/memory/index.d.mts:157

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<[`RawJobDetails`](../../taskora/namespaces/Taskora/interfaces/RawJobDetails)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getJobDetails`](../../taskora/namespaces/Taskora/interfaces/Adapter#getjobdetails)

***

### getLogs()

> **getLogs**(`_task`, `jobId`): `Promise`<`string`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:150

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<`string`\[]>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getLogs`](../../taskora/namespaces/Taskora/interfaces/Adapter#getlogs)

***

### getProgress()

> **getProgress**(`_task`, `jobId`): `Promise`<`string`>

Defined in: packages/taskora/dist/memory/index.d.mts:149

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<`string`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getProgress`](../../taskora/namespaces/Taskora/interfaces/Adapter#getprogress)

***

### getQueueStats()

> **getQueueStats**(`task`): `Promise`<[`QueueStats`](../../taskora/namespaces/Taskora/interfaces/QueueStats)>

Defined in: packages/taskora/dist/memory/index.d.mts:158

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |

#### Returns

`Promise`<[`QueueStats`](../../taskora/namespaces/Taskora/interfaces/QueueStats)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getQueueStats`](../../taskora/namespaces/Taskora/interfaces/Adapter#getqueuestats)

***

### getResult()

> **getResult**(`_task`, `jobId`): `Promise`<`string`>

Defined in: packages/taskora/dist/memory/index.d.mts:147

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<`string`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getResult`](../../taskora/namespaces/Taskora/interfaces/Adapter#getresult)

***

### getSchedule()

> **getSchedule**(`name`): `Promise`<{ `config`: `string`; `nextRun`: `number`; `paused`: `boolean`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:171

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `name` | `string` |

#### Returns

`Promise`<{ `config`: `string`; `nextRun`: `number`; `paused`: `boolean`; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getSchedule`](../../taskora/namespaces/Taskora/interfaces/Adapter#getschedule)

***

### getServerInfo()

> **getServerInfo**(): `Promise`<{ `connected`: `boolean`; `uptime`: `number`; `usedMemory`: `string`; `version`: `string`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:203

#### Returns

`Promise`<{ `connected`: `boolean`; `uptime`: `number`; `usedMemory`: `string`; `version`: `string`; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getServerInfo`](../../taskora/namespaces/Taskora/interfaces/Adapter#getserverinfo)

***

### getState()

> **getState**(`_task`, `jobId`): `Promise`<[`JobState`](../../taskora/namespaces/Taskora/type-aliases/JobState)>

Defined in: packages/taskora/dist/memory/index.d.mts:146

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<[`JobState`](../../taskora/namespaces/Taskora/type-aliases/JobState)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getState`](../../taskora/namespaces/Taskora/interfaces/Adapter#getstate)

***

### getTaskKeyStats()

> **getTaskKeyStats**(`_task`): `Promise`<{ `keyCount`: `number`; `memoryBytes`: `number`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:198

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |

#### Returns

`Promise`<{ `keyCount`: `number`; `memoryBytes`: `number`; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getTaskKeyStats`](../../taskora/namespaces/Taskora/interfaces/Adapter#gettaskkeystats)

***

### getTaskNames()

> **getTaskNames**(): `string`\[]

Defined in: packages/taskora/dist/memory/index.d.mts:41

**`Internal`**

#### Returns

`string`\[]

***

### getThroughput()

> **getThroughput**(`_task`, `bucketSize`, `count`): `Promise`<`object`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:218

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `bucketSize` | `number` |
| `count` | `number` |

#### Returns

`Promise`<`object`\[]>

#### Implementation of
[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getThroughput`](../../taskora/namespaces/Taskora/interfaces/Adapter#getthroughput)

***

### getVersionDistribution()

> **getVersionDistribution**(`task`): `Promise`<{ `active`: `Record`<`number`, `number`>; `delayed`: `Record`<`number`, `number`>; `waiting`: `Record`<`number`, `number`>; }>

Defined in: packages/taskora/dist/memory/index.d.mts:164

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |

#### Returns

`Promise`<{ `active`: `Record`<`number`, `number`>; `delayed`: `Record`<`number`, `number`>; `waiting`: `Record`<`number`, `number`>; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getVersionDistribution`](../../taskora/namespaces/Taskora/interfaces/Adapter#getversiondistribution)

***

### getWorkflowDetail()

> **getWorkflowDetail**(`workflowId`): `Promise`<[`WorkflowDetail`](../../taskora/namespaces/Taskora/interfaces/WorkflowDetail)>

Defined in: packages/taskora/dist/memory/index.d.mts:217

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `workflowId` | `string` |

#### Returns

`Promise`<[`WorkflowDetail`](../../taskora/namespaces/Taskora/interfaces/WorkflowDetail)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getWorkflowDetail`](../../taskora/namespaces/Taskora/interfaces/Adapter#getworkflowdetail)

***

### getWorkflowMeta()

> **getWorkflowMeta**(`_task`, `jobId`): `Promise`<{ `nodeIndex`: `number`; `workflowId`: `string`; }>

Defined in: packages/taskora/dist/memory/index.d.mts:191

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<{ `nodeIndex`: `number`; `workflowId`: `string`; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getWorkflowMeta`](../../taskora/namespaces/Taskora/interfaces/Adapter#getworkflowmeta)

***

### getWorkflowState()

> **getWorkflowState**(`workflowId`): `Promise`<`string`>

Defined in: packages/taskora/dist/memory/index.d.mts:189

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `workflowId` | `string` |

#### Returns

`Promise`<`string`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`getWorkflowState`](../../taskora/namespaces/Taskora/interfaces/Adapter#getworkflowstate)

***

### handshake()

> **handshake**(`ours`): `Promise`<[`SchemaMeta`](../../taskora/namespaces/Taskora/interfaces/SchemaMeta)>

Defined in: packages/taskora/dist/memory/index.d.mts:57

Atomically persist-or-read the wire-format meta record. Contract:

* If no meta is currently stored in this backend (+ prefix), the adapter writes `ours` verbatim and returns `ours`.
* Otherwise, the adapter returns whatever meta is already stored, UNTOUCHED — the check/upgrade decision belongs to core.

Called exactly once per `App.ensureConnected()`, immediately after `connect()` and before any worker/scheduler/dispatch runs. Core calls `checkCompat(ours, returned)` and throws `SchemaVersionMismatchError` on any incompatibility.

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `ours` | [`SchemaMeta`](../../taskora/namespaces/Taskora/interfaces/SchemaMeta) |

#### Returns

`Promise`<[`SchemaMeta`](../../taskora/namespaces/Taskora/interfaces/SchemaMeta)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`handshake`](../../taskora/namespaces/Taskora/interfaces/Adapter#handshake)

***

### inspectCollect()

> **inspectCollect**(`task`, `collectKey`): `Promise`<[`CollectBufferInfo`](../../taskora/namespaces/Taskora/interfaces/CollectBufferInfo)>

Defined in: packages/taskora/dist/memory/index.d.mts:123

Stats-only view of the current collect buffer for `(task, collectKey)`. Cheaper than [peekCollect](../../taskora/namespaces/Taskora/interfaces/Adapter#peekcollect) — reads only the meta hash, not the item payloads. Returns `null` if there is no active buffer for the key.

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `collectKey` | `string` |

#### Returns

`Promise`<[`CollectBufferInfo`](../../taskora/namespaces/Taskora/interfaces/CollectBufferInfo)>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`inspectCollect`](../../taskora/namespaces/Taskora/interfaces/Adapter#inspectcollect)

***

### listJobDetails()

> **listJobDetails**(`task`, `state`, `offset`, `limit`): `Promise`<`object`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:153

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `state` | `"cancelled"` \| `"waiting"` \| `"delayed"` \| `"active"` \| `"completed"` \| `"failed"` \| `"expired"` |
| `offset` | `number` |
| `limit` | `number` |

#### Returns

`Promise`<`object`\[]>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`listJobDetails`](../../taskora/namespaces/Taskora/interfaces/Adapter#listjobdetails)

***

### listSchedules()

> **listSchedules**(): `Promise`<[`ScheduleRecord`](../../taskora/namespaces/Taskora/interfaces/ScheduleRecord)\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:176

#### Returns

`Promise`<[`ScheduleRecord`](../../taskora/namespaces/Taskora/interfaces/ScheduleRecord)\[]>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`listSchedules`](../../taskora/namespaces/Taskora/interfaces/Adapter#listschedules)

***

### listWorkflows()

> **listWorkflows**(`state?`, `offset?`, `limit?`): `Promise`<`object`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:209

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `state?` | [`WorkflowState`](../../taskora/namespaces/Taskora/type-aliases/WorkflowState) |
| `offset?` | `number` |
| `limit?` | `number` |

#### Returns

`Promise`<`object`\[]>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`listWorkflows`](../../taskora/namespaces/Taskora/interfaces/Adapter#listworkflows)

***

### nack()

> **nack**(`task`, `jobId`, `token`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:135

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `token` | `string` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`nack`](../../taskora/namespaces/Taskora/interfaces/Adapter#nack)

***

### onCancel()

> **onCancel**(`task`, `handler`): `Promise`<() => `void`>

Defined in: packages/taskora/dist/memory/index.d.mts:139

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `handler` | (`jobId`) => `void` |

#### Returns

`Promise`<() => `void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`onCancel`](../../taskora/namespaces/Taskora/interfaces/Adapter#oncancel)

***

### pauseSchedule()

> **pauseSchedule**(`name`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:182

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `name` | `string` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`pauseSchedule`](../../taskora/namespaces/Taskora/interfaces/Adapter#pauseschedule)

***

### peekCollect()

> **peekCollect**(`task`, `collectKey`): `Promise`<`string`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:122

Read the current collect buffer for `(task, collectKey)` as raw serialized items. Non-destructive: never drains, never resets TTLs, never alters flush-trigger state. Returns items in dispatch order (oldest → newest). Empty array if the buffer has no items, was just flushed, or was never created.
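#### Example

A toy sketch of the non-destructive peek contract just described. `CollectBuffer` is hypothetical illustration, not taskora's implementation: `peek` returns a copy in dispatch order and leaves the buffer and its flush trigger untouched, while a flush (here, hitting `maxSize`) drains it.

```typescript
// Hypothetical toy buffer — illustrates peek semantics only, not taskora's code.
class CollectBuffer {
  private items: string[] = [];
  constructor(private readonly maxSize: number) {}

  // Push an item; drain (flush) once the buffer reaches maxSize.
  push(item: string): { count: number; flushed: boolean } {
    this.items.push(item);
    if (this.items.length >= this.maxSize) {
      this.items = []; // a flush drains the buffer
      return { count: 0, flushed: true };
    }
    return { count: this.items.length, flushed: false };
  }

  // Non-destructive read: a copy in dispatch order (oldest → newest), never drains.
  peek(): string[] {
    return [...this.items];
  }
}

const buf = new CollectBuffer(3);
buf.push("a");
buf.push("b");
const first = buf.peek();              // ["a", "b"]
const second = buf.peek();             // identical — peeking does not drain
const flushed = buf.push("c").flushed; // third push reaches maxSize and flushes
const after = buf.peek();              // [] — buffer was just flushed
```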
Called by `Task.peekCollect` / `BoundTask.peekCollect`; deserialization is done at the Task layer so the adapter stays schema-agnostic.

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `collectKey` | `string` |

#### Returns

`Promise`<`string`\[]>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`peekCollect`](../../taskora/namespaces/Taskora/interfaces/Adapter#peekcollect)

***

### promoteAll()

> **promoteAll**(): `void`

Defined in: packages/taskora/dist/memory/index.d.mts:39

**`Internal`**

#### Returns

`void`

***

### promoteDelayed()

> **promoteDelayed**(`task`): `void`

Defined in: packages/taskora/dist/memory/index.d.mts:37

**`Internal`**

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |

#### Returns

`void`

***

### removeSchedule()

> **removeSchedule**(`name`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:170

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `name` | `string` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`removeSchedule`](../../taskora/namespaces/Taskora/interfaces/Adapter#removeschedule)

***

### renewSchedulerLock()

> **renewSchedulerLock**(`token`, `ttl`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/memory/index.d.mts:185

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `token` | `string` |
| `ttl` | `number` |

#### Returns

`Promise`<`boolean`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`renewSchedulerLock`](../../taskora/namespaces/Taskora/interfaces/Adapter#renewschedulerlock)

***

### resumeSchedule()

> **resumeSchedule**(`name`, `nextRun`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:183

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `name` | `string` |
| `nextRun` | `number` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`resumeSchedule`](../../taskora/namespaces/Taskora/interfaces/Adapter#resumeschedule)

***

### retryAllFromDLQ()

> **retryAllFromDLQ**(`task`, `limit`): `Promise`<`number`>

Defined in: packages/taskora/dist/memory/index.d.mts:160

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `limit` | `number` |

#### Returns

`Promise`<`number`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`retryAllFromDLQ`](../../taskora/namespaces/Taskora/interfaces/Adapter#retryallfromdlq)

***

### retryFromDLQ()

> **retryFromDLQ**(`task`, `jobId`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/memory/index.d.mts:159

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |

#### Returns

`Promise`<`boolean`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`retryFromDLQ`](../../taskora/namespaces/Taskora/interfaces/Adapter#retryfromdlq)

***

### setProgress()

> **setProgress**(`task`, `jobId`, `value`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:144

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `value` | `string` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`setProgress`](../../taskora/namespaces/Taskora/interfaces/Adapter#setprogress)

***

### stalledCheck()

> **stalledCheck**(`task`, `maxStalledCount`): `Promise`<{ `failed`: `string`\[]; `recovered`: `string`\[]; }>

Defined in: packages/taskora/dist/memory/index.d.mts:140

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `maxStalledCount` | `number` |

#### Returns

`Promise`<{ `failed`: `string`\[]; `recovered`: `string`\[]; }>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`stalledCheck`](../../taskora/namespaces/Taskora/interfaces/Adapter#stalledcheck)

***

### subscribe()

> **subscribe**(`_tasks`, `handler`): `Promise`<() => `Promise`<`void`>>

Defined in: packages/taskora/dist/memory/index.d.mts:151

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `_tasks` | `string`\[] |
| `handler` | (`event`) => `void` |

#### Returns

`Promise`<() => `Promise`<`void`>>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`subscribe`](../../taskora/namespaces/Taskora/interfaces/Adapter#subscribe)

***

### throttleEnqueue()

> **throttleEnqueue**(`task`, `jobId`, `data`, `options`, `throttleKey`, `max`, `windowMs`): `Promise`<`boolean`>

Defined in: packages/taskora/dist/memory/index.d.mts:88

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `jobId` | `string` |
| `data` | `string` |
| `options` | { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; `delay?`: `number`; `expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; } |
| `options._v` | `number` |
| `options.concurrencyKey?` | `string` |
| `options.concurrencyLimit?` | `number` |
| `options.delay?` | `number` |
| `options.expireAt?` | `number` |
| `options.maxAttempts?` | `number` |
| `options.priority?` | `number` |
| `throttleKey` | `string` |
| `max` | `number` |
| `windowMs` | `number` |

#### Returns

`Promise`<`boolean`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`throttleEnqueue`](../../taskora/namespaces/Taskora/interfaces/Adapter#throttleenqueue)

***

### tickScheduler()

> **tickScheduler**(`now`): `Promise`<`object`\[]>

Defined in: packages/taskora/dist/memory/index.d.mts:177

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `now` | `number` |

#### Returns

`Promise`<`object`\[]>

#### Implementation of
[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`tickScheduler`](../../taskora/namespaces/Taskora/interfaces/Adapter#tickscheduler)

***

### trimCompleted()

> **trimCompleted**(`task`, `before`, `maxItems`): `Promise`<`number`>

Defined in: packages/taskora/dist/memory/index.d.mts:162

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `before` | `number` |
| `maxItems` | `number` |

#### Returns

`Promise`<`number`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`trimCompleted`](../../taskora/namespaces/Taskora/interfaces/Adapter#trimcompleted)

***

### trimDLQ()

> **trimDLQ**(`task`, `before`, `maxItems`): `Promise`<`number`>

Defined in: packages/taskora/dist/memory/index.d.mts:161

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `task` | `string` |
| `before` | `number` |
| `maxItems` | `number` |

#### Returns

`Promise`<`number`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`trimDLQ`](../../taskora/namespaces/Taskora/interfaces/Adapter#trimdlq)

***

### updateScheduleNextRun()

> **updateScheduleNextRun**(`name`, `config`, `nextRun`): `Promise`<`void`>

Defined in: packages/taskora/dist/memory/index.d.mts:181

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `name` | `string` |
| `config` | `string` |
| `nextRun` | `number` |

#### Returns

`Promise`<`void`>

#### Implementation of

[`Adapter`](../../taskora/namespaces/Taskora/interfaces/Adapter).[`updateScheduleNextRun`](../../taskora/namespaces/Taskora/interfaces/Adapter#updateschedulenextrun)

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ResultHandle.md
---

[Taskora API Reference](../../index) / [index](../index) / ResultHandle

# Class: ResultHandle\<TOutput>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:261

## Type Parameters

| Type Parameter |
| ------ |
| `TOutput` |

## Constructors

### Constructor

> **new ResultHandle**<`TOutput`>(`id`, `taskName`, `adapter`, `serializer`, `enqueuePromise`): `ResultHandle`<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:279

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `id` | `string` |
| `taskName` | `string` |
| `adapter` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) |
| `serializer` | [`Serializer`](../namespaces/Taskora/interfaces/Serializer) |
| `enqueuePromise` | `Promise`<`void`> |

#### Returns

`ResultHandle`<`TOutput`>

## Properties

| Property | Modifier | Type | Description | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `enqueued` | `public` | `boolean` | Whether the job was actually enqueued. `null` — enqueue still pending; `true` — job created in queue; `false` — rejected by flow control (throttled or deduplicated) | packages/taskora/dist/types-KwmL9wkN.d.mts:269 |
| `existingId` | `public` | `string` | When deduplicated, the ID of the existing job that blocked this dispatch. | packages/taskora/dist/types-KwmL9wkN.d.mts:273 |
| `id` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:262 |

## Accessors

### result

#### Get Signature

> **get** **result**(): `Promise`<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:282

##### Returns

`Promise`<`TOutput`>

## Methods

### cancel()

> **cancel**(`options?`): `Promise`<`void`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:284

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | { `reason?`: `string`; } |
| `options.reason?` | `string` |

#### Returns

`Promise`<`void`>

***

### ensureEnqueued()

> **ensureEnqueued**(): `Promise`<`void`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:281

#### Returns

`Promise`<`void`>

***

### getLogs()

> **getLogs**(): `Promise`<[`LogEntry`](../namespaces/Taskora/interfaces/LogEntry)\[]>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:291

#### Returns

`Promise`<[`LogEntry`](../namespaces/Taskora/interfaces/LogEntry)\[]>

***

### getProgress()

> **getProgress**(): `Promise`<`number` | `Record`<`string`, `unknown`>>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:290

#### Returns

`Promise`<`number` | `Record`<`string`, `unknown`>>

***

### getState()

> **getState**(): `Promise`<[`JobState`](../namespaces/Taskora/type-aliases/JobState)>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:289

#### Returns

`Promise`<[`JobState`](../namespaces/Taskora/type-aliases/JobState)>

***

### then()

> **then**<`TResult1`, `TResult2`>(`onfulfilled?`, `onrejected?`): `Promise`<`TResult1` | `TResult2`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:280

#### Type Parameters

| Type Parameter | Default type |
| ------ | ------ |
| `TResult1` | `string` |
| `TResult2` | `never` |

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `onfulfilled?` | (`value`) => `TResult1` \| `PromiseLike`<`TResult1`> |
| `onrejected?` | (`reason`) => `TResult2` \| `PromiseLike`<`TResult2`> |

#### Returns

`Promise`<`TResult1` | `TResult2`>

***

### waitFor()

> **waitFor**(`timeout?`): `Promise`<`TOutput`>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:283

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `timeout?` | `number` |

#### Returns

`Promise`<`TOutput`>

---

---
url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/RetryError.md'
---

[Taskora API Reference](../../index) / [index](../index) / RetryError

# Class: RetryError

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:185

## Extends

* [`TaskoraError`](TaskoraError)

## Constructors

### Constructor

> **new RetryError**(`options?`): `RetryError`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:187

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `options?` | { `cause?`: `unknown`; `delay?`: `number`; `message?`: `string`; } |
| `options.cause?` | `unknown` |
| `options.delay?` | `number` |
| `options.message?` | `string` |
#### Returns `RetryError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `delay?` | `readonly` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:186 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. 
| [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. 
const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/SchemaVersionMismatchError.md --- [Taskora API Reference](../../index) / [index](../index) / SchemaVersionMismatchError # Class: SchemaVersionMismatchError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:236 Thrown by [App.ensureConnected](App#ensureconnected) when the wire-format version compiled into this process is incompatible with the meta record the storage backend already has. Stops the process before any worker, scheduler, or dispatch can touch incompatible data. See `src/wire-version.ts` for the full compatibility rule and the policy for bumping `WIRE_VERSION` / `MIN_COMPAT_VERSION`. 
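A minimal stand-in sketch (not the exported class — the real one comes from the `taskora` package) that mirrors the documented constructor shape (`code`, `ours`, `theirs`) and shows how startup code might render both sides of the mismatch before aborting; `describeMismatch` is a hypothetical helper:

```typescript
// Hypothetical stand-in mirroring the documented SchemaVersionMismatchError
// fields so the reporting pattern can run standalone.
type VersionInfo = { wireVersion: number; minCompat: number; writtenBy: string }

class SchemaVersionMismatchError extends Error {
  constructor(
    readonly code: "theirs_too_new" | "theirs_too_old" | "invalid_meta",
    message: string,
    readonly ours: VersionInfo,
    readonly theirs: VersionInfo & { writtenAt: number },
  ) {
    super(message)
    this.name = "SchemaVersionMismatchError"
  }
}

// Typical startup handling: report both sides of the mismatch, rethrow
// anything else, and let the caller abort before workers start.
function describeMismatch(err: unknown): string {
  if (err instanceof SchemaVersionMismatchError) {
    return `${err.name} (${err.code}): this build writes v${err.ours.wireVersion}, ` +
      `storage has v${err.theirs.wireVersion} written by ${err.theirs.writtenBy}`
  }
  throw err
}
```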
## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new SchemaVersionMismatchError**(`code`, `message`, `ours`, `theirs`): `SchemaVersionMismatchError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:249 #### Parameters | Parameter | Type | | ------ | ------ | | `code` | `"theirs_too_new"` | `"theirs_too_old"` | `"invalid_meta"` | | `message` | `string` | | `ours` | { `minCompat`: `number`; `wireVersion`: `number`; `writtenBy`: `string`; } | | `ours.minCompat` | `number` | | `ours.wireVersion` | `number` | | `ours.writtenBy` | `string` | | `theirs` | { `minCompat`: `number`; `wireVersion`: `number`; `writtenAt`: `number`; `writtenBy`: `string`; } | | `theirs.minCompat` | `number` | | `theirs.wireVersion` | `number` | | `theirs.writtenAt` | `number` | | `theirs.writtenBy` | `string` | #### Returns `SchemaVersionMismatchError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `code` | `readonly` | `"theirs_too_new"` | `"theirs_too_old"` | `"invalid_meta"` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:237 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `ours` | `readonly` | `object` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:238 | | `ours.minCompat` | `public` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:240 | | `ours.wireVersion` | `public` | `number` | - | - | 
packages/taskora/dist/types-KwmL9wkN.d.mts:239 | | `ours.writtenBy` | `public` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:241 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `theirs` | `readonly` | `object` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:243 | | `theirs.minCompat` | `public` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:245 | | `theirs.wireVersion` | `public` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:244 | | `theirs.writtenAt` | `public` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:247 | | `theirs.writtenBy` | `public` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:246 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. 
```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/Signature.md' --- [Taskora API Reference](../../index) / [index](../index) / Signature # Class: Signature\<TInput, TOutput\> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:302 A serializable, composable snapshot of a task invocation. 
Created via `task.s()` or `task.s(data)`. ## Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ## Constructors ### Constructor > **new Signature**<`TInput`, `TOutput`>(`task`, `data?`): `Signature`<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:308 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](Task)<`TInput`, `TOutput`> | | `data?` | `TInput` | #### Returns `Signature`<`TInput`, `TOutput`> ## Properties | Property | Modifier | Type | Description | Defined in | | ------ | ------ | ------ | ------ | ------ | | `_tag` | `readonly` | `"signature"` | **`Internal`** | packages/taskora/dist/types-KwmL9wkN.d.mts:304 | | `_v` | `readonly` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:307 | | `boundData` | `readonly` | `TInput` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:306 | | `task` | `readonly` | [`Task`](Task)<`TInput`, `TOutput`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:305 | ## Accessors ### hasBoundData #### Get Signature > **get** **hasBoundData**(): `boolean` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:309 ##### Returns `boolean` ## Methods ### dispatch() > **dispatch**(`options?`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:311 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | [`WorkflowDispatchOptions`](../interfaces/WorkflowDispatchOptions) | #### Returns [`WorkflowHandle`](WorkflowHandle)<`TOutput`> *** ### pipe() > **pipe**<`TNext`>(`next`): [`ChainSignature`](ChainSignature)<`TInput`, `TNext`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:310 #### Type Parameters | Type Parameter | | ------ | | `TNext` | #### Parameters | Parameter | Type | | ------ | ------ | | `next` | `Signature`<`TOutput`, `TNext`> | #### Returns [`ChainSignature`](ChainSignature)<`TInput`, `TNext`> --- --- url: >- 
https://kravetsone.github.io/taskora/taskora/api/taskora/classes/StalledError.md --- [Taskora API Reference](../../index) / [index](../index) / StalledError # Class: StalledError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:193 ## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new StalledError**(`jobId`, `options?`): `StalledError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:195 #### Parameters | Parameter | Type | | ------ | ------ | | `jobId` | `string` | | `options?` | `ErrorOptions` | #### Returns `StalledError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `jobId` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:194 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. 
If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. 
const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/classes/Task.md' --- [Taskora API Reference](../../index) / [index](../index) / Task # Class: Task\<TInput, TOutput\> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:433 ## Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ## Constructors ### Constructor > **new Task**<`TInput`, `TOutput`>(`deps`, `name`, `handler`, `config`, `schemas?`, `migrationConfig?`, `middleware?`): `Task`<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:457 #### Parameters | Parameter | Type | | ------ | ------ | | `deps` | `TaskDeps` | | `name` | `string` | | `handler` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | | `config` | `TaskConfig` | | `schemas?` | { `input?`: `StandardSchemaV1`<`unknown`, `TInput`>; `output?`: `StandardSchemaV1`<`unknown`, `TOutput`>; } | | `schemas.input?` | `StandardSchemaV1`<`unknown`, `TInput`> | | `schemas.output?` | 
`StandardSchemaV1`<`unknown`, `TOutput`> | | `migrationConfig?` | `TaskMigrationConfig` | | `middleware?` | [`Middleware`](../namespaces/Taskora/type-aliases/Middleware)\[] | #### Returns `Task`<`TInput`, `TOutput`> ## Properties | Property | Modifier | Type | Description | Defined in | | ------ | ------ | ------ | ------ | ------ | | `config` | `public` | `TaskConfig` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:442 | | `handler` | `public` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | Fields below are conceptually readonly for external consumers but are mutable internally so that [Task.\_mergeImplementation](#mergeimplementation) can upgrade a contract-only registration with a real handler and worker-side config. Do not mutate these from outside the class. | packages/taskora/dist/types-KwmL9wkN.d.mts:441 | | `hasHandler` | `public` | `boolean` | `true` if a real handler is attached and this process should run a worker loop for this task. `false` for contract-only registrations (`app.register`) where the handler lives in a separate process. 
| packages/taskora/dist/types-KwmL9wkN.d.mts:454 | | `inputSchema?` | `public` | `StandardSchemaV1`<`unknown`, `TInput`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:443 | | `middleware` | `public` | [`Middleware`](../namespaces/Taskora/type-aliases/Middleware)\[] | - | packages/taskora/dist/types-KwmL9wkN.d.mts:448 | | `migrations` | `public` | `Map`<`number`, `MigrationFn`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:447 | | `name` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:434 | | `outputSchema?` | `public` | `StandardSchemaV1`<`unknown`, `TOutput`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:444 | | `since` | `public` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:446 | | `version` | `public` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:445 | ## Methods ### \_getDeps() > **\_getDeps**(): `TaskDeps` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:462 **`Internal`** — used by workflow dispatch to extract adapter/serializer #### Returns `TaskDeps` *** ### \_mergeImplementation() > **\_mergeImplementation**(`opts`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:470 **`Internal`** — called by [App.implement](App#implement) to upgrade a contract-only registration (`register()` result) with a real handler and worker-side configuration. Merges only the fields that make sense to override post-registration; leaves `name` and existing schemas intact unless explicitly replaced. 
#### Parameters | Parameter | Type | | ------ | ------ | | `opts` | { `config?`: `Partial`<`TaskConfig`>; `handler`: (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`>; `middleware?`: [`Middleware`](../namespaces/Taskora/type-aliases/Middleware)\[]; `migrationConfig?`: `TaskMigrationConfig`; } | | `opts.config?` | `Partial`<`TaskConfig`> | | `opts.handler` | (`data`, `ctx`) => `TOutput` | `Promise`<`TOutput`> | | `opts.middleware?` | [`Middleware`](../namespaces/Taskora/type-aliases/Middleware)\[] | | `opts.migrationConfig?` | `TaskMigrationConfig` | #### Returns `void` *** ### \_patchDeps() > **\_patchDeps**(`patch`): () => `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:485 **`Internal`** — used by TestRunner to swap adapter for testing #### Parameters | Parameter | Type | | ------ | ------ | | `patch` | `Partial`<`TaskDeps`> | #### Returns () => `void` *** ### chunk() > **chunk**(`items`, `options`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:481 Split items into chunks, process each chunk as a parallel group, chunks run sequentially. 
#### Parameters | Parameter | Type | | ------ | ------ | | `items` | `TInput`\[] | | `options` | { `size`: `number`; } | | `options.size` | `number` | #### Returns [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> *** ### dispatch() > **dispatch**(`data`, `options?`): [`ResultHandle`](ResultHandle)<`TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:490 #### Parameters | Parameter | Type | | ------ | ------ | | `data` | `TInput` | | `options?` | [`DispatchOptions`](../namespaces/Taskora/interfaces/DispatchOptions) | #### Returns [`ResultHandle`](ResultHandle)<`TOutput`> *** ### dispatchEvent() > **dispatchEvent**(`event`, `data`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:488 **`Internal`** — used by App to dispatch stream events #### Parameters | Parameter | Type | | ------ | ------ | | `event` | `string` | | `data` | `unknown` | #### Returns `void` *** ### dispatchMany() > **dispatchMany**(`jobs`): [`ResultHandle`](ResultHandle)<`TOutput`>\[] Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:491 #### Parameters | Parameter | Type | | ------ | ------ | | `jobs` | `object`\[] | #### Returns [`ResultHandle`](ResultHandle)<`TOutput`>\[] *** ### hasEventListeners() > **hasEventListeners**(`event?`): `boolean` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:489 #### Parameters | Parameter | Type | | ------ | ------ | | `event?` | `string` | #### Returns `boolean` *** ### inspectCollect() > **inspectCollect**(`collectKey`): `Promise`<[`CollectBufferInfo`](../namespaces/Taskora/interfaces/CollectBufferInfo)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:531 Stats-only view of the current collect buffer for `collectKey`. Returns `null` if no active buffer exists. Cheaper than [peekCollect](#peekcollect) — reads only the meta hash, not the item payloads. **Throws** if this task was not declared with `collect: { ... }`. 
#### Parameters | Parameter | Type | | ------ | ------ | | `collectKey` | `string` | #### Returns `Promise`<[`CollectBufferInfo`](../namespaces/Taskora/interfaces/CollectBufferInfo)> *** ### map() > **map**(`items`): [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:479 Dispatch one job per item in parallel. Sugar for group(...items.map(i => task.s(i))). #### Parameters | Parameter | Type | | ------ | ------ | | `items` | `TInput`\[] | #### Returns [`WorkflowHandle`](WorkflowHandle)<`TOutput`\[]> *** ### on() > **on**<`K`>(`event`, `handler`): () => `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:486 #### Type Parameters | Type Parameter | | ------ | | `K` *extends* `"cancelled"` | `"active"` | `"completed"` | `"failed"` | `"retrying"` | `"progress"` | `"stalled"` | #### Parameters | Parameter | Type | | ------ | ------ | | `event` | `K` | | `handler` | (`data`) => `void` | #### Returns () => `void` *** ### peekCollect() > **peekCollect**(`collectKey`): `Promise`<`TInput`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:523 Read the current collect buffer for `collectKey` as a deserialized array of input items. Non-destructive: does not drain the buffer, reset the debounce timer, or alter any flush-trigger state. Items are returned in dispatch order (oldest → newest). Returns an empty array if the buffer is empty, was just flushed, or the key was never dispatched to — callers do not need to distinguish these cases. **Snapshot consistency:** the underlying read is a single atomic command (Redis `LRANGE` / memory `slice`), so the returned array always reflects a coherent point in time, even under concurrent dispatches or a flush running in parallel. **Throws** if this task was not declared with `collect: { ... }`. A task without collect has no buffer, and silently returning `[]` would mask a config bug. 
**Deserialization failures** on individual items are skipped rather than thrown — peek is a read/debugging tool, and partial corruption in one item should not poison the whole snapshot. #### Parameters | Parameter | Type | | ------ | ------ | | `collectKey` | `string` | #### Returns `Promise`<`TInput`\[]> #### Example ```ts const pending = await ingestMessagesTask.peekCollect(`chat:${chatId}`) const prompt = buildPrompt(longTermMemory, pending) ``` *** ### s() > **s**(`data?`): [`Signature`](Signature)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:477 Create a Signature — a composable snapshot of this task invocation. #### Parameters | Parameter | Type | | ------ | ------ | | `data?` | `TInput` | #### Returns [`Signature`](Signature)<`TInput`, `TOutput`> --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/TaskoraError.md --- [Taskora API Reference](../../index) / [index](../index) / TaskoraError # Class: TaskoraError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:172 ## Extends * `Error` ## Extended by * [`CancelledError`](CancelledError) * [`DuplicateJobError`](DuplicateJobError) * [`ExpiredError`](ExpiredError) * [`JobFailedError`](JobFailedError) * [`RetryError`](RetryError) * [`SchemaVersionMismatchError`](SchemaVersionMismatchError) * [`StalledError`](StalledError) * [`ThrottledError`](ThrottledError) * [`TimeoutError`](TimeoutError) * [`ValidationError`](ValidationError) ## Constructors ### Constructor > **new TaskoraError**(`message`, `options?`): `TaskoraError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:173 #### Parameters | Parameter | Type | | ------ | ------ | | `message` | `string` | | `options?` | `ErrorOptions` | #### Returns `TaskoraError` #### Overrides `Error.constructor` ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | `Error.cause` | 
documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `message` | `public` | `string` | - | `Error.message` | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | `Error.name` | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | `Error.stack` | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | `Error.stackTraceLimit` | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. 
For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from `Error.captureStackTrace` *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from `Error.prepareStackTrace` --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/test/classes/TestRunner.md' --- [Taskora API Reference](../../index) / [test](../index) / TestRunner # Class: TestRunner Defined in: packages/taskora/dist/test/index.d.mts:18 ## Constructors ### Constructor > **new TestRunner**(`options?`): `TestRunner` Defined in: packages/taskora/dist/test/index.d.mts:25 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | [`TestRunnerOptions`](../interfaces/TestRunnerOptions) | #### Returns `TestRunner` ## Properties | Property | Modifier | Type | Defined in | | ------ | ------ | ------ | ------ | | `app` | `readonly` | [`App`](../../taskora/classes/App) | packages/taskora/dist/test/index.d.mts:19 | ## Accessors ### jobs #### Get Signature > **get** **jobs**(): `object`\[] Defined in: packages/taskora/dist/test/index.d.mts:62 All jobs with their current state. 
##### Returns `object`\[] *** ### steps #### Get Signature > **get** **steps**(): `object`\[] Defined in: packages/taskora/dist/test/index.d.mts:69 Workflow step execution history. ##### Returns `object`\[] ## Methods ### advanceTime() > **advanceTime**(`duration`): `Promise`<`void`> Defined in: packages/taskora/dist/test/index.d.mts:52 Advance virtual time, promote delayed jobs, and process all due work. #### Parameters | Parameter | Type | | ------ | ------ | | `duration` | [`Duration`](../../taskora/type-aliases/Duration) | #### Returns `Promise`<`void`> *** ### clear() > **clear**(): `void` Defined in: packages/taskora/dist/test/index.d.mts:76 Reset all in-memory state between tests. #### Returns `void` *** ### dispatch() > **dispatch**<`TInput`, `TOutput`>(`task`, `data`, `options?`): [`ResultHandle`](../../taskora/classes/ResultHandle)<`TOutput`> Defined in: packages/taskora/dist/test/index.d.mts:48 Dispatch a job into the in-memory queue. The task must be registered on `runner.app` (via `app.task()`, `importTask()`, or `from`). #### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> | | `data` | `TInput` | | `options?` | [`DispatchOptions`](../../taskora/namespaces/Taskora/interfaces/DispatchOptions) | #### Returns [`ResultHandle`](../../taskora/classes/ResultHandle)<`TOutput`> *** ### dispose() > **dispose**(): `void` Defined in: packages/taskora/dist/test/index.d.mts:78 Restore original adapters when using `from` mode. Call in afterEach. #### Returns `void` *** ### execute() > **execute**<`TInput`, `TOutput`>(`task`, `data`, `options?`): `Promise`<[`ExecutionResult`](../interfaces/ExecutionResult)<`TOutput`>> Defined in: packages/taskora/dist/test/index.d.mts:36 Full queue pipeline: dispatch → process → auto-advance retries → return result + metadata. 
Auto-imports the task if not already registered on `runner.app`. Processes ALL tasks (including sub-tasks dispatched by the handler). #### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> | | `data` | `TInput` | | `options?` | [`DispatchOptions`](../../taskora/namespaces/Taskora/interfaces/DispatchOptions) | #### Returns `Promise`<[`ExecutionResult`](../interfaces/ExecutionResult)<`TOutput`>> *** ### flush() > **flush**(`task`, `key?`): `Promise`<`void`> Defined in: packages/taskora/dist/test/index.d.mts:60 Force-flush collect buffers for a task and process resulting jobs. #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](../../taskora/classes/Task)<`unknown`, `unknown`> | | `key?` | `string` | #### Returns `Promise`<`void`> *** ### importTask() > **importTask**<`TInput`, `TOutput`>(`task`): [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/test/index.d.mts:43 Import a production task into the test runner. Copies handler, config, middleware, schemas — rebinds to the memory adapter. Returns a new Task registered on `runner.app`. Not needed when using `createTestRunner({ from: app })`. #### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> | #### Returns [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> *** ### processAll() > **processAll**(): `Promise`<`void`> Defined in: packages/taskora/dist/test/index.d.mts:56 Process all waiting jobs across all tasks until the queues are drained. 
#### Returns `Promise`<`void`> *** ### run() > **run**<`TInput`, `TOutput`>(`task`, `data`): `Promise`<`TOutput`> Defined in: packages/taskora/dist/test/index.d.mts:30 Execute a task handler directly with retry support. Bypasses the queue — runs the handler inline and returns the result. #### Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | #### Parameters | Parameter | Type | | ------ | ------ | | `task` | [`Task`](../../taskora/classes/Task)<`TInput`, `TOutput`> | | `data` | `TInput` | #### Returns `Promise`<`TOutput`> --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ThrottledError.md --- [Taskora API Reference](../../index) / [index](../index) / ThrottledError # Class: ThrottledError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:207 ## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new ThrottledError**(`jobId`, `key`): `ThrottledError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:210 #### Parameters | Parameter | Type | | ------ | ------ | | `jobId` | `string` | | `key` | `string` | #### Returns `ThrottledError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `jobId` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:208 | | `key` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:209 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | 
documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. 
const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/TimeoutError.md --- [Taskora API Reference](../../index) / [index](../index) / TimeoutError # Class: TimeoutError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:202 ## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new TimeoutError**(`jobId`, `timeoutMs`): `TimeoutError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:205 #### Parameters | Parameter | Type | | ------ | ------ | | `jobId` | `string` | | `timeoutMs` | `number` | #### Returns `TimeoutError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | 
documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `jobId` | `readonly` | `string` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:203 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `timeoutMs` | `readonly` | `number` | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:204 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. | [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. 
The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/ValidationError.md --- [Taskora API Reference](../../index) / [index](../index) / ValidationError # Class: ValidationError Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:175 ## Extends * [`TaskoraError`](TaskoraError) ## Constructors ### Constructor > **new ValidationError**(`message`, `issues`, `options?`): `ValidationError` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:180 #### Parameters | Parameter | Type | | 
------ | ------ | | `message` | `string` | | `issues` | readonly `object`\[] | | `options?` | `ErrorOptions` | #### Returns `ValidationError` #### Overrides [`TaskoraError`](TaskoraError).[`constructor`](TaskoraError#constructor) ## Properties | Property | Modifier | Type | Description | Inherited from | Defined in | | ------ | ------ | ------ | ------ | ------ | ------ | | `cause?` | `public` | `unknown` | - | [`TaskoraError`](TaskoraError).[`cause`](TaskoraError#cause) | documentation/node\_modules/typescript/lib/lib.es2022.error.d.ts:26 | | `issues` | `readonly` | readonly `object`\[] | - | - | packages/taskora/dist/types-KwmL9wkN.d.mts:176 | | `message` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`message`](TaskoraError#message) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1077 | | `name` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`name`](TaskoraError#name) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1076 | | `stack?` | `public` | `string` | - | [`TaskoraError`](TaskoraError).[`stack`](TaskoraError#stack) | documentation/node\_modules/typescript/lib/lib.es5.d.ts:1078 | | `stackTraceLimit` | `static` | `number` | The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`). The default value is `10` but may be set to any valid JavaScript number. Changes will affect any stack trace captured *after* the value has been changed. If set to a non-number value, or set to a negative number, stack traces will not capture any frames. 
| [`TaskoraError`](TaskoraError).[`stackTraceLimit`](TaskoraError#stacktracelimit) | node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:68 | ## Methods ### captureStackTrace() > `static` **captureStackTrace**(`targetObject`, `constructorOpt?`): `void` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:52 Creates a `.stack` property on `targetObject`, which when accessed returns a string representing the location in the code at which `Error.captureStackTrace()` was called. ```js const myObject = {}; Error.captureStackTrace(myObject); myObject.stack; // Similar to `new Error().stack` ``` The first line of the trace will be prefixed with `${myObject.name}: ${myObject.message}`. The optional `constructorOpt` argument accepts a function. If given, all frames above `constructorOpt`, including `constructorOpt`, will be omitted from the generated stack trace. The `constructorOpt` argument is useful for hiding implementation details of error generation from the user. For instance: ```js function a() { b(); } function b() { c(); } function c() { // Create an error without stack trace to avoid calculating the stack trace twice. 
const { stackTraceLimit } = Error; Error.stackTraceLimit = 0; const error = new Error(); Error.stackTraceLimit = stackTraceLimit; // Capture the stack trace above function b Error.captureStackTrace(error, b); // Neither function c, nor b is included in the stack trace throw error; } a(); ``` #### Parameters | Parameter | Type | | ------ | ------ | | `targetObject` | `object` | | `constructorOpt?` | `Function` | #### Returns `void` #### Inherited from [`TaskoraError`](TaskoraError).[`captureStackTrace`](TaskoraError#capturestacktrace) *** ### prepareStackTrace() > `static` **prepareStackTrace**(`err`, `stackTraces`): `any` Defined in: node\_modules/.bun/@types+node@22.19.17/node\_modules/@types/node/globals.d.ts:56 #### Parameters | Parameter | Type | | ------ | ------ | | `err` | `Error` | | `stackTraces` | `CallSite`\[] | #### Returns `any` #### See https://v8.dev/docs/stack-trace-api#customizing-stack-traces #### Inherited from [`TaskoraError`](TaskoraError).[`prepareStackTrace`](TaskoraError#preparestacktrace) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/classes/WorkflowHandle.md --- [Taskora API Reference](../../index) / [index](../index) / WorkflowHandle # Class: WorkflowHandle\<TOutput> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:376 ## Type Parameters | Type Parameter | | ------ | | `TOutput` | ## Constructors ### Constructor > **new WorkflowHandle**<`TOutput`>(`workflowId`, `graph`, `adapter`, `serializer`, `dispatchPromise`): `WorkflowHandle`<`TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:383 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | | `graph` | [`WorkflowGraph`](../interfaces/WorkflowGraph) | | `adapter` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) | | `serializer` | [`Serializer`](../namespaces/Taskora/interfaces/Serializer) | | `dispatchPromise` | `Promise`<`void`> | #### Returns `WorkflowHandle`<`TOutput`> ## Properties | Property | Modifier | Type |
Defined in | | ------ | ------ | ------ | ------ | | `workflowId` | `readonly` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:377 | ## Accessors ### result #### Get Signature > **get** **result**(): `Promise`<`TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:386 ##### Returns `Promise`<`TOutput`> ## Methods ### cancel() > **cancel**(`options?`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:387 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | { `reason?`: `string`; } | | `options.reason?` | `string` | #### Returns `Promise`<`void`> *** ### ensureDispatched() > **ensureDispatched**(): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:385 #### Returns `Promise`<`void`> *** ### getState() > **getState**(): `Promise`<[`WorkflowState`](../namespaces/Taskora/type-aliases/WorkflowState)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:390 #### Returns `Promise`<[`WorkflowState`](../namespaces/Taskora/type-aliases/WorkflowState)> *** ### then() > **then**<`TResult1`, `TResult2`>(`onfulfilled?`, `onrejected?`): `Promise`<`TResult1` | `TResult2`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:384 #### Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TResult1` | `string` | | `TResult2` | `never` | #### Parameters | Parameter | Type | | ------ | ------ | | `onfulfilled?` | (`value`) => `TResult1` | `PromiseLike`<`TResult1`> | | `onrejected?` | (`reason`) => `TResult2` | `PromiseLike`<`TResult2`> | #### Returns `Promise`<`TResult1` | `TResult2`> --- --- url: 'https://kravetsone.github.io/taskora/taskora/comparison.md' description: >- How Taskora compares to BullMQ, Agenda, and pg-boss — feature-by-feature breakdown for Node.js task queues. --- # Comparison Choosing a task queue for Node.js? Here's an honest feature comparison. 
## Feature Matrix

| Feature | Taskora | BullMQ | Agenda | pg-boss |
|---|:---:|:---:|:---:|:---:|
| **TypeScript-native** | Full | Full | Full (v6+) | Full |
| **Schema validation** | Standard Schema | — | — | — |
| **Middleware** | Koa-style onion model | Event hooks | — | — |
| **Task contracts** | defineTask / staticContract | — | — | — |
| **Job versioning & migrations** | Full migration chains | — | — | — |
| **In-memory test adapter** | Built-in | — | — | — |
| **Virtual time testing** | Built-in | — | — | — |
| **Debounce** | Built-in | Built-in | — | — |
| **Throttle** | Built-in | Pattern-based | — | Singleton slots |
| **Deduplication** | Built-in | Built-in | — | Singleton keys |
| **Batch collect** | Built-in | Pro (paid) | — | batchSize |
| **TTL / expiration** | Built-in | — | — | Built-in |
| **Singleton mode** | Built-in | — | — | singletonKey |
| **Concurrency per key** | Built-in | Pro (Groups) | — | — |
| **Cancellation** | Instant (pub/sub) | AbortSignal | — | Built-in |
| **Cron / scheduling** | Built-in + leader election | Built-in | Built-in | Built-in |
| **Workflows** | chain, group, chord | FlowProducer (tree) | — | — |
| **Admin dashboard** | @taskora/board | Bull Board / Taskforce | Agendash | Community |
| **Inspector / DLQ** | Built-in | QueueEvents + dashboard | Agendash | Built-in |
| **Retry + backoff** | 4 strategies + selective | Built-in | Exponential | retryLimit/delay |
| **Events** | Redis Streams (fan-out) | QueueEvents | — | Pub/sub |
| **Backend** | Redis, Memory | Redis | MongoDB | PostgreSQL |

## Developer Experience

Beyond the feature checklist, day-to-day DX is where Taskora pulls ahead.

### Typed from dispatch to result

```ts
const handle = sendEmailTask.dispatch({ to: "a@b.com", body: "hi" });
//    ^? ResultHandle<{ messageId: string }>

const { messageId } = await handle.result;
//      ^? string — inferred end-to-end
```

`dispatch()` is synchronous — it returns a typed `ResultHandle` immediately.
`await handle` confirms enqueue, `handle.result` resolves to the typed output. In BullMQ you need a separate `QueueEvents` instance and `job.waitUntilFinished(queueEvents)` with no output typing.

### Manual retry from the handler

```ts
handler: async (data, ctx) => {
  const res = await fetch(data.url);
  if (res.status === 429) {
    const retryAfter = Number(res.headers.get("retry-after")) * 1000;
    throw ctx.retry({ delay: retryAfter, reason: "rate-limited" });
  }
  return res.json();
}
```

`ctx.retry()` lets you control exactly when the next attempt fires — useful for respecting upstream `Retry-After` headers. In BullMQ you throw an error and hope the static backoff config aligns.

### Test without infrastructure

```ts
const runner = createTestRunner({ from: app }); // patches prod app to memory

const { result, state, logs } = await runner.execute(sendEmailTask, {
  to: "a@b.com",
  body: "hi",
});

expect(state).toBe("completed");
expect(result.messageId).toBeDefined();
```

`createTestRunner({ from: app })` takes your production app with all its tasks, middleware, and migrations — patches every task to use an in-memory backend. No Docker, no Redis, no mocks. Virtual time lets you test delayed jobs and retry backoff without `setTimeout`.

### Zero-config ioredis

BullMQ requires `maxRetriesPerRequest: null` and `enableReadyCheck: false` — without them, a reconnect kills the worker loop. Taskora's blocking commands run inside internal retry loops, so ioredis defaults just work. One less footgun on day one.

### One object, all capabilities

BullMQ splits every concern into a separate class — `Queue` for dispatch, `Worker` for processing, `QueueEvents` for listening, `FlowProducer` for flows. You wire them together by name strings, and types don't flow between them.
Taskora puts everything on the task:

```ts
const sendEmailTask = taskora.task("send-email", {
  schema: { in: EmailInput, out: EmailOutput },
  retry: { attempts: 3, backoff: "exponential" },
  timeout: "30s",
  concurrency: 10,
  schedule: { cron: "0 9 * * MON" },
  middleware: [withSentry],
  handler: async (data, ctx) => {
    ctx.log.info("sending", { to: data.to });
    const id = await mailer.send(data);
    return { messageId: id };
  },
});

// dispatch, events, workflows, inspect — all from the same object
sendEmailTask.dispatch({ to: "a@b.com", body: "hi" });
sendEmailTask.on("completed", ({ result }) => { /* typed */ });
sendEmailTask.s({ to: "a@b.com", body: "hi" }).pipe(logTask.s());
```

Schema, retry, timeout, concurrency, schedule, middleware, handler — one definition, one place to look. No name-string wiring, no scattered config across multiple class instances.

### Everything composes with types

Contracts, workflows, middleware, events — types flow through every layer:

```ts
// contract — no handler import needed on the producer
const resizeContract = defineTask({
  name: "resize-image",
  schema: { in: ImageInput, out: ImageOutput },
});

// workflow — output types chain automatically
const pipeline = chain(
  downloadTask.s({ url }), // → Buffer
  resizeTask.s(),          // Buffer → ImageOutput
  uploadTask.s(),          // ImageOutput → { cdn: string }
);

const { cdn } = await pipeline.dispatch().result;
//      ^? string
```

No `as unknown as` casts, no manual generics, no separate type packages. Define the schema once — types propagate from dispatch through middleware, handler, events, workflows, contracts, and result handles.

## Key Differences

### vs BullMQ

BullMQ is the most established Redis-based queue. Taskora differs in:

* **Workflows** — BullMQ has FlowProducer for parent-child tree structures.
  Taskora has full DAG composition: [chain](./features/workflows.md) (sequential pipeline), [group](./features/workflows.md#group) (parallel fan-out), [chord](./features/workflows.md#chord) (fan-in merge), plus `.map()` / `.chunk()` sugar — all type-safe end-to-end. `WorkflowHandle` gives you `.result`, `.cancel()`, `.getState()`.
* **Task contracts** — Taskora has [`defineTask()`](./guide/contracts.md) and `staticContract()` for producer/consumer split. Define a contract once, `register()` on the producer side, `implement()` on the worker. Types flow through, no handler import needed on the producer. BullMQ leaves this to userland.
* **Middleware** — BullMQ uses event hooks for cross-cutting concerns. Taskora has a composable [Koa-style middleware](./features/middleware.md) pipeline where you can transform `ctx.data` before the handler and inspect `ctx.result` after.
* **Schema validation** — Taskora validates input/output with any [Standard Schema](./guide/tasks.md#with-schema-validation) library (Zod, Valibot, ArkType) — on both dispatch and worker side. BullMQ leaves this to userland.
* **Versioning** — BullMQ has no built-in migration story. Taskora has full [migration chains](./features/versioning.md): bump version, add a migration function, deploy — old jobs in the queue are migrated automatically before the handler runs.
* **Testing** — BullMQ requires Redis (or redis-memory-server / ioredis-mock). Taskora ships [`taskora/test`](./testing/index.md) with an in-memory adapter and virtual time — no Docker, no mocks.
* **Batch collect** — Accumulating items into batches is a Pro (paid) feature in BullMQ. It's [built into Taskora](./features/batch-processing.md) with three flush triggers (debounce, maxSize, maxWait).
* **TTL / singleton** — Taskora has first-class [TTL with fail/discard policies](./features/ttl-expiration.md), singleton mode (one active job per task), and concurrency-per-key.
  BullMQ has worker-level timeout but no dispatch-time TTL; group-based concurrency is Pro only.
* **Cancellation** — Both support cancellation, but Taskora uses Redis pub/sub for [instant cancel delivery](./features/cancellation.md) to the worker mid-execution — no polling. Active jobs get `ctx.signal` aborted immediately, plus an `onCancel` cleanup hook.
* **Dashboard** — BullMQ has the open-source Bull Board and paid Taskforce.sh. Taskora ships [`@taskora/board`](./operations/board.md) — a built-in admin dashboard with workflow DAG visualization, throughput charts, DLQ management, job timeline, and real-time SSE updates.

### vs Agenda

Agenda is a MongoDB-based scheduler, strong for cron-style recurring jobs:

* **Database** — Agenda is MongoDB-only. Taskora uses Redis (with PostgreSQL planned).
* **Focus** — Agenda is primarily a job scheduler. Taskora is a full task queue with scheduling as one feature among many — workflows, batch collect, middleware, contracts, versioning, and more.
* **Type safety** — Agenda v6 is TypeScript-native, but doesn't offer typed dispatch-to-result flows. Taskora's `ResultHandle` gives you typed `.result`, and task contracts carry input/output types across service boundaries.
* **Dashboard** — Agenda has Agendash. Taskora has `@taskora/board` with real-time SSE, workflow DAG visualization, and DLQ management.
* **Testing** — Agenda requires a MongoDB instance. Taskora tests run in pure memory with virtual time.

### vs pg-boss

pg-boss runs on PostgreSQL — useful if you don't want to add Redis:

* **Database** — pg-boss leverages PostgreSQL for queueing (SKIP LOCKED). Taskora uses Redis for performance-critical dequeue (BZPOPMIN, Lua scripts).
* **Throttle/dedup** — pg-boss has singleton-based throttling and deduplication. Taskora offers separate [debounce, throttle, and dedup](./features/flow-control.md) primitives with configurable keys and windows.
* **Workflows** — pg-boss has no built-in workflow composition.
  Taskora has type-safe chain, group, and chord with DAG execution.
* **Testing** — pg-boss requires a PostgreSQL instance. Taskora tests run in pure memory.
* **Dashboard** — Both have admin dashboards. Taskora's `@taskora/board` includes workflow DAG visualization and real-time SSE.

## Migration from BullMQ

| BullMQ | Taskora |
|---|---|
| `new Queue(name)` + `new Worker(name, fn)` | `createTaskora({ adapter })` + `taskora.task(name, fn)` |
| `queue.add(name, data)` | `task.dispatch(data)` |
| `job.waitUntilFinished(events)` | `await handle.result` |
| `worker.on("completed", fn)` | `task.on("completed", fn)` |
| `worker.concurrency` option | `taskora.task(name, { concurrency })` |
| `job.progress(value)` | `ctx.progress(value)` |
| `job.log(msg)` | `ctx.log.info(msg)` |
| `new FlowProducer().add(tree)` | `chain(a.s(data), b.s()).dispatch()` |
| `QueueScheduler` | Built-in, automatic |
| Separate producer / worker files | `defineTask()` + `register()` / `implement()` |

### Steps

1. Create an instance with `createTaskora({ adapter: redisAdapter(...) })`
2. Define tasks with `taskora.task(name, { handler, ... })`
3. Replace `queue.add()` with `task.dispatch(data)`
4. Replace event listeners with `task.on()` or `taskora.on()`
5. Replace result polling with `await handle.result`
6. Move retry, concurrency, timeout config into task options
7. Add `await taskora.start()` / `await taskora.close()`
8. Write tests using `createTestRunner()` — no Redis needed
9. Replace `FlowProducer` trees with `chain()` / `group()` / `chord()`
10. If splitting services — replace shared Queue instances with [task contracts](./guide/contracts.md)

### ioredis connection config

BullMQ requires `maxRetriesPerRequest: null` and `enableReadyCheck: false` on the ioredis client — without them, a long reconnect can surface an uncaught `MaxRetriesPerRequestError` that kills the worker loop.
**You can drop both options when moving to Taskora.** Taskora's blocking commands (`BZPOPMIN`, `XREAD BLOCK`) run inside retry loops in the worker, event reader, and job waiter, so a transient ioredis error is swallowed and retried on the next tick. ioredis defaults are safe.

See [Adapters → ioredis driver](./guide/adapters.md#ioredis-driver-default) for the full explanation.

---

---
url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs/consumers.md'
---

# Consumers

A consumer is a Nest provider class that handles jobs for a specific task contract. You write it the same way you'd write any `@Injectable()` service, with full constructor DI — the only addition is the `@TaskConsumer(contract)` decorator and a `process(data, ctx)` method.

## Basic consumer

```ts
import { TaskConsumer } from "@taskora/nestjs"
import type { InferInput, InferOutput, Taskora } from "taskora"

import { MailerService } from "./mailer.service"
import { sendEmailTask } from "@/tasks"

@TaskConsumer(sendEmailTask)
export class SendEmailConsumer {
  constructor(private readonly mailer: MailerService) {}

  async process(
    data: InferInput<typeof sendEmailTask>,
    ctx: Taskora.Context,
  ): Promise<InferOutput<typeof sendEmailTask>> {
    ctx.log.info("sending", { to: data.to })
    return this.mailer.send(data)
  }
}
```

Register it as a normal Nest provider:

```ts
@Module({
  providers: [SendEmailConsumer, MailerService],
})
export class EmailModule {}
```

That's everything. On `onApplicationBootstrap`, the `TaskoraExplorer` walks the DI graph, finds this class, calls `app.implement(sendEmailTask, (data, ctx) => instance.process(data, ctx))`, and then starts the app. Injected dependencies (`MailerService` here) stay live across every job run — you get one consumer instance for the lifetime of the process, exactly like every other `@Injectable()`.

### Why `InferInput` and not a hand-written type?

`defineTask` stores the input/output schema on the contract value.
`InferInput` reads the type directly off that value, so renaming or reshaping the schema updates the consumer's signature automatically. You could write `data: { to: string; subject: string }` by hand, but it drifts — one day someone adds `cc?: string[]` to the schema and the consumer quietly ignores it. ## `@TaskConsumer` options ```ts @TaskConsumer(sendEmailTask, { concurrency: 10, // parallel in-flight jobs for this task timeout: "30s", // aborts the handler + fails the job retry: { attempts: 5, backoff: "exponential", delay: 1000, maxDelay: 60_000, }, singleton: false, // "only one in-flight at a time globally" concurrencyLimit: undefined, // key-based rate limit — see dispatch options ttl: { max: "10m", onExpire: "fail" }, stall: { interval: 30_000, maxCount: 1 }, version: 2, // payload version for schema migrations since: 1, // oldest supported version app: "secondary", // multi-app routing — see below }) ``` These are the same fields `app.implement(contract, handler, options)` accepts in bare taskora, minus the worker-side-only `handler`, `onCancel`, `middleware`, and `migrate` fields (which arrive via different DI mechanisms — see the sections below). ## Multi-app routing `@TaskConsumer(contract, { app: 'secondary' })` binds the consumer to a specific named app registered via `TaskoraModule.forRoot({ name: 'secondary' })`. The explorer filters consumers by the `app` option, so each named app only picks up its own consumers. ```ts @TaskConsumer(sendEmailTask) // → default app class DefaultEmailConsumer {} @TaskConsumer(sendEmailTask, { app: "secondary" }) // → "secondary" app class SecondaryEmailConsumer {} ``` Both consumers handle `sendEmailTask`, but they run in isolated apps with independent Redis connections. A dispatch via `TaskoraRef.for(sendEmailTask)` reaches the default consumer; a dispatch via `@InjectTaskoraRef('secondary')` → `.for(sendEmailTask)` reaches the secondary. See [Deployment > Multi-app](./deployment#multi-app) for the full pattern. 
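The per-app filtering described above can be modeled in a few lines of plain TypeScript. This is an illustrative sketch of the matching rule only, not `@taskora/nestjs` internals; `ConsumerMeta` and `consumersFor` are made-up names:

```typescript
// Sketch of the explorer's per-app consumer matching (hypothetical names).
// A consumer's metadata is its contract name plus an optional `app` option;
// each named app binds only consumers whose `app` option matches its own name.
type ConsumerMeta = { contract: string; app?: string }

function consumersFor(appName: string | undefined, all: ConsumerMeta[]): ConsumerMeta[] {
  // `undefined` stands for the default app; consumers without an `app` option belong to it.
  return all.filter((c) => c.app === appName)
}

const registered: ConsumerMeta[] = [
  { contract: "send-email" },                   // → default app
  { contract: "send-email", app: "secondary" }, // → "secondary" app
]

consumersFor(undefined, registered)   // picks up only the default consumer
consumersFor("secondary", registered) // picks up only the "secondary" consumer
```

Both consumers share a contract name, yet neither app double-binds: the filter runs before any `implement()` call, so isolation falls out of the metadata alone.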
## Using `ctx` inside `process()`

The second argument to `process()` is taskora's `Context` — a per-job handle with progress, logs, retry helpers, and cancellation:

```ts
async process(data: InferInput<typeof processImageTask>, ctx: Taskora.Context) {
  ctx.log.info("starting", { url: data.url })
  await ctx.progress({ phase: "downloading" })

  const buffer = await fetch(data.url).then((r) => r.arrayBuffer())
  await ctx.progress({ phase: "transforming", percent: 30 })

  if (ctx.signal.aborted) throw ctx.signal.reason // fail fast on cancel

  const result = await this.pipeline.transform(buffer, ctx.signal)
  await ctx.progress({ phase: "uploading", percent: 90 })

  const url = await this.storage.upload(result)
  ctx.log.info("done", { url })
  return { url, width: result.width, height: result.height }
}
```

Key fields:

* **`ctx.log.info/warn/error`** — structured logs stored against the job, visible in the inspector and the board.
* **`ctx.progress(value)`** — number or object, surfaced in the board and via `inspector.find()`.
* **`ctx.signal`** — `AbortSignal` that fires on cancellation / timeout / stall recovery. Pass it to `fetch`, child processes, or manual `await` loops so handlers exit promptly.
* **`ctx.retry({ delay?, reason? })`** — returns a `RetryError` you can `throw` to reschedule the job immediately (bypassing `retryOn`/`noRetryOn` filters).
* **`ctx.attempt`** — current attempt number (1-indexed).

See the main [Job Context](/guide/job-context) guide for the full surface.

## Schema validation

If the contract has a Standard Schema (Zod, Valibot, ArkType) attached via `defineTask({ input, output })`, taskora runs validation on both sides automatically:

* **Producer**: `TaskoraRef.for(contract).dispatch(data)` validates `data` against `input` before enqueueing (can be disabled globally via `validateOnDispatch: false` or per-call via `dispatch(data, { skipValidation: true })`).
* **Worker**: after deserialization and migration, taskora validates `data` against `input` again before calling `process()`. If the contract also has an `output` schema, the value returned from `process()` is validated before being stored as the result.

Validation errors are regular `ValidationError` throws — they go through the normal retry machinery and end up in the DLQ if `attempts` is exhausted. You don't need to validate inside `process()` — by the time it runs, `data` is already typed and proven to match the schema.

## `@OnTaskEvent` — event bindings

Method-level decorator that wires a consumer method to a per-task event on the same Task that the consumer handles:

```ts
import { OnTaskEvent, TaskConsumer } from "@taskora/nestjs"

@TaskConsumer(sendEmailTask)
export class SendEmailConsumer {
  constructor(
    private readonly mailer: MailerService,
    private readonly metrics: MetricsService,
  ) {}

  async process(data: InferInput<typeof sendEmailTask>) {
    return this.mailer.send(data)
  }

  @OnTaskEvent("completed")
  onDone(evt: Taskora.TaskEventMap<InferOutput<typeof sendEmailTask>>["completed"]) {
    this.metrics.counter("email.sent").inc()
    this.metrics.histogram("email.duration_ms").observe(Number(evt.duration))
  }

  @OnTaskEvent("failed")
  onFail(evt: Taskora.TaskEventMap["failed"]) {
    this.metrics.counter("email.failed", { reason: evt.error }).inc()
  }

  @OnTaskEvent("retrying")
  onRetry(evt: Taskora.TaskEventMap["retrying"]) {
    this.metrics.counter("email.retried").inc()
  }
}
```

Valid event names match the keys of `Taskora.TaskEventMap`:

| Event | Payload | When |
|---|---|---|
| `completed` | `{ result, duration, attempt }` | Handler returned successfully |
| `failed` | `{ error, attempt, willRetry }` | Handler threw (with or without retry) |
| `retrying` | `{ error, attempt, nextAttemptAt }` | Handler threw and is being retried |
| `progress` | `{ value, timestamp }` | Handler called `ctx.progress(...)` |
| `active` | `{ jobId, attempt }` | Job transitioned to active state |
| `stalled` | `{ count, action }` | Stall detection recovered or failed a job |
| `cancelled` | `{ reason, cancelledAt }` | `handle.cancel()` was called |

### DI still works in event handlers

`@OnTaskEvent` methods run on the same consumer instance as `process()`. The explorer binds each method via `instance[method].bind(instance)`, so `this.metrics` and `this.mailer` resolve normally inside event handlers — there's no detached context, no separate instance per event.

### App-level events

`@OnTaskEvent` only wires **per-task** events. For cross-task app events (`worker:ready`, `worker:error`, `task:completed` across all tasks), inject the raw `App` and subscribe manually from a dedicated service:

```ts
import { Injectable, OnModuleInit } from "@nestjs/common"
import { InjectApp } from "@taskora/nestjs"
import type { App } from "taskora"

@Injectable()
export class WorkerHealthService implements OnModuleInit {
  constructor(@InjectApp() private readonly app: App) {}

  onModuleInit() {
    this.app.on("worker:ready", () => this.markHealthy())
    this.app.on("worker:error", (err) => this.reportError(err))
  }
}
```

## Lifecycle ordering

The explorer runs its discovery pass **inside** `onApplicationBootstrap`. The sequence for every named app is:

1. Nest resolves all providers (constructors run, DI is wired).
2. Explorer walks providers via `DiscoveryService.getProviders()`.
3. For each `@TaskConsumer`, explorer calls `app.implement(contract, handler, options)`.
4. For each `@OnTaskEvent` method on that consumer, explorer calls `task.on(event, boundHandler)`.
5. Explorer calls `app.start()` (unless `autoStart: false`).
6. Worker loop begins pulling jobs.

The ordering is guaranteed: **all `implement()` calls finish before `start()` runs**. You can't get a race where a worker picks up a job for a contract whose handler hasn't been attached yet.

On `onApplicationShutdown` the explorer awaits `app.close()`, which drains in-flight jobs up to the stall timeout and then returns.
Make sure `app.enableShutdownHooks()` is called in `main.ts` so SIGTERM triggers the shutdown sequence.

## Consumer-owned scheduling

`@TaskConsumer` options do **not** accept a `schedule` field at the moment. To schedule a consumer's task, call `app.schedule()` from a separate `OnModuleInit` provider, or define the schedule at the task contract level via `defineTask`. Schedule discovery on consumers is tracked for a future release — see the [Scheduling](/features/scheduling) docs for the direct-taskora path.

## Error handling inside `process()`

Throw normally. taskora's retry machinery handles everything:

```ts
async process(data: InferInput<typeof importTask>) {
  try {
    return await this.importer.run(data)
  } catch (err) {
    if (err instanceof TransientError) {
      // Let taskora retry per the consumer's retry config
      throw err
    }
    if (err instanceof PermanentError) {
      // Opt out of retries — goes to DLQ immediately
      throw new RetryError({ retry: false, reason: err.message })
    }
    throw err // default — respects retry config
  }
}
```

* **Regular throws** go through the `retry.attempts` / `backoff` / `retryOn` / `noRetryOn` machinery.
* **`ctx.retry({ delay, reason })`** returns a `RetryError` that bypasses filters — always retries, with the given delay.
* **`new RetryError({ retry: false })`** short-circuits the retry machinery entirely, sending the job to the DLQ on first failure.
* **`TimeoutError`** (when the consumer's `timeout` fires) is **not** retried by default — add `"TimeoutError"` to `retryOn` if you want it retried.

## Testing consumers

Consumers are regular Nest providers, so `@nestjs/testing`'s `Test.createTestingModule` just works.
For end-to-end coverage that actually runs the handler, use [`@taskora/nestjs/testing`](./testing): ```ts import { createTaskoraTestHarness } from "@taskora/nestjs/testing" it("sends the welcome email", async () => { const harness = await createTaskoraTestHarness({ providers: [SendEmailConsumer, MailerService], }) const result = await harness.execute(sendEmailTask, { to: "alice@example.com", subject: "Welcome", }) expect(result.state).toBe("completed") expect(result.result?.messageId).toBeDefined() await harness.close() }) ``` See the [Testing](./testing) page for the full patterns — the harness runs the real explorer + real consumers + real DI in-memory. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/contracts.md' --- # Contracts A **task contract** is a pure, serializable declaration of a task — its name, input/output schemas, and defaults — with no runtime dependency on `App`, `Worker`, or `Adapter`. Contracts let you split task *declaration* from task *implementation*. ## When to reach for contracts The inline form — `taskora.task("send-email", { handler, input, output })` — ties declaration and implementation together in a single call. That's the **default and correct choice for most projects**. If producer and worker live in the same process, inline tasks are simpler, give you the same type safety, and compose into workflows identically. Don't use contracts just because they sound cleaner. Contracts exist to solve one specific problem: **the producer needs to dispatch a job, but can't import the handler**. This happens when: * The worker's handler has heavy runtime dependencies (`sharp`, `puppeteer`, `ffmpeg`, native bindings, large ML models) that you don't want in the API server bundle. * The producer runs somewhere the handler physically can't run — edge runtime, browser, serverless with cold-start budget. 
* Multiple services dispatch to the same worker pool, and each service would otherwise need to duplicate the input/output types manually. In those cases, inline tasks force the producer to import the handler file and its entire transitive dependency graph. Contracts fix that: the contract is the shared surface, handlers live only where they're needed. If none of the above applies, stay with inline `taskora.task()` — contracts add a layer of indirection you don't need. ## `defineTask()` Create a contract with runtime schemas. Types are inferred from any [Standard Schema](https://standardschema.dev) compatible library — Zod, Valibot, ArkType, etc. ```ts // contracts/tasks.ts — shared between producer and worker import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string(), body: z.string(), }), output: z.object({ messageId: z.string() }), retry: { attempts: 3, backoff: "exponential" }, timeout: "30s", }) ``` `defineTask` returns a `TaskContract` — a plain object with the task's metadata. It has no `dispatch()` method by itself: to dispatch, you bind the contract to an `App` via `register()` or `implement()`. Contract fields that make sense on both sides of the split (retry, timeout, stall, version) can live in the contract as defaults. Worker-side overrides are applied at `implement()` time. ## `staticContract()` For bundle-size-sensitive producers — edge runtimes, browsers, serverless functions where shipping Zod or Valibot at runtime is a cost you don't want — `staticContract` creates a typeless contract with no runtime schemas: ```ts import { staticContract } from "taskora" export const sendEmailTask = staticContract< { to: string; subject: string; body: string }, { messageId: string } >({ name: "send-email" }) ``` Same API surface as `defineTask`, but the input/output types live purely at the type level. 
The worker still validates — workers always run schema validation before the handler, independent of what the producer shipped. The safety net is at the worker boundary, not the producer.

## Producer side — `taskora.register()`

A producer process never runs a handler. It registers the contract to get a dispatchable `BoundTask`:

```ts
// api/server.ts — producer (no handler imports)
import { createTaskora } from "taskora"
import { redisAdapter } from "taskora/redis"
import { sendEmailTask } from "../contracts/tasks.js"

const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!) })

const sendEmail = taskora.register(sendEmailTask)

// Fully typed: TypeScript enforces { to, subject, body }
const handle = sendEmail.dispatch({
  to: "alice@example.com",
  subject: "Welcome",
  body: "...",
})

const result = await handle.result // { messageId: string }
```

`register()` is **idempotent by task name**: calling it twice for the same contract returns the same underlying `BoundTask`. Existing tasks declared inline via `taskora.task()` can also be wrapped — `register()` never overwrites or conflicts.

Producer processes can still call `taskora.start()`. The worker loop simply skips tasks that have no handler — dispatch and event subscription keep working as normal.

## Worker side — `taskora.implement()`

A worker attaches a handler to a contract with `taskora.implement()`. Three call forms, pick whichever fits:

### Bare handler

The common case. Data and result types are inferred from the contract's schemas.

```ts
// worker/main.ts
import { createTaskora } from "taskora"
import { redisAdapter } from "taskora/redis"
import { sendEmailTask } from "../contracts/tasks.js"
import { mailer } from "./mailer.js"

const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!
}) taskora.implement(sendEmailTask, async (data, ctx) => { ctx.log.info("sending", { to: data.to }) const { id } = await mailer.send(data) return { messageId: id } }) await taskora.start() ``` ### Handler + worker-side options Worker-only config (concurrency, middleware, onCancel, migrations, singleton, concurrencyLimit, ttl) goes in the third argument. ```ts taskora.implement( processImageTask, async (data, ctx) => { const result = await sharp(data.url).resize(data.width).toBuffer() return { key: await s3.put(result) } }, { concurrency: 4, middleware: [withTracing(), withMetrics()], version: 3, migrate: [ (v1) => ({ ...v1, width: v1.w }), (v2) => ({ ...v2, width: v2.width ?? 800 }), ], }, ) ``` ### Object form Required for [collect tasks](/guide/tasks#collect-tasks) (the handler signature is `(items: I[], ctx) => ...` instead of `(data: I, ctx) => ...`), and cleaner when the config is larger than the handler body. ```ts taskora.implement(batchEmailTask, { collect: { key: "user-emails", delay: "5s", maxSize: 100 }, handler: async (items, ctx) => { await mailer.sendBatch(items) return { sent: items.length } }, }) ``` `implement()` throws if called twice for the same contract in the same process. Calling `implement()` after `register()` is **not** a double-implement — it's the intended upgrade path. The existing `BoundTask` returned by `register()` is updated in place and keeps working. 
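The register/implement semantics above (idempotent `register()`, single `implement()` per process, in-place upgrade) can be sketched as a small registry in plain TypeScript. This is a minimal model under stated assumptions, not taskora's implementation; `Bound`, `register`, and `implement` here are local stand-ins:

```typescript
// Hypothetical model of register()/implement() semantics, keyed by task name.
type Bound = { name: string; handler?: (data: unknown) => unknown }

const tasks = new Map<string, Bound>()

function register(name: string): Bound {
  const existing = tasks.get(name)
  if (existing) return existing // idempotent: same BoundTask comes back
  const bound: Bound = { name }
  tasks.set(name, bound)
  return bound
}

function implement(name: string, handler: (data: unknown) => unknown): Bound {
  const existing = tasks.get(name)
  // A second implement() for the same contract in the same process is an error...
  if (existing?.handler) throw new Error(`"${name}" is already implemented in this process`)
  // ...but implement() after register() is the intended upgrade path:
  // the already-handed-out object gains a handler in place and keeps working.
  const bound = existing ?? { name }
  bound.handler = handler
  tasks.set(name, bound)
  return bound
}

const a = register("send-email")
const b = register("send-email")        // same underlying object as `a`
implement("send-email", (data) => data) // upgrade, not a conflict
// a.handler is now set: earlier register() references were updated in place
```

The key design point the model captures: dispatch-side references never need to be re-fetched after the worker attaches its handler.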
## Workflow composition from contracts

Once registered or implemented, contract-based `BoundTask`s compose into workflows identically to inline tasks:

```ts
import { chain, group, chord } from "taskora"
import { fetchUserTask, renderTemplateTask, sendEmailTask } from "../contracts/tasks.js"

const fetchUser = taskora.register(fetchUserTask)
const renderTemplate = taskora.register(renderTemplateTask)
const sendEmail = taskora.register(sendEmailTask)

// Producer-side composition — no handler code needed
const handle = chain(
  fetchUser.s({ id: "42" }),
  renderTemplate.s(),
  sendEmail.s(),
).dispatch()

await handle.result
```

The workflow's individual jobs run on whichever process has implemented each contract. A single workflow can span multiple worker deployments — jobs are matched to workers by task name.

## Validation knob — `validateOnDispatch`

By default, `dispatch()` validates input against the task's Standard Schema before enqueueing. Two ways to disable:

```ts
// Global — disables for every dispatch in this app
const taskora = createTaskora({
  adapter: redisAdapter(url),
  validateOnDispatch: false,
})

// Per-call — overrides the global default
sendEmail.dispatch(data, { skipValidation: true })
```

**Worker-side validation is unaffected.** Workers always run schema validation before the handler, so job data is still checked at some boundary. Disable producer-side validation when:

* The producer has already validated upstream (e.g. tRPC / a REST framework validated the request body).
* The producer uses `staticContract()` and has no schema to run.
* You're profiling and have measured validation cost as a bottleneck.

## Type inference helpers — `InferInput` / `InferOutput`

Contracts (and tasks, and bound tasks) carry `TInput` / `TOutput` as generic parameters. Pulling those out by hand means typing out `T extends Task<infer I, any> ? I : never` every time. `taskora` ships two helpers that do it for you — and they work on everything that carries a task input or output.
```ts
import { defineTask, type InferInput, type InferOutput } from "taskora"
import { z } from "zod"

export const sendEmailTask = defineTask({
  name: "send-email",
  input: z.object({ to: z.string(), subject: z.string() }),
  output: z.object({ messageId: z.string() }),
})

type EmailInput = InferInput<typeof sendEmailTask> // { to: string; subject: string }
type EmailResult = InferOutput<typeof sendEmailTask> // { messageId: string }
```

One helper, every carrier. `InferInput` / `InferOutput` work on:

* `TaskContract` — from `defineTask()` / `staticContract()`
* `Task` — from `taskora.task()`
* `BoundTask` — from `taskora.register()` / `taskora.implement()`
* `ResultHandle` — output only; `InferInput` resolves to `never`
* `WorkflowHandle` — output only
* `Signature` / `ChainSignature` / `GroupSignature` / `ChordSignature` — workflow building blocks. Groups and chords resolve to `never` under `InferInput` since they have no single input type.

```ts
// Reuse the inferred type in a request handler — no duplication.
const sendEmail = taskora.register(sendEmailTask) // contracts have no dispatch() of their own

app.post("/email", async (req, res) => {
  const payload: InferInput<typeof sendEmailTask> = req.body
  const handle = sendEmail.dispatch(payload)
  res.json({ jobId: await handle })
})
```

### Avoiding name collisions with other libraries

`InferInput` / `InferOutput` are common names — Zod, ArkType, and other schema libraries export their own versions. If you've already imported `InferInput` from somewhere else, reach for the namespaced form:

```ts
import type { Taskora } from "taskora"

type EmailInput = Taskora.InferInput<typeof sendEmailTask>
type EmailResult = Taskora.InferOutput<typeof sendEmailTask>
```

The namespace form resolves to the same types — no functional difference, just a safer import when you're working in a project that already owns the unqualified names.
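Helpers like these are ordinary TypeScript conditional types. A self-contained sketch of the underlying mechanism, with local `Contract`, `InputOf`, and `OutputOf` stand-ins that are illustrative rather than taskora's actual definitions:

```typescript
// A minimal contract shape carrying input/output types at the type level only.
type Contract<I, O> = { name: string; _input?: I; _output?: O }

// Conditional types pull the generic parameters back out of any carrier value.
type InputOf<T> = T extends Contract<infer I, any> ? I : never
type OutputOf<T> = T extends Contract<any, infer O> ? O : never

const sendEmail: Contract<{ to: string; subject: string }, { messageId: string }> = {
  name: "send-email",
}

// The signature tracks the contract: reshape the contract's input type and
// this function's parameter type updates with it, preventing silent drift.
function handle(data: InputOf<typeof sendEmail>): OutputOf<typeof sendEmail> {
  return { messageId: `msg-${data.to}` }
}

handle({ to: "alice@example.com", subject: "Welcome" })
```

Because the types ride along with the value (`typeof sendEmail`), no hand-written interface ever needs to be kept in sync with the schema.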
## When to pick which declaration style | | Inline `taskora.task()` | Contract-based | |---|---|---| | Monolith (API + workers in one process) | ✅ simpler | also works | | Web API + separate worker deployment | handler leaks into API bundle | ✅ clean split | | Multi-package monorepo (`workspace:*`) | requires duplicating types | ✅ contracts go in `packages/contracts` | | Edge runtime / browser producer | ❌ handler deps | ✅ `staticContract()` | | Quick prototype | ✅ less ceremony | overkill | Both styles are first-class. You can freely mix them — for example, declare internal tasks inline and public API tasks as contracts. See [Splitting Services](/guide/splitting-services) for how to physically organize contract files across your codebase. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/conventions.md' --- # Conventions Taskora conventions that keep codebases consistent as they grow. Follow these and your tasks will be easy to find, read, and debug. ## Naming ### Instance: `taskora` Always name your instance `taskora`. Never `app` — too generic, conflicts with Express/Fastify/Hono. ```ts import { createTaskora } from "taskora" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost"), }) ``` ### Tasks: `*Task` suffix Every task variable ends with `Task`. This instantly distinguishes tasks from regular functions and services. ```ts // ✓ Clear what's a task const sendEmailTask = taskora.task("send-email", { ... }) const processImageTask = taskora.task("process-image", { ... }) const chargeCardTask = taskora.task("charge-card", { ... }) // ✗ Ambiguous — is this a function or a task? const sendEmail = taskora.task("send-email", { ... }) ``` This matters most when you have both: ```ts // The function that actually sends email async function sendEmail(to: string, body: string) { ... 
} // The task that wraps it for async processing const sendEmailTask = taskora.task("send-email", { handler: async (data) => sendEmail(data.to, data.body), }) ``` ### Task string names: kebab-case The first argument to `taskora.task()` is the task's identity — it maps to Redis keys and must be stable. ```ts // ✓ kebab-case taskora.task("send-email", { ... }) taskora.task("process-image", { ... }) taskora.task("generate-invoice", { ... }) // ✗ Other styles taskora.task("sendEmail", { ... }) // camelCase taskora.task("send_email", { ... }) // snake_case taskora.task("SendEmail", { ... }) // PascalCase ``` ### Workflow handles: descriptive names ```ts const onboardingFlow = chain( createUserTask.s({ name: "John" }), sendWelcomeEmailTask.s(), notifySlackTask.s(), ).dispatch() const batchResult = processImageTask.map(images) ``` ## Project Structure ### Recommended layout ``` src/ ├── taskora.ts ← createTaskora + config ├── tasks/ │ ├── email.ts ← sendEmailTask, sendWelcomeEmailTask │ ├── billing.ts ← chargeCardTask, generateInvoiceTask │ ├── notifications.ts ← notifySlackTask, sendPushTask │ └── onboarding.ts ← onboarding workflow (chain of tasks above) └── worker.ts ← taskora.start() ``` ### With a Telegram bot (GramIO) ``` src/ ├── taskora.ts ← createTaskora + config ├── bot.ts ← GramIO bot instance ├── tasks/ │ ├── notifications.ts ← sendTelegramMessageTask, broadcastTask │ ├── moderation.ts ← checkSpamTask, banUserTask │ ├── media.ts ← processPhotoTask, generateThumbnailTask │ └── onboarding.ts ← welcome message chain ├── bot/ │ ├── commands/ │ │ ├── start.ts ← /start — dispatches welcome task │ │ └── settings.ts ← /settings │ └── callbacks/ │ └── subscribe.ts ← inline button → dispatches task └── index.ts ← bot.start() + taskora.start() ``` ```ts // src/tasks/notifications.ts import { taskora } from "../taskora.js" export const sendTelegramMessageTask = taskora.task("send-telegram-message", { retry: { attempts: 3, backoff: "exponential", delay: 2000 }, handler: 
async (data: { chatId: number; text: string }) => { // Rate-limited — offloaded from bot handler to task queue await bot.api.sendMessage({ chat_id: data.chatId, text: data.text }) return { sent: true } }, }) export const broadcastTask = taskora.task("broadcast", { timeout: 300_000, // 5 min for large broadcasts handler: async (data: { chatIds: number[]; text: string }, ctx) => { let sent = 0 for (const chatId of data.chatIds) { if (ctx.signal.aborted) break sendTelegramMessageTask.dispatch({ chatId, text: data.text }) sent++ ctx.progress(sent / data.chatIds.length) } return { sent } }, }) ``` ```ts // src/bot/commands/start.ts — bot handler dispatches task import { sendTelegramMessageTask } from "../../tasks/notifications.js" bot.command("start", async (context) => { // Respond immediately await context.send("Welcome! Setting things up...") // Heavy work goes to the queue sendTelegramMessageTask.dispatch({ chatId: context.chatId, text: "Your account is ready! Here's what you can do...", }) }) ``` ### With a REST API (Elysia) ``` src/ ├── taskora.ts ← createTaskora + config ├── server.ts ← Elysia instance ├── tasks/ │ ├── email.ts ← sendEmailTask, sendInvoiceTask │ ├── reports.ts ← generateReportTask, exportCsvTask │ ├── webhooks.ts ← deliverWebhookTask │ └── order-fulfillment.ts ← chain: validate → charge → ship → notify ├── routes/ │ ├── orders.ts ← POST /orders → dispatches tasks │ └── reports.ts ← POST /reports → dispatches task, returns handle └── index.ts ← server.listen() + taskora.start() ``` ```ts // src/routes/orders.ts — Elysia route dispatches workflow import { Elysia, t } from "elysia" import { chain } from "taskora" import { validateOrderTask, chargePaymentTask, sendConfirmationTask } from "../tasks/orders.js" export const orderRoutes = new Elysia({ prefix: "/orders" }) .post("/", async ({ body }) => { // Dispatch workflow, return immediately const handle = chain( validateOrderTask.s(body), chargePaymentTask.s(), sendConfirmationTask.s(), ).dispatch() 
await handle // ensure dispatched return { orderId: handle.workflowId, status: "processing" } }, { body: t.Object({ items: t.Array(t.Object({ sku: t.String(), qty: t.Number() })), email: t.String(), }), }) ``` ```ts // src/routes/reports.ts — long-running task with polling import { Elysia, t } from "elysia" import { generateReportTask } from "../tasks/reports.js" export const reportRoutes = new Elysia({ prefix: "/reports" }) .post("/", async ({ body }) => { const handle = generateReportTask.dispatch(body) const id = await handle return { reportId: id, status: "generating" } }) .get("/:id/status", async ({ params }) => { const state = await generateReportTask.inspect(params.id) return { status: state } }) ``` ### taskora.ts — single source Define `createTaskora` once, export the instance. All task files import from here. ```ts // src/taskora.ts import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" export const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!), defaults: { retry: { attempts: 3, backoff: "exponential", delay: 1000 }, timeout: 30_000, }, }) ``` ### One file per domain Group tasks by business domain, not by technical concern. ```ts // src/tasks/email.ts import { taskora } from "../taskora.js" export const sendEmailTask = taskora.task("send-email", { handler: async (data: { to: string; subject: string; body: string }) => { return await mailer.send(data) }, }) export const sendWelcomeEmailTask = taskora.task("send-welcome-email", { handler: async (data: { userId: string }) => { const user = await db.users.find(data.userId) return await mailer.send({ to: user.email, subject: "Welcome!", body: "..." 
  })
  },
})
```

### worker.ts — import tasks, start

```ts
// src/worker.ts
import { taskora } from "./taskora.js"

// Import all task files so they register on the taskora instance
import "./tasks/email.js"
import "./tasks/billing.js"
import "./tasks/notifications.js"

await taskora.start()
console.log("Worker started")
```

## Handler Patterns

### Return serializable values

Handler results are serialized (JSON by default). Return plain objects, arrays, strings, numbers.

```ts
// ✓ Plain object
handler: async (data) => {
  const user = await db.users.create(data)
  return { id: user.id, email: user.email }
}

// ✗ Class instances, functions, circular refs
handler: async (data) => {
  return new User(data) // won't serialize correctly
}
```

### Use `ctx.log`, not `console.log`

Logs via `ctx.log` are stored with the job and visible in the inspector.

```ts
handler: async (data, ctx) => {
  ctx.log.info("Processing started", { imageUrl: data.url })
  const result = await processImage(data.url)
  ctx.log.info("Processing complete", { size: result.size })
  return result
}
```

### Check `ctx.signal` in long operations

Pass the abort signal to APIs that support it:

```ts
handler: async (data, ctx) => {
  const response = await fetch(data.url, { signal: ctx.signal })
  const body = await response.json()
  return body
}
```

For manual checks in loops:

```ts
handler: async (data, ctx) => {
  let processed = 0
  for (const item of data.items) {
    if (ctx.signal.aborted) break
    await processItem(item)
    ctx.progress(++processed / data.items.length)
  }
}
```

### Design for idempotency

Tasks may be retried. Write handlers that are safe to run twice with the same input.
```ts // ✓ Idempotent — uses upsert handler: async (data) => { await db.users.upsert({ email: data.email }, { name: data.name }) } // ✗ Not idempotent — creates duplicates on retry handler: async (data) => { await db.users.insert({ email: data.email, name: data.name }) } ``` ### Guard clauses first Check preconditions at the top, fail fast: ```ts handler: async (data, ctx) => { const user = await db.users.find(data.userId) if (!user) throw new Error(`User ${data.userId} not found`) if (!user.verified) throw new Error(`User ${data.userId} not verified`) // Happy path return await sendEmail(user.email, data.template) } ``` ## Workflow Patterns ### When to use what | Pattern | Use when | |---|---| | `chain(a, b, c)` | Steps must run in order, output flows forward | | `group(a, b, c)` | Steps are independent, run in parallel | | `chord([a, b], c)` | Parallel steps feed into a merge/reduce step | | `task.map(items)` | Same task, many inputs, all parallel | | `task.chunk(items, { size })` | Same task, many inputs, batched concurrency | ### Keep chains short Long chains are hard to debug. If a chain has more than 5 steps, consider splitting into sub-workflows or rethinking the pipeline. ```ts // ✓ Focused chain const processOrderFlow = chain( validateOrderTask.s(orderData), chargePaymentTask.s(), sendConfirmationTask.s(), ).dispatch() // ✗ Too long — where did it fail? const flow = chain(a.s(), b.s(), c.s(), d.s(), e.s(), f.s(), g.s(), h.s()).dispatch() ``` ### Name your workflows Store workflow compositions in variables with descriptive names: ```ts // src/workflows/onboarding.ts import { chain } from "taskora" import { createUserTask, sendWelcomeEmailTask, setupDefaultsTask } from "../tasks/users.js" export function dispatchOnboarding(data: { name: string; email: string }) { return chain( createUserTask.s(data), sendWelcomeEmailTask.s(), setupDefaultsTask.s(), ).dispatch() } ``` ### Error boundaries A failed step fails the entire workflow. 
For steps that can fail independently, use separate workflows or handle errors in the task handler: ```ts // This handler won't break the chain if Slack is down const notifySlackTask = taskora.task("notify-slack", { handler: async (data) => { try { await slack.send(data.channel, data.message) return { sent: true } } catch { return { sent: false } // swallow error, don't break workflow } }, }) ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/core-concepts.md' --- # Core Concepts Understanding the building blocks of taskora. ## Architecture ``` ┌────────────────────────────────────────┐ │ App │ │ ┌──────┐ ┌──────┐ ┌──────┐ │ │ │Task A│ │Task B│ │Task C│ ... │ │ └──┬───┘ └──┬───┘ └──┬───┘ │ │ │ │ │ │ │ └────────┴────────┘ │ │ │ │ │ ┌─────┴─────┐ │ │ │ Adapter │ ← abstraction │ │ └─────┬─────┘ │ │ │ │ │ ┌────────┴────────┐ │ │ │ Redis / Memory │ │ │ └─────────────────┘ │ └────────────────────────────────────────┘ ``` ### App The taskora instance is the central registry. It holds your adapter, serializer, tasks, middleware, schedules, and configuration defaults. You create one per process. ### Task A `Task` is a named handler with configuration. It defines what happens when a job is processed — plus retry logic, timeout, concurrency, middleware, and more. ### Worker When you call `taskora.start()`, a `Worker` is created for each registered task. Workers use blocking dequeue (BZPOPMIN) to efficiently pull jobs from their task's queue — no polling. ### Adapter The `Adapter` is the storage abstraction layer. The core `taskora` package never imports `ioredis` or `pg` directly. 
This means: * You only install the driver you need * Swapping backends requires zero handler changes * Testing uses the in-memory adapter — no Docker needed ## Job States A job moves through a well-defined state machine: | State | Description | |---|---| | `waiting` | Queued, ready to be picked up by a worker | | `delayed` | Scheduled for future processing (delay or retry backoff) | | `active` | Currently being processed by a worker | | `completed` | Handler returned successfully | | `failed` | Handler threw and all retries exhausted | | `retrying` | Handler threw but will be retried (moves to `delayed`) | | `cancelled` | Explicitly cancelled via `handle.cancel()` | | `expired` | TTL exceeded before processing started | ### State Transitions ``` dispatch() ──→ waiting ──→ active ──→ completed │ │ (delay option) ├──→ failed │ │ ▼ ├──→ retrying ──→ delayed ──→ waiting delayed │ ├──→ cancelled │ └──→ expired ``` ## Connection Lifecycle ```ts const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379") }) // Adapter connects lazily on first dispatch sendEmailTask.dispatch({ to: "user@example.com", subject: "Hi" }) // Or start workers (connects + starts processing) await taskora.start() // Graceful shutdown — finishes active jobs, disconnects await taskora.close() ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/testing/cross-runtime.md' --- # Cross-Runtime CI [![Test](https://github.com/kravetsone/taskora/actions/workflows/test.yml/badge.svg)](https://github.com/kravetsone/taskora/actions/workflows/test.yml) Taskora officially supports **three JavaScript runtimes** (Node.js, Bun, Deno) and **two Redis client implementations** (ioredis and Bun's native `Bun.RedisClient`). Every commit and every pull request runs the complete integration suite against every valid combination — in parallel on GitHub Actions — before any code can land, and again before any version can ship. This page documents what is tested, how, and why the matrix exists. 
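To make "every valid combination" concrete, here is a small illustrative sketch (not CI code; the real matrix lives in `test.yml`) that enumerates runtime × driver pairs and filters out the one impossible cell:

```typescript
// Illustrative reconstruction of the CI matrix: runtimes × drivers,
// minus the impossible cell (Bun's native client only exists under Bun).
const runtimes = ["node24", "node20", "bun", "deno"] as const
const drivers = ["ioredis", "bun"] as const

type Cell = { runtime: (typeof runtimes)[number]; driver: (typeof drivers)[number] }

const cells: Cell[] = runtimes
  .flatMap((runtime) => drivers.map((driver): Cell => ({ runtime, driver })))
  // Bun.RedisClient is a Bun-runtime global: invalid anywhere else
  .filter((c) => c.driver !== "bun" || c.runtime === "bun")

console.log(cells.length) // 5, matching the five-cell table below
```

Eight raw pairs collapse to five valid cells, which is exactly the table that follows.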
## The matrix Five cells, all mandatory, all gating merges and releases: | # | Runtime | Driver entry point | Purpose | |---|---|---|---| | 1 | **Node 24** | `taskora/redis` → `taskora/redis/ioredis` | Current Active LTS line (since October 2025). The primary Node baseline going forward. | | 2 | **Node 20** | `taskora/redis` → `taskora/redis/ioredis` | Maintenance-LTS line — kept in the matrix until Node 20 hits end-of-life in April 2026, after which this cell gets dropped. | | 3 | **Bun 1.3+** | `taskora/redis` → `taskora/redis/ioredis` | Proves ioredis works under Bun's Node-compatibility layer. Use this if you are already running Bun and have other ioredis-using code in your project. | | 4 | **Bun 1.3+** | `taskora/redis/bun` → native `Bun.RedisClient` | Zero-peer-dependency path for Bun-only deployments. Goes straight through Bun's built-in Redis client and drops the ioredis peer dep entirely. | | 5 | **Deno 2.x** | `taskora/redis` → `taskora/redis/ioredis` | ioredis imported via Deno's `npm:` specifier and `--node-modules-dir=auto`. Lets you use Taskora from Deno projects without maintaining a separate Deno-native Redis adapter. | Not on the matrix: **Deno + `Bun.RedisClient`**. Bun's native Redis client is a Bun-runtime global — it is unavailable under Deno by definition. ## What runs Every cell executes **the same ~300-test integration suite** against a real Redis container. The suite exercises every production code path the library offers: * **Lua scripts** — all 30+ atomic server-side scripts (enqueue, dequeue, ack, fail, nack, retry, stall detection, delayed promotion, cancel, cancel-finish, workflow advance, workflow fail, DLQ retry, DLQ retry-all, DLQ trim, completed trim, version distribution, debounce, throttle, dedup, collect, scheduler tick, leader lock acquire, leader lock renew, clean jobs, list job details). * **Blocking dequeue** — `BZPOPMIN`-driven poll loop with marker sorted sets for instant wake on new work. 
* **Stream events** — `XADD` producer, `XREAD BLOCK` consumer, per-event enrichment via `HMGET`. * **Pub/sub cancellation** — `cancel.lua` → `PUBLISH` → worker's subscribe handler → `AbortController.abort("cancelled")` → `onCancel` hook cleanup. * **Distributed leader election** — `SET NX PX` token-based leader for the scheduler across multiple app instances. * **Workflow DAG execution** — `chain`, `group`, `chord`, nested compositions, cascade cancellation, terminal-node aggregation. * **Flow control** — debounce (replace delayed job), throttle (rate-limited enqueue via atomic Lua), deduplicate (skip if existing in matching state), collect (batch accumulator with three flush triggers). * **Retention and DLQ** — automatic trim of old completed / failed jobs with age + count policies, single-job retry, bulk retry-all. * **Schema validation** — Standard Schema spec, post-migration validation, default-value application. * **Schedulers** — interval and cron, overlap prevention, missed-run catch-up policies, pause / resume / trigger / remove / update runtime management. * **Migrations** — tuple and record migrate forms, version gating (future nack, past fail), inspector version distribution. * **Multi-instance coordination** — work distribution, stall recovery between pods, cross-pod cancellation. Per-matrix-cell wall-clock is around 50–80 seconds. The whole matrix completes in under two minutes of real time because the cells run in parallel. ## How each runtime is invoked The test runner is **Vitest** on every cell — one test harness, one config, one report format. Only the entry point differs: ::: code-group ```bash [Node] npx vitest run ``` ```bash [Bun + ioredis] bunx --bun vitest run ``` ```bash [Bun + BunDriver] TASKORA_TEST_DRIVER=bun bunx --bun vitest run ``` ```bash [Deno] deno run -A --node-modules-dir=auto npm:vitest/vitest.mjs run ``` ::: Two details matter: 1. 
**`bunx --bun` is load-bearing.** Plain `bunx` follows the vitest binary's shebang and routes through Node, so the suite silently runs under Node even in a workflow that looks like it is testing Bun. Only `bunx --bun` forces execution under the Bun runtime. 2. **`TASKORA_TEST_DRIVER=bun` is the switch** between `taskora/redis/ioredis` and `taskora/redis/bun`. Internally the test suite imports from a thin shim (`tests/create-adapter.ts`) that picks the real factory at module load based on this env var. Test files themselves are runtime-agnostic. ## Redis provisioning CI uses a GitHub Actions `services: redis:7-alpine` sidecar and sets `REDIS_URL=redis://localhost:6379` for every cell. The test `globalSetup` honors a pre-existing `REDIS_URL` and skips its own testcontainers spin-up — so there is no Docker-in-Docker, no test-container lifecycle management per cell, and all five cells talk to the same shared Redis instance. Locally, if you already have Redis running on `localhost:6379` the same skip applies: tests will use it instead of spawning a new container. If you do not, the `@testcontainers/redis` package will automatically pull `redis:7-alpine` and start a fresh container for you. ## How it is wired into the workflow The workflow file is [`.github/workflows/test.yml`](https://github.com/kravetsone/taskora/blob/main/.github/workflows/test.yml) in the repo. Shape: ```yaml name: Test on: push: # every branch, every push pull_request: branches: [main] # external fork PRs workflow_call: # reused by publish.yml jobs: lint-build: # Biome + pkgroll, once, on Node 24 — runtime-agnostic.
test: strategy: fail-fast: false # never hide divergence by cancelling peers matrix: include: - { runtime: node, driver: ioredis, nodeVersion: 24, name: "Node 24 + ioredis" } - { runtime: node, driver: ioredis, nodeVersion: 20, name: "Node 20 + ioredis" } - { runtime: bun, driver: ioredis, nodeVersion: 24, name: "Bun + ioredis" } - { runtime: bun, driver: bun, nodeVersion: 24, name: "Bun + Bun.RedisClient" } - { runtime: deno, driver: ioredis, nodeVersion: 24, name: "Deno + ioredis" } services: redis: # shared sidecar for every cell image: redis:7-alpine env: REDIS_URL: redis://localhost:6379 TASKORA_TEST_DRIVER: ${{ matrix.driver }} steps: # ...runtime-specific vitest invocation... ``` And the publish workflow calls it: ```yaml # .github/workflows/publish.yml jobs: tests: uses: ./.github/workflows/test.yml # reusable-workflow call publish_package: needs: tests # gated on every matrix cell green # ...jsr publish, npm publish, GitHub release... ``` If any matrix cell is red, `publish_package` simply does not run. **A release cannot ship with any supported runtime × driver combination broken.** ## Running the matrix locally Every cell is reproducible on your machine. Start a Redis on `localhost:6399` (the tests honor the pre-set `REDIS_URL` and skip their own testcontainer), then: ```bash # Redis sidecar for the session docker run --rm -d --name taskora-test-redis -p 6399:6379 redis:7-alpine export REDIS_URL=redis://localhost:6399 # Matrix cells (each uses whatever Node your shell points to) bun run test:node # Node + ioredis bun run test:bun:ioredis # Bun + ioredis (bunx --bun vitest run) bun run test:bun # Bun + Bun.RedisClient (TASKORA_TEST_DRIVER=bun) bun run test:deno # Deno + ioredis (npm: specifier via --node-modules-dir=auto) # Or all four, one after the other bun run test:all ``` The scripts drive the same Vitest invocation CI uses. 
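The driver switch these scripts flip can be sketched as a tiny module-specifier picker. This is a hypothetical reconstruction, not the actual `tests/create-adapter.ts`:

```typescript
// Hypothetical sketch of the test shim: pick the adapter entry point
// from the TASKORA_TEST_DRIVER env var at module load.
function adapterSpecifier(env: Record<string, string | undefined> = process.env): string {
  return env.TASKORA_TEST_DRIVER === "bun"
    ? "taskora/redis/bun" // native Bun.RedisClient path
    : "taskora/redis/ioredis" // default: ioredis on every runtime
}

console.log(adapterSpecifier({ TASKORA_TEST_DRIVER: "bun" })) // "taskora/redis/bun"
```

The point of the design is that test files never branch on runtime; only this one module-load decision differs per cell.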
To reproduce the Node 20 and Node 24 cells specifically, switch your Node version via your version manager (`fnm`, `nvm`, `volta`, `mise`) and re-run `bun run test:node`. Cleanup when done: ```bash docker stop taskora-test-redis ``` ## Writing your own tests If you are looking for the library's **user-facing** test utilities — the in-memory adapter, virtual time, the `runner.run()` / `runner.execute()` helpers that let you test your handlers without Redis or Docker — that is covered in [Testing → Overview](/testing/). The cross-runtime CI documented on this page is about how Taskora itself is tested, not about how your application should test its tasks. --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/dead-letter-queue.md' --- # Dead Letter Queue & Retention Taskora automatically trims both completed and failed jobs to prevent unbounded memory growth. Zero config needed — safe defaults are always on. ## Defaults | | `maxAge` | `maxItems` | |---|---|---| | **completed** | `"1h"` | `100` | | **failed** | `"7d"` | `300` | Override if needed: ```ts const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), retention: { completed: { maxAge: "24h", maxItems: 1_000 }, failed: { maxAge: "30d", maxItems: 5_000 }, }, }) ``` Trim runs piggyback on the stall check interval — no extra timers. It removes the job hash, `:data`, `:result`, `:lock`, and `:logs` keys in batches of 100. ## Accessing the DLQ ```ts const dlq = taskora.deadLetters ``` The DLQ operates as a view over the existing `:failed` sorted set — no separate storage. ## Listing Failed Jobs ```ts const jobs = await dlq.list() const emailJobs = await dlq.list({ task: "send-email", limit: 50, offset: 0 }) ``` ## Retrying Jobs ### Single Job ```ts // Retry by job ID (cross-task search) await dlq.retry(jobId) // Retry with typed task reference await dlq.retry(sendEmailTask, jobId) ``` The job is atomically removed from the failed set, reset to `waiting`, and re-queued. 
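The three mutations above can be modeled in a few lines. This is a conceptual sketch with hypothetical in-memory structures; taskora performs the real version atomically in a single Lua script:

```typescript
// Conceptual model of a single DLQ retry. NOT atomic, unlike the real thing.
type JobState = "waiting" | "failed"
type Job = { id: string; state: JobState }

function retryFromDlq(failedSet: Map<string, Job>, queue: Job[], jobId: string): boolean {
  const job = failedSet.get(jobId)
  if (!job) return false // unknown or not failed: nothing to do
  failedSet.delete(jobId) // 1. remove from the failed set
  job.state = "waiting" // 2. reset state
  queue.push(job) // 3. re-queue
  return true
}
```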
### Retry All ```ts // Retry all failed jobs await dlq.retryAll() // Retry all failed jobs for a specific task await dlq.retryAll({ task: "send-email" }) ``` Batched internally (100 per Lua call) for safety. ## Workflow Example ```ts // 1. Check for accumulated failures const stats = await taskora.inspect().stats() console.log(`Failed jobs: ${stats.failed}`) // 2. List recent failures const failures = await taskora.deadLetters.list({ limit: 10 }) for (const job of failures) { console.log(`${job.id}: ${job.error} (attempt ${job.attempt})`) } // 3. Fix the underlying issue, then retry all await taskora.deadLetters.retryAll() ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs/deployment.md' --- # Deployment Covers production bootstrap patterns for Nest apps that use taskora: graceful shutdown, producer/worker split, multi-app, and the `main.ts` shapes that actually work. ## `main.ts` checklist Every production bootstrap needs these four things: ```ts import "reflect-metadata" // 1. reflect-metadata FIRST — before any decorated class evaluates import { NestFactory } from "@nestjs/core" import { AppModule } from "./app.module" async function bootstrap() { const app = await NestFactory.create(AppModule) app.enableShutdownHooks() // 2. enable shutdown hooks so SIGTERM drains jobs // 3. Optional: mount the board (see /integrations/nestjs/board) // const board = app.get(getBoardToken()) // app.use("/board", getRequestListener(board.fetch)) await app.listen(3000) // 4. Bind HTTP } bootstrap() ``` ### Why `enableShutdownHooks()` matters `TaskoraExplorer` implements `OnApplicationShutdown` — it awaits `app.close()` on shutdown, which drains active jobs up to the stall timeout. Nest only calls shutdown hooks if you opt in with `enableShutdownHooks()`, so without this line a `SIGTERM` tears the process down immediately, orphaning every in-flight job. The job gets rescheduled later by stall detection, but you eat a duplicate execution. 
### Why `import "reflect-metadata"` must be first Nest's constructor DI depends on decorator metadata. If your first decorated class evaluates before reflect-metadata patches the global `Reflect` object, the metadata lookups fall back to `undefined` and providers inject as undefined. This shows up as cryptic errors like `Cannot read property 'for' of undefined` inside services that inject `TaskoraRef`. The import **must** be the first line of `main.ts` — above every other import, including `NestFactory`. ## Monolith deployment One process, one image, one Nest module. Everything lives together — API controllers, services, consumers, all in the same Redis. ```ts // app.module.ts @Module({ imports: [ TaskoraModule.forRootAsync({ useFactory: () => ({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), defaults: { retry: { attempts: 3, backoff: "exponential" }, timeout: 30_000, }, stall: { interval: 30_000, maxCount: 1 }, retention: { completed: { maxAge: "1h", maxItems: 500 }, failed: { maxAge: "7d", maxItems: 1000 }, }, }), }), EmailModule, ImageModule, BillingModule, ], }) export class AppModule {} ``` Scale this vertically first — taskora's default concurrency is 1 per task; bump it via `@TaskConsumer(contract, { concurrency: 20 })` for I/O-bound work. Node's event loop handles thousands of in-flight promises, and the only moving part is Redis. When vertical scaling stops paying off, horizontal-scale the monolith behind a load balancer — every process subscribes to the same Redis and pulls work independently. ## Producer/worker split Two processes, shared contract package, same Redis. One scales on HTTP traffic, the other scales on queue depth. 
### Shared contracts package ``` services/ ├── api/ │ └── src/ │ ├── main.ts │ └── api.module.ts ← NO consumers, pure producer └── worker/ └── src/ ├── main.ts ← no HTTP └── worker.module.ts ← all @TaskConsumer classes live here packages/ └── tasks/ ├── src/ │ ├── index.ts │ └── email.contracts.ts └── package.json ← @tasks/contracts — zero runtime deps ``` ### API module — pure producer ```ts // services/api/src/api.module.ts import { Module } from "@nestjs/common" import { TaskoraModule } from "@taskora/nestjs" import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" import { EmailModule } from "./email/email.module" @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), autoStart: false, // ← explicit: this process never runs workers }), EmailModule, // only contains EmailService (producer), no consumer ], }) export class ApiModule {} ``` `autoStart: false` is belt-and-braces — taskora's contract-only short-circuit in `App.start()` already skips worker loops when no task has a handler, so the API process wouldn't run workers anyway. But the explicit flag makes the intent unmistakable in code review. The `EmailService` imports `sendEmailTask` from `@tasks/contracts`, registers it via `TaskoraRef.register(sendEmailTask)` (or implicitly via `TaskoraRef.for(sendEmailTask).dispatch(...)`), and dispatches normally. The contract is enough — no handler needed on this side.
### Worker module — no HTTP ```ts // services/worker/src/worker.module.ts import { Module } from "@nestjs/common" import { TaskoraModule } from "@taskora/nestjs" import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" import { EmailConsumer } from "./email/email.consumer" import { ImageConsumer } from "./image/image.consumer" import { MailerService } from "./email/mailer.service" import { SharpPipelineService } from "./image/sharp-pipeline.service" @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), }), ], providers: [ EmailConsumer, ImageConsumer, MailerService, SharpPipelineService, ], }) export class WorkerModule {} ``` ### Worker bootstrap — `createApplicationContext`, not `create` ```ts // services/worker/src/main.ts import "reflect-metadata" import { NestFactory } from "@nestjs/core" import { WorkerModule } from "./worker.module" async function bootstrap() { const app = await NestFactory.createApplicationContext(WorkerModule) app.enableShutdownHooks() // Note: no app.listen() — the App is the "server". It processes jobs forever. } bootstrap() ``` `createApplicationContext` skips the Express/Fastify HTTP adapter entirely. You get a pure DI container + lifecycle hooks, which is exactly what a headless worker needs. No wasted port binding, no HTTP adapter startup, no `@nestjs/platform-express` dependency. ### Dockerfile for the split One repo, one shared build stage, two runtime images: ```dockerfile # ----- build stage (Bun image; node:*-alpine does not ship bun) ----- FROM oven/bun:1 AS build WORKDIR /app COPY package.json bun.lock* ./ COPY services/api/package.json ./services/api/ COPY services/worker/package.json ./services/worker/ COPY packages/tasks/package.json ./packages/tasks/ RUN bun install --frozen-lockfile COPY . . 
RUN bun run build # ----- api image ----- FROM node:22-alpine AS api WORKDIR /app COPY --from=build /app/node_modules ./node_modules COPY --from=build /app/packages/tasks/dist ./packages/tasks/dist COPY --from=build /app/services/api/dist ./dist CMD ["node", "dist/main.js"] # ----- worker image ----- FROM node:22-alpine AS worker WORKDIR /app COPY --from=build /app/node_modules ./node_modules COPY --from=build /app/packages/tasks/dist ./packages/tasks/dist COPY --from=build /app/services/worker/dist ./dist # Heavy deps — only installed in the worker image RUN apk add --no-cache vips ffmpeg CMD ["node", "dist/main.js"] ``` `sharp`, `puppeteer`, `ffmpeg`, native bindings — none of them end up in the API image. The API image stays lean; the worker image pays the weight only where it's needed. ## Graceful shutdown Nest calls `OnApplicationShutdown` hooks on every provider when it receives SIGINT/SIGTERM (if `enableShutdownHooks()` was called). The explorer implements the hook and awaits `app.close()`, which: 1. Stops the worker poll loop from picking up new jobs. 2. Waits for in-flight handlers to finish (up to their individual `timeout`). 3. Closes the adapter (Redis connections, event readers, subscribe streams). Typical orchestrator flow (Kubernetes, ECS, systemd): 1. Orchestrator sends SIGTERM. 2. Nest triggers `OnApplicationShutdown` on all providers. 3. `TaskoraExplorer.onApplicationShutdown` awaits `app.close()`. 4. In-flight jobs complete or time out. 5. Nest finishes shutdown, process exits with code 0. ### Configuring the drain window Set a per-task `timeout` that's **shorter** than your orchestrator's graceful shutdown window: ```ts @TaskConsumer(processImageTask, { concurrency: 4, timeout: 25_000, // 25s handler timeout }) ``` If Kubernetes' `terminationGracePeriodSeconds` is 30, a 25s handler timeout guarantees every in-flight job either completes or aborts with a `TimeoutError` (which can retry per the retry config) before the pod is hard-killed. 
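The drain-window rule is just arithmetic; a hypothetical helper (not part of the library) makes the constraint explicit:

```typescript
// Handler timeout must fit inside the orchestrator's grace window,
// leaving headroom for Nest's own shutdown work (adapter close, etc.).
function safeHandlerTimeout(graceSeconds: number, headroomMs = 5_000): number {
  const budget = graceSeconds * 1000 - headroomMs
  if (budget <= 0) throw new Error("grace period too short for any handler timeout")
  return budget
}

console.log(safeHandlerTimeout(30)) // 25000, matching the 25s example above
```

The 5s headroom is an assumption; size it to whatever your shutdown hooks actually need.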
### SIGKILL (hard stop) If the process gets SIGKILL (OOM, orchestrator timeout), active jobs are orphaned in Redis. **Stall detection** recovers them on the next poll — the stalled job's lock expires, a worker picks it up again. This is the whole point of the stall machinery: hard crashes don't lose work, they just trigger a re-execution. Tune `stall: { interval, maxCount }` per workload. The default (`interval: 30_000`, `maxCount: 1`) means a worker detects stalled jobs every 30s and re-queues them on the first stall, fails them permanently on the second. ## Multi-app One Nest container, multiple independent taskora apps — useful for: * **Per-tenant isolation** — each tenant gets its own Redis database, each with its own rate limits and DLQ. * **Separate priority tiers** — a "critical" app with low concurrency + strict timeouts, a "batch" app with high concurrency + loose timeouts. * **Gradual migration** — run an old and a new Redis in parallel, dual-dispatch, flip reads. ### Registration ```ts @Module({ imports: [ TaskoraModule.forRootAsync({ useFactory: (config: ConfigService) => ({ adapter: redisAdapter({ client: new Redis(config.getOrThrow("CRITICAL_REDIS_URL")), }), defaults: { retry: { attempts: 5 }, timeout: 5_000 }, }), inject: [ConfigService], }), TaskoraModule.forRootAsync({ name: "batch", useFactory: (config: ConfigService) => ({ adapter: redisAdapter({ client: new Redis(config.getOrThrow("BATCH_REDIS_URL")), }), defaults: { retry: { attempts: 1 }, timeout: 600_000 }, }), inject: [ConfigService], }), ], }) export class AppModule {} ``` ### Routing consumers ```ts @TaskConsumer(placeOrderTask) class CriticalOrderConsumer { … } // → default "critical" app @TaskConsumer(analyticsRollupTask, { app: "batch" }) class BatchRollupConsumer { … } // → "batch" app ``` ### Routing dispatchers ```ts @Injectable() class OrderService { constructor( readonly critical: TaskoraRef, // default slot @InjectTaskoraRef("batch") readonly batch: TaskoraRef, // named slot ) {} 
placeOrder(input: OrderInput) { return this.critical.for(placeOrderTask).dispatch(input) } scheduleRollup(day: string) { return this.batch.for(analyticsRollupTask).dispatch({ day }) } } ``` Each app has its own worker loop, its own subscribe stream, its own Inspector / DLQ / schedules. They don't share state. Cancelling a job in `critical` has no effect on `batch`. ### Multi-app boards If you're mounting the admin dashboard for each app, use named boards: ```ts TaskoraBoardModule.forRoot({ basePath: "/board/critical" }) TaskoraBoardModule.forRoot({ name: "batch", basePath: "/board/batch" }) ``` See [Admin Dashboard > Multi-app / multi-board](./board#multi-app--multi-board). ## Health checks A minimal health check for a taskora-backed Nest app: ```ts import { Injectable } from "@nestjs/common" import { InjectApp } from "@taskora/nestjs" import type { App } from "taskora" @Injectable() export class TaskoraHealthIndicator { constructor(@InjectApp() private readonly app: App) {} async isHealthy() { try { // A no-op inspector call round-trips to the adapter and back. await this.app.inspect().stats() return { taskora: { status: "up" } } } catch (err) { return { taskora: { status: "down", error: (err as Error).message } } } } } ``` Wire it into `@nestjs/terminus` or your custom health controller. `inspector.stats()` pipes an LLEN/ZCARD batch to Redis — cheap, non-mutating, a real connectivity check. For readiness vs liveness: taskora's `worker:ready` event fires after `app.start()` finishes. 
If you expose `"ready"` only after observing this event once, the readiness probe naturally gates traffic on "workers actually pulling jobs": ```ts @Injectable() export class ReadinessState implements OnModuleInit { private ready = false constructor(@InjectApp() private readonly app: App) {} onModuleInit() { this.app.on("worker:ready", () => { this.ready = true }) } isReady() { return this.ready } } ``` ## Environment variables Mandatory for any production taskora Nest app: | Variable | Purpose | |---|---| | `REDIS_URL` | Connection string for the taskora adapter | | `NODE_ENV` | `production` disables dev-only logging and enables strict defaults | | `BOARD_COOKIE_PASSWORD` | **≥ 32 chars**, required if you mount the board with auth | | `BOARD_ADMIN_PASSWORD` | Passed to your `authenticate()` callback in board auth | Recommended: | Variable | Purpose | |---|---| | `REDIS_TLS_CA` | Client CA bundle if your Redis uses mTLS | | `TASKORA_LOG_LEVEL` | Override log verbosity for the Nest `Logger("Taskora")` output | | `OTEL_EXPORTER_OTLP_ENDPOINT` | OpenTelemetry export endpoint (if using a tracing middleware) | ## Cold starts The explorer's discovery pass is O(N providers), a single synchronous walk during `onApplicationBootstrap`. For typical Nest apps with 50-500 providers this is sub-millisecond. The expensive part is Redis connection setup — `lazyConnect: true` (the default on `taskora/redis`) defers it until the first dispatch or `app.start()`, so HTTP startup isn't blocked on Redis. 
If you want the app to fail fast on Redis connectivity issues during bootstrap instead of at first dispatch, wrap the adapter in a connect call: ```ts TaskoraModule.forRootAsync({ useFactory: async () => { const client = new Redis(process.env.REDIS_URL!, { lazyConnect: true }) await client.connect() // eager connect — throws if Redis is down return { adapter: redisAdapter({ client }) } }, }) ``` For most applications, lazy is fine — health checks will catch a dead Redis before traffic is routed. --- --- url: >- https://kravetsone.github.io/taskora/taskora/integrations/nestjs/dispatching.md --- # Dispatching `@taskora/nestjs` offers three ways to get a dispatchable `BoundTask` into a service. They all end up calling the same `app.register(contract)` under the hood — the difference is how the DX feels at the call site. ## TL;DR | Path | When to use | Type safety | Decorator | |---|---|---|---| | `TaskoraRef.for(contract)` | **Default** — 99% of producers | Full, zero annotations | `constructor(private tasks: TaskoraRef)` — no decorator | | `@InjectTask(contract)` + `InferBoundTask` | Property-decorator fans | Full, via `InferBoundTask` | Yes | | `forFeature([contracts])` | You want explicit per-contract DI tokens | Full | Yes | Use `TaskoraRef.for()` unless you have a concrete reason to reach for the others. ## `TaskoraRef.for()` — the primary path `TaskoraRef` is a thin injectable service wrapping the App. 
It's auto-provided by `TaskoraModule.forRoot`, so every service can constructor-inject it without any module-side registration: ```ts import { Injectable } from "@nestjs/common" import { TaskoraRef } from "@taskora/nestjs" import { sendEmailTask } from "@/tasks" @Injectable() export class EmailService { constructor(private readonly tasks: TaskoraRef) {} async notifySignup(user: User) { const handle = this.tasks.for(sendEmailTask).dispatch({ to: user.email, subject: `Welcome, ${user.name}`, }) const { messageId } = await handle.result return messageId } } ``` ### Why this is the default path `.for()` is a generic method: `for(contract: TaskContract): BoundTask`. TypeScript inference propagates the contract's input/output types all the way to `.dispatch()` and `handle.result`, with zero manual annotations. Rename a field in `sendEmailTask`'s schema and every call site updates automatically. This is the DX you'd get from taskora directly with `taskora.register(contract)`, just wrapped in a DI-friendly service. Nothing is lost. ### `.for()` is cheap `app.register(contract)` is idempotent — calling it twice returns the same `Task` instance under the hood via a `Map` lookup. You can call `this.tasks.for(contract)` inside every method, inside a getter, or cache it in a field — performance is the same. ```ts @Injectable() export class EmailService { constructor(private readonly tasks: TaskoraRef) {} // Style 1: inline — reads most naturally for one-off dispatches async sendOne(to: string) { await this.tasks.for(sendEmailTask).dispatch({ to, subject: "Hi" }) } // Style 2: getter — handy if you reuse the bound task several times private get sendEmail() { return this.tasks.for(sendEmailTask) } async sendMany(recipients: string[]) { await Promise.all( recipients.map((to) => this.sendEmail.dispatch({ to, subject: "Hi" })), ) } } ``` ### Accessing the raw `App` If you need the full App (e.g. 
to query `app.schedules`, `app.deadLetters`, or attach app-level event listeners), reach for it via `TaskoraRef.raw`: ```ts constructor(private readonly tasks: TaskoraRef) {} someMethod() { this.tasks.raw.on("worker:ready", () => console.log("worker up")) } ``` For more structured access use the dedicated `@InjectInspector` / `@InjectDeadLetters` / `@InjectSchedules` decorators — see [Observability](./observability). ## `@InjectTask` — the escape hatch If you prefer property-style injection (one decorator per bound task in the constructor signature), you can opt into `@InjectTask` plus the `InferBoundTask` helper: ```ts import { Injectable } from "@nestjs/common" import { InjectTask, type InferBoundTask } from "@taskora/nestjs" import { sendEmailTask } from "@/tasks" @Injectable() export class EmailService { constructor( @InjectTask(sendEmailTask) private readonly sendEmail: InferBoundTask<typeof sendEmailTask>, ) {} async notifySignup(user: User) { await this.sendEmail.dispatch({ to: user.email, subject: "Welcome" }) } } ``` ### Why `InferBoundTask` and not `BoundTask`? TypeScript parameter decorators can't propagate generics into the decorated property's type — there's no `@InjectTask(contract)` that fills in the property type automatically. Without a helper you'd have to write: ```ts // ❌ Manual duplication, drifts the moment the schema changes @InjectTask(sendEmailTask) private sendEmail: BoundTask<{ to: string; subject: string }, { messageId: string }> ``` `InferBoundTask<typeof contract>` is an alias for `BoundTask<InferInput<typeof contract>, InferOutput<typeof contract>>`, so it reads the types directly off the contract value. No duplication, no drift — rename a field in `sendEmailTask` and the type tracks. ### Requires `forFeature` Unlike `TaskoraRef.for()`, `@InjectTask` needs the contract to be registered as a DI provider.
Add `TaskoraModule.forFeature([...contracts])` to the feature module: ```ts @Module({ imports: [TaskoraModule.forFeature([sendEmailTask, processImageTask])], providers: [EmailService], }) export class EmailModule {} ``` `forFeature` creates one factory provider per contract, keyed on `getTaskToken(contract)`. Importing it multiple times (in different feature modules) is safe — the factory calls `app.register(contract)` which is idempotent. ## `forFeature` without `@InjectTask` You can import `forFeature` purely for the "explicit DI token" side effect if you want strict visibility of which contracts a module uses. But you don't need to — every `TaskoraRef.for(contract)` call works without it. `forFeature` is pure documentation unless you use `@InjectTask`. ## Multi-app dispatching Inject one `TaskoraRef` per app (the default slot, plus `@InjectTaskoraRef("name")` for each named app) and route at the call site: ```ts import { Injectable } from "@nestjs/common" import { InjectTaskoraRef, TaskoraRef } from "@taskora/nestjs" @Injectable() export class MultiTenantService { constructor( readonly primaryTasks: TaskoraRef, // default slot @InjectTaskoraRef("tenant-b") readonly tenantB: TaskoraRef, // named slot ) {} async queueForTenant(which: "a" | "b", task: Task, data: unknown) { const ref = which === "a" ? this.primaryTasks : this.tenantB return ref.for(task).dispatch(data) } } ``` Registration side: ```ts @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: primary }) }), TaskoraModule.forRoot({ name: "tenant-b", adapter: redisAdapter({ client: tenantB }) }), ], }) export class AppModule {} ``` The default slot (`TaskoraModule.forRoot` without `name`) is provided via the `TaskoraRef` class token so `constructor(private tasks: TaskoraRef)` still works without any decorator. Named slots use `@InjectTaskoraRef('name')`.
## `DispatchOptions` passthrough Every `DispatchOptions` taskora supports works with `TaskoraRef.for(...).dispatch(data, options)`: ```ts // Delayed dispatch await tasks.for(sendEmailTask).dispatch( { to, subject: "Weekly digest" }, { delay: "1h" }, ) // Debounced — only the latest wins within the window await tasks.for(searchIndexTask).dispatch( { documentId: id }, { debounce: { key: `idx:${id}`, delay: "5s" } }, ) // Deduplicated — skip if already queued await tasks.for(processReceiptTask).dispatch( { receiptId }, { deduplicate: { key: receiptId } }, ) ``` See the main [Dispatching guide](/guide/dispatching) for the full list of flow-control options. ## Awaiting results `.dispatch()` returns a `ResultHandle` synchronously. It's thenable (resolves to the job ID), and has `.result` for push-based result resolution: ```ts const handle = tasks.for(processImageTask).dispatch({ url }) const jobId = await handle // "uuid-..." — just the ID const result = await handle.result // { width, height, format } const state = await handle.getState() // "waiting" | "active" | "completed" | ... ``` Awaiting results only works when the worker for that contract is actually running — either in the same process (monolith) or a sibling worker process (split deploy) subscribed to the same Redis. See [Deployment](./deployment) for the monolith vs split setup. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/dispatching.md' --- # Dispatching Jobs Every task has a `dispatch()` method that enqueues a job and returns a `ResultHandle` synchronously. ## Basic Dispatch ```ts const handle = sendEmailTask.dispatch({ to: "user@example.com", subject: "Welcome!", }) ``` `dispatch()` returns immediately — it does **not** wait for the job to be enqueued. The handle is a thenable that resolves to the job ID once enqueued. ## ResultHandle The `ResultHandle` is the primary way to interact with a dispatched job. 
```ts const handle = sendEmailTask.dispatch(data) // Get the job ID (thenable — resolves when enqueued) const jobId = await handle // UUID string // Wait for the result (push-based via Redis Streams) const result = await handle.result // typed as TOutput // Wait with a timeout const timed = await handle.waitFor(5000) // throws TimeoutError after 5s // Query current state const state = await handle.getState() // "waiting" | "active" | "completed" | ... // Query progress const progress = await handle.getProgress() // number | object | null // Get structured logs const logs = await handle.getLogs() // LogEntry[] // Cancel the job await handle.cancel({ reason: "User requested cancellation" }) ``` ### How Result Waiting Works `handle.result` and `handle.waitFor()` use **push-based** delivery via a shared Redis Streams XREAD connection (`JobWaiter`). Multiple handles share one connection — no per-job polling. ## Dispatch Options ```ts sendEmailTask.dispatch(data, { delay: 5000, // delay processing by 5 seconds priority: 10, // higher priority = processed first ttl: "1h", // expire if not processed within 1 hour concurrencyKey: "user:42", // limit concurrency per key concurrencyLimit: 2, // max 2 concurrent jobs for this key }) ``` ## Ordering Guarantees Jobs are dequeued in **(priority desc, timestamp asc)** order. Higher-priority jobs always come first; within the same priority band, earlier dispatches come first. This is **best-effort, not a strict FIFO contract**: * Multiple workers dequeue concurrently — execution order may differ from dequeue order. * Dispatches from separate processes that land in the same millisecond have no deterministic tiebreaker. Taskora uses UUID job IDs (generated client-side, no Redis round-trip), so there is no global sequence counter. * Delayed, retried, and debounced jobs re-enter the queue with a fresh timestamp.
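The sort key above can be modeled with a plain comparator. This is a sketch for intuition only — taskora enforces this ordering inside Redis, not in application code, and the `QueuedJob` shape here is made up for the illustration:

```typescript
// Toy model of the dequeue order described above — NOT taskora internals.
// Sort key: priority descending, then enqueue timestamp ascending.
interface QueuedJob {
  id: string
  priority: number
  enqueuedAt: number // ms timestamp at dispatch
}

function dequeueOrder(a: QueuedJob, b: QueuedJob): number {
  if (a.priority !== b.priority) return b.priority - a.priority // higher priority first
  return a.enqueuedAt - b.enqueuedAt // earlier dispatch first within a band
}

const queue: QueuedJob[] = [
  { id: "low-early", priority: 0, enqueuedAt: 1 },
  { id: "high-late", priority: 10, enqueuedAt: 2 },
  { id: "low-late", priority: 0, enqueuedAt: 3 },
]

const order = [...queue].sort(dequeueOrder).map((j) => j.id)
// "high-late" dequeues first despite being dispatched last but one
```

Note the best-effort caveats still apply: this comparator describes *dequeue* order, not the order in which concurrent workers finish executing.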
If your workload requires strict per-key ordering, use `concurrencyKey` with `concurrencyLimit: 1` — this serializes execution for that key while leaving other keys concurrent. ## Flow Control Options ### Debounce Replace the previous pending job for the same key. Only the last dispatch within the delay window is processed. ```ts searchIndexTask.dispatch(data, { debounce: { key: `index:${docId}`, delay: "2s" }, }) ``` ### Throttle Rate-limit dispatches per key. Excess dispatches are rejected. ```ts const handle = apiCallTask.dispatch(data, { throttle: { key: "external-api", max: 10, window: "1m" }, }) console.log(handle.enqueued) // true | false ``` ### Deduplicate Skip dispatch if a job with the same key already exists in a matching state. ```ts const handle = generateReportTask.dispatch(data, { deduplicate: { key: `report:${userId}`, while: ["waiting", "active"] }, }) if (!handle.enqueued) { console.log("Already running:", handle.existingId) } ``` ### Throwing on Rejection By default, throttle and dedup silently reject (set `handle.enqueued = false`). To throw instead: ```ts sendEmailTask.dispatch(data, { throttle: { key: "emails", max: 100, window: "1h" }, throwOnReject: true, // throws ThrottledError or DuplicateJobError }) ``` ## Bulk Dispatch ```ts const handles = sendEmailTask.dispatchMany([ { data: { to: "alice@example.com", subject: "Hi" } }, { data: { to: "bob@example.com", subject: "Hello" }, options: { delay: 5000 } }, ]) ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/recipes/email-queue.md' --- # Email Queue A complete email sending queue with validation, retry, and error handling. 
```ts import { createTaskora, ValidationError } from "taskora" import { redisAdapter } from "taskora/redis" import { z } from "zod" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), }) const sendEmailTask = taskora.task("send-email", { input: z.object({ to: z.string().email(), subject: z.string().min(1).max(200), body: z.string(), replyTo: z.string().email().optional(), }), retry: { attempts: 3, backoff: "exponential", delay: 2000, noRetryOn: [ValidationError], // don't retry bad input }, timeout: 15_000, handler: async (data, ctx) => { ctx.log.info("Sending email", { to: data.to, subject: data.subject }) const result = await mailer.send({ to: data.to, subject: data.subject, html: data.body, replyTo: data.replyTo, }) return { messageId: result.id, accepted: result.accepted } }, }) // Usage const handle = sendEmailTask.dispatch({ to: "user@example.com", subject: "Welcome to our platform!", body: "<h1>Welcome!</h1><p>Thanks for signing up.</p>", }) const result = await handle.result console.log("Sent:", result.messageId) // Monitor failures sendEmailTask.on("failed", ({ id, error, willRetry }) => { if (!willRetry) { alertOncall(`Email permanently failed: ${error}`) } }) await taskora.start() ``` ## Testing ```ts import { createTestRunner } from "taskora/test" const runner = createTestRunner({ from: taskora }) it("sends email successfully", async () => { const result = await runner.execute(sendEmailTask, { to: "test@example.com", subject: "Test", body: "Hello", }) expect(result.state).toBe("completed") }) it("rejects invalid email", async () => { const result = await runner.execute(sendEmailTask, { to: "not-an-email", subject: "Test", body: "Hello", }) expect(result.state).toBe("failed") expect(result.error).toContain("email") }) ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/error-handling.md' --- # Error Handling Taskora provides a rich error hierarchy to distinguish between different failure modes. All errors extend `TaskoraError`. ## Error Classes ### `ValidationError` Thrown when input or output data fails schema validation. ```ts try { await handle.result } catch (err) { if (err instanceof ValidationError) { console.log(err.issues) // Standard Schema issues array } } ``` ### `RetryError` Thrown by `ctx.retry()` to trigger a manual retry. Always retries (bypasses `retryOn`/`noRetryOn` filters) unless max attempts are exhausted. ```ts throw ctx.retry({ delay: 5000, reason: "Rate limited" }) // or throw new RetryError("Rate limited", { delay: 5000 }) ``` ### `TimeoutError` Thrown when a job exceeds its timeout. **Not retried by default** — add to `retryOn` explicitly if you want timeout retries. ```ts taskora.task("slow-task", { timeout: 5000, retry: { attempts: 3, retryOn: [TimeoutError], // opt-in to retry on timeout }, handler: async (data, ctx) => { /* ... */ }, }) ``` ### `JobFailedError` Thrown by `handle.result` / `handle.waitFor()` when the job failed permanently.
```ts try { await handle.result } catch (err) { if (err instanceof JobFailedError) { console.log(err.message) // original error message } } ``` ### `CancelledError` Thrown when a job is cancelled via `handle.cancel()`. ```ts try { await handle.result } catch (err) { if (err instanceof CancelledError) { console.log(err.reason) // optional cancellation reason } } ``` ### `ThrottledError` Thrown when dispatch is rejected by throttle (only with `throwOnReject: true`). ```ts try { sendEmailTask.dispatch(data, { throttle: { key: "emails", max: 100, window: "1h" }, throwOnReject: true, }) } catch (err) { if (err instanceof ThrottledError) { console.log(err.key) // "emails" } } ``` ### `DuplicateJobError` Thrown when dispatch is rejected by deduplication (only with `throwOnReject: true`). ```ts if (err instanceof DuplicateJobError) { console.log(err.key) // dedup key console.log(err.existingId) // ID of the existing job } ``` ### `ExpiredError` Stored as the error when a job's TTL expires before processing. ### `StalledError` Stored as the error when a job exceeds `maxStalledCount` and is moved to failed. ### `SchemaVersionMismatchError` A library-internal safety net for taskora upgrades that you will almost never see — we've built CI snapshot tests, a decoupled bump policy, and rolling-upgrade semantics specifically so this error doesn't fire in practice. If something ever does slip through all of that, `app.start()` / `app.ensureConnected()` throws **before** any worker, scheduler, or dispatch touches Redis, so your data is never in an ambiguous state. 
```ts try { await app.start() } catch (err) { if (err instanceof SchemaVersionMismatchError) { console.error(err.code) // "theirs_too_new" | "theirs_too_old" | "invalid_meta" console.error(err.ours) // { wireVersion, minCompat, writtenBy } console.error(err.theirs) // same + writtenAt process.exit(1) } throw err } ``` This is **not** the same as task payload versioning — it protects taskora's own storage layout against incompatible library upgrades, not your task input schemas. See [Upgrading](../operations/upgrading) for the full story, including what each `code` means and the automated tests that make sure a breaking release can't slip through unnoticed. ## Default Error Logging When a task handler throws and **no** `failed` listener is registered (neither `task.on("failed")` nor `taskora.on("task:failed")`), taskora logs the error to `console.error` automatically: ``` [taskora] task "send-email" job a1b2c3d4 failed (attempt 1/3, will retry) Error: Connection refused at handler (/app/tasks/email.ts:15:11) at Array.handlerMw (/app/node_modules/taskora/src/worker.ts:82:36) ... ``` This fires synchronously on the worker that processed the job — no Redis round-trip, full stack trace preserved. The default logger is **automatically suppressed** the moment you register your own `failed` listener: ```ts // Default logging stops as soon as you add this: taskora.on("task:failed", (event) => { myLogger.error({ task: event.task, jobId: event.id, error: event.error }) }) ``` Per-task listeners also suppress it for that specific task: ```ts sendEmailTask.on("failed", (event) => { /* custom handling */ }) // Default logger no longer fires for send-email, but still fires for other tasks ``` ## Error Flow | Error | When | Retried? 
| |---|---|---| | `ValidationError` | Schema validation fails | No | | `RetryError` | `ctx.retry()` called | Always (if attempts remain) | | `TimeoutError` | Handler exceeds timeout | No (opt-in via `retryOn`) | | `CancelledError` | `handle.cancel()` called | No | | `ThrottledError` | Throttle rejects dispatch | N/A (dispatch-time) | | `DuplicateJobError` | Dedup rejects dispatch | N/A (dispatch-time) | | `ExpiredError` | TTL expires | No | | `StalledError` | maxStalledCount exceeded | No | | Any other `Error` | Handler throws | Yes (if attempts remain) | --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/events.md' --- # Events Taskora provides a typed event system for real-time job monitoring. Events are emitted at both the task and app levels. ## Task Events ```ts const sendEmailTask = taskora.task("send-email", { handler: async (data, ctx) => { /* ... */ }, }) sendEmailTask.on("completed", ({ id, result, duration, attempt }) => { console.log(`Email sent in ${duration}ms (attempt ${attempt})`) }) sendEmailTask.on("failed", ({ id, error, attempt, willRetry }) => { if (!willRetry) { alertOncall(`Email ${id} permanently failed: ${error}`) } }) sendEmailTask.on("retrying", ({ id, attempt, nextAttempt, error }) => { console.log(`Retrying ${id}: attempt ${attempt} → ${nextAttempt}`) }) sendEmailTask.on("progress", ({ id, progress }) => { console.log(`Job ${id} progress: ${JSON.stringify(progress)}`) }) sendEmailTask.on("active", ({ id, attempt }) => { console.log(`Job ${id} started processing (attempt ${attempt})`) }) sendEmailTask.on("stalled", ({ id, count, action }) => { console.log(`Job ${id} stalled (${action}: count ${count})`) }) sendEmailTask.on("cancelled", ({ id, reason }) => { console.log(`Job ${id} cancelled: ${reason}`) }) ``` ## App Events App events include the task name in the payload, useful for cross-cutting monitoring. 
```ts taskora.on("task:completed", ({ task, id, result, duration, attempt }) => { metrics.increment("jobs.completed", { task }) metrics.histogram("jobs.duration", duration, { task }) }) taskora.on("task:failed", ({ task, id, error, attempt, willRetry }) => { if (!willRetry) { metrics.increment("jobs.failed", { task }) } }) taskora.on("task:active", ({ task, id, attempt }) => { metrics.increment("jobs.active", { task }) }) taskora.on("task:stalled", ({ task, id, count, action }) => { metrics.increment("jobs.stalled", { task, action }) }) taskora.on("task:cancelled", ({ task, id, reason }) => { metrics.increment("jobs.cancelled", { task }) }) ``` ### Worker Events ```ts taskora.on("worker:ready", () => { console.log("Workers started, processing jobs") }) taskora.on("worker:error", (error) => { console.error("Worker error:", error) }) taskora.on("worker:closing", () => { console.log("Workers shutting down...") }) ``` ## Event Payload Types | Event | Payload | |---|---| | `completed` | `{ id, result, duration, attempt }` | | `failed` | `{ id, error, attempt, willRetry }` | | `retrying` | `{ id, attempt, nextAttempt, error }` | | `progress` | `{ id, progress }` | | `active` | `{ id, attempt }` | | `stalled` | `{ id, count, action: "recovered" \| "failed" }` | | `cancelled` | `{ id, reason? }` | App events add `{ task: string }` to each payload. ## Default Error Logging If no `failed` listener is registered, taskora logs errors to `console.error` automatically with full stack traces: ``` [taskora] task "send-email" job a1b2c3d4 failed (attempt 1/3, will retry) Error: Connection refused at handler (/app/tasks/email.ts:15:11) ... ``` This prevents errors from being silently swallowed. The default logger is suppressed as soon as you register any `failed` listener — either per-task or app-level. See [Error Handling](/guide/error-handling#default-error-logging) for details. ## Multi-Instance Behavior Events use Redis Streams with `XREAD` (not `XREADGROUP`). 
In a multi-pod deployment: * **Job processing**: only one pod claims each job (atomic `RPOP` in Lua) * **Events**: all pods receive all events (fan-out) If 3 pods subscribe to `task.on("completed")`, all 3 will fire the handler for each completed job. This is by design — events are notifications, not work items. If you need exactly-once event processing, use the job queue itself (dispatch a new task from the event handler with deduplication). ## Implementation Events are backed by **Redis Streams** (`XADD`). Each state transition in the Lua scripts writes a stream event. The `EventReader` uses `XREAD BLOCK` on a separate connection to consume events in real-time. Stream positions are snapshotted via `XREVRANGE` before workers start — this prevents missing events during the startup window. --- --- url: >- https://kravetsone.github.io/taskora/taskora/integrations/nestjs/file-layout.md --- # File Layout How you arrange taskora code inside a Nest project matters — not for correctness (taskora doesn't care where files live), but for the team. A consistent layout makes it obvious where to add a new task, where a handler lives, and which services are allowed to dispatch what. This page covers the layout patterns that scale. Start with the monolith, split only when you have a concrete reason. 
## The monolith (recommended default) ``` src/ ├── main.ts ├── app.module.ts ├── tasks/ ← all task contracts live here │ ├── index.ts ← re-exports every contract │ ├── email.contracts.ts │ ├── image.contracts.ts │ └── webhook.contracts.ts ├── email/ │ ├── email.module.ts ← feature module │ ├── email.service.ts ← producer: dispatches via TaskoraRef │ ├── email.consumer.ts ← @TaskConsumer for sendEmailTask │ └── mailer.service.ts ← injected into the consumer ├── image/ │ ├── image.module.ts │ ├── image.controller.ts ← producer: triggers dispatches from HTTP │ ├── image.consumer.ts │ └── sharp-pipeline.service.ts └── common/ ├── middleware/ ← class middleware │ └── logging.middleware.ts └── observability/ ← admin/observability HTTP endpoints └── queue-stats.controller.ts ``` ### Why contracts live in `tasks/`, not inside feature folders Contracts are the single source of truth for both producers and consumers. Even in a monolith you'll eventually have services that dispatch a task they don't own — `BillingService` dispatching `sendEmailTask`, for example. Keeping contracts in one shared folder means: * Producers never reach into a feature folder to grab a contract. The import path says "shared thing", not "internal to email module". * Renaming or reshaping a contract's input schema is one file, caught at compile time everywhere. * When you eventually split the worker out of the monolith (see below), you already know which files go to the shared package. ### Why consumers live next to the code they call A consumer is a thin DI wrapper around domain logic — it calls a `MailerService` or `SharpPipelineService`. Keeping the consumer in the same folder as its dependencies means: * The feature module's `providers: []` array is the full registration surface for that feature. * You can read the consumer and its dependencies without jumping between folders. * When you delete the feature, everything goes with it. 
## The composition ```ts // src/tasks/email.contracts.ts import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string() }), output: z.object({ messageId: z.string() }), }) export const sendWelcomeEmailTask = defineTask({ name: "send-welcome-email", input: z.object({ userId: z.string().uuid() }), }) ``` ```ts // src/tasks/index.ts — one barrel so producers import from "@/tasks" export * from "./email.contracts" export * from "./image.contracts" export * from "./webhook.contracts" ``` ```ts // src/email/email.module.ts import { Module } from "@nestjs/common" import { EmailService } from "./email.service" import { SendEmailConsumer } from "./email.consumer" import { MailerService } from "./mailer.service" @Module({ providers: [EmailService, SendEmailConsumer, MailerService], exports: [EmailService], }) export class EmailModule {} ``` ```ts // src/email/email.service.ts — the producer import { Injectable } from "@nestjs/common" import { TaskoraRef } from "@taskora/nestjs" import { sendEmailTask, sendWelcomeEmailTask } from "../tasks" @Injectable() export class EmailService { constructor(private readonly tasks: TaskoraRef) {} notifySignup(userId: string) { return this.tasks.for(sendWelcomeEmailTask).dispatch({ userId }) } notifyReset(email: string) { return this.tasks.for(sendEmailTask).dispatch({ to: email, subject: "Reset your password", }) } } ``` ```ts // src/email/email.consumer.ts — the worker import { TaskConsumer } from "@taskora/nestjs" import type { InferInput, InferOutput, Taskora } from "taskora" import { MailerService } from "./mailer.service" import { sendEmailTask } from "../tasks" @TaskConsumer(sendEmailTask, { concurrency: 10, retry: { attempts: 5, backoff: "exponential" }, }) export class SendEmailConsumer { constructor(private readonly mailer: MailerService) {} async process( data: InferInput<typeof sendEmailTask>, ctx: Taskora.Context, ): Promise<InferOutput<typeof sendEmailTask>> {
ctx.log.info("sending", { to: data.to }) return this.mailer.send(data) } } ``` ```ts // src/app.module.ts import { Module } from "@nestjs/common" import { TaskoraModule } from "@taskora/nestjs" import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" import { EmailModule } from "./email/email.module" import { ImageModule } from "./image/image.module" @Module({ imports: [ TaskoraModule.forRootAsync({ useFactory: () => ({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), defaults: { retry: { attempts: 3, backoff: "exponential" } }, }), }), EmailModule, ImageModule, ], }) export class AppModule {} ``` The `TaskoraModule` goes at the root. `EmailModule` and `ImageModule` don't need to re-import it — `TaskoraCoreModule` is `@Global`, so `TaskoraRef` and all accessor tokens are resolvable from anywhere in the container. ## When to split: producer ≠ worker Two legitimate reasons to split the monolith: 1. **Heavy worker dependencies don't belong in your API image.** If the worker uses `sharp`, `puppeteer`, `ffmpeg`, native bindings, or ML models, dragging them into the API bundle doubles deploy size and slows cold starts. 2. **Independent scaling characteristics.** API auto-scales on HTTP traffic, worker scales on queue depth. If the ratio diverges a lot, splitting wins. If neither applies, stay monolithic. 
### Shared package layout (same monorepo) ``` services/ ├── api/ │ ├── src/ │ │ ├── main.ts ← Nest bootstrap, no worker │ │ └── app.module.ts ← TaskoraModule.forRoot({ ..., autoStart: false }) │ └── package.json ← depends on @taskora/nestjs + @tasks/contracts ├── worker/ │ ├── src/ │ │ ├── main.ts ← Nest bootstrap, consumers only │ │ └── worker.module.ts ← TaskoraModule.forRoot + @TaskConsumer providers │ └── package.json ← depends on @taskora/nestjs + @tasks/contracts + heavy deps └── packages/ └── tasks/ ├── src/ │ ├── index.ts │ ├── email.contracts.ts │ └── image.contracts.ts └── package.json ← @tasks/contracts — zero runtime deps, just schemas ``` The shared `@tasks/contracts` package exports only `defineTask(...)` calls. Both the API and the worker import contracts from there. The API never imports consumer files; the worker never imports API controller files. The only thing they agree on is the contract shape. ### Producer-only bootstrap In the API process, no `@TaskConsumer` providers are registered. That means `App.start()` has nothing to start a worker for, and even if `autoStart: true`, taskora's contract-only short-circuit skips the worker loop entirely. You can still set `autoStart: false` explicitly for extra clarity: ```ts // services/api/src/app.module.ts TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), autoStart: false, // pure producer — no workers, no subscribers }) ``` ### Worker-only bootstrap In the worker process, register every `@TaskConsumer` class and let `TaskoraModule.forRoot` start the App normally. 
No HTTP adapter needed — use `NestFactory.createApplicationContext` instead of `NestFactory.create`: ```ts // services/worker/src/main.ts import "reflect-metadata" import { NestFactory } from "@nestjs/core" import { WorkerModule } from "./worker.module" async function bootstrap() { const app = await NestFactory.createApplicationContext(WorkerModule) app.enableShutdownHooks() // No listen() — the App is the "HTTP server" equivalent, processing jobs forever. } bootstrap() ``` `createApplicationContext` skips Express/Fastify setup entirely. You get a pure DI container with lifecycle hooks, which is exactly what a worker process needs. ## Anti-patterns ### Don't scatter contracts across feature folders ``` ❌ src/ ├── email/ │ ├── email.contracts.ts ← bad: siloed │ └── email.module.ts └── billing/ └── billing.service.ts ← imports "../email/email.contracts"? No. ``` Contracts moved because of a refactor? Now `billing.service.ts` has a broken import and the search-replace touches every service. Keep contracts in one place from day one. ### Don't put dispatchers in controllers ``` ❌ @Controller("users") export class UsersController { constructor(private tasks: TaskoraRef) {} @Post("signup") async signup(@Body() dto: SignupDto) { // Controller directly dispatches a task await this.tasks.for(sendWelcomeEmailTask).dispatch({ userId: dto.id }) } } ``` The controller has no business knowing about the email task. Put the dispatch inside a `UserService.onSignup()` method and have the controller call the service. Same reasoning as putting SQL queries or Redis calls inside a service, not a controller — separation of layers. ### Don't register consumers in the same module as their producers Consumers and producers for the same task are structurally independent — in a split deployment they live in different processes. 
Even in a monolith, a producer outside the owning feature should never have to import the consumer's module: `EmailModule` owns `EmailService` + `SendEmailConsumer`, but `BillingModule` can also dispatch `sendEmailTask` via `TaskoraRef` without importing anything from `EmailModule`. ### Don't use `TaskoraModule.forRoot` in a feature module `forRoot` creates and owns the App. Only call it once, in `AppModule` (or a shared `CoreModule`). Feature modules that need per-contract providers can use [`forFeature`](./dispatching#forfeature) — but most of the time `TaskoraRef.for(contract)` removes the need for `forFeature` entirely. ## Summary * **Contracts in `src/tasks/`**, one folder, one barrel. * **Consumers next to their dependencies**, inside the feature module that owns the domain logic. * **Dispatchers in service classes**, not controllers. * **`TaskoraModule.forRoot` only in `AppModule`**, `@Global` handles the rest. * **Split producer/worker only when you have a concrete pain** — heavy deps or divergent scaling. Otherwise the monolith wins on simplicity. --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/flow-control.md' --- # Flow Control Taskora provides three flow control strategies — debounce, throttle, and deduplicate — all implemented as atomic Lua scripts. ## Debounce Replace the previous pending job for the same key. Only the **last** dispatch within the delay window is processed. ```ts searchIndexTask.dispatch(data, { debounce: { key: `reindex:${documentId}`, delay: "2s", }, }) ``` **Use case:** Avoid redundant reindexing when a document is edited multiple times in quick succession. Only the final version is indexed. ### How It Works 1. First dispatch creates a delayed job 2. Subsequent dispatches with the same key **replace** the previous job (reset the delay timer) 3. When the delay elapses without a new dispatch, the job moves to the waiting queue ## Throttle Rate-limit dispatches per key. Excess dispatches are **rejected**.
```ts const handle = callExternalApiTask.dispatch(data, { throttle: { key: "stripe-api", max: 100, // max 100 dispatches window: "1m", // per 1 minute window }, }) if (!handle.enqueued) { console.log("Rate limited — try again later") } ``` **Use case:** Respect external API rate limits by capping how many jobs can be enqueued in a time window. ### How It Works 1. Each dispatch increments a counter for the key 2. Counter resets when the window expires 3. If counter exceeds `max`, the dispatch is rejected (`handle.enqueued = false`) 4. With `throwOnReject: true`, throws `ThrottledError` instead ## Deduplicate Skip dispatch if a job with the same key already exists in a matching state. ```ts const handle = generateReportTask.dispatch(data, { deduplicate: { key: `report:${userId}`, while: ["waiting", "active"], // default: ["waiting", "delayed", "active"] }, }) if (!handle.enqueued) { console.log("Report already in progress:", handle.existingId) } ``` **Use case:** Prevent duplicate report generation when a user clicks "Generate" multiple times. ### How It Works 1. Checks if a job with the same dedup key exists in any of the `while` states 2. If found, returns the existing job's ID and sets `handle.enqueued = false` 3. If not found, creates a new job and stores the dedup key 4. Dedup keys are automatically cleaned up when jobs complete or fail ## Throwing on Rejection By default, throttle and dedup silently reject. Use `throwOnReject` for explicit error handling: ```ts try { sendEmailTask.dispatch(data, { throttle: { key: "emails", max: 100, window: "1h" }, throwOnReject: true, }) } catch (err) { if (err instanceof ThrottledError) { // err.key = "emails" } if (err instanceof DuplicateJobError) { // err.key, err.existingId } } ``` ## Combining Flow Control Flow control options are **mutually exclusive** — use only one per dispatch call. Collect tasks are also mutually exclusive with all flow control options. 
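The counter-and-window mechanics described above can be sketched as a tiny in-memory model. This is purely illustrative — taskora's real throttle is an atomic Lua script in Redis, and `ThrottleModel` is a made-up name for this sketch:

```typescript
// Toy in-memory model of the throttle semantics above — NOT the Lua implementation.
interface ThrottleWindow {
  count: number
  resetAt: number // when the window expires and the counter resets
}

class ThrottleModel {
  private windows = new Map<string, ThrottleWindow>()

  /** Returns true if the dispatch would be enqueued, false if rejected. */
  tryAcquire(key: string, max: number, windowMs: number, now: number): boolean {
    const w = this.windows.get(key)
    if (!w || now >= w.resetAt) {
      // Fresh window: the first dispatch always passes and starts the timer.
      this.windows.set(key, { count: 1, resetAt: now + windowMs })
      return true
    }
    if (w.count >= max) return false // over the cap → handle.enqueued = false
    w.count++
    return true
  }
}

const model = new ThrottleModel()
const results = [0, 10, 20, 60_001].map((t) =>
  model.tryAcquire("stripe-api", 2, 60_000, t),
)
// first two dispatches pass, the third is rejected, the fourth starts a new window
```

The key difference from this sketch: because the real counter lives in Redis and is bumped inside one Lua script, the check-and-increment is atomic across all producer processes.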
--- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/getting-started.md' --- # Getting Started Get your first task queue running in under 5 minutes. ## Installation ::: pm-add taskora ioredis ::: ## Create Your Instance Every taskora project starts with an instance — the central registry that holds your tasks, adapters, and configuration. ```ts import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), }) ``` The adapter can accept a Redis URL string, an `ioredis` options object, or an existing `ioredis` instance. ::: tip Redis tuning for production For best memory efficiency, set `hash-max-listpack-value 1024` in your Redis config. This keeps job hashes in compact encoding for payloads up to ~1 KB. See [Performance](/operations/performance#hash-max-listpack-value) for details. ::: ## Define a Task A task is a named function that processes data. Taskora infers the input and output types from your handler. ```ts const sendEmailTask = taskora.task( "send-email", async (data: { to: string; subject: string; body: string }) => { // Your email sending logic const messageId = await mailer.send(data) return { messageId } }, ) ``` ## Dispatch a Job Call `dispatch()` to enqueue a job. It returns a `ResultHandle` immediately — no `await` needed to enqueue. ```ts const handle = sendEmailTask.dispatch({ to: "user@example.com", subject: "Welcome!", body: "Thanks for signing up.", }) // The handle is thenable — await it to get the job ID const jobId = await handle // "550e8400-e29b-41d4-a716-446655440000" // Or wait for the actual result const result = await handle.result // { messageId: "..." } ``` ## Start the Worker Call `taskora.start()` to begin processing jobs. Workers automatically pick up jobs from Redis using blocking dequeue (BZPOPMIN) — no polling overhead. 
```ts await taskora.start() ``` ## Graceful Shutdown ```ts process.on("SIGTERM", async () => { await taskora.close() // waits for active jobs to finish }) ``` ## Add Retry Logic Tasks can be configured with retry policies for automatic error recovery. ```ts const sendEmailTask = taskora.task("send-email", { retry: { attempts: 3, backoff: "exponential", delay: 1000, }, handler: async (data: { to: string; subject: string }) => { return await mailer.send(data) }, }) ``` ## Full Example Here's a complete working example: ```ts import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), defaults: { retry: { attempts: 3, backoff: "exponential", delay: 1000 }, timeout: 30_000, concurrency: 5, }, }) const processImageTask = taskora.task("process-image", { timeout: 60_000, concurrency: 2, handler: async (data: { url: string; width: number }, ctx) => { ctx.log.info("Starting image processing", { url: data.url }) ctx.progress(0) const image = await downloadImage(data.url) ctx.progress(50) const resized = await resize(image, data.width) ctx.progress(100) return { path: resized.path, size: resized.size } }, }) // Dispatch const handle = processImageTask.dispatch({ url: "https://example.com/photo.jpg", width: 800 }) // Monitor progress handle.on?.("progress", (p) => console.log(`Progress: ${p}%`)) // Wait for result const result = await handle.result console.log("Done:", result.path) // Start workers await taskora.start() ``` ## Next Steps * [Core Concepts](/guide/core-concepts) — Understand the mental model * [Tasks](/guide/tasks) — Task definition patterns & schema validation * [Retry & Backoff](/features/retry-backoff) — Configure resilient retry strategies * [Testing](/testing/) — Test your tasks without Redis --- --- url: 'https://kravetsone.github.io/taskora/taskora/recipes/image-processing.md' --- # Image Processing Long-running image resize pipeline with progress 
tracking and heartbeat. ```ts const processImageTask = taskora.task("process-image", { concurrency: 2, timeout: 120_000, retry: { attempts: 2, backoff: "fixed", delay: 5000 }, handler: async (data: { url: string; sizes: number[] }, ctx) => { ctx.log.info("Starting image processing", { url: data.url }) ctx.progress(0) // Download const buffer = await downloadImage(data.url, { signal: ctx.signal }) ctx.progress(20) ctx.heartbeat() // extend lock for long job // Resize to each target size const results: { size: number; path: string }[] = [] for (let i = 0; i < data.sizes.length; i++) { const size = data.sizes[i] const resized = await sharp(buffer) .resize(size, size, { fit: "cover" }) .toBuffer() const path = await uploadToS3(`images/${ctx.id}/${size}.webp`, resized) results.push({ size, path }) ctx.progress(20 + ((i + 1) / data.sizes.length) * 80) ctx.heartbeat() } ctx.log.info("Processing complete", { variants: results.length }) return { variants: results } }, }) // Dispatch const handle = processImageTask.dispatch({ url: "https://example.com/photo.jpg", sizes: [64, 128, 256, 512, 1024], }) // Track progress const interval = setInterval(async () => { const progress = await handle.getProgress() console.log(`Progress: ${progress}%`) if (progress === 100) clearInterval(interval) }, 1000) const result = await handle.result console.log("Variants:", result.variants) ``` Key patterns used: * **`ctx.heartbeat()`** — prevents stall detection during long processing * **`ctx.progress()`** — reports progress for UI feedback * **`ctx.signal`** — cooperates with cancellation * **`ctx.log`** — structured logging per job --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/taskora.md' --- [Taskora API Reference](../index) / index # index ## Namespaces * [Taskora](namespaces/Taskora/index) ## Classes * [App](classes/App) * [BoundTask](classes/BoundTask) * [CancelledError](classes/CancelledError) * [ChainSignature](classes/ChainSignature) * 
[ChordSignature](classes/ChordSignature) * [DeadLetterManager](classes/DeadLetterManager) * [DuplicateJobError](classes/DuplicateJobError) * [ExpiredError](classes/ExpiredError) * [GroupSignature](classes/GroupSignature) * [Inspector](classes/Inspector) * [JobFailedError](classes/JobFailedError) * [ResultHandle](classes/ResultHandle) * [RetryError](classes/RetryError) * [SchemaVersionMismatchError](classes/SchemaVersionMismatchError) * [Signature](classes/Signature) * [StalledError](classes/StalledError) * [Task](classes/Task) * [TaskoraError](classes/TaskoraError) * [ThrottledError](classes/ThrottledError) * [TimeoutError](classes/TimeoutError) * [ValidationError](classes/ValidationError) * [WorkflowHandle](classes/WorkflowHandle) ## Interfaces * [DefineTaskConfig](interfaces/DefineTaskConfig) * [SchemaMeta](interfaces/SchemaMeta) * [StaticContractConfig](interfaces/StaticContractConfig) * [TaskContract](interfaces/TaskContract) * [TaskoraOptions](interfaces/TaskoraOptions) * [WorkflowDispatchOptions](interfaces/WorkflowDispatchOptions) * [WorkflowGraph](interfaces/WorkflowGraph) * [WorkflowNode](interfaces/WorkflowNode) ## Type Aliases * [CompatResult](type-aliases/CompatResult) * [Duration](type-aliases/Duration) * [InferInput](type-aliases/InferInput) * [InferOutput](type-aliases/InferOutput) ## Variables ### MIN\_COMPAT\_VERSION > `const` **MIN\_COMPAT\_VERSION**: `6` = `6` Defined in: packages/taskora/dist/index.d.mts:222 The oldest wire-format version this build is still willing to coexist with. Must always satisfy `MIN_COMPAT_VERSION <= WIRE_VERSION`. Bumped ONLY when the most recent wire change was genuinely backward-incompatible for older readers (see the example table in this file's top-of-file comment). Additive changes must leave this alone — that is exactly what enables rolling upgrades. When this equals `WIRE_VERSION` the check is a hard gate: two processes must run identical wire versions to share one backend. The 1 → 2 bump is such a hard gate. 
Wait-list type (LIST → ZSET) cannot round-trip between versions, so `MIN_COMPAT_VERSION` must also be 2. The 4 → 5 bump is likewise a hard gate. Wait-list type reverts from ZSET back to LIST (plus a new prioritized ZSET sibling), so again wire-4 and wire-5 cannot coexist on one backend — `MIN_COMPAT_VERSION` bumps to 5. The 5 → 6 bump (single-hash job storage) is also a hard gate. A wire-5 worker that issues `GET :data` for a wire-6 job hits `nil` — the string sibling no longer exists — and silently drops the payload. Fail loud at handshake instead: `MIN_COMPAT_VERSION` bumps to 6 so wire-5 and wire-6 cannot share one backend. *** ### WIRE\_VERSION > `const` **WIRE\_VERSION**: `6` = `6` Defined in: packages/taskora/dist/index.d.mts:195 The wire-format version this build of taskora writes and expects to read. Monotonically increasing integer. Bumped whenever the layout listed in `docs/WIRE_FORMAT.md` changes, additive or not. Do NOT bump on release. ── History ── 1 → 2: wait list changed from Redis LIST to SORTED SET so `DispatchOptions.priority` can actually order dequeues by (priority desc, ts asc). Every Lua script that touched the wait list via LPUSH/RPUSH/RPOP/LLEN/LRANGE/LREM was rewritten to ZADD/ZPOPMIN/ZCARD/ZRANGE/ZREM. A Redis instance containing wire-version-1 `:wait` LIST data cannot be read by a wire-version-2 process (WRONGTYPE on first ZADD) and vice versa — this is a hard upgrade, not a rolling one. Drain the waits (or flush the keyspace) before switching versions. 2 → 3: purely additive. Two new Lua scripts shipped — ACK\_AND\_MOVE\_TO\_ACTIVE and FAIL\_AND\_MOVE\_TO\_ACTIVE — that fuse ack+dequeue (and fail+dequeue) into a single EVALSHA so each worker slot can self-feed without funneling through the poll loop. No key layout, hash field, or sorted-set score meaning changed; existing ACK/FAIL/MOVE\_TO\_ACTIVE scripts are still present and produce identical Redis state. 
Wire-version-2 readers can safely share a backend with wire-version-3 writers, so `MIN_COMPAT_VERSION` stays at 2 and rolling upgrades work. 3 → 4: throughput metric keys moved under the task's `{hash tag}`. Old layout: `taskora:[prefix:]metrics:::`. New layout: `taskora:[prefix:]{}:metrics::`. Motivation: with the new layout the INCR/EXPIRE can live inside ACK\_AND\_MOVE\_TO\_ACTIVE / FAIL\_AND\_MOVE\_TO\_ACTIVE (which hash-tag everything they touch to the same slot), removing a whole fire-and-forget roundtrip per completed job. Metrics have a 24h TTL and are observability-only — old keys orphan harmlessly. `getThroughput` was updated to SCAN the new pattern, so the board shows the new metrics once deployed. Rolling upgrades: wire-3 workers write old keys, wire-4 workers write new keys; `getThroughput` on a wire-3 reader would miss wire-4 metrics and vice versa, but neither side corrupts anything — MIN\_COMPAT stays at 2. 4 → 5: wait queue split. Reintroduces a LIST at `:wait` for priority=0 jobs (O(1) RPOP fast path) and adds a separate `:prioritized` ZSET for priority>0 jobs, mirroring BullMQ's layout. Dispatch branches on priority; dequeue tries the LIST first and falls back to the ZSET. Restores O(1) wait pop on the common path — at 10K queued jobs and c=100 the benchmark no longer pays ~140K ZPOPMIN tree-rebalance ops per 10K processed jobs. ``` Hard gate: LIST vs ZSET types do not round-trip, so a wire-4 worker cannot read a wire-5 `:wait` key without `WRONGTYPE` (and vice versa). `MIN_COMPAT_VERSION` bumps to 5 so the check fails loud at handshake. Mirrors the 1 → 2 upgrade exactly; drain queues or flush the keyspace before rolling workers. An automatic `MIGRATE_WAIT_V4_TO_V5` Lua script runs during handshake to split existing data. Priority semantics change: a priority=0 job already in the wait LIST now dispatches BEFORE a just-enqueued priority=5 job, because the LIST is checked before the prioritized ZSET. 
Strict priority ordering across the whole queue is no longer guaranteed — the existing "best-effort within a priority band" documentation widens to "best-effort across bands too when both are non-empty at dispatch time". ``` 5 → 6: single-hash job storage. Every job used to occupy four Redis keys: the metadata hash, a `:data` string for the serialized input, a `:result` string for the serialized output, and a `:lock` string for the worker lock. This bump collapses `:data` and `:result` into fields on the metadata hash, removing two Redis calls per job on the happy path (enqueue, claim, ack) and saving one Redis keyspace slot (~100–150 B) per job for payloads that fit in the hash's listpack encoding. The lock stays separate — it needs `SET key val PX ttl` atomicity that hashes can't match until Redis 7.4+'s `HEXPIRE`, which we intentionally don't mandate. ``` Hard gate: wire-5 workers doing `GET :data` will hit `nil` after the migration runs, so `MIN_COMPAT_VERSION` bumps to 6 and the handshake fails loud instead. Drain queues or flush the keyspace before rolling workers. `MIGRATE_JOBS_V5_TO_V6` runs automatically during the handshake migration window: it SCANs `*:data` and `*:result` siblings and moves each value into the hash. Memory tradeoff: for payloads that stay inside the hash's listpack threshold (default `hash-max-listpack-value 64`) the new layout uses ~50% less memory per job. For payloads above that threshold Redis promotes the hash to `hashtable` encoding (per-field overhead jumps from ~2 B to ~80 B) and the new layout can end up ~20–30% larger until the operator raises `hash-max-listpack-value` to something like 1024. Upgrade docs cover the tuning in detail. 
``` ## Functions ### chain() #### Call Signature > **chain**<`A`, `B`>(`s1`): [`ChainSignature`](classes/ChainSignature)<`A`, `B`> Defined in: packages/taskora/dist/index.d.mts:298 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `B`> #### Call Signature > **chain**<`A`, `B`, `C`>(`s1`, `s2`): [`ChainSignature`](classes/ChainSignature)<`A`, `C`> Defined in: packages/taskora/dist/index.d.mts:299 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `C`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`>(`s1`, `s2`, `s3`): [`ChainSignature`](classes/ChainSignature)<`A`, `D`> Defined in: packages/taskora/dist/index.d.mts:300 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `D`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`>(`s1`, `s2`, `s3`, `s4`): [`ChainSignature`](classes/ChainSignature)<`A`, `E`> Defined in: packages/taskora/dist/index.d.mts:301 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | 
##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `E`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`>(`s1`, `s2`, `s3`, `s4`, `s5`): [`ChainSignature`](classes/ChainSignature)<`A`, `F`> Defined in: packages/taskora/dist/index.d.mts:302 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `F`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`, `G`>(`s1`, `s2`, `s3`, `s4`, `s5`, `s6`): [`ChainSignature`](classes/ChainSignature)<`A`, `G`> Defined in: packages/taskora/dist/index.d.mts:303 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | | `G` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | | `s6` | [`Signature`](classes/Signature)<`F`, `G`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `G`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`, `G`, `H`>(`s1`, `s2`, `s3`, `s4`, `s5`, `s6`, `s7`): [`ChainSignature`](classes/ChainSignature)<`A`, `H`> Defined in: packages/taskora/dist/index.d.mts:304 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | | `G` | | `H` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | 
[`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | | `s6` | [`Signature`](classes/Signature)<`F`, `G`> | | `s7` | [`Signature`](classes/Signature)<`G`, `H`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `H`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`, `G`, `H`, `I`>(`s1`, `s2`, `s3`, `s4`, `s5`, `s6`, `s7`, `s8`): [`ChainSignature`](classes/ChainSignature)<`A`, `I`> Defined in: packages/taskora/dist/index.d.mts:305 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | | `G` | | `H` | | `I` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | | `s6` | [`Signature`](classes/Signature)<`F`, `G`> | | `s7` | [`Signature`](classes/Signature)<`G`, `H`> | | `s8` | [`Signature`](classes/Signature)<`H`, `I`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `I`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`, `G`, `H`, `I`, `J`>(`s1`, `s2`, `s3`, `s4`, `s5`, `s6`, `s7`, `s8`, `s9`): [`ChainSignature`](classes/ChainSignature)<`A`, `J`> Defined in: packages/taskora/dist/index.d.mts:306 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | | `G` | | `H` | | `I` | | `J` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | | `s6` 
| [`Signature`](classes/Signature)<`F`, `G`> | | `s7` | [`Signature`](classes/Signature)<`G`, `H`> | | `s8` | [`Signature`](classes/Signature)<`H`, `I`> | | `s9` | [`Signature`](classes/Signature)<`I`, `J`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `J`> #### Call Signature > **chain**<`A`, `B`, `C`, `D`, `E`, `F`, `G`, `H`, `I`, `J`, `K`>(`s1`, `s2`, `s3`, `s4`, `s5`, `s6`, `s7`, `s8`, `s9`, `s10`): [`ChainSignature`](classes/ChainSignature)<`A`, `K`> Defined in: packages/taskora/dist/index.d.mts:307 ##### Type Parameters | Type Parameter | | ------ | | `A` | | `B` | | `C` | | `D` | | `E` | | `F` | | `G` | | `H` | | `I` | | `J` | | `K` | ##### Parameters | Parameter | Type | | ------ | ------ | | `s1` | [`Signature`](classes/Signature)<`A`, `B`> | | `s2` | [`Signature`](classes/Signature)<`B`, `C`> | | `s3` | [`Signature`](classes/Signature)<`C`, `D`> | | `s4` | [`Signature`](classes/Signature)<`D`, `E`> | | `s5` | [`Signature`](classes/Signature)<`E`, `F`> | | `s6` | [`Signature`](classes/Signature)<`F`, `G`> | | `s7` | [`Signature`](classes/Signature)<`G`, `H`> | | `s8` | [`Signature`](classes/Signature)<`H`, `I`> | | `s9` | [`Signature`](classes/Signature)<`I`, `J`> | | `s10` | [`Signature`](classes/Signature)<`J`, `K`> | ##### Returns [`ChainSignature`](classes/ChainSignature)<`A`, `K`> #### Call Signature > **chain**(...`steps`): [`ChainSignature`](classes/ChainSignature)<`unknown`, `unknown`> Defined in: packages/taskora/dist/index.d.mts:308 ##### Parameters | Parameter | Type | | ------ | ------ | | ...`steps` | `AnySignature`\[] | ##### Returns [`ChainSignature`](classes/ChainSignature)<`unknown`, `unknown`> *** ### checkCompat() > **checkCompat**(`ours`, `theirs`): [`CompatResult`](type-aliases/CompatResult) Defined in: packages/taskora/dist/index.d.mts:281 Pure compatibility check. No I/O, no exceptions — just data in, verdict out. 
Compatibility rule (symmetric, window-based): theirs.wireVersion ∈ \[ours.minCompat, +∞) — we are willing to read theirs; ours.wireVersion ∈ \[theirs.minCompat, +∞) — they were willing to read ours. Both windows must overlap; if either side's `wireVersion` falls below the other side's `minCompat`, the pair is incompatible and we refuse to start. The equal-version case trivially passes both checks. #### Parameters | Parameter | Type | | ------ | ------ | | `ours` | [`SchemaMeta`](interfaces/SchemaMeta) | | `theirs` | [`SchemaMeta`](interfaces/SchemaMeta) | #### Returns [`CompatResult`](type-aliases/CompatResult) *** ### chord() > **chord**<`T`, `CO`>(`header`, `callback`): [`ChordSignature`](classes/ChordSignature)<`CO`> Defined in: packages/taskora/dist/index.d.mts:319 Group + callback — parallel execution, then merge. Header results are collected as a tuple and passed to the callback. #### Type Parameters | Type Parameter | | ------ | | `T` *extends* `AnySignature`\[] | | `CO` | #### Parameters | Parameter | Type | | ------ | ------ | | `header` | \[`...T[]`] | | `callback` | [`Signature`](classes/Signature)<`InferOutputTuple`<`T`>, `CO`> | #### Returns [`ChordSignature`](classes/ChordSignature)<`CO`> *** ### compose() > **compose**(`middlewares`): (`ctx`) => `Promise`<`void`> Defined in: packages/taskora/dist/index.d.mts:296 Koa-style compose: creates a single function from an array of middleware. Execution follows the onion model — each middleware calls `next()` to proceed deeper, then resumes after `await next()` returns. 
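The onion model is easiest to see in a minimal sketch. This is an illustrative re-implementation, not taskora's actual source, and it assumes a Koa-style `(ctx, next)` middleware signature:

```typescript
// Minimal compose sketch (illustration only — not taskora's implementation).
// Calling next() descends one layer deeper; code after `await next()`
// runs on the way back out, in reverse registration order.
type Ctx = { trace: string[] }
type Mw = (ctx: Ctx, next: () => Promise<void>) => Promise<void>

function compose(middlewares: Mw[]): (ctx: Ctx) => Promise<void> {
  return (ctx) => {
    const dispatch = (i: number): Promise<void> =>
      i === middlewares.length
        ? Promise.resolve()
        : middlewares[i](ctx, () => dispatch(i + 1))
    return dispatch(0)
  }
}

const run = compose([
  async (ctx, next) => { ctx.trace.push("outer:in"); await next(); ctx.trace.push("outer:out") },
  async (ctx, next) => { ctx.trace.push("inner:in"); await next(); ctx.trace.push("inner:out") },
])

// After `await run({ trace: [] })` the trace reads
// outer:in → inner:in → inner:out → outer:out.
```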
#### Parameters | Parameter | Type | | ------ | ------ | | `middlewares` | [`Middleware`](namespaces/Taskora/type-aliases/Middleware)\[] | #### Returns (`ctx`) => `Promise`<`void`> *** ### createTaskora() > **createTaskora**(`options`): [`App`](classes/App) Defined in: packages/taskora/dist/index.d.mts:321 #### Parameters | Parameter | Type | | ------ | ------ | | `options` | [`TaskoraOptions`](interfaces/TaskoraOptions) | #### Returns [`App`](classes/App) *** ### currentMeta() > **currentMeta**(`now?`): [`SchemaMeta`](interfaces/SchemaMeta) Defined in: packages/taskora/dist/index.d.mts:255 The meta record this running build would persist if it were the first to touch the backend. #### Parameters | Parameter | Type | | ------ | ------ | | `now?` | `number` | #### Returns [`SchemaMeta`](interfaces/SchemaMeta) *** ### defineTask() > **defineTask**<`TInput`, `TOutput`>(`config`): [`TaskContract`](interfaces/TaskContract)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:140 Define a task contract with runtime schemas. Input and output types are inferred from the schemas automatically — any Standard Schema compatible library works (Zod, Valibot, ArkType, etc). 
#### Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TInput` | `unknown` | | `TOutput` | `unknown` | #### Parameters | Parameter | Type | | ------ | ------ | | `config` | [`DefineTaskConfig`](interfaces/DefineTaskConfig)<`TInput`, `TOutput`> | #### Returns [`TaskContract`](interfaces/TaskContract)<`TInput`, `TOutput`> #### Example ```ts import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.email(), subject: z.string() }), output: z.object({ messageId: z.string() }), retry: { attempts: 3, backoff: "exponential" }, }) ``` *** ### group() > **group**<`T`>(...`sigs`): [`GroupSignature`](classes/GroupSignature)<`InferOutputTuple`<`T`>> Defined in: packages/taskora/dist/index.d.mts:313 Parallel execution — all signatures run concurrently, result is a tuple. #### Type Parameters | Type Parameter | | ------ | | `T` *extends* `AnySignature`\[] | #### Parameters | Parameter | Type | | ------ | ------ | | ...`sigs` | `T` | #### Returns [`GroupSignature`](classes/GroupSignature)<`InferOutputTuple`<`T`>> *** ### into() > **into**<`S`>(`_schema`, `fn`): (`data`) => `InferInput`<`S`> Defined in: packages/taskora/dist/index.d.mts:289 Type helper for tuple migrations. Locks the return type to the schema's input type. Implementation is identity — the value is in the types. #### Type Parameters | Type Parameter | | ------ | | `S` *extends* `StandardSchemaV1`<`unknown`, `unknown`> | #### Parameters | Parameter | Type | | ------ | ------ | | `_schema` | `S` | | `fn` | (`data`) => `InferInput`<`S`> | #### Returns (`data`) => `InferInput`<`S`> *** ### isTaskContract() > **isTaskContract**(`value`): `value is TaskContract` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:170 Runtime type guard for task contracts. 
#### Parameters | Parameter | Type | | ------ | ------ | | `value` | `unknown` | #### Returns `value is TaskContract` *** ### json() > **json**(): [`Serializer`](namespaces/Taskora/interfaces/Serializer) Defined in: packages/taskora/dist/index.d.mts:283 #### Returns [`Serializer`](namespaces/Taskora/interfaces/Serializer) *** ### parseDuration() > **parseDuration**(`value`): `number` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:83 Parse a human-readable duration string into milliseconds. Accepts "30s", "5m", "2h", "1d", or a number (ms passthrough). #### Parameters | Parameter | Type | | ------ | ------ | | `value` | [`Duration`](type-aliases/Duration) | #### Returns `number` *** ### staticContract() > **staticContract**<`TInput`, `TOutput`>(`config`): [`TaskContract`](interfaces/TaskContract)<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:168 Define a type-only task contract with no runtime schemas. Use this for bundle-size-sensitive producers (edge runtimes, browsers) where you want compile-time safety without shipping Zod/Valibot at runtime. The worker side should still validate its input — either by swapping [staticContract](#staticcontract) for [defineTask](#definetask) in the worker-side module, or by providing schemas at `implement()` time. 
#### Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TInput` | `unknown` | | `TOutput` | `unknown` | #### Parameters | Parameter | Type | | ------ | ------ | | `config` | [`StaticContractConfig`](interfaces/StaticContractConfig) | #### Returns [`TaskContract`](interfaces/TaskContract)<`TInput`, `TOutput`> #### Example ```ts import { staticContract } from "taskora" export const sendEmailTask = staticContract< { to: string; subject: string }, { messageId: string } >({ name: "send-email" }) ``` *** ### writtenByForWireVersion() > **writtenByForWireVersion**(`wireVersion`): `string` Defined in: packages/taskora/dist/index.d.mts:250 Derives the `writtenBy` identifier for a given wire version. Exported so tests can assert the format without re-deriving it, and so any future tooling that wants to parse the field has a single source of truth. Intentionally tied to `WIRE_VERSION` and nothing else — there is no separate constant to keep in sync with `package.json`, which means a release that doesn't change the wire format doesn't touch this file at all. #### Parameters | Parameter | Type | | ------ | ------ | | `wireVersion` | `number` | #### Returns `string` --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/inspector.md' --- # Inspector The Inspector provides read-only access to queue state — query jobs, check stats, and inspect version distribution. ## Getting the Inspector ```ts const inspector = taskora.inspect() ``` ## Listing Jobs by State ```ts const waiting = await inspector.waiting({ task: "send-email", limit: 20, offset: 0 }) const active = await inspector.active() const delayed = await inspector.delayed({ limit: 50 }) const completed = await inspector.completed({ task: "send-email" }) const failed = await inspector.failed() const expired = await inspector.expired() const cancelled = await inspector.cancelled() ``` Each returns an array of `JobInfo` objects. 
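Since each listing call returns at most `limit` jobs, draining a large state takes repeated calls with a sliding `offset`. A small helper can walk the windows until a short page signals the end — note that `drainPages` is hypothetical and not part of the taskora API; the `fetchPage` callback is whatever listing you wrap:

```typescript
// Hypothetical pagination helper — drainPages is NOT a taskora export.
// It repeatedly invokes any limit/offset listing and stops when a page
// comes back shorter than `limit`, meaning there is nothing left.
type PageQuery = { limit: number; offset: number }

async function drainPages<T>(
  fetchPage: (q: PageQuery) => Promise<T[]>,
  limit = 20,
): Promise<T[]> {
  const all: T[] = []
  for (let offset = 0; ; offset += limit) {
    const page = await fetchPage({ limit, offset })
    all.push(...page)
    if (page.length < limit) return all // short page → nothing left
  }
}

// With an Inspector in scope it would be used like:
// const failed = await drainPages((q) => inspector.failed({ task: "send-email", ...q }))
```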
## Finding a Specific Job ```ts // Cross-task search (checks all registered tasks) const job = await inspector.find("550e8400-e29b-41d4-a716-446655440000") // Typed search (pass a Task object for typed data/result) const typedJob = await inspector.find(sendEmailTask, "550e8400...") // typedJob.data is typed as { to: string, subject: string } // typedJob.result is typed as the handler's return type ``` ## Queue Stats ```ts const stats = await inspector.stats() // { waiting: 42, active: 5, delayed: 12, completed: 1500, failed: 3, expired: 0, cancelled: 1 } // Per-task stats const emailStats = await inspector.stats({ task: "send-email" }) ``` ## JobInfo Type ```ts interface JobInfo<TData, TResult> { id: string task: string state: JobState data: TData result?: TResult error?: string progress?: number | Record<string, unknown> logs: LogEntry[] attempt: number version: number timestamp: number processedOn?: number finishedOn?: number timeline: Array<{ state: string; at: number }> } ``` The `timeline` is reconstructed from `timestamp`, `processedOn`, and `finishedOn` fields. ## Version Distribution Check what job versions are in your queues: ```ts const status = await inspector.migrations("send-email") // { // version: 3, since: 1, migrations: 2, // queue: { oldest: 2, byVersion: { 2: 5, 3: 142 } }, // delayed: { oldest: 3, byVersion: { 3: 8 } }, // canBumpSince: 2 // } ``` See [Versioning & Migrations](/features/versioning) for details. --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/AckAndDequeueResult.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / AckAndDequeueResult # Interface: AckAndDequeueResult Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:864 Result of `ackAndDequeue` / `failAndDequeue`. Carries the next job to process in this slot (if any) **and** the workflow binding of the job that was just acked/failed. 
The worker uses the latter to advance or fail the workflow without a separate `getWorkflowMeta` roundtrip. ## Properties | Property | Type | Description | Defined in | | ------ | ------ | ------ | ------ | | `ackedWorkflow` | `object` | Workflow binding of the just-acked/failed job, or `null` if it wasn't part of a workflow. | packages/taskora/dist/types-KwmL9wkN.d.mts:868 | | `ackedWorkflow.nodeIndex` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:870 | | `ackedWorkflow.workflowId` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:869 | | `next` | [`DequeueResult`](DequeueResult) | Next job to process in this slot, or `null` if no work is available. | packages/taskora/dist/types-KwmL9wkN.d.mts:866 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ActiveEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ActiveEvent # Interface: ActiveEvent Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:711 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:713 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:712 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/Adapter.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / Adapter # Interface: Adapter Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:915 ## Methods ### ack() > **ack**(`task`, `jobId`, `token`, `result`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1016 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `result` | `string` | #### Returns `Promise`<`void`> *** ### ackAndDequeue()? 
> `optional` **ackAndDequeue**(`task`, `jobId`, `token`, `result`, `newToken`, `newLockTtl`, `options?`): `Promise`<[`AckAndDequeueResult`](AckAndDequeueResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1033 Atomically ack the current job and dequeue the next one in a single roundtrip. Optional — adapters that don't implement this fall back to separate `ack()` + `dequeue()` calls in the worker. The Redis adapter implements it as a fused Lua script (ACK\_AND\_MOVE\_TO\_ACTIVE) which is critical for high-concurrency throughput: each slot self-feeds without funneling through the worker's poll loop. The returned `AckAndDequeueResult` also carries the workflow binding of the acked job so the worker can advance/cancel the workflow without a follow-up `getWorkflowMeta` HMGET. #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `result` | `string` | | `newToken` | `string` | | `newLockTtl` | `number` | | `options?` | [`DequeueOptions`](DequeueOptions) | #### Returns `Promise`<[`AckAndDequeueResult`](AckAndDequeueResult)> *** ### acquireSchedulerLock() > **acquireSchedulerLock**(`token`, `ttl`): `Promise`<`boolean`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1090 #### Parameters | Parameter | Type | | ------ | ------ | | `token` | `string` | | `ttl` | `number` | #### Returns `Promise`<`boolean`> *** ### addLog() > **addLog**(`task`, `jobId`, `entry`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1052 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `entry` | `string` | #### Returns `Promise`<`void`> *** ### addSchedule() > **addSchedule**(`name`, `config`, `nextRun`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1075 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `config` | `string` | | `nextRun` | `number` | #### Returns 
`Promise`<`void`> *** ### advanceWorkflow() > **advanceWorkflow**(`workflowId`, `nodeIndex`, `result`): `Promise`<[`WorkflowAdvanceResult`](WorkflowAdvanceResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1093 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | | `nodeIndex` | `number` | | `result` | `string` | #### Returns `Promise`<[`WorkflowAdvanceResult`](WorkflowAdvanceResult)> *** ### awaitJob() > **awaitJob**(`task`, `jobId`, `timeoutMs?`): `Promise`<[`AwaitJobResult`](AwaitJobResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1059 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `timeoutMs?` | `number` | #### Returns `Promise`<[`AwaitJobResult`](AwaitJobResult)> *** ### blockingDequeue() > **blockingDequeue**(`task`, `lockTtl`, `token`, `timeoutMs`, `options?`): `Promise`<[`DequeueResult`](DequeueResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1015 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `lockTtl` | `number` | | `token` | `string` | | `timeoutMs` | `number` | | `options?` | [`DequeueOptions`](DequeueOptions) | #### Returns `Promise`<[`DequeueResult`](DequeueResult)> *** ### cancel() > **cancel**(`task`, `jobId`, `reason?`): `Promise`<`"cancelled"` | `"flagged"` | `"not_cancellable"`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1044 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `reason?` | `string` | #### Returns `Promise`<`"cancelled"` | `"flagged"` | `"not_cancellable"`> *** ### cancelWorkflow() > **cancelWorkflow**(`workflowId`, `reason?`): `Promise`<[`WorkflowCancelResult`](WorkflowCancelResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1096 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | | `reason?` | `string` | #### Returns 
`Promise`<[`WorkflowCancelResult`](WorkflowCancelResult)> *** ### cleanJobs() > **cleanJobs**(`task`, `state`, `before`, `limit`): `Promise`<`number`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1101 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `state` | [`JobState`](../type-aliases/JobState) | | `before` | `number` | | `limit` | `number` | #### Returns `Promise`<`number`> *** ### collectPush() > **collectPush**(`task`, `jobId`, `item`, `options`): `Promise`<{ `count`: `number`; `flushed`: `boolean`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:986 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `item` | `string` | | `options` | { `_v`: `number`; `collectKey`: `string`; `delayMs`: `number`; `maxAttempts?`: `number`; `maxSize`: `number`; `maxWaitMs`: `number`; } | | `options._v` | `number` | | `options.collectKey` | `string` | | `options.delayMs` | `number` | | `options.maxAttempts?` | `number` | | `options.maxSize` | `number` | | `options.maxWaitMs` | `number` | #### Returns `Promise`<{ `count`: `number`; `flushed`: `boolean`; }> *** ### connect() > **connect**(): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:916 #### Returns `Promise`<`void`> *** ### createWorkflow() > **createWorkflow**(`workflowId`, `graph`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1092 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | | `graph` | `string` | #### Returns `Promise`<`void`> *** ### debounceEnqueue() > **debounceEnqueue**(`task`, `jobId`, `data`, `options`, `debounceKey`, `delayMs`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:955 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `data` | `string` | | `options` | { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; 
`expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; } | | `options._v` | `number` | | `options.concurrencyKey?` | `string` | | `options.concurrencyLimit?` | `number` | | `options.expireAt?` | `number` | | `options.maxAttempts?` | `number` | | `options.priority?` | `number` | | `debounceKey` | `string` | | `delayMs` | `number` | #### Returns `Promise`<`void`> *** ### deduplicateEnqueue() > **deduplicateEnqueue**(`task`, `jobId`, `data`, `options`, `dedupKey`, `states`): `Promise`<{ `created`: `true`; } | { `created`: `false`; `existingId`: `string`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:972 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `data` | `string` | | `options` | { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; `delay?`: `number`; `expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; } | | `options._v` | `number` | | `options.concurrencyKey?` | `string` | | `options.concurrencyLimit?` | `number` | | `options.delay?` | `number` | | `options.expireAt?` | `number` | | `options.maxAttempts?` | `number` | | `options.priority?` | `number` | | `dedupKey` | `string` | | `states` | `string`\[] | #### Returns `Promise`<{ `created`: `true`; } | { `created`: `false`; `existingId`: `string`; }> *** ### dequeue() > **dequeue**(`task`, `lockTtl`, `token`, `options?`): `Promise`<[`DequeueResult`](DequeueResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1014 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `lockTtl` | `number` | | `token` | `string` | | `options?` | [`DequeueOptions`](DequeueOptions) | #### Returns `Promise`<[`DequeueResult`](DequeueResult)> *** ### disconnect() > **disconnect**(): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:917 #### Returns `Promise`<`void`> *** ### enqueue() > **enqueue**(`task`, `jobId`, `data`, `options`): `Promise`<`void`> 
Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:933 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `data` | `string` | | `options` | `object` & [`DispatchOptions`](DispatchOptions) | #### Returns `Promise`<`void`> *** ### enqueueBulk()? > `optional` **enqueueBulk**(`task`, `jobs`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:942 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobs` | `object`\[] | #### Returns `Promise`<`void`> *** ### extendLock() > **extendLock**(`task`, `jobId`, `token`, `ttl`): `Promise`<`"extended"` | `"lost"` | `"cancelled"`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1043 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `ttl` | `number` | #### Returns `Promise`<`"extended"` | `"lost"` | `"cancelled"`> *** ### fail() > **fail**(`task`, `jobId`, `token`, `error`, `retry?`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1017 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `error` | `string` | | `retry?` | { `delay`: `number`; } | | `retry.delay?` | `number` | #### Returns `Promise`<`void`> *** ### failAndDequeue()? > `optional` **failAndDequeue**(`task`, `jobId`, `token`, `error`, `retry`, `newToken`, `newLockTtl`, `options?`): `Promise`<[`AckAndDequeueResult`](AckAndDequeueResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1040 Atomically fail the current job and dequeue the next one in a single roundtrip. Optional — see `ackAndDequeue` for rationale. The workflow binding is only populated on *permanent* failures (retries don't cascade). 
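The fused path can be pictured with a short sketch. This is illustrative only, not taskora's worker source: the `AdapterSlice` interface, the `{ next }` result shape, the token generator, and the retry/TTL values below are simplified assumptions standing in for the real `AckAndDequeueResult` plumbing.

```typescript
// Illustrative sketch (NOT taskora's actual worker code): a worker slot fails
// the current job and fetches the next one, preferring the optional fused
// failAndDequeue and falling back to separate fail() + dequeue() roundtrips.
interface NextJob { id: string; data: string }

interface AdapterSlice {
  fail(task: string, jobId: string, token: string, error: string, retry?: { delay: number }): Promise<void>
  dequeue(task: string, lockTtl: number, token: string): Promise<NextJob | null>
  failAndDequeue?(
    task: string, jobId: string, token: string, error: string,
    retry: { delay: number }, newToken: string, newLockTtl: number,
  ): Promise<{ next: NextJob | null }>
}

let tokenCounter = 0
const freshToken = () => `tok-${++tokenCounter}` // stand-in for a real lock token

async function failCurrentAndFetchNext(
  adapter: AdapterSlice, task: string, jobId: string, token: string, error: string,
): Promise<NextJob | null> {
  const newToken = freshToken()
  if (adapter.failAndDequeue) {
    // One roundtrip: fail the current job and lock the next one atomically.
    const res = await adapter.failAndDequeue(task, jobId, token, error, { delay: 1_000 }, newToken, 30_000)
    return res.next
  }
  // Fallback for adapters without the fused op: two separate roundtrips.
  await adapter.fail(task, jobId, token, error, { delay: 1_000 })
  return adapter.dequeue(task, 30_000, newToken)
}
```

The point of the fused variant is visible in the shape: the slot never returns to the poll loop between finishing one job and locking the next.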
#### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | | `error` | `string` | | `retry` | { `delay`: `number`; } | | `retry.delay` | `number` | | `newToken` | `string` | | `newLockTtl` | `number` | | `options?` | [`DequeueOptions`](DequeueOptions) | #### Returns `Promise`<[`AckAndDequeueResult`](AckAndDequeueResult)> *** ### failWorkflow() > **failWorkflow**(`workflowId`, `nodeIndex`, `error`): `Promise`<[`WorkflowFailResult`](WorkflowFailResult)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1094 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | | `nodeIndex` | `number` | | `error` | `string` | #### Returns `Promise`<[`WorkflowFailResult`](WorkflowFailResult)> *** ### finishCancel() > **finishCancel**(`task`, `jobId`, `token`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1045 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | #### Returns `Promise`<`void`> *** ### getError() > **getError**(`task`, `jobId`): `Promise`<`string`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1055 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<`string`> *** ### getJobDetails() > **getJobDetails**(`task`, `jobId`): `Promise`<[`RawJobDetails`](RawJobDetails)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1064 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<[`RawJobDetails`](RawJobDetails)> *** ### getLogs() > **getLogs**(`task`, `jobId`): `Promise`<`string`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1057 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<`string`\[]> *** ### getProgress() > **getProgress**(`task`, 
`jobId`): `Promise`<`string`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1056 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<`string`> *** ### getQueueStats() > **getQueueStats**(`task`): `Promise`<[`QueueStats`](QueueStats)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1065 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | #### Returns `Promise`<[`QueueStats`](QueueStats)> *** ### getResult() > **getResult**(`task`, `jobId`): `Promise`<`string`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1054 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<`string`> *** ### getSchedule() > **getSchedule**(`name`): `Promise`<{ `config`: `string`; `nextRun`: `number`; `paused`: `boolean`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1077 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | #### Returns `Promise`<{ `config`: `string`; `nextRun`: `number`; `paused`: `boolean`; }> *** ### getServerInfo() > **getServerInfo**(): `Promise`<{ `connected`: `boolean`; `connectedClients?`: `number`; `dbSize?`: `number`; `peakMemory?`: `string`; `uptime`: `number`; `usedMemory`: `string`; `usedMemoryBytes?`: `number`; `version`: `string`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1102 #### Returns `Promise`<{ `connected`: `boolean`; `connectedClients?`: `number`; `dbSize?`: `number`; `peakMemory?`: `string`; `uptime`: `number`; `usedMemory`: `string`; `usedMemoryBytes?`: `number`; `version`: `string`; }> *** ### getState() > **getState**(`task`, `jobId`): `Promise`<[`JobState`](../type-aliases/JobState)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1053 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<[`JobState`](../type-aliases/JobState)> 
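On top of `getState()`, a caller can poll until a job reaches a terminal state. A minimal sketch, assuming the `JobState` union shown under `listJobDetails` in this reference; `waitForTerminal` and its options are hypothetical helpers, and `awaitJob()` remains the purpose-built blocking primitive when the adapter supports it.

```typescript
// Hypothetical poll-until-terminal helper built on getState(). The JobState
// union is taken from the listJobDetails signature in this reference.
type JobState = "cancelled" | "waiting" | "delayed" | "active" | "completed" | "failed" | "expired"

const TERMINAL: ReadonlySet<JobState> = new Set(["completed", "failed", "cancelled", "expired"])

async function waitForTerminal(
  getState: (task: string, jobId: string) => Promise<JobState>,
  task: string,
  jobId: string,
  { intervalMs = 250, timeoutMs = 30_000 } = {},
): Promise<JobState> {
  const deadline = Date.now() + timeoutMs
  for (;;) {
    const state = await getState(task, jobId)
    if (TERMINAL.has(state)) return state // done: job can no longer change state
    if (Date.now() >= deadline) throw new Error(`timed out waiting for ${task}/${jobId}`)
    await new Promise((resolve) => setTimeout(resolve, intervalMs)) // back off before re-polling
  }
}
```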
*** ### getTaskKeyStats() > **getTaskKeyStats**(`task`): `Promise`<{ `keyCount`: `number`; `memoryBytes`: `number`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1121 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | #### Returns `Promise`<{ `keyCount`: `number`; `memoryBytes`: `number`; }> *** ### getThroughput() > **getThroughput**(`task`, `bucketSize`, `count`): `Promise`<`object`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1125 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `bucketSize` | `number` | | `count` | `number` | #### Returns `Promise`<`object`\[]> *** ### getVersionDistribution() > **getVersionDistribution**(`task`): `Promise`<{ `active`: `Record`<`number`, `number`>; `delayed`: `Record`<`number`, `number`>; `waiting`: `Record`<`number`, `number`>; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1070 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | #### Returns `Promise`<{ `active`: `Record`<`number`, `number`>; `delayed`: `Record`<`number`, `number`>; `waiting`: `Record`<`number`, `number`>; }> *** ### getWorkflowDetail() > **getWorkflowDetail**(`workflowId`): `Promise`<[`WorkflowDetail`](WorkflowDetail)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1120 #### Parameters | Parameter | Type | | ------ | ------ | | `workflowId` | `string` | #### Returns `Promise`<[`WorkflowDetail`](WorkflowDetail)> *** ### getWorkflowMeta() > **getWorkflowMeta**(`task`, `jobId`): `Promise`<{ `nodeIndex`: `number`; `workflowId`: `string`; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1097 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<{ `nodeIndex`: `number`; `workflowId`: `string`; }> *** ### getWorkflowState() > **getWorkflowState**(`workflowId`): `Promise`<`string`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1095 #### Parameters | 
Parameter | Type | | ------ | ------ | | `workflowId` | `string` | #### Returns `Promise`<`string`> *** ### handshake() > **handshake**(`ours`): `Promise`<[`SchemaMeta`](SchemaMeta)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:932 Atomically persist-or-read the wire-format meta record. Contract: • If no meta is currently stored in this backend (under this key prefix), the adapter writes `ours` verbatim and returns `ours`. • Otherwise, the adapter returns whatever meta is already stored, UNTOUCHED — the check/upgrade decision belongs to core. Called exactly once per `App.ensureConnected()`, immediately after `connect()` and before any worker/scheduler/dispatch runs. Core calls `checkCompat(ours, returned)` and throws `SchemaVersionMismatchError` on any incompatibility. #### Parameters | Parameter | Type | | ------ | ------ | | `ours` | [`SchemaMeta`](SchemaMeta) | #### Returns `Promise`<[`SchemaMeta`](SchemaMeta)> *** ### inspectCollect() > **inspectCollect**(`task`, `collectKey`): `Promise`<[`CollectBufferInfo`](CollectBufferInfo)> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1013 Stats-only view of the current collect buffer for `(task, collectKey)`. Cheaper than [peekCollect](#peekcollect) — reads only the meta hash, not the item payloads. Returns `null` if there is no active buffer for the key.
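The inspect-vs-peek split suggests a cheap-check-then-read pattern. A hedged sketch under simplified types: `sampleLargeBuffer`, the `CollectReads` slice, and the stub used below are hypothetical, and the `CollectBufferInfo` shape is the `count`/`oldestAt`/`newestAt` record documented later in this reference.

```typescript
// Hypothetical monitoring helper: consult the cheap stats-only inspectCollect
// first, and only pay for peekCollect's payload read when the buffer is big.
interface CollectBufferInfo { count: number; oldestAt: number; newestAt: number }

interface CollectReads {
  inspectCollect(task: string, collectKey: string): Promise<CollectBufferInfo | null>
  peekCollect(task: string, collectKey: string): Promise<string[]>
}

async function sampleLargeBuffer(
  adapter: CollectReads, task: string, collectKey: string, threshold = 100,
): Promise<string[] | null> {
  const info = await adapter.inspectCollect(task, collectKey) // meta hash only
  if (info === null || info.count < threshold) return null    // null meta = no buffer
  return adapter.peekCollect(task, collectKey)                // non-destructive payload read
}
```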
#### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `collectKey` | `string` | #### Returns `Promise`<[`CollectBufferInfo`](CollectBufferInfo)> *** ### listJobDetails() > **listJobDetails**(`task`, `state`, `offset`, `limit`): `Promise`<`object`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1060 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `state` | `"cancelled"` | `"waiting"` | `"delayed"` | `"active"` | `"completed"` | `"failed"` | `"expired"` | | `offset` | `number` | | `limit` | `number` | #### Returns `Promise`<`object`\[]> *** ### listSchedules() > **listSchedules**(): `Promise`<[`ScheduleRecord`](ScheduleRecord)\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1082 #### Returns `Promise`<[`ScheduleRecord`](ScheduleRecord)\[]> *** ### listWorkflows() > **listWorkflows**(`state?`, `offset?`, `limit?`): `Promise`<`object`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1112 #### Parameters | Parameter | Type | | ------ | ------ | | `state?` | [`WorkflowState`](../type-aliases/WorkflowState) | | `offset?` | `number` | | `limit?` | `number` | #### Returns `Promise`<`object`\[]> *** ### nack() > **nack**(`task`, `jobId`, `token`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1020 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `token` | `string` | #### Returns `Promise`<`void`> *** ### onCancel() > **onCancel**(`task`, `handler`): `Promise`<() => `void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1046 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `handler` | (`jobId`) => `void` | #### Returns `Promise`<() => `void`> *** ### pauseSchedule() > **pauseSchedule**(`name`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1088 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | #### Returns 
`Promise`<`void`> *** ### peekCollect() > **peekCollect**(`task`, `collectKey`): `Promise`<`string`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1007 Read the current collect buffer for `(task, collectKey)` as raw serialized items. Non-destructive: never drains, never resets TTLs, never alters flush-trigger state. Returns items in dispatch order (oldest → newest). Empty array if the buffer has no items, was just flushed, or was never created. Called by `Task.peekCollect` / `BoundTask.peekCollect`; deserialization is done at the Task layer so the adapter stays schema-agnostic. #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `collectKey` | `string` | #### Returns `Promise`<`string`\[]> *** ### removeSchedule() > **removeSchedule**(`name`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1076 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | #### Returns `Promise`<`void`> *** ### renewSchedulerLock() > **renewSchedulerLock**(`token`, `ttl`): `Promise`<`boolean`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1091 #### Parameters | Parameter | Type | | ------ | ------ | | `token` | `string` | | `ttl` | `number` | #### Returns `Promise`<`boolean`> *** ### resumeSchedule() > **resumeSchedule**(`name`, `nextRun`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1089 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `nextRun` | `number` | #### Returns `Promise`<`void`> *** ### retryAllFromDLQ() > **retryAllFromDLQ**(`task`, `limit`): `Promise`<`number`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1067 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `limit` | `number` | #### Returns `Promise`<`number`> *** ### retryFromDLQ() > **retryFromDLQ**(`task`, `jobId`): `Promise`<`boolean`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1066 #### Parameters | Parameter 
| Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | #### Returns `Promise`<`boolean`> *** ### setProgress() > **setProgress**(`task`, `jobId`, `value`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1051 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `value` | `string` | #### Returns `Promise`<`void`> *** ### stalledCheck() > **stalledCheck**(`task`, `maxStalledCount`): `Promise`<{ `failed`: `string`\[]; `recovered`: `string`\[]; }> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1047 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `maxStalledCount` | `number` | #### Returns `Promise`<{ `failed`: `string`\[]; `recovered`: `string`\[]; }> *** ### subscribe() > **subscribe**(`tasks`, `handler`): `Promise`<() => `Promise`<`void`>> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1058 #### Parameters | Parameter | Type | | ------ | ------ | | `tasks` | `string`\[] | | `handler` | (`event`) => `void` | #### Returns `Promise`<() => `Promise`<`void`>> *** ### throttleEnqueue() > **throttleEnqueue**(`task`, `jobId`, `data`, `options`, `throttleKey`, `max`, `windowMs`): `Promise`<`boolean`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:963 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `jobId` | `string` | | `data` | `string` | | `options` | { `_v`: `number`; `concurrencyKey?`: `string`; `concurrencyLimit?`: `number`; `delay?`: `number`; `expireAt?`: `number`; `maxAttempts?`: `number`; `priority?`: `number`; } | | `options._v` | `number` | | `options.concurrencyKey?` | `string` | | `options.concurrencyLimit?` | `number` | | `options.delay?` | `number` | | `options.expireAt?` | `number` | | `options.maxAttempts?` | `number` | | `options.priority?` | `number` | | `throttleKey` | `string` | | `max` | `number` | | `windowMs` | `number` | #### Returns `Promise`<`boolean`> *** ### 
tickScheduler() > **tickScheduler**(`now`): `Promise`<`object`\[]> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1083 #### Parameters | Parameter | Type | | ------ | ------ | | `now` | `number` | #### Returns `Promise`<`object`\[]> *** ### trimCompleted() > **trimCompleted**(`task`, `before`, `maxItems`): `Promise`<`number`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1069 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `before` | `number` | | `maxItems` | `number` | #### Returns `Promise`<`number`> *** ### trimDLQ() > **trimDLQ**(`task`, `before`, `maxItems`): `Promise`<`number`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1068 #### Parameters | Parameter | Type | | ------ | ------ | | `task` | `string` | | `before` | `number` | | `maxItems` | `number` | #### Returns `Promise`<`number`> *** ### updateScheduleNextRun() > **updateScheduleNextRun**(`name`, `config`, `nextRun`): `Promise`<`void`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1087 #### Parameters | Parameter | Type | | ------ | ------ | | `name` | `string` | | `config` | `string` | | `nextRun` | `number` | #### Returns `Promise`<`void`> --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/AppEventMap.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / AppEventMap # Interface: AppEventMap Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:733 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `task:active` | [`ActiveEvent`](ActiveEvent) & `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:740 | | `task:cancelled` | [`CancelledEvent`](CancelledEvent) & `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:746 | | `task:completed` | [`CompletedEvent`](CompletedEvent)<`unknown`> & `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:734 | | `task:failed` | [`FailedEvent`](FailedEvent) & `object` | 
packages/taskora/dist/types-KwmL9wkN.d.mts:737 | | `task:stalled` | [`StalledEvent`](StalledEvent) & `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:743 | | `worker:closing` | `undefined` | packages/taskora/dist/types-KwmL9wkN.d.mts:751 | | `worker:error` | `Error` | packages/taskora/dist/types-KwmL9wkN.d.mts:750 | | `worker:ready` | `undefined` | packages/taskora/dist/types-KwmL9wkN.d.mts:749 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/AwaitJobResult.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / AwaitJobResult # Interface: AwaitJobResult Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1131 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `error?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1134 | | `result?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1133 | | `state` | `"cancelled"` | `"completed"` | `"failed"` | packages/taskora/dist/types-KwmL9wkN.d.mts:1132 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/CancelledEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / CancelledEvent # Interface: CancelledEvent Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:720 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:721 | | `reason?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:722 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/CollectBufferInfo.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / CollectBufferInfo # Interface: CollectBufferInfo Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:803 Snapshot of an active collect buffer's meta (no payloads). 
Returned by `Task.inspectCollect` / `BoundTask.inspectCollect`. `null` is returned when no buffer exists for the key — use that as the "empty" signal rather than checking `count === 0`. ## Properties | Property | Type | Description | Defined in | | ------ | ------ | ------ | ------ | | `count` | `number` | Number of items currently buffered. | packages/taskora/dist/types-KwmL9wkN.d.mts:805 | | `newestAt` | `number` | Epoch ms of the most recent dispatch in the current buffer. | packages/taskora/dist/types-KwmL9wkN.d.mts:812 | | `oldestAt` | `number` | Epoch ms of the first dispatch in the current buffer generation. Reset on every flush — a new buffer starts on the next push. | packages/taskora/dist/types-KwmL9wkN.d.mts:810 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/CollectConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / CollectConfig # Interface: CollectConfig\<TInput> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:791 ## Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TInput` | `unknown` | ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `delay` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:793 | | `key` | `string` | ((`data`) => `string`) | packages/taskora/dist/types-KwmL9wkN.d.mts:792 | | `maxSize?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:794 | | `maxWait?` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:795 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/CompletedEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / CompletedEvent # Interface: CompletedEvent\<TOutput> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:689 ## Type Parameters | Type Parameter | | ------ | | `TOutput` | ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | |
`attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:693 | | `duration` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:692 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:690 | | `result` | `TOutput` | packages/taskora/dist/types-KwmL9wkN.d.mts:691 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/Context.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / Context # Interface: Context Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:884 ## Extended by * [`MiddlewareContext`](MiddlewareContext) ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:886 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:885 | | `log` | [`ContextLog`](ContextLog) | packages/taskora/dist/types-KwmL9wkN.d.mts:895 | | `signal` | `AbortSignal` | packages/taskora/dist/types-KwmL9wkN.d.mts:888 | | `timestamp` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:887 | ## Methods ### heartbeat() > **heartbeat**(): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:889 #### Returns `void` *** ### progress() > **progress**(`value`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:894 #### Parameters | Parameter | Type | | ------ | ------ | | `value` | `number` | `Record`<`string`, `unknown`> | #### Returns `void` *** ### retry() > **retry**(`options?`): [`RetryError`](../../../classes/RetryError) Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:890 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | { `delay?`: `number`; `reason?`: `string`; } | | `options.delay?` | `number` | | `options.reason?` | `string` | #### Returns [`RetryError`](../../../classes/RetryError) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ContextLog.md --- [Taskora 
API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ContextLog # Interface: ContextLog Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:879 ## Methods ### error() > **error**(`message`, `meta?`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:882 #### Parameters | Parameter | Type | | ------ | ------ | | `message` | `string` | | `meta?` | `Record`<`string`, `unknown`> | #### Returns `void` *** ### info() > **info**(`message`, `meta?`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:880 #### Parameters | Parameter | Type | | ------ | ------ | | `message` | `string` | | `meta?` | `Record`<`string`, `unknown`> | #### Returns `void` *** ### warn() > **warn**(`message`, `meta?`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:881 #### Parameters | Parameter | Type | | ------ | ------ | | `message` | `string` | | `meta?` | `Record`<`string`, `unknown`> | #### Returns `void` --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/DebounceConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / DebounceConfig # Interface: DebounceConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:774 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `delay` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:776 | | `key` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:775 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/DeduplicateConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / DeduplicateConfig # Interface: DeduplicateConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:783 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `key` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:784 | | `while?` | (`"waiting"` 
| `"delayed"` | `"active"`)\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:785 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/DefineTaskConfig.md --- [Taskora API Reference](../../index) / [index](../index) / DefineTaskConfig # Interface: DefineTaskConfig\<TInput, TOutput> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:113 Configuration accepted by [defineTask](../index#definetask). ## Type Parameters | Type Parameter | | ------ | | `TInput` | | `TOutput` | ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `input?` | `StandardSchemaV1`<`unknown`, `TInput`> | packages/taskora/dist/types-KwmL9wkN.d.mts:115 | | `name` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:114 | | `output?` | `StandardSchemaV1`<`unknown`, `TOutput`> | packages/taskora/dist/types-KwmL9wkN.d.mts:116 | | `retry?` | [`RetryConfig`](../namespaces/Taskora/interfaces/RetryConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:117 | | `stall?` | [`StallConfig`](../namespaces/Taskora/interfaces/StallConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:119 | | `timeout?` | [`Duration`](../type-aliases/Duration) | packages/taskora/dist/types-KwmL9wkN.d.mts:118 | | `version?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:120 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/DequeueOptions.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / DequeueOptions # Interface: DequeueOptions Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:831 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `onExpire?` | `"fail"` | `"discard"` | packages/taskora/dist/types-KwmL9wkN.d.mts:832 | | `singleton?` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:833 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/DequeueResult.md --- [Taskora API
Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / DequeueResult # Interface: DequeueResult Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:851 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `_v` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:854 | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:855 | | `data` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:853 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:852 | | `timestamp` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:856 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/DispatchOptions.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / DispatchOptions # Interface: DispatchOptions Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:814 ## Properties | Property | Type | Description | Defined in | | ------ | ------ | ------ | ------ | | `concurrencyKey?` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:818 | | `concurrencyLimit?` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:819 | | `debounce?` | [`DebounceConfig`](DebounceConfig) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:820 | | `deduplicate?` | [`DeduplicateConfig`](DeduplicateConfig) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:822 | | `delay?` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:815 | | `priority?` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:816 | | `skipValidation?` | `boolean` | Skip Standard Schema validation of `data` on this dispatch only. Overrides `TaskoraOptions.validateOnDispatch` (which is `true` by default). Worker-side validation still runs before the handler. 
| packages/taskora/dist/types-KwmL9wkN.d.mts:829 | | `throttle?` | [`ThrottleConfig`](ThrottleConfig) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:821 | | `throwOnReject?` | `boolean` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:823 | | `ttl?` | `any` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:817 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/test/interfaces/ExecutionResult.md --- [Taskora API Reference](../../index) / [test](../index) / ExecutionResult # Interface: ExecutionResult\ Defined in: packages/taskora/dist/test/index.d.mts:9 ## Type Parameters | Type Parameter | | ------ | | `TOutput` | ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempts` | `number` | packages/taskora/dist/test/index.d.mts:12 | | `error` | `string` | packages/taskora/dist/test/index.d.mts:15 | | `handle` | [`ResultHandle`](../../taskora/classes/ResultHandle)<`TOutput`> | packages/taskora/dist/test/index.d.mts:16 | | `logs` | [`LogEntry`](../../taskora/namespaces/Taskora/interfaces/LogEntry)\[] | packages/taskora/dist/test/index.d.mts:13 | | `progress` | `number` | `Record`<`string`, `unknown`> | packages/taskora/dist/test/index.d.mts:14 | | `result` | `TOutput` | packages/taskora/dist/test/index.d.mts:10 | | `state` | [`JobState`](../../taskora/namespaces/Taskora/type-aliases/JobState) | packages/taskora/dist/test/index.d.mts:11 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/FailedEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / FailedEvent # Interface: FailedEvent Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:695 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:698 | | `error` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:697 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:696 
| | `willRetry` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:699 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/InspectorListOptions.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / InspectorListOptions # Interface: InspectorListOptions Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1170 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `limit?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1172 | | `offset?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1173 | | `task?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1171 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/JobInfo.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / JobInfo # Interface: JobInfo\ Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1136 ## Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TData` | `unknown` | | `TResult` | `unknown` | ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1145 | | `data` | `TData` | packages/taskora/dist/types-KwmL9wkN.d.mts:1140 | | `error?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1142 | | `finishedOn?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1149 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1137 | | `logs` | [`LogEntry`](LogEntry)\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1144 | | `processedOn?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1148 | | `progress?` | `number` | `Record`<`string`, `unknown`> | packages/taskora/dist/types-KwmL9wkN.d.mts:1143 | | `result?` | `TResult` | packages/taskora/dist/types-KwmL9wkN.d.mts:1141 | | `state` | [`JobState`](../type-aliases/JobState) | 
packages/taskora/dist/types-KwmL9wkN.d.mts:1139 | | `task` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1138 | | `timeline` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1150 | | `timestamp` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1147 | | `version` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1146 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/LogEntry.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / LogEntry # Interface: LogEntry Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:873 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `level` | `"info"` | `"warn"` | `"error"` | packages/taskora/dist/types-KwmL9wkN.d.mts:874 | | `message` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:875 | | `meta?` | `Record`<`string`, `unknown`> | packages/taskora/dist/types-KwmL9wkN.d.mts:876 | | `timestamp` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:877 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/MiddlewareContext.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / MiddlewareContext # Interface: MiddlewareContext Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:897 ## Extends * [`Context`](Context) ## Properties | Property | Type | Inherited from | Defined in | | ------ | ------ | ------ | ------ | | `attempt` | `number` | [`Context`](Context).[`attempt`](Context#attempt) | packages/taskora/dist/types-KwmL9wkN.d.mts:886 | | `data` | `unknown` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:901 | | `id` | `string` | [`Context`](Context).[`id`](Context#id) | packages/taskora/dist/types-KwmL9wkN.d.mts:885 | | `log` | [`ContextLog`](ContextLog) | [`Context`](Context).[`log`](Context#log) | packages/taskora/dist/types-KwmL9wkN.d.mts:895 | | `result` | 
`unknown` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:902 | | `signal` | `AbortSignal` | [`Context`](Context).[`signal`](Context#signal) | packages/taskora/dist/types-KwmL9wkN.d.mts:888 | | `task` | `object` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:898 | | `task.name` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:899 | | `timestamp` | `number` | [`Context`](Context).[`timestamp`](Context#timestamp) | packages/taskora/dist/types-KwmL9wkN.d.mts:887 | ## Methods ### heartbeat() > **heartbeat**(): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:889 #### Returns `void` #### Inherited from [`Context`](Context).[`heartbeat`](Context#heartbeat) *** ### progress() > **progress**(`value`): `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:894 #### Parameters | Parameter | Type | | ------ | ------ | | `value` | `number` | `Record`<`string`, `unknown`> | #### Returns `void` #### Inherited from [`Context`](Context).[`progress`](Context#progress) *** ### retry() > **retry**(`options?`): [`RetryError`](../../../classes/RetryError) Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:890 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | { `delay?`: `number`; `reason?`: `string`; } | | `options.delay?` | `number` | | `options.reason?` | `string` | #### Returns [`RetryError`](../../../classes/RetryError) #### Inherited from [`Context`](Context).[`retry`](Context#retry) --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/MigrationStatus.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / MigrationStatus # Interface: MigrationStatus Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1183 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `canBumpSince` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1195 | | `delayed` | `object` | 
packages/taskora/dist/types-KwmL9wkN.d.mts:1191 | | `delayed.byVersion` | `Record`<`number`, `number`> | packages/taskora/dist/types-KwmL9wkN.d.mts:1193 | | `delayed.oldest` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1192 | | `migrations` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1186 | | `queue` | `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:1187 | | `queue.byVersion` | `Record`<`number`, `number`> | packages/taskora/dist/types-KwmL9wkN.d.mts:1189 | | `queue.oldest` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1188 | | `since` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1185 | | `version` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1184 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ProgressEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ProgressEvent # Interface: ProgressEvent Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:707 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:708 | | `progress` | `number` | `Record`<`string`, `unknown`> | packages/taskora/dist/types-KwmL9wkN.d.mts:709 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/QueueStats.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / QueueStats # Interface: QueueStats Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1155 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `active` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1157 | | `cancelled` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1162 | | `completed` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1159 | | `delayed` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1158 | | `expired` | `number` 
| packages/taskora/dist/types-KwmL9wkN.d.mts:1161 | | `failed` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1160 | | `waiting` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1156 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RawJob.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RawJob # Interface: RawJob Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:837 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `_v` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:843 | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:844 | | `data` | `unknown` | packages/taskora/dist/types-KwmL9wkN.d.mts:840 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:838 | | `options` | [`DispatchOptions`](DispatchOptions) | packages/taskora/dist/types-KwmL9wkN.d.mts:841 | | `state` | [`JobState`](../type-aliases/JobState) | packages/taskora/dist/types-KwmL9wkN.d.mts:842 | | `task` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:839 | | `timestamp` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:845 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RawJobDetails.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RawJobDetails # Interface: RawJobDetails Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1164 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `data` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1166 | | `fields` | `Record`<`string`, `string`> | packages/taskora/dist/types-KwmL9wkN.d.mts:1165 | | `logs` | `string`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1168 | | `result` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1167 | --- --- url: >- 
https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RetentionConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RetentionConfig # Interface: RetentionConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1175 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `maxAge?` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:1176 | | `maxItems?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1177 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RetentionOptions.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RetentionOptions # Interface: RetentionOptions Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1179 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `completed?` | [`RetentionConfig`](RetentionConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:1180 | | `failed?` | [`RetentionConfig`](RetentionConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:1181 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RetryConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RetryConfig # Interface: RetryConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:761 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempts` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:762 | | `backoff?` | [`BackoffStrategy`](../type-aliases/BackoffStrategy) | packages/taskora/dist/types-KwmL9wkN.d.mts:763 | | `delay?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:764 | | `jitter?` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:766 | | `maxDelay?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:765 | | `noRetryOn?` | (...`args`) => `Error`\[] | 
packages/taskora/dist/types-KwmL9wkN.d.mts:768 | | `retryOn?` | (...`args`) => `Error`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:767 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/RetryingEvent.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / RetryingEvent # Interface: RetryingEvent Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:701 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `attempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:703 | | `error` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:705 | | `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:702 | | `nextAttempt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:704 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ScheduleConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ScheduleConfig # Interface: ScheduleConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1199 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `cron?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1203 | | `data?` | `unknown` | packages/taskora/dist/types-KwmL9wkN.d.mts:1201 | | `every?` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:1202 | | `onMissed?` | [`MissedPolicy`](../type-aliases/MissedPolicy) | packages/taskora/dist/types-KwmL9wkN.d.mts:1205 | | `overlap?` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:1206 | | `task` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1200 | | `timezone?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1204 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ScheduleInfo.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / 
ScheduleInfo # Interface: ScheduleInfo Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1208 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `config` | [`ScheduleConfig`](ScheduleConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:1210 | | `lastJobId` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1213 | | `lastRun` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1212 | | `name` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1209 | | `nextRun` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1211 | | `paused` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:1214 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/SchedulerConfig.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / SchedulerConfig # Interface: SchedulerConfig Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1216 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `lockTtl?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1218 | | `pollInterval?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1217 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ScheduleRecord.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ScheduleRecord # Interface: ScheduleRecord Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1220 ## Properties | Property | Type | Defined in | | ------ | ------ | ------ | | `config` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1222 | | `name` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1221 | | `nextRun` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1223 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/SchemaMeta.md --- [Taskora API Reference](../../index) / [index](../index) / SchemaMeta # 
Interface: SchemaMeta Defined in: packages/taskora/dist/index.d.mts:228 A taskora wire-format meta record. Persisted once per `(backend, prefix)` pair by the first process to connect; every subsequent process validates its own compiled constants against the stored copy. ## Properties | Property | Type | Description | Defined in | | ------ | ------ | ------ | ------ | | `minCompat` | `number` | The oldest wire-format version the writer promised it can still handle. | packages/taskora/dist/index.d.mts:232 | | `wireVersion` | `number` | The wire-format version of the writing process. | packages/taskora/dist/index.d.mts:230 | | `writtenAt` | `number` | Unix milliseconds timestamp at write time. | packages/taskora/dist/index.d.mts:239 | | `writtenBy` | `string` | Identifier of the writer, derived from its wire version (not from `package.json`). Used purely for operator-facing error messages. | packages/taskora/dist/index.d.mts:237 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/SchemaMeta.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / SchemaMeta # Interface: SchemaMeta Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:909 Persisted wire-format meta record. See `src/wire-version.ts` for the compatibility rule and bump policy. 
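The validation step can be pictured as a mutual version-window check between a connecting process and the stored meta record. A minimal sketch under an assumed rule — the actual rule lives in `src/wire-version.ts` and may differ, so treat `isCompatible` as hypothetical:

```typescript
// Shape mirrors the SchemaMeta interface above.
interface SchemaMeta {
  wireVersion: number // wire-format version of the writing process
  minCompat: number // oldest wire-format version the writer can still handle
  writtenBy: string
  writtenAt: number
}

// Hypothetical compatibility rule: each side must still understand
// the other's wire format (their version windows overlap).
function isCompatible(local: SchemaMeta, stored: SchemaMeta): boolean {
  return (
    local.wireVersion >= stored.minCompat &&
    stored.wireVersion >= local.minCompat
  )
}
```

A process whose compiled constants fall outside the stored window would refuse to start rather than write frames its peers cannot parse.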
## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `minCompat` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:911 |
| `wireVersion` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:910 |
| `writtenAt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:913 |
| `writtenBy` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:912 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/Serializer.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / Serializer

# Interface: Serializer

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:847

## Methods

### deserialize()

> **deserialize**(`raw`): `unknown`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:849

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `raw` | `string` |

#### Returns

`unknown`

***

### serialize()

> **serialize**(`value`): `string`

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:848

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `value` | `unknown` |

#### Returns

`string`

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/StallConfig.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / StallConfig

# Interface: StallConfig

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:770

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `interval?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:771 |
| `maxCount?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:772 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/StalledEvent.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / StalledEvent

# Interface: StalledEvent

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:715

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `action` | `"failed"` \| `"recovered"` | packages/taskora/dist/types-KwmL9wkN.d.mts:718 |
| `count` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:717 |
| `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:716 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/StaticContractConfig.md
---

[Taskora API Reference](../../index) / [index](../index) / StaticContractConfig

# Interface: StaticContractConfig

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:142

Configuration accepted by [staticContract](../index#staticcontract).

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `name` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:143 |
| `retry?` | [`RetryConfig`](../namespaces/Taskora/interfaces/RetryConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:144 |
| `stall?` | [`StallConfig`](../namespaces/Taskora/interfaces/StallConfig) | packages/taskora/dist/types-KwmL9wkN.d.mts:146 |
| `timeout?` | [`Duration`](../type-aliases/Duration) | packages/taskora/dist/types-KwmL9wkN.d.mts:145 |
| `version?` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:147 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/StreamEvent.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / StreamEvent

# Interface: StreamEvent

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:753

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `event` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:755 |
| `fields` | `Record`<`string`, `string`> | packages/taskora/dist/types-KwmL9wkN.d.mts:757 |
| `jobId` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:756 |
| `task` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:754 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/TaskContract.md
---

[Taskora API Reference](../../index) / [index](../index) / TaskContract

# Interface: TaskContract\<TInput, TOutput\>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:99

A task contract — a pure, serializable declaration of a task's name, schemas, and defaults. Contracts have no runtime dependency on `App`, `Worker`, or `Adapter`. A producer service can import and dispatch contracts without pulling in handler code. A worker service attaches a handler via `taskora.implement(contract, handler)`.

Create a contract with [defineTask](../index#definetask) (runtime schemas) or [staticContract](../index#staticcontract) (types only).

## Type Parameters

| Type Parameter | Default type |
| ------ | ------ |
| `TInput` | `unknown` |
| `TOutput` | `unknown` |

## Properties

| Property | Modifier | Type | Description | Defined in |
| ------ | ------ | ------ | ------ | ------ |
| `__kind` | `readonly` | `"TaskContract"` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:100 |
| `[phantomInput]?` | `readonly` | `TInput` | **`Internal`** — phantom type carriers, never populated at runtime. | packages/taskora/dist/types-KwmL9wkN.d.mts:109 |
| `[phantomOutput]?` | `readonly` | `TOutput` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:110 |
| `input?` | `readonly` | `StandardSchemaV1`<`unknown`, `TInput`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:102 |
| `name` | `readonly` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:101 |
| `output?` | `readonly` | `StandardSchemaV1`<`unknown`, `TOutput`> | - | packages/taskora/dist/types-KwmL9wkN.d.mts:103 |
| `retry?` | `readonly` | [`RetryConfig`](../namespaces/Taskora/interfaces/RetryConfig) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:104 |
| `stall?` | `readonly` | [`StallConfig`](../namespaces/Taskora/interfaces/StallConfig) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:106 |
| `timeout?` | `readonly` | [`Duration`](../type-aliases/Duration) | - | packages/taskora/dist/types-KwmL9wkN.d.mts:105 |
| `version?` | `readonly` | `number` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:107 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/TaskEventMap.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / TaskEventMap

# Interface: TaskEventMap\<TOutput\>

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:724

## Type Parameters

| Type Parameter |
| ------ |
| `TOutput` |

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `active` | [`ActiveEvent`](ActiveEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:729 |
| `cancelled` | [`CancelledEvent`](CancelledEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:731 |
| `completed` | [`CompletedEvent`](CompletedEvent)<`TOutput`> | packages/taskora/dist/types-KwmL9wkN.d.mts:725 |
| `failed` | [`FailedEvent`](FailedEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:726 |
| `progress` | [`ProgressEvent`](ProgressEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:728 |
| `retrying` | [`RetryingEvent`](RetryingEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:727 |
| `stalled` | [`StalledEvent`](StalledEvent) | packages/taskora/dist/types-KwmL9wkN.d.mts:730 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/TaskoraOptions.md
---

[Taskora API Reference](../../index) / [index](../index) / TaskoraOptions

# Interface: TaskoraOptions

Defined in: packages/taskora/dist/app-BkYbyud9.d.mts:76

## Properties

| Property | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `adapter` | [`Adapter`](../namespaces/Taskora/interfaces/Adapter) | - | packages/taskora/dist/app-BkYbyud9.d.mts:77 |
| `defaults?` | `object` | - | packages/taskora/dist/app-BkYbyud9.d.mts:81 |
| `defaults.concurrency?` | `number` | - | packages/taskora/dist/app-BkYbyud9.d.mts:84 |
| `defaults.retry?` | [`RetryConfig`](../namespaces/Taskora/interfaces/RetryConfig) | - | packages/taskora/dist/app-BkYbyud9.d.mts:82 |
| `defaults.stall?` | [`StallConfig`](../namespaces/Taskora/interfaces/StallConfig) | - | packages/taskora/dist/app-BkYbyud9.d.mts:85 |
| `defaults.timeout?` | `number` | - | packages/taskora/dist/app-BkYbyud9.d.mts:83 |
| `retention?` | [`RetentionOptions`](../namespaces/Taskora/interfaces/RetentionOptions) | - | packages/taskora/dist/app-BkYbyud9.d.mts:80 |
| `scheduler?` | [`SchedulerConfig`](../namespaces/Taskora/interfaces/SchedulerConfig) | - | packages/taskora/dist/app-BkYbyud9.d.mts:79 |
| `serializer?` | [`Serializer`](../namespaces/Taskora/interfaces/Serializer) | - | packages/taskora/dist/app-BkYbyud9.d.mts:78 |
| `validateOnDispatch?` | `boolean` | Whether `dispatch()` validates input via the task's Standard Schema before enqueueing. Disable when the producer fully trusts the input (e.g. already validated upstream) and you want to skip the schema cost. Worker-side validation is unaffected — it always runs before the handler so job data is still checked at some boundary. Per-call `dispatch(data, { skipValidation: true })` overrides this. Default: `true`. | packages/taskora/dist/app-BkYbyud9.d.mts:97 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/test/interfaces/TestRunnerOptions.md
---

[Taskora API Reference](../../index) / [test](../index) / TestRunnerOptions

# Interface: TestRunnerOptions

Defined in: packages/taskora/dist/test/index.d.mts:4

## Properties

| Property | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `from?` | [`App`](../../taskora/classes/App) | Wrap an existing App — swaps all task adapters to in-memory. | packages/taskora/dist/test/index.d.mts:7 |
| `serializer?` | [`Serializer`](../../taskora/namespaces/Taskora/interfaces/Serializer) | - | packages/taskora/dist/test/index.d.mts:5 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/ThrottleConfig.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / ThrottleConfig

# Interface: ThrottleConfig

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:778

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `key` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:779 |
| `max` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:780 |
| `window` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:781 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/TtlConfig.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / TtlConfig

# Interface: TtlConfig

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:787

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `max` | `any` | packages/taskora/dist/types-KwmL9wkN.d.mts:788 |
| `onExpire?` | `"fail"` \| `"discard"` | packages/taskora/dist/types-KwmL9wkN.d.mts:789 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/WorkflowAdvanceResult.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / WorkflowAdvanceResult

# Interface: WorkflowAdvanceResult

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1250

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `completed` | `boolean` | packages/taskora/dist/types-KwmL9wkN.d.mts:1258 |
| `result?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1259 |
| `toDispatch` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1251 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/WorkflowCancelResult.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / WorkflowCancelResult

# Interface: WorkflowCancelResult

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1267

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `activeJobIds` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1268 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/WorkflowDetail.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / WorkflowDetail

# Interface: WorkflowDetail

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1226

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `createdAt` | `number` | packages/taskora/dist/types-KwmL9wkN.d.mts:1229 |
| `error` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1248 |
| `graph` | `object` | packages/taskora/dist/types-KwmL9wkN.d.mts:1230 |
| `graph.nodes` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1231 |
| `graph.terminal` | `number`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1238 |
| `id` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1227 |
| `nodes` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1240 |
| `result` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:1247 |
| `state` | [`WorkflowState`](../type-aliases/WorkflowState) | packages/taskora/dist/types-KwmL9wkN.d.mts:1228 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/WorkflowDispatchOptions.md
---

[Taskora API Reference](../../index) / [index](../index) / WorkflowDispatchOptions

# Interface: WorkflowDispatchOptions

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:294

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `name?` | `string` | packages/taskora/dist/types-KwmL9wkN.d.mts:296 |
| `ttl?` | [`Duration`](../type-aliases/Duration) | packages/taskora/dist/types-KwmL9wkN.d.mts:295 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/interfaces/WorkflowFailResult.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / WorkflowFailResult

# Interface: WorkflowFailResult

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1261

## Properties

| Property | Type | Defined in |
| ------ | ------ | ------ |
| `activeJobIds` | `object`\[] | packages/taskora/dist/types-KwmL9wkN.d.mts:1262 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/WorkflowGraph.md
---

[Taskora API Reference](../../index) / [index](../index) / WorkflowGraph

# Interface: WorkflowGraph

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:368

## Properties

| Property | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `name?` | `string` | Optional human-readable name for debugging. | packages/taskora/dist/types-KwmL9wkN.d.mts:373 |
| `nodes` | [`WorkflowNode`](WorkflowNode)\[] | - | packages/taskora/dist/types-KwmL9wkN.d.mts:369 |
| `terminal` | `number`\[] | Indices of terminal nodes (define final result). | packages/taskora/dist/types-KwmL9wkN.d.mts:371 |

---

---
url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/interfaces/WorkflowNode.md
---

[Taskora API Reference](../../index) / [index](../index) / WorkflowNode

# Interface: WorkflowNode

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:357

## Properties

| Property | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `_v` | `number` | Task version captured at signature creation. | packages/taskora/dist/types-KwmL9wkN.d.mts:366 |
| `data?` | `string` | Serialized bound data. Absent = receives from predecessor(s). | packages/taskora/dist/types-KwmL9wkN.d.mts:360 |
| `deps` | `number`\[] | Predecessor node indices. | packages/taskora/dist/types-KwmL9wkN.d.mts:362 |
| `jobId` | `string` | Pre-generated job ID. | packages/taskora/dist/types-KwmL9wkN.d.mts:364 |
| `taskName` | `string` | - | packages/taskora/dist/types-KwmL9wkN.d.mts:358 |

---

---
url: 'https://kravetsone.github.io/taskora/taskora/guide/job-context.md'
---

# Job Context

Every task handler receives a `ctx` object as its second argument. The context provides tools for communication, control flow, and observability during job execution.

```ts
taskora.task("process", async (data: Input, ctx) => {
  // ctx is Taskora.Context
})
```

## Properties

### `ctx.id`

The unique job ID (UUID v4).

### `ctx.attempt`

Current attempt number (starts at 1). Increments on each retry.

### `ctx.timestamp`

Job creation timestamp (milliseconds since epoch).

### `ctx.signal`

An `AbortSignal` that fires when:

* The job is **cancelled** via `handle.cancel()`
* The job **times out**

Use it with any API that accepts AbortSignal:

```ts
taskora.task("fetch-data", {
  timeout: 10_000,
  handler: async (data: { url: string }, ctx) => {
    const res = await fetch(data.url, { signal: ctx.signal })
    return await res.json()
  },
})
```

## Methods

### `ctx.heartbeat()`

Extend the job's lock TTL.
Call this in long-running jobs to prevent stall detection from reclaiming them. ```ts taskora.task("long-job", async (data, ctx) => { for (const chunk of chunks) { await processChunk(chunk) ctx.heartbeat() // "I'm still alive" } }) ``` ### `ctx.progress(value)` Report progress as a number or object. Fire-and-forget — does not block the handler. ```ts taskora.task("upload", async (data, ctx) => { ctx.progress(0) await step1() ctx.progress(33) await step2() ctx.progress(66) await step3() ctx.progress(100) }) ``` Progress is stored in Redis and queryable via `handle.getProgress()`. ### `ctx.retry(options?)` Request a manual retry. Returns a `RetryError` — throw it to trigger the retry. ```ts taskora.task("flaky-api", async (data, ctx) => { const res = await fetch(data.url) if (res.status === 429) { const retryAfter = Number(res.headers.get("retry-after")) * 1000 throw ctx.retry({ delay: retryAfter, reason: "Rate limited" }) } return await res.json() }) ``` `RetryError` **always retries** (bypasses `retryOn`/`noRetryOn` filters) unless max attempts are exhausted. ### `ctx.log` Structured logging attached to the job. Queryable via `handle.getLogs()`. 
```ts taskora.task("import", async (data, ctx) => { ctx.log.info("Starting import", { source: data.source }) ctx.log.warn("Skipping invalid row", { row: 42, reason: "missing field" }) ctx.log.error("Failed to parse", { raw: data.raw }) }) ``` Each log entry is a `LogEntry`: ```ts interface LogEntry { level: "info" | "warn" | "error" message: string meta?: Record<string, unknown> timestamp: number } ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/memory.md' --- [Taskora API Reference](../index) / memory # memory ## Classes * [MemoryBackend](classes/MemoryBackend) ## Functions ### memoryAdapter() > **memoryAdapter**(): [`Adapter`](../taskora/namespaces/Taskora/interfaces/Adapter) Defined in: packages/taskora/dist/memory/index.d.mts:225 #### Returns [`Adapter`](../taskora/namespaces/Taskora/interfaces/Adapter) --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/middleware.md' --- # Middleware Taskora uses a Koa-style **onion model** for middleware. Middleware wraps your handler — it can intercept data on the way in and results on the way out. ## App-Level Middleware Applied to **all tasks** in the app. ```ts taskora.use(async (ctx, next) => { const start = Date.now() await next() console.log(`${ctx.task.name} took ${Date.now() - start}ms`) }) ``` Multiple `taskora.use()` calls chain in order. Must be called **before** `taskora.start()`. ## Per-Task Middleware Applied to a specific task only. ```ts taskora.task("process-payment", { middleware: [authMiddleware, validateMiddleware], handler: async (data, ctx) => { /* ... 
*/ }, }) ``` ## Execution Order ``` taskora.use(mw1) ──→ taskora.use(mw2) ──→ task middleware ──→ handler │ │ │ │ │ before next() │ before next() │ before next() │ runs │ │ │ │ │ after next() │ after next() │ after next() │ returns ←──────────────────←──────────────────←──────────────────← ``` ## Middleware Context Middleware receives a `MiddlewareContext` that extends `Context` with: ```ts interface MiddlewareContext extends Context { task: { name: string } // which task is running data: unknown // mutable — transform input before handler result: unknown // readable after next() — transform output } ``` ### Mutating Data ```ts taskora.use(async (ctx, next) => { // Transform input before handler ctx.data = { ...ctx.data, processedAt: Date.now() } await next() }) ``` ### Reading/Modifying Results ```ts taskora.use(async (ctx, next) => { await next() // Wrap result after handler ctx.result = { data: ctx.result, meta: { processedBy: "v2" } } }) ``` ### Error Handling ```ts taskora.use(async (ctx, next) => { try { await next() } catch (err) { ctx.log.error("Task failed", { error: err.message }) throw err // re-throw to propagate } }) ``` ## Practical Examples ### Logging Middleware ```ts taskora.use(async (ctx, next) => { ctx.log.info(`Starting ${ctx.task.name}`, { attempt: ctx.attempt }) const start = Date.now() try { await next() ctx.log.info(`Completed in ${Date.now() - start}ms`) } catch (err) { ctx.log.error(`Failed after ${Date.now() - start}ms`, { error: err.message }) throw err } }) ``` ### Metrics Middleware ```ts taskora.use(async (ctx, next) => { const timer = metrics.startTimer({ task: ctx.task.name }) try { await next() metrics.increment("task.success", { task: ctx.task.name }) } catch (err) { metrics.increment("task.failure", { task: ctx.task.name }) throw err } finally { timer.end() } }) ``` ## Composition Middleware is composed once at worker construction time (not per job) using the internal `compose()` function. 
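As a mental model, the onion composition can be sketched as a stand-alone `compose()` — an illustrative re-implementation of the Koa-style pattern, not taskora's actual internal code:

```typescript
// Illustrative Koa-style compose() — demonstrates the onion ordering only.
type Middleware<C> = (ctx: C, next: () => Promise<void>) => Promise<void> | void

function compose<C>(middleware: Middleware<C>[]) {
  return (ctx: C, handler: (ctx: C) => Promise<void> | void) => {
    let index = -1
    async function dispatch(i: number): Promise<void> {
      if (i <= index) throw new Error("next() called multiple times")
      index = i
      if (i === middleware.length) {
        await handler(ctx) // innermost layer: the task handler itself
        return
      }
      // Each middleware receives a next() that runs the rest of the chain.
      await middleware[i](ctx, () => dispatch(i + 1))
    }
    return dispatch(0)
  }
}

// Two middleware around a handler: "before" runs outside-in, "after" inside-out.
const order: string[] = []
const run = compose<{ order: string[] }>([
  async (ctx, next) => { ctx.order.push("mw1 before"); await next(); ctx.order.push("mw1 after") },
  async (ctx, next) => { ctx.order.push("mw2 before"); await next(); ctx.order.push("mw2 after") },
])
await run({ order }, (ctx) => { ctx.order.push("handler") })
console.log(order.join(" → "))
// → mw1 before → mw2 before → handler → mw2 after → mw1 after
```

Because composition happens once, each job only pays for plain function calls at run time.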
The pipeline is: `deserialize → migrate → validate → [middleware chain → handler]`. --- --- url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs/middleware.md' --- # Middleware taskora has a Koa-style onion middleware model — each middleware receives the `MiddlewareContext` and a `next()` function, runs work before/after `next()`, and is composed into a chain applied around the handler. `@taskora/nestjs` lets you write that chain as **DI-managed classes**, so middleware can inject any provider it needs (loggers, tracers, metrics, config) without any global state. ## The interface ```ts import type { Taskora } from "taskora" export interface TaskoraMiddleware { use( ctx: Taskora.MiddlewareContext, next: () => Promise<void>, ): Promise<void> | void } ``` `ctx` extends the per-job `Context` with `task: { name }`, mutable `data`, and (after `next()` resolves) mutable `result`. You can: * **Read or mutate `ctx.data`** before calling `next()` — e.g. to inject correlation IDs, decrypt a payload. * **Read or mutate `ctx.result`** after `next()` returns — e.g. to scrub PII before it's persisted. * **Wrap the call in `try/catch`** to centralise error handling or metric timing. * **Skip `next()`** to short-circuit — the handler never runs and `ctx.result` stays `undefined`. * **Use `ctx.log`, `ctx.progress`, `ctx.signal`** exactly like inside the handler. 
## Writing a middleware class ```ts // src/common/middleware/logging.middleware.ts import { Logger } from "@nestjs/common" import { TaskMiddleware, type TaskoraMiddleware } from "@taskora/nestjs" import type { Taskora } from "taskora" @TaskMiddleware() export class LoggingMiddleware implements TaskoraMiddleware { private readonly logger = new Logger("Taskora") async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { const start = Date.now() this.logger.log(`→ ${ctx.task.name} attempt=${ctx.attempt}`) try { await next() this.logger.log(`✓ ${ctx.task.name} (${Date.now() - start}ms)`) } catch (err) { this.logger.error(`✗ ${ctx.task.name} (${Date.now() - start}ms)`, err as Error) throw err } } } ``` `@TaskMiddleware()` is an alias for `@Injectable()` — it's optional but makes the intent explicit at the top of the file. Any class that implements the `use(ctx, next)` shape and is a Nest provider works. ## Wiring with `forRoot` Pass the class to `TaskoraModule.forRoot({ middleware })` and register it as a provider in the owning module: ```ts import { Module } from "@nestjs/common" import { TaskoraModule } from "@taskora/nestjs" import { LoggingMiddleware } from "./common/middleware/logging.middleware" @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), middleware: [LoggingMiddleware], // ← references the class }), ], providers: [LoggingMiddleware], // ← ALSO must be in providers so Nest instantiates it }) export class AppModule {} ``` The `TaskoraExplorer` walks the DI graph on bootstrap, finds the middleware instance by class reference, and calls `app.use((ctx, next) => instance.use(ctx, next))`. The closure captures the DI-managed instance, so every field and injected dependency stays live across jobs. 
**If you forget to add the class to `providers`**, the explorer throws a clear error at init: ``` TaskoraModule middleware LoggingMiddleware was listed in forRoot({ middleware }) but no DI instance was found. Make sure it is included in the owning module's providers: [LoggingMiddleware] array. ``` ## Composition order `middleware: [Outer, Middle, Inner]` registers them in list order with `app.use()`, which means the first entry is **outermost** — it wraps the others. The composed chain runs like this for a single job: ``` Outer.use() → before next() Middle.use() → before next() Inner.use() → before next() Inner.use() → after next() Middle.use() → after next() Outer.use() → after next() ``` If you need one middleware to see the result of another (e.g. a metrics middleware that needs to know whether a logging middleware set a correlation ID), order them explicitly. ## Multiple middleware classes with shared dependencies DI means all middleware classes resolve from the same container, so they can share singletons: ```ts @Injectable() export class CorrelationService { generate(): string { return randomUUID() } } @TaskMiddleware() export class CorrelationMiddleware implements TaskoraMiddleware { constructor(private readonly correlation: CorrelationService) {} async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { (ctx as any).correlationId = this.correlation.generate() await next() } } @TaskMiddleware() export class TracingMiddleware implements TaskoraMiddleware { constructor( private readonly tracer: TracerService, private readonly correlation: CorrelationService, // same singleton ) {} async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { const span = this.tracer.startSpan(ctx.task.name, { attributes: { correlationId: (ctx as any).correlationId }, }) try { await next() span.end() } catch (err) { span.recordException(err as Error) span.end() throw err } } } ``` Register both as providers and list them in order: ```ts TaskoraModule.forRoot({ 
adapter: …, middleware: [CorrelationMiddleware, TracingMiddleware], // correlation wraps tracing }) // providers array: providers: [ CorrelationService, TracerService, CorrelationMiddleware, TracingMiddleware, ] ``` ## Inline function middleware still works If you don't need DI, taskora's native `app.use(fn)` still works. Grab the App via `@InjectApp` and call it from a startup hook: ```ts import { Injectable, OnApplicationBootstrap } from "@nestjs/common" import { InjectApp } from "@taskora/nestjs" import type { App } from "taskora" @Injectable() export class InlineMiddlewareInstaller implements OnApplicationBootstrap { constructor(@InjectApp() private readonly app: App) {} onApplicationBootstrap() { this.app.use(async (ctx, next) => { console.log("inline before", ctx.task.name) await next() console.log("inline after", ctx.task.name) }) } } ``` Usually not worth it — if you're going through the `@InjectApp` dance to register middleware, you'd rather have a `@TaskMiddleware` class. But the escape hatch is there. ## Common patterns ### Timing / metrics Already shown above in `LoggingMiddleware`. The standard pattern: record `Date.now()` before `next()`, subtract after, push to your histogram service. ### Correlation / request ID propagation Use the example above — a `CorrelationMiddleware` that stamps a value on `ctx` before `next()`. Handlers can read it via `ctx.data`-adjacent fields if you extend the type. 
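Stripped of the Nest plumbing, the correlation pattern is just "stamp before `next()`, read downstream". A self-contained sketch (plain objects, no DI — `CorrelatedCtx` and the function names are hypothetical):

```typescript
import { randomUUID } from "node:crypto"

// Hypothetical minimal context — stands in for MiddlewareContext here.
interface CorrelatedCtx {
  task: { name: string }
  correlationId?: string
}

class CorrelationService {
  generate(): string { return randomUUID() }
}

const correlation = new CorrelationService()

// Stamp the ID before next(); anything downstream (inner middleware,
// the handler) sees the same value on ctx.
async function correlationMiddleware(ctx: CorrelatedCtx, next: () => Promise<void>) {
  ctx.correlationId ??= correlation.generate()
  await next()
}

const ctx: CorrelatedCtx = { task: { name: "send-email" } }
await correlationMiddleware(ctx, async () => {
  console.log(`handling ${ctx.task.name} correlationId=${ctx.correlationId}`)
})
```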
### Auth / authorization for jobs If a job payload carries a tenant ID or user context, validate it in middleware before `next()`: ```ts @TaskMiddleware() export class TenantGuardMiddleware implements TaskoraMiddleware { constructor(private readonly tenants: TenantService) {} async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { const tenantId = (ctx.data as { tenantId?: string }).tenantId if (!tenantId) throw new Error("job missing tenantId") const tenant = await this.tenants.findActive(tenantId) if (!tenant) throw new Error(`tenant ${tenantId} is inactive`) await next() } } ``` ### Result scrubbing Mutate `ctx.result` after `next()` to strip PII before taskora persists it: ```ts @TaskMiddleware() export class PiiScrubberMiddleware implements TaskoraMiddleware { async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { await next() if (ctx.result && typeof ctx.result === "object") { ctx.result = redact(ctx.result as Record<string, unknown>, ["email", "phone"]) } } } ``` ## Per-consumer middleware The current release wires middleware at the **app level** — every task runs through the same chain. Per-consumer middleware (e.g. `@TaskConsumer(contract, { middleware: [SomeMw] })`) is planned for a future phase. Until then, the workaround is to gate logic inside a global middleware by `ctx.task.name`: ```ts async use(ctx: Taskora.MiddlewareContext, next: () => Promise<void>) { if (ctx.task.name === "send-email") { // email-specific setup } await next() } ``` ## Multi-app middleware `TaskoraModule.forRoot({ name: 'secondary', middleware: [...] })` binds the listed middleware to that named app. Default-app and secondary-app middleware chains are fully independent. 
```ts @Module({ imports: [ TaskoraModule.forRoot({ adapter: primaryAdapter, middleware: [CorrelationMiddleware, LoggingMiddleware], }), TaskoraModule.forRoot({ name: "background", adapter: backgroundAdapter, middleware: [TracingMiddleware], // different chain for the background app }), ], providers: [CorrelationMiddleware, LoggingMiddleware, TracingMiddleware], }) export class AppModule {} ``` Same DI instances are reusable — `CorrelationMiddleware` is a singleton regardless of how many named apps reference it. --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/monitoring.md' --- # Monitoring Patterns for monitoring taskora in production. ::: tip Looking for the admin UI? For an in-process, batteries-included dashboard with workflow DAGs, schedule management, DLQ, and live SSE updates, see [Board](/operations/board). This page covers long-term metrics pipelines (Grafana / Datadog / Prometheus) — the board is not a replacement for those, it complements them. ::: ## Health Check Endpoint ```ts app.get("/health/queue", async (req, res) => { const stats = await taskora.inspect().stats() const healthy = stats.failed < 100 && stats.waiting < 10_000 res.status(healthy ? 
200 : 503).json(stats) }) ``` ## Metrics Collection Use events to feed metrics into Prometheus, Datadog, or any metrics system: ```ts taskora.on("task:completed", ({ task, duration }) => { metrics.histogram("taskora.job.duration_ms", duration, { task }) metrics.increment("taskora.job.completed", { task }) }) taskora.on("task:failed", ({ task, willRetry }) => { if (willRetry) { metrics.increment("taskora.job.retried", { task }) } else { metrics.increment("taskora.job.failed", { task }) } }) taskora.on("task:stalled", ({ task, action }) => { metrics.increment("taskora.job.stalled", { task, action }) }) ``` ## Periodic Stats Collection ```ts setInterval(async () => { const stats = await taskora.inspect().stats() metrics.gauge("taskora.queue.waiting", stats.waiting) metrics.gauge("taskora.queue.active", stats.active) metrics.gauge("taskora.queue.delayed", stats.delayed) metrics.gauge("taskora.queue.failed", stats.failed) }, 10_000) ``` ## Alert Thresholds Suggested alerts for production: | Metric | Warning | Critical | |---|---|---| | `failed` count | > 50 | > 200 | | `waiting` depth | > 5,000 | > 50,000 | | Job duration P99 | > 30s | > 120s | | Stalled jobs/hour | > 5 | > 20 | ## Dashboard Queries Example queries for a Grafana/Prometheus dashboard: * **Throughput:** `rate(taskora_job_completed_total[5m])` by task * **Error rate:** `rate(taskora_job_failed_total[5m]) / rate(taskora_job_completed_total[5m])` * **Queue depth:** `taskora_queue_waiting` by task * **Latency P95:** `histogram_quantile(0.95, taskora_job_duration_ms_bucket)` ## Inspecting Individual Jobs ```ts const job = await taskora.inspect().find(jobId) if (job) { console.log(`State: ${job.state}`) console.log(`Attempt: ${job.attempt}`) console.log(`Error: ${job.error}`) console.log(`Logs:`, job.logs) console.log(`Timeline:`, job.timeline) } ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs.md' --- # NestJS Integration `@taskora/nestjs` is the first-class Nest integration 
for taskora. It wires the full producer + consumer surface into Nest's DI graph so your task code feels native to the framework — no bullmq-style boilerplate, no factory ceremony, no duplicate type annotations. Everything taskora exposes (dispatching, handlers, events, middleware, inspector, DLQ, schedules, the admin dashboard, and an end-to-end test harness) is injectable. Constructor DI just works — including for `@TaskConsumer` classes that run as workers. ## 30-second tour ```ts // src/tasks/contracts.ts import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string() }), output: z.object({ messageId: z.string() }), }) ``` ```ts // src/email/email.consumer.ts import { TaskConsumer, OnTaskEvent } from "@taskora/nestjs" import type { InferInput, InferOutput, Taskora } from "taskora" import { MailerService } from "./mailer.service" import { sendEmailTask } from "../tasks/contracts" @TaskConsumer(sendEmailTask, { concurrency: 10 }) export class SendEmailConsumer { constructor(private readonly mailer: MailerService) {} async process( data: InferInput<typeof sendEmailTask>, _ctx: Taskora.Context, ): Promise<InferOutput<typeof sendEmailTask>> { return this.mailer.send(data) } @OnTaskEvent("completed") onSent() { // metrics, logs, whatever — DI dependencies are alive here } } ``` ```ts // src/email/email.service.ts import { Injectable } from "@nestjs/common" import { TaskoraRef } from "@taskora/nestjs" import { sendEmailTask } from "../tasks/contracts" @Injectable() export class EmailService { constructor(private readonly tasks: TaskoraRef) {} async notifySignup(user: { email: string; name: string }) { // Full type safety — no manual BoundTask annotation. 
await this.tasks.for(sendEmailTask).dispatch({ to: user.email, subject: `Welcome, ${user.name}`, }) } } ``` ```ts // src/app.module.ts import { Module } from "@nestjs/common" import { TaskoraModule } from "@taskora/nestjs" import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" import { SendEmailConsumer } from "./email/email.consumer" import { EmailService } from "./email/email.service" import { MailerService } from "./email/mailer.service" @Module({ imports: [ TaskoraModule.forRoot({ adapter: redisAdapter({ client: new Redis(process.env.REDIS_URL!) }), }), ], providers: [SendEmailConsumer, EmailService, MailerService], }) export class AppModule {} ``` That's it. `main.ts` is standard Nest bootstrap: ```ts import "reflect-metadata" import { NestFactory } from "@nestjs/core" import { AppModule } from "./app.module" async function bootstrap() { const app = await NestFactory.create(AppModule) app.enableShutdownHooks() // so @taskora/nestjs can drain jobs on SIGTERM await app.listen(3000) } bootstrap() ``` ## What you get * **`TaskoraModule.forRoot` / `forRootAsync`** — register taskora in the Nest container exactly like `TypeOrmModule` or `BullModule`. * **`TaskoraRef.for(contract)`** — zero-decorator, fully inferred dispatchers. Replaces the duplicated `BoundTask` annotations you'd get from property-style injection. * **`@TaskConsumer(contract)`** — mark any provider as a worker handler. Full DI in the constructor, `process(data, ctx)` method runs inside the real worker loop. * **`@OnTaskEvent('completed' | 'failed' | …)`** — method-level bindings for per-task events. * **Class middleware** — `@TaskMiddleware()` providers composed via `forRoot({ middleware })`, Koa-style onion chain with live DI dependencies. * **Observability accessors** — `Inspector`, `DeadLetterManager`, and the schedule manager are injectable via class tokens (zero decorator) or `@InjectInspector` / `@InjectDeadLetters` / `@InjectSchedules` for named apps. 
* **`TaskoraBoardModule`** — optional dynamic import of `@taskora/board`; mount the admin dashboard from `main.ts` with three lines. * **Multi-app** — every decorator and module method takes an optional `name` so one Nest container can host multiple independent taskora apps (separate Redis clusters, per-tenant isolation, etc.). * **`@taskora/nestjs/testing`** — opt-in subpath with `TaskoraTestHarness`, a pre-wired memory-adapter testing module that runs the real producer + consumer pipeline end-to-end in milliseconds, no Redis / Docker required. ## Installation ::: pm-add @taskora/nestjs taskora reflect-metadata ::: For the Redis adapter (production): ::: pm-add taskora ioredis ::: For the admin dashboard (optional): ::: pm-add @taskora/board hono @hono/node-server ::: `reflect-metadata` is a peer dependency — Nest needs it loaded before any decorated class is evaluated. Import it once at the top of `main.ts`: ```ts import "reflect-metadata" ``` ## TypeScript configuration Nest's constructor DI relies on emitted decorator metadata. Make sure your `tsconfig.json` has both flags: ```jsonc { "compilerOptions": { "experimentalDecorators": true, "emitDecoratorMetadata": true } } ``` Without `emitDecoratorMetadata`, `constructor(private tasks: TaskoraRef)` silently resolves to `undefined` and every job throws on the first dispatch. If you're using SWC or esbuild (e.g. vitest default transform), you need the equivalent flag in their configs — see [Testing](./testing) for details. ## Next steps 1. [**File layout**](./file-layout) — recommended project structure for contracts, consumers, and modules. 2. [**Dispatching**](./dispatching) — `TaskoraRef.for()`, `@InjectTask`, and `forFeature`. 3. [**Consumers**](./consumers) — `@TaskConsumer`, `@OnTaskEvent`, lifecycle. 4. [**Middleware**](./middleware) — class middleware with DI. 5. [**Observability**](./observability) — inspector, DLQ, schedules as injectable services. 6. 
[**Admin dashboard**](./board) — mounting `@taskora/board` in a Nest app. 7. [**Testing**](./testing) — `TaskoraTestHarness` patterns. 8. [**Deployment**](./deployment) — multi-app, producer/worker split, graceful shutdown. --- --- url: >- https://kravetsone.github.io/taskora/taskora/integrations/nestjs/observability.md --- # Observability Taskora exposes three operational surfaces beyond the dispatch/handler path: * **Inspector** — cross-task queries, queue stats, per-job details * **DeadLetterManager** — list failed jobs, retry individually or in bulk * **Schedules** — runtime list/pause/resume/trigger/delete for scheduled tasks `@taskora/nestjs` exposes each one as a regular Nest provider. The default slot resolves via the **class token**, so you can write `constructor(private inspector: Inspector)` without any decorator — just like constructor-injecting `DataSource` from TypeORM. For named multi-app slots, use `@InjectInspector('name')`, `@InjectDeadLetters('name')`, or `@InjectSchedules('name')`. ## Inspector The `Inspector` is taskora's read-heavy query surface — cross-task search by job ID, per-state queue listings, aggregated stats, migration distribution, and per-job details with logs/progress/timeline. ```ts import { Injectable } from "@nestjs/common" import { Inspector } from "taskora" @Injectable() export class QueueStatsService { constructor(private readonly inspector: Inspector) {} async dashboardSnapshot() { const stats = await this.inspector.stats() const recentlyFailed = await this.inspector.failed({ limit: 20 }) const activeWorkflows = (await this.inspector.active()).length return { stats, recentlyFailed, activeWorkflows } } async findJob(jobId: string) { return this.inspector.find(jobId) } } ``` ### Typical uses * **Admin HTTP endpoints** — expose `inspector.stats()` / `inspector.failed()` / `inspector.find()` behind your own auth if you don't want to mount the full board. 
* **Health checks** — assert queue depth is under a threshold, or that there are no stalled jobs. * **Business dashboards** — build custom panels that show per-task completion rates, recent activity, or cross-task drilldowns. ### Controller example ```ts import { Controller, Get, Param, UseGuards } from "@nestjs/common" import { Inspector } from "taskora" import { AdminGuard } from "../auth/admin.guard" @Controller("admin/queue") @UseGuards(AdminGuard) export class QueueAdminController { constructor(private readonly inspector: Inspector) {} @Get("stats") stats() { return this.inspector.stats() } @Get("failed") failed() { return this.inspector.failed({ limit: 100 }) } @Get("jobs/:id") async findJob(@Param("id") id: string) { const job = await this.inspector.find(id) if (!job) return { found: false } return { found: true, job } } } ``` See the main [Inspector](/operations/inspector) guide for the full method list. ## Dead Letter Manager `DeadLetterManager` is the admin-side handle for permanently-failed jobs. It's a view over taskora's `:failed` sorted set — list, retry one, retry all. ```ts import { Injectable } from "@nestjs/common" import { DeadLetterManager } from "taskora" @Injectable() export class DlqService { constructor(private readonly dlq: DeadLetterManager) {} async recent(limit = 50) { return this.dlq.list({ limit }) } async recentForTask(taskName: string, limit = 50) { return this.dlq.list({ task: taskName, limit }) } async retryOne(jobId: string) { return this.dlq.retry(jobId) } async retryAllForTask(taskName: string) { return this.dlq.retryAll({ task: taskName }) } } ``` ### Retrying a specific job ```ts @Controller("admin/dlq") @UseGuards(AdminGuard) export class DlqController { constructor(private readonly dlq: DeadLetterManager) {} @Post("jobs/:id/retry") retry(@Param("id") id: string) { return this.dlq.retry(id) } } ``` `retry(jobId)` searches across every task for the job, so you don't need to know which task it belongs to. 
For the typed variant that keeps `JobInfo` generics, pass the `Task` object: `dlq.retry(task, jobId)`. ### Bulk retry `retryAll({ task? })` retries every job in the DLQ for the given task (or all tasks if omitted). The underlying Lua script batches 100 jobs per iteration so it's safe to run against thousands of entries. ## Schedules `app.schedules` is the runtime manager for scheduled tasks — list current schedules, pause/resume specific ones, trigger a run manually, update the cron or interval, delete a schedule entirely. The `ScheduleManager` class isn't in taskora's public type exports, so `@InjectSchedules` uses a string token and the type annotation references `App["schedules"]`: ```ts import { Injectable } from "@nestjs/common" import { InjectSchedules } from "@taskora/nestjs" import type { App } from "taskora" @Injectable() export class SchedulesService { constructor(@InjectSchedules() private readonly schedules: App["schedules"]) {} async listAll() { return this.schedules.list() } async pause(name: string) { return this.schedules.pause(name) } async resume(name: string) { return this.schedules.resume(name) } async triggerNow(name: string) { return this.schedules.trigger(name) } async remove(name: string) { return this.schedules.remove(name) } async updateInterval(name: string, every: string) { return this.schedules.update(name, { every }) } } ``` ### Registering schedules dynamically If your schedules come from a database or config service (rather than being hard-coded in `defineTask`), register them from an `OnModuleInit` provider: ```ts import { Injectable, OnModuleInit } from "@nestjs/common" import { InjectApp, InjectSchedules } from "@taskora/nestjs" import type { App } from "taskora" import { ConfigService } from "@nestjs/config" @Injectable() export class ScheduleRegistrar implements OnModuleInit { constructor( @InjectApp() private readonly app: App, @InjectSchedules() private readonly schedules: App["schedules"], private readonly config: 
ConfigService, ) {} async onModuleInit() { for (const job of this.config.get("scheduledJobs", [])) { this.app.schedule(job.name, { cron: job.cron, task: job.taskName, data: job.payload }) } } } ``` `app.schedule()` stages the schedule for when the App starts; the `TaskoraExplorer` calls `app.start()` during `onApplicationBootstrap`, which flushes all pending schedules into the scheduler. Registering schedules in `onModuleInit` (before bootstrap) is the right lifecycle hook. ## Multi-app observability Each named app gets its own Inspector / DLQ / Schedules slot: ```ts import { Injectable } from "@nestjs/common" import { InjectDeadLetters, InjectInspector, InjectSchedules, } from "@taskora/nestjs" import { DeadLetterManager, Inspector } from "taskora" import type { App } from "taskora" @Injectable() export class AdminService { constructor( // Default app — zero-decorator class tokens readonly primaryInspector: Inspector, readonly primaryDlq: DeadLetterManager, @InjectSchedules() readonly primarySchedules: App["schedules"], // Named "secondary" app — string tokens via decorators @InjectInspector("secondary") readonly secondaryInspector: Inspector, @InjectDeadLetters("secondary") readonly secondaryDlq: DeadLetterManager, @InjectSchedules("secondary") readonly secondarySchedules: App["schedules"], ) {} } ``` The default slot uses the `Inspector` / `DeadLetterManager` class tokens (only one `forRoot` without a `name` can own those). Every named slot goes through the string-token decorators. Schedules always use the decorator since `App["schedules"]` isn't a class. 
## Full admin module pattern

A clean way to package observability endpoints is one module, one controller per surface:

```
src/admin/
├── admin.module.ts
├── admin.guard.ts
├── queue.controller.ts      ← uses Inspector
├── dlq.controller.ts        ← uses DeadLetterManager
└── schedules.controller.ts  ← uses App['schedules']
```

```ts
@Module({
  providers: [AdminGuard],
  controllers: [QueueController, DlqController, SchedulesController],
})
export class AdminModule {}
```

`TaskoraModule` in `AppModule` is `@Global`, so `AdminModule` doesn't need to re-import it — `Inspector` / `DeadLetterManager` / schedules are already in scope.

For a ready-made UI instead of a custom admin module, reach for [`@taskora/board`](./board).

---

---
url: 'https://kravetsone.github.io/taskora/taskora/operations/performance.md'
---

# Performance

Taskora's performance depends on three axes: which Redis-compatible server you use, which JavaScript runtime (Bun, Node.js, or Deno) you run on, and how Redis is configured. This page covers all three with real benchmark data and practical tuning advice.

All numbers below are from taskora's built-in benchmark suite (`@taskora/bench`), each run isolated (one library, one runtime, one fresh Redis container at a time). They reflect relative differences — absolute numbers depend on hardware, network, and payload size.
## taskora vs BullMQ ### Throughput (ops/sec, Redis 7, isolated runs, median of 3 iterations) | Benchmark | taskora (Bun) | BullMQ (Bun) | taskora (Node) | BullMQ (Node) | |---|---:|---:|---:|---:| | enqueue (single) | 9,307 | 9,025 | **9,250** | 8,228 | | enqueue (bulk, batch=50) | **106,586** | 49,136 | **89,677** | 49,781 | | process (c=1) | 6,964 | 6,873 | **8,273** | 6,464 | | process (c=100) | **31,955** | 27,879 | **43,320** | 27,635 | | latency throughput | 5,292 | 5,155 | **7,643** | 4,029 | ### Latency (ms, Bun) | Library | p50 | p95 | p99 | |---|---:|---:|---:| | taskora | 0.27 | 0.83 | 1.39 | | BullMQ | 0.56 | 1.85 | 2.80 | ### Latency (ms, Node.js v22) | Library | p50 | p95 | p99 | |---|---:|---:|---:| | taskora | 0.23 | 0.40 | 0.72 | | BullMQ | 0.49 | 1.85 | 3.50 | ### Memory per job | Library | B/job (single) | B/job (bulk) | |---|---:|---:| | taskora | 355 | 360 | | BullMQ | 279 | 254 | ### Takeaways * **Single enqueue** is close — both libraries are limited by one Lua round trip per job. taskora is ~10% faster on Node. * **Bulk enqueue**: taskora is **~2x faster** thanks to `dispatchMany()` batching multiple jobs into a single pipeline round trip, while BullMQ's `addBulk()` issues a separate `EVALSHA` per job. * **Processing**: taskora is 1.1–1.6x faster. The gap comes from taskora's fused ack+dequeue Lua script — one `EVALSHA` per job vs. BullMQ's separate ack and dequeue calls. * **Latency**: taskora has **2–5x lower tail latency** (p95/p99) due to fewer Redis round trips in the hot path. * **Memory**: BullMQ uses ~20% less memory per job. BullMQ stores fewer metadata fields per job hash. The gap narrows with larger payloads where the data field dominates. ## Redis-Compatible Servers Taskora works with any server that speaks the Redis protocol. The three main options: | Server | Description | |---|---| | **Redis 7** | The original. Single-threaded, battle-tested, widest ecosystem. | | **Valkey 8** | Redis fork (Linux Foundation). 
Drop-in compatible, same protocol. | | **Dragonfly** | Multi-threaded reimplementation. Different storage engine. | ### Throughput (ops/sec, Node.js v22, median of 3 runs) | Benchmark | Redis 7 | Valkey 8 | Dragonfly | |---|---:|---:|---:| | enqueue (single) | 5,275 | 5,402 | 1,229 | | enqueue (bulk, batch=50) | 80,798 | 93,345 | 10,622 | | process (c=1) | 5,306 | 5,563 | 1,404 | | process (c=100) | 30,799 | 36,152 | 1,781 | | latency throughput | 4,429 | 5,169 | 807 | ### Latency (ms) | Server | p50 | p95 | p99 | |---|---:|---:|---:| | Redis 7 | 0.39 | 0.74 | 1.20 | | Valkey 8 | 0.33 | 0.60 | 0.93 | | Dragonfly | 2.09 | 2.97 | 4.14 | ### Memory per job | Server | B/job | |---|---:| | Redis 7 | 355 | | Valkey 8 | 324 | | Dragonfly | 274 | ### Takeaways * **Redis and Valkey** are both excellent. Valkey is slightly faster at high concurrency and has better tail latency. Switching between them is a Docker image swap — no code changes. * **Dragonfly** is 5–15x slower on taskora workloads. Dragonfly is multi-threaded, but its Lua engine serializes script execution. Since taskora uses atomic Lua scripts for every state transition (enqueue, dequeue, ack, fail), this becomes the bottleneck. Dragonfly also requires the `--default_lua_flags=allow-undeclared-keys` flag because taskora constructs keys inside scripts. * **Memory** differences are modest. Dragonfly's storage engine is the most compact (274 B/job), but the gap narrows with Redis tuning (see [below](#hash-max-listpack-value)). ::: tip Recommendation Use **Redis** or **Valkey** — whichever your team is more comfortable operating. If you're starting fresh, Valkey is a strong default. ::: ## Runtimes Taskora runs on Bun, Node.js, and Deno. The runtime affects client-side overhead — serialization, event loop scheduling, and ioredis internals. 
### Throughput (ops/sec, taskora, Redis 7, isolated runs, median of 3 iterations)

| Benchmark | Bun | Node.js v22 | Deno |
|---|---:|---:|---:|
| enqueue (single) | 9,307 | 9,250 | **13,045** |
| enqueue (bulk, batch=50) | 106,586 | 89,677 | **123,870** |
| process (c=1) | 6,964 | 8,273 | **10,860** |
| process (c=100) | 31,955 | 43,320 | **47,311** |
| latency throughput | 5,292 | 7,643 | **8,483** |

### Latency (ms, Redis 7)

| Runtime | p50 | p95 | p99 |
|---|---:|---:|---:|
| Bun | 0.27 | 0.83 | 1.39 |
| Node.js v22 | 0.23 | 0.40 | 0.72 |
| Deno | **0.18** | **0.35** | **0.62** |

### Takeaways

* **Deno** is the fastest runtime across all benchmarks — roughly 1.2–1.6x Bun and 1.1–1.4x Node. It also has the best latency profile. Deno runs ioredis through its Node.js compatibility layer. Use `deno run -A --unstable-sloppy-imports` to run.
* **Node.js** is competitive with Bun and beats it on concurrent processing (43k vs 32k ops/sec) and latency. Node's event loop scheduling handles high concurrency well.
* **Bun** has the fastest single enqueue but trails on concurrent workloads. Bun's event loop overhead grows with many in-flight promises.

::: info
These benchmarks measure the full queue pipeline (serialize → Lua script → Redis → deserialize). The runtime difference is only the client-side overhead — Redis is the same in all cases.
:::

## Redis Drivers

Taskora ships two Redis drivers: **ioredis** (Node.js library, works on all runtimes) and a **Bun native driver** (`taskora/redis/bun`) that uses `Bun.RedisClient` — Bun's built-in Redis client with auto-pipelining.
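The choice between the two drivers boils down to a simple decision rule. A tiny illustrative sketch — the two import paths are the documented entrypoints, but the helper itself is hypothetical, not part of the library:

```typescript
// Sketch of the driver decision rule: Cluster/Sentinel need ioredis;
// plain Bun deployments can use the native driver to drop the peer dep.
function pickRedisDriver(env: {
  isBun: boolean
  needsCluster?: boolean
  needsSentinel?: boolean
}): "taskora/redis" | "taskora/redis/bun" {
  // Cluster and Sentinel are ioredis-only features
  if (env.needsCluster || env.needsSentinel) return "taskora/redis"
  // On Bun, the native driver removes the ioredis peer dependency;
  // on Node.js and Deno, ioredis is the only option
  return env.isBun ? "taskora/redis/bun" : "taskora/redis"
}

pickRedisDriver({ isBun: true })                     // "taskora/redis/bun"
pickRedisDriver({ isBun: true, needsCluster: true }) // "taskora/redis"
pickRedisDriver({ isBun: false })                    // "taskora/redis"
```

As the benchmarks below show, performance is not a factor in this choice.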
### Throughput (ops/sec, Bun runtime, Redis 7, median of 3 runs)

| Benchmark | ioredis | Bun native |
|---|---:|---:|
| enqueue (single) | 11,928 | 11,391 |
| enqueue (bulk, batch=50) | 128,407 | 125,853 |
| process (c=1) | 10,333 | 10,310 |
| process (c=100) | 42,594 | 42,700 |
| latency throughput | 8,053 | 8,001 |

### Latency (ms)

| Driver | p50 | p95 | p99 |
|---|---:|---:|---:|
| ioredis | 0.21 | 0.31 | 0.93 |
| Bun native | 0.21 | 0.29 | 0.91 |

### Takeaways

The two drivers are **effectively identical** in performance. Bun's auto-pipelining (batching same-tick `.send()` calls into one round trip) matches ioredis's pipeline behavior, and the overhead of RESP parsing is negligible vs. the Lua script execution time on the server.

Choose based on operational constraints:

| | ioredis (`taskora/redis`) | Bun native (`taskora/redis/bun`) |
|---|---|---|
| **Runtimes** | Bun, Node.js, Deno | Bun only |
| **Cluster** | Yes | No |
| **Sentinel** | Yes | No |
| **Peer deps** | `ioredis` | None (built-in) |

If you need Cluster or Sentinel, use ioredis. If you're on Bun and want zero peer deps, the Bun driver is a safe choice with no performance trade-off.

## Redis Tuning

### `hash-max-listpack-value`

**This is the single most impactful Redis tuning knob for taskora.**

Taskora stores every job as a single Redis hash. Redis 7 keeps a hash in compact `listpack` encoding as long as two conditions hold:

* number of fields ≤ `hash-max-listpack-entries` (default `128` — taskora uses ~10-15 fields, well within the limit)
* every field value ≤ `hash-max-listpack-value` bytes (default **`64`**)

The second condition is the one that bites. If your job's serialized `data` or `result` exceeds 64 bytes — which most real-world payloads do — Redis promotes the entire hash from `listpack` to `hashtable` encoding. The per-field overhead jumps from ~2 bytes to ~80 bytes, and a 10-field hash that was using ~150 bytes suddenly costs ~900 bytes.
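To put rough numbers on that cliff, here is a deliberately simplified model of per-job hash size. The ~2 B and ~80 B per-field overheads come from the paragraph above; the other constants are illustrative approximations, not exact Redis accounting:

```typescript
// Back-of-the-envelope model of a taskora job hash's memory footprint.
// Constants approximate the overheads described above; they are not
// exact Redis internals.
function estimateJobHashBytes(
  payloadBytes: number, // serialized `data` field size
  fields = 10, // taskora uses ~10-15 fields per job hash
  listpackThreshold = 64, // hash-max-listpack-value
): number {
  // A single field over the threshold promotes the whole hash to hashtable
  const compact = payloadBytes <= listpackThreshold
  const perFieldOverhead = compact ? 2 : 80
  const avgMetadataField = 10 // rough size of the small metadata values
  return payloadBytes + fields * (avgMetadataField + perFieldOverhead)
}

// 300 B payload, default threshold: promoted to hashtable
const untuned = estimateJobHashBytes(300) // 1200
// Same payload with hash-max-listpack-value 1024: stays in listpack
const tuned = estimateJobHashBytes(300, 10, 1024) // 420
```

The ~3x gap between the two estimates lines up with the 2–4x range quoted in the memory-impact table for medium payloads.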
**Fix: raise the threshold in `redis.conf`:** ``` hash-max-listpack-value 1024 ``` With this setting, payloads up to ~1 KB stay in the compact encoding. For most task queues where job payloads are a few hundred bytes of JSON, this keeps every job in listpack and gives you the best memory efficiency. #### Memory impact | Payload size | Default (64) | Tuned (1024) | |---|---|---| | < 64 B | ~150 B/job (listpack) | ~150 B/job (listpack) | | 64–1024 B | ~900 B/job (hashtable) | ~200–400 B/job (listpack) | | > 1024 B | ~900+ B/job (hashtable) | ~900+ B/job (hashtable) | The sweet spot is the middle row — medium payloads where the tuning makes a 2–4x difference in memory per job. If you're running 1M concurrent jobs, that's the difference between 900 MB and 300 MB of Redis memory. #### How to apply **redis.conf:** ``` hash-max-listpack-value 1024 ``` **Redis CLI (runtime, non-persistent):** ``` CONFIG SET hash-max-listpack-value 1024 ``` **Docker Compose:** ```yaml services: redis: image: redis:7-alpine command: redis-server --hash-max-listpack-value 1024 ``` ### Connection pool sizing Taskora uses a small number of Redis connections per process: | Connection | Purpose | Count | |---|---|---| | Main | Commands, Lua scripts, enqueue/ack | 1 | | Blocking | `BZPOPMIN` per task (worker dequeue) | 1 per task | | Subscriber | `XREAD BLOCK` for events + `JobWaiter` | 1 (lazy, shared) | | Cancel | `SUBSCRIBE` for cancel pub/sub | 1 per worker | A typical process with 3 tasks and a worker uses ~6 connections. Redis's default `maxclients` is 10,000, so connection limits are rarely an issue unless you're running hundreds of taskora processes against one Redis. If you're behind a Redis proxy (e.g. Envoy, HAProxy) or using Redis Cluster with a connection pool, size the pool to at least `2 + number_of_tasks` per taskora process. ### `maxmemory-policy` Taskora manages its own retention (completed/failed job cleanup via configurable `retention` options). 
Redis `maxmemory-policy` should be set to **`noeviction`** — if Redis starts evicting keys on its own, it may silently drop in-flight jobs or metadata hashes. ``` maxmemory-policy noeviction ``` This is the default in most Redis deployments, but verify it in production. ## Running Benchmarks Taskora ships a benchmark suite in `packages/bench` that you can run against any store and runtime. ### Quick start ```bash # Default: bun + redis, all benchmarks, taskora vs BullMQ bun run bench # Pick a store bun run bench -- --store valkey bun run bench -- --store dragonfly # Pick a runtime bun run bench:node # Node.js (via tsx) bun run bench:deno # Deno # Compare ioredis vs Bun native driver (Bun only) bun run bench -- --libraries taskora,taskora-bun # Filter libraries and benchmarks bun run bench -- --libraries taskora --benchmarks enqueue-single,enqueue-bulk # JSON output for CI / scripts bun run bench -- --json ``` ### Options | Flag | Default | Values | |---|---|---| | `--store` | `redis` | `redis`, `valkey`, `dragonfly` | | `--libraries` | `taskora,bullmq` | `taskora`, `taskora-bun`, `bullmq` (comma-separated) | | `--benchmarks` | all | `enqueue-single`, `enqueue-bulk`, `process-single`, `process-concurrent`, `latency` | | `--iterations` | `3` | number of measured runs per benchmark | | `--json` | off | machine-readable output | ### External Redis By default the suite starts a Docker container via testcontainers. To benchmark against an existing server: ```bash REDIS_URL=redis://your-host:6379 bun run bench ``` ::: warning The benchmark suite runs `FLUSHDB` between iterations. Don't point it at a production Redis. ::: --- --- url: 'https://kravetsone.github.io/taskora/taskora/recipes/rate-limited-api.md' --- # Rate-Limited API Calls Call external APIs while respecting rate limits using throttle and concurrency controls. 
```ts
const callStripeApiTask = taskora.task("stripe-api-call", {
  concurrency: 5, // max 5 parallel API calls
  timeout: 30_000,
  retry: {
    attempts: 3,
    backoff: "exponential",
    delay: 2000,
  },
  handler: async (data: { method: string; params: Record<string, string> }, ctx) => {
    const res = await fetch(`https://api.stripe.com/v1/${data.method}`, {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET}` },
      body: new URLSearchParams(data.params),
      signal: ctx.signal,
    })
    if (res.status === 429) {
      // Fall back to 5s if the retry-after header is missing
      const retryAfter = Number(res.headers.get("retry-after")) * 1000 || 5000
      throw ctx.retry({ delay: retryAfter, reason: "Stripe rate limit" })
    }
    if (!res.ok) throw new Error(`Stripe API error: ${res.status}`)
    return await res.json()
  },
})

// Dispatch with throttle — max 25 requests per second
callStripeApiTask.dispatch(
  { method: "customers", params: { email: "user@example.com" } },
  { throttle: { key: "stripe", max: 25, window: "1s" } },
)

// Per-customer concurrency limit
callStripeApiTask.dispatch(
  { method: "charges", params: { customer: "cus_123", amount: "1000" } },
  {
    concurrencyKey: `stripe:cus_123`,
    concurrencyLimit: 1, // one charge at a time per customer
  },
)
```

## Pattern: Automatic Rate Limit Detection

```ts
handler: async (data, ctx) => {
  const res = await callApi(data)
  if (res.status === 429) {
    // Use the API's retry-after header for precise backoff
    const delay = Number(res.headers.get("retry-after")) * 1000 || 5000
    throw ctx.retry({ delay })
  }
  return res.data
}
```

The `ctx.retry({ delay })` override bypasses the configured backoff strategy and uses the API's suggested retry delay instead.

---

---
url: 'https://kravetsone.github.io/taskora/taskora/api/redis.md'
---

[Taskora API Reference](../index) / redis

# redis

## Functions

### redisAdapter()

> **redisAdapter**(`connection`, `options?`): [`Adapter`](../taskora/namespaces/Taskora/interfaces/Adapter)

Defined in: packages/taskora/dist/redis/ioredis.d.mts:19

Construct a taskora `Adapter` backed by ioredis.
Accepts a connection URL, a `RedisOptions` config object, or a pre-built `Redis` instance. When a client instance is passed in, the adapter will not close it on disconnect — that responsibility stays with the caller.

This is the recommended driver for production use today: it has the most complete feature support (Cluster, Sentinel, RESP2/RESP3), the largest battle-tested user base, and supports every Redis command taskora needs via dedicated methods rather than the generic `.send()` escape hatch. For Bun-only deployments where the ioredis peer dependency is undesirable, see `taskora/redis/bun`.

#### Parameters

| Parameter | Type |
| ------ | ------ |
| `connection` | `string` \| `RedisOptions` \| `Redis` |
| `options?` | { `prefix?`: `string`; } |
| `options.prefix?` | `string` |

#### Returns

[`Adapter`](../taskora/namespaces/Taskora/interfaces/Adapter)

---

---
url: 'https://kravetsone.github.io/taskora/taskora/features/retry-backoff.md'
---

# Retry & Backoff

Taskora provides flexible retry strategies with configurable backoff, jitter, and selective retry filtering.

## Configuration

```ts
taskora.task("send-webhook", {
  retry: {
    attempts: 5,            // total attempts (1 original + 4 retries)
    backoff: "exponential", // "fixed" | "exponential" | "linear" | function
    delay: 1000,            // base delay in ms
    maxDelay: 60_000,       // cap at 60 seconds
    jitter: true,           // ±25% randomization (default: true)
  },
  handler: async (data, ctx) => {
    await deliverWebhook(data)
  },
})
```

## `attempts` — Total Attempts

`attempts: 3` means the handler runs **up to 3 times** total — 1 original execution + 2 retries.

## Backoff Strategies

### Fixed

Constant delay between retries.

```ts
retry: { attempts: 3, backoff: "fixed", delay: 2000 }
// Attempt 1: immediate
// Attempt 2: +2000ms
// Attempt 3: +2000ms
```

### Exponential

Delay doubles each time: `delay * 2^(attempt - 1)`.
```ts
retry: { attempts: 5, backoff: "exponential", delay: 1000 }
// Attempt 1: immediate
// Attempt 2: +1000ms
// Attempt 3: +2000ms
// Attempt 4: +4000ms
// Attempt 5: +8000ms
```

### Linear

Delay increases linearly: `delay * attempt`.

```ts
retry: { attempts: 5, backoff: "linear", delay: 1000 }
// Attempt 1: immediate
// Attempt 2: +1000ms
// Attempt 3: +2000ms
// Attempt 4: +3000ms
// Attempt 5: +4000ms
```

### Custom Function

Full control — return the delay in milliseconds for the given attempt.

```ts
retry: {
  attempts: 5,
  backoff: (attempt) => Math.min(attempt * 500 + 200, 30_000),
}
```

## Jitter

Jitter adds ±25% randomization to prevent thundering herd. It's **enabled by default**.

```ts
retry: { attempts: 3, backoff: "exponential", delay: 1000, jitter: true }
// Attempt 2: ~750ms – ~1250ms (instead of exactly 1000ms)
// Attempt 3: ~1500ms – ~2500ms (instead of exactly 2000ms)
```

Disable with `jitter: false`.

## `maxDelay` — Delay Cap

Prevents exponential backoff from growing unbounded.

```ts
retry: { attempts: 10, backoff: "exponential", delay: 1000, maxDelay: 30_000 }
// Attempt 1: immediate
// Attempts 2–6: 1s, 2s, 4s, 8s, 16s
// Attempts 7+: capped at 30s
```

## Selective Retry

### `retryOn` — Whitelist

Only retry for specific error types. All other errors fail immediately.

```ts
retry: {
  attempts: 3,
  retryOn: [NetworkError, TimeoutError],
}
```

### `noRetryOn` — Blacklist

Never retry for these error types. All other errors retry normally.

```ts
retry: {
  attempts: 3,
  noRetryOn: [ValidationError, AuthenticationError],
}
```

## Manual Retry

Use `ctx.retry()` inside the handler for programmatic retry control.
```ts taskora.task("call-api", async (data, ctx) => { const res = await fetch(data.url) if (res.status === 429) { const retryAfter = Number(res.headers.get("retry-after")) * 1000 throw ctx.retry({ delay: retryAfter, reason: "Rate limited" }) } if (res.status === 503) { throw ctx.retry() // use configured backoff delay } return await res.json() }) ``` `RetryError` **always retries** — it bypasses `retryOn`/`noRetryOn` filters. The only limit is `attempts`. --- --- url: 'https://kravetsone.github.io/taskora/taskora/testing/run-vs-execute.md' --- # run() vs execute() The test runner offers two execution modes with different trade-offs. ## `runner.run(task, data)` **Direct handler execution** — calls the handler function directly with a mock context, including an inline retry loop. ```ts const result = await runner.run(sendEmailTask, { to: "user@example.com", subject: "Hi" }) ``` ### Characteristics * Calls handler directly (no queue) * Retries inline in the same call * Returns the handler's return value * Fastest — zero queue overhead * Schema validation still runs * Middleware still runs ### Best For * Unit testing handler logic * Testing schema validation * Testing middleware behavior * Quick smoke tests ## `runner.execute(task, data)` **Full pipeline execution** — dispatches through the memory adapter, processes the job, auto-advances time for retries. ```ts const execution = await runner.execute(sendEmailTask, { to: "user@example.com", subject: "Hi", }) console.log(execution.state) // "completed" console.log(execution.attempts) // 1 console.log(execution.result) // { messageId: "..." } console.log(execution.logs) // [{ level: "info", message: "...", ... 
}] console.log(execution.progress) // 100 console.log(execution.error) // undefined ``` ### Characteristics * Full dispatch → dequeue → process → ack/fail pipeline * Auto-advances virtual time for delayed jobs and retries * Returns `ExecutionResult` with full telemetry * State transitions match production behavior * Slower than `run()` but more realistic ### Best For * Integration testing the full job lifecycle * Testing retry behavior with backoff * Verifying state transitions * Testing progress and log output * Testing delayed jobs ## Comparison Table | | `run()` | `execute()` | |---|---|---| | Queue involved | No | Yes | | Retries | Inline loop | Via delayed queue + auto-advance | | Returns | Handler result | `ExecutionResult` with metadata | | Speed | Fast | Moderate | | Schema validation | Yes | Yes | | Middleware | Yes | Yes | | Progress tracking | No | Yes | | Log capture | No | Yes | | State inspection | No | Yes | --- --- url: 'https://kravetsone.github.io/taskora/taskora/recipes/cron-cleanup.md' --- # Scheduled Cleanup with DLQ A periodic cleanup task using built-in scheduling and dead letter queue monitoring. 
```ts import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" // Retention is ON by default (completed: 1h/100, failed: 7d/300) // Override if needed: const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), retention: { failed: { maxAge: "14d", maxItems: 1_000 }, }, }) // Cleanup task const cleanupExpiredTask = taskora.task("cleanup-expired", { schedule: { every: "6h", onMissed: "skip", // if scheduler was down, just skip missed runs }, timeout: 300_000, // 5 minute timeout handler: async (data, ctx) => { ctx.log.info("Starting cleanup") // Clean expired sessions const sessions = await db.session.deleteMany({ where: { expiresAt: { lt: new Date() } }, }) ctx.log.info("Cleaned sessions", { count: sessions.count }) ctx.progress(50) // Clean orphaned uploads const uploads = await db.upload.deleteMany({ where: { createdAt: { lt: new Date(Date.now() - 24 * 60 * 60 * 1000) }, processed: false, }, }) ctx.log.info("Cleaned uploads", { count: uploads.count }) ctx.progress(100) return { sessions: sessions.count, uploads: uploads.count } }, }) // Health monitoring — check DLQ size periodically const healthCheckTask = taskora.task("health-check", { schedule: { every: "5m" }, handler: async (data, ctx) => { const stats = await taskora.inspect().stats() if (stats.failed > 100) { ctx.log.error("High failure count", { failed: stats.failed }) await alertOncall(`Queue health: ${stats.failed} failed jobs`) } if (stats.waiting > 10_000) { ctx.log.warn("Queue backlog growing", { waiting: stats.waiting }) } return stats }, }) // Manual DLQ management async function retryAllFailedJobs() { const count = await taskora.deadLetters.retryAll() console.log(`Retried ${count} failed jobs`) } await taskora.start() ``` ## Schedule Management at Runtime ```ts // Pause cleanup during maintenance await taskora.schedules.pause("cleanup-expired") // Resume await taskora.schedules.resume("cleanup-expired") // Run immediately const handle = await 
taskora.schedules.trigger("cleanup-expired") const result = await handle.result // Check schedule status const schedules = await taskora.schedules.list() for (const s of schedules) { console.log(`${s.name}: next=${s.nextRun}, paused=${s.paused}`) } ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/scheduling.md' --- # Scheduling Taskora includes a built-in distributed scheduler with leader election, interval/cron support, and missed run policies. ## Interval Scheduling ```ts // Inline on task definition taskora.task("cleanup-expired", { schedule: { every: "1h" }, handler: async (data, ctx) => { await db.deleteExpired() }, }) ``` ## Cron Scheduling Requires `cron-parser` as a peer dependency: ::: pm-add cron-parser ::: ```ts taskora.task("daily-report", { schedule: { cron: "0 9 * * 1-5", // 9 AM weekdays timezone: "America/New_York", }, handler: async (data, ctx) => { await generateDailyReport() }, }) ``` ## Named Schedules Register schedules separately from task definitions: ```ts taskora.schedule("nightly-cleanup", { task: "cleanup-expired", every: "6h", data: { olderThan: "30d" }, }) taskora.schedule("weekly-digest", { task: "send-digest", cron: "0 8 * * 1", timezone: "UTC", }) ``` ## Duration Type Taskora accepts durations as numbers (milliseconds) or human-readable strings: ```ts type Duration = number | `${number}s` | `${number}m` | `${number}h` | `${number}d` ``` Examples: `"30s"`, `"5m"`, `"2h"`, `"1d"`, `30000` ## Leader Election In a multi-worker deployment, only **one instance** runs the scheduler at a time. Taskora uses a Redis-based leader election (SET NX PX) with automatic failover. 
* Leader acquires a lock with a configurable TTL (default 30s) * Lock is renewed at regular intervals * If the leader dies, another instance takes over automatically ```ts const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), scheduler: { pollInterval: 1000, // check for due schedules every 1s (default) lockTtl: 30_000, // leader lock TTL (default 30s) }, }) ``` ## Overlap Prevention By default, `overlap: false` — a new scheduled run won't dispatch if the previous run's job is still active. ```ts taskora.task("sync-data", { schedule: { every: "5m", overlap: false, // default — skip if previous still running }, handler: async (data, ctx) => { await syncExternalData() }, }) ``` ## Missed Run Policies When the scheduler is temporarily down (deployment, crash, etc.), runs may be missed. | Policy | Behavior | |---|---| | `"skip"` (default) | Ignore missed runs, resume from now | | `"catch-up"` | Execute all missed runs in sequence | | `"catch-up-limit:N"` | Execute at most N missed runs | ```ts taskora.schedule("important-sync", { task: "sync-data", every: "1h", onMissed: "catch-up-limit:3", }) ``` ## Runtime Management ```ts // List all schedules const schedules = await taskora.schedules.list() // Pause a schedule await taskora.schedules.pause("nightly-cleanup") // Resume await taskora.schedules.resume("nightly-cleanup") // Update configuration await taskora.schedules.update("nightly-cleanup", { every: "2h" }) // Remove await taskora.schedules.remove("nightly-cleanup") // Trigger immediately (bypasses schedule timing) const handle = await taskora.schedules.trigger("nightly-cleanup") ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/serializers.md' --- # Serializers Taskora uses a `Serializer` to convert job data and results to/from strings for storage. The default serializer is JSON. 
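Whatever format you plug in, the one invariant is that values must round-trip through `serialize` and `deserialize` unchanged. A minimal self-contained check, using a plain-JSON serializer for illustration (the `Serializer` shape mirrors the interface defined on this page; whether the built-in `json()` is byte-for-byte identical is an assumption):

```typescript
// The Serializer contract: two pure functions that must round-trip.
interface Serializer {
  serialize(value: unknown): string
  deserialize(raw: string): unknown
}

// Plain-JSON serializer for illustration
const jsonSerializer: Serializer = {
  serialize: (value) => JSON.stringify(value),
  deserialize: (raw) => JSON.parse(raw),
}

// Plain objects survive the round trip
const job = jsonSerializer.deserialize(
  jsonSerializer.serialize({ to: "user@example.com", attempts: 3 }),
) as { to: string; attempts: number }

// Rich types silently degrade: a Date serializes to its ISO string and
// comes back as a plain string, which is the usual reason to reach for
// superjson or MessagePack
const restored = jsonSerializer.deserialize(jsonSerializer.serialize(new Date()))
restored instanceof Date // false
```

Running this kind of round-trip check in your test suite against the serializer you configure catches lossy formats before jobs hit production.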
## Default JSON Serializer ```ts import { json } from "taskora" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), serializer: json(), // this is the default — you don't need to specify it }) ``` ## Custom Serializer Implement the `Taskora.Serializer` interface to use a different format: ```ts interface Serializer { serialize(value: unknown): string deserialize(raw: string): unknown } ``` ### Example: MessagePack ```ts import { encode, decode } from "@msgpack/msgpack" const msgpack: Taskora.Serializer = { serialize(value) { return Buffer.from(encode(value)).toString("base64") }, deserialize(raw) { return decode(Buffer.from(raw, "base64")) }, } const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), serializer: msgpack, }) ``` ### Example: superjson ```ts import superjson from "superjson" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), serializer: { serialize: (value) => superjson.stringify(value), deserialize: (raw) => superjson.parse(raw), }, }) ``` The serializer is applied to both job **data** (input) and **results** (output). All tasks in an app share the same serializer. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/splitting-services.md' --- # Splitting Producer & Worker Sometimes you need the producer (API server, cron service, webhook receiver) and the worker (background jobs) to run in separate processes. When that happens, both sides need to agree on task names and data shapes — this page shows two low-friction ways to share [task contracts](/guide/contracts) between them. ::: tip Default: keep it monolithic For most projects, running producer and worker in the same process is simpler and works fine — `taskora.task("name", { handler })` covers everything. Only reach for the split when you have a concrete reason: * The worker has heavy dependencies (`sharp`, `puppeteer`, `ffmpeg`, native bindings) that you don't want in the API server bundle. 
* The two sides have genuinely different scaling/deploy characteristics (API auto-scales on HTTP traffic, worker scales on queue depth).
* The producer runs on an edge runtime or browser that can't run the handler code at all.

If none of those apply, stay monolithic. You can always split later — the contract layer is backwards-compatible with inline tasks.
:::

## The question is not "should you use contracts" — it's "where do the contracts live"

Whichever strategy you pick, the code-level pattern is the same:

```ts
// Shared — defined once
export const sendEmailTask = defineTask({
  name: "send-email",
  input: z.object({ to: z.string().email(), subject: z.string() }),
  output: z.object({ messageId: z.string() }),
})

// Producer — no handler imports
const sendEmail = taskora.register(sendEmailTask)
await sendEmail.dispatch({ to: "alice@example.com", subject: "Welcome" })

// Worker — the heavy-deps side
taskora.implement(sendEmailTask, async (data, ctx) => {
  return { messageId: await mailer.send(data) }
})
```

The only question is where `sendEmailTask` physically lives so both sides can import it.

## Strategy 1: Single package, two entrypoints

**Recommended default** for small-to-mid teams, startups, and any project that doesn't already have workspace tooling. One `package.json`, one install, two `bin` entrypoints.
``` my-app/ ├── package.json ├── src/ │ ├── contracts/ │ │ └── tasks.ts ← defineTask() declarations live here │ ├── api/ │ │ └── index.ts ← imports contracts/tasks, dispatches │ └── worker/ │ └── index.ts ← imports contracts/tasks, implements └── Dockerfile ← one image, different CMD per container ``` ```ts // src/contracts/tasks.ts import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string() }), output: z.object({ messageId: z.string() }), }) ``` ```ts // src/api/index.ts — producer import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" import { sendEmailTask } from "../contracts/tasks.js" const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!) }) const sendEmail = taskora.register(sendEmailTask) // Express / Hono / Fastify — whatever your HTTP stack is app.post("/signup", async (req, res) => { await sendEmail.dispatch({ to: req.body.email, subject: "Welcome", }) res.json({ ok: true }) }) ``` ```ts // src/worker/index.ts — worker import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" import { sendEmailTask } from "../contracts/tasks.js" import { mailer } from "./mailer.js" const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!) }) taskora.implement(sendEmailTask, async (data, ctx) => { return { messageId: await mailer.send(data) } }) await taskora.start() ``` Build with `pkgroll` / `tsup` using two entries (`src/api/index.ts` and `src/worker/index.ts`). Tree-shaking guarantees the producer bundle never pulls in `./mailer.js` or its transitive deps — the contract file just re-exports a plain object. Deploy as two containers from the same image: ```dockerfile FROM node:24-alpine WORKDIR /app COPY . . 
RUN npm ci && npm run build # Container command set at deploy time CMD ["node", "dist/api/index.js"] ``` ```yaml # docker-compose.yml services: api: build: . command: node dist/api/index.js worker: build: . command: node dist/worker/index.js ``` **Friction: 1/5.** Zero workspace tooling, no publish step, standard TypeScript project layout. This is the default recommendation and fits ~80% of cases. ## Strategy 2: Monorepo with workspaces Once your team grows and you have multiple apps sharing contracts, promote them to a workspace package. Bun and pnpm workspaces use the `workspace:*` protocol — no publish step, no private registry, the workspace tool symlinks the package locally. ``` my-monorepo/ ├── package.json (workspaces: ["packages/*", "apps/*"]) ├── packages/ │ └── contracts/ │ ├── package.json │ └── src/ │ ├── index.ts │ └── tasks.ts ← defineTask() declarations ├── apps/ │ ├── api/ │ │ ├── package.json ("@acme/contracts": "workspace:*") │ │ └── src/server.ts │ ├── worker/ │ │ ├── package.json ("@acme/contracts": "workspace:*") │ │ └── src/main.ts │ └── webhook-receiver/ │ ├── package.json ("@acme/contracts": "workspace:*") │ └── src/handler.ts └── bun.lockb ``` Minimal `packages/contracts/package.json`: ```json { "name": "@acme/contracts", "type": "module", "main": "./src/index.ts", "exports": { ".": "./src/index.ts" }, "peerDependencies": { "taskora": "*", "zod": "*" } } ``` Consumer `apps/api/package.json`: ```json { "name": "@acme/api", "type": "module", "dependencies": { "@acme/contracts": "workspace:*", "taskora": "^0.3.0", "zod": "^4.0.0" } } ``` ```ts // apps/api/src/server.ts import { sendEmailTask } from "@acme/contracts" const sendEmail = taskora.register(sendEmailTask) await sendEmail.dispatch({ to: "alice@example.com", subject: "Welcome" }) ``` `workspace:*` is a **native protocol** of bun/pnpm/yarn — `bun install` and `pnpm install` symlink the package locally. Nothing gets published to npm unless you explicitly do it. 
This is not "enterprise tooling" — it's the standard TypeScript monorepo layout in 2026. **Friction: 2/5 initial, 1/5 ongoing.** Use this when you have three or more consumers of the same contracts (e.g. web API + background worker + webhook receiver all dispatching to the same worker pool). ## What NOT to do A few approaches that look tempting but hurt in practice: * **Relative imports across service directories** — `apps/api/src/server.ts` doing `import from "../../worker/src/tasks.js"` works short-term but turns into a rat's nest the moment you move a directory. Use workspaces instead. * **TypeScript path aliases without workspaces** — `paths: { "@contracts/*": ["../contracts/*"] }` is a compile-time convenience but doesn't resolve at runtime. You'll hit "works in dev, breaks in prod" bugs. Use workspaces. * **Private npm registry for contract packages** — GitHub Packages / Verdaccio / npm Pro. Overkill unless you're publishing to teams outside your own codebase. For internal sharing, `workspace:*` does the same thing without the infrastructure. * **Copy-paste the contract file into each repo** — drifts silently, doesn't compose, and the inevitable "production is dispatching v2 but staging worker expects v1" bug is painful to debug. Use a workspace. * **Code generation from protobuf / OpenAPI** — heavy, loses the ergonomics of Standard Schema inference, and taskora already gives you type inference from Zod/Valibot/ArkType directly. Skip it. ## Runtime safety net — independent of sharing strategy Regardless of how contracts are physically shared, taskora gives you two runtime guarantees against drift: 1. **Worker-side schema validation.** Workers always validate job data through the task's Standard Schema before calling the handler. If a producer is on an older version of the contract and dispatches data the worker can't parse, the job fails with a clear `ValidationError` — the handler never sees malformed data. 2. 
**Payload versioning & migrations.** taskora's [version / migrate](/features/versioning) system lets contracts evolve without requiring producer and worker deploys to happen atomically. Ship worker first with migrations for older payloads, then ship producer with the new shape — in-flight jobs drain correctly. These two mechanisms mean the distribution strategy you pick above is about developer ergonomics, not correctness. Pick the cheapest option that covers your team size. ## Summary | Team shape | Strategy | |---|---| | 1–5 devs, one codebase, API + worker | **Strategy 1** — single package, two entrypoints | | 5+ devs, multiple apps, shared contracts | **Strategy 2** — monorepo + `workspace:*` | | Fully separate repos, different teams | Strategy 2 with git submodules (painful, consider merging first) | | Polyglot (TypeScript producer, Python worker) | Out of scope — taskora is TypeScript-first on both sides | Both strategies compose with [contracts](/guide/contracts) identically. Start with Strategy 1 and promote to Strategy 2 when the second consumer shows up. --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/stall-detection.md' --- # Stall Detection Taskora detects jobs that stop responding (worker crash, OOM, network partition) and either recovers or fails them. ## How It Works Stall detection uses a **two-phase** approach: 1. **Phase 1 — Snapshot:** Record all active job IDs into a stalled candidates set 2. **Phase 2 — Resolve:** On the next check, any job still in the candidates set (not removed by `extendLock`) is truly stalled Between phases, healthy workers call `extendLock()` which removes their jobs from the candidates set (via `SREM`). Only genuinely stalled jobs remain. ## Configuration ```ts taskora.task("process-data", { stall: { interval: 30_000, // check every 30 seconds (default) maxCount: 1, // max stalled count before failing (default: 1) }, handler: async (data, ctx) => { /* ... 
*/ }, }) ``` ### `maxCount` Behavior | `maxCount` | First stall | Second stall | |---|---|---| | `1` (default) | Re-queue (recover) | Move to failed | | `2` | Re-queue | Re-queue | | `0` | Move to failed immediately | — | The `stalledCount` is tracked in the job hash (`HINCRBY`). ## App-Level Defaults ```ts const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), defaults: { stall: { interval: 15_000, maxCount: 2 }, }, }) ``` ## Events ```ts task.on("stalled", ({ id, count, action }) => { console.log(`Job ${id} stalled (count: ${count}, action: ${action})`) // action: "recovered" — re-queued for another attempt // action: "failed" — maxCount exceeded, moved to failed }) taskora.on("task:stalled", ({ task, id, count, action }) => { metrics.increment("jobs.stalled", { task, action }) }) ``` ## Preventing Stalls For long-running jobs, call `ctx.heartbeat()` to extend the lock and prevent stall detection: ```ts taskora.task("long-export", { stall: { interval: 30_000 }, handler: async (data, ctx) => { for (const batch of batches) { await processBatch(batch) ctx.heartbeat() // extend lock, prevent stall detection } }, }) ``` ## Cancelled Stalled Jobs If a job is both stalled **and** has a `cancelledAt` flag (was cancelled while active), the stall check moves it directly to the `cancelled` set instead of recovering it. 
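When a long-running job can also be cancelled, it helps to pair `ctx.heartbeat()` with a check on `ctx.signal` (the per-job `AbortSignal`), so a cancelled-while-active job exits promptly instead of waiting to be swept by the next stall check. A sketch, reusing the `batches` / `processBatch` shape from the example above:

```ts
taskora.task("long-export", {
  stall: { interval: 30_000 },
  handler: async (data, ctx) => {
    for (const batch of batches) {
      // cancelled while active → stop cooperatively instead of
      // letting the stall sweep move the job to `cancelled` later
      if (ctx.signal.aborted) return
      await processBatch(batch)
      ctx.heartbeat() // extend lock, prevent stall detection
    }
  },
})
```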
--- --- url: 'https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora.md' --- [Taskora API Reference](../../../index) / [index](../../index) / Taskora # Taskora ## Interfaces * [AckAndDequeueResult](interfaces/AckAndDequeueResult) * [ActiveEvent](interfaces/ActiveEvent) * [Adapter](interfaces/Adapter) * [AppEventMap](interfaces/AppEventMap) * [AwaitJobResult](interfaces/AwaitJobResult) * [CancelledEvent](interfaces/CancelledEvent) * [CollectBufferInfo](interfaces/CollectBufferInfo) * [CollectConfig](interfaces/CollectConfig) * [CompletedEvent](interfaces/CompletedEvent) * [Context](interfaces/Context) * [ContextLog](interfaces/ContextLog) * [DebounceConfig](interfaces/DebounceConfig) * [DeduplicateConfig](interfaces/DeduplicateConfig) * [DequeueOptions](interfaces/DequeueOptions) * [DequeueResult](interfaces/DequeueResult) * [DispatchOptions](interfaces/DispatchOptions) * [FailedEvent](interfaces/FailedEvent) * [InspectorListOptions](interfaces/InspectorListOptions) * [JobInfo](interfaces/JobInfo) * [LogEntry](interfaces/LogEntry) * [MiddlewareContext](interfaces/MiddlewareContext) * [MigrationStatus](interfaces/MigrationStatus) * [ProgressEvent](interfaces/ProgressEvent) * [QueueStats](interfaces/QueueStats) * [RawJob](interfaces/RawJob) * [RawJobDetails](interfaces/RawJobDetails) * [RetentionConfig](interfaces/RetentionConfig) * [RetentionOptions](interfaces/RetentionOptions) * [RetryConfig](interfaces/RetryConfig) * [RetryingEvent](interfaces/RetryingEvent) * [ScheduleConfig](interfaces/ScheduleConfig) * [ScheduleInfo](interfaces/ScheduleInfo) * [SchedulerConfig](interfaces/SchedulerConfig) * [ScheduleRecord](interfaces/ScheduleRecord) * [SchemaMeta](interfaces/SchemaMeta) * [Serializer](interfaces/Serializer) * [StallConfig](interfaces/StallConfig) * [StalledEvent](interfaces/StalledEvent) * [StreamEvent](interfaces/StreamEvent) * [TaskEventMap](interfaces/TaskEventMap) * [ThrottleConfig](interfaces/ThrottleConfig) * 
[TtlConfig](interfaces/TtlConfig) * [WorkflowAdvanceResult](interfaces/WorkflowAdvanceResult) * [WorkflowCancelResult](interfaces/WorkflowCancelResult) * [WorkflowDetail](interfaces/WorkflowDetail) * [WorkflowFailResult](interfaces/WorkflowFailResult) ## Type Aliases * [BackoffStrategy](type-aliases/BackoffStrategy) * [Duration](type-aliases/Duration) * [InferInput](type-aliases/InferInput) * [InferOutput](type-aliases/InferOutput) * [~~JobOptions~~](type-aliases/JobOptions) * [JobState](type-aliases/JobState) * [Middleware](type-aliases/Middleware) * [MissedPolicy](type-aliases/MissedPolicy) * [TaskContract](type-aliases/TaskContract) * [WorkflowState](type-aliases/WorkflowState) --- --- url: 'https://kravetsone.github.io/taskora/taskora/skills/using-taskora/SKILL.md' description: > taskora — TypeScript-first distributed task queue for Node.js with Redis backend. Use when building background job systems, scheduling recurring tasks, composing type-safe workflows (chain/group/chord), handling retries with backoff, rate limiting, debouncing/throttling/deduplicating dispatches, cancelling running jobs, validating job schemas, versioning job payloads, inspecting queue state, managing dead-letter queues, or running the admin dashboard. Not for BullMQ, Agenda, Bee-Queue, or other task queue libraries. --- # taskora — Task Queue for Node.js taskora is a modern, type-safe task queue library for Node.js. TypeScript-first, batteries-included. Unlike BullMQ or Agenda, taskora is **task-centric** (not queue-centric) — you define tasks, not queues. The queue is an implementation detail. 
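"Task-centric" in one minimal sketch — you define a task and dispatch to it; no queue object ever appears (full API in the sections below; the task name and data shape here are illustrative):

```typescript
import { createTaskora } from "taskora"
import { redisAdapter } from "taskora/redis"

const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379") })

// A task, not a queue — the queue is managed for you
const greetTask = taskora.task("greet", async (data: { name: string }) => {
  return { message: `Hello, ${data.name}` }
})

await taskora.start()

const handle = greetTask.dispatch({ name: "Ada" })
const result = await handle.result // resolves once a worker completes the job
```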
## Architecture overview ``` taskora — core engine, types, task API (zero DB deps) taskora/redis — Redis adapter (peer dep: ioredis) taskora/memory — in-memory adapter (zero deps, for testing & dev) taskora/test — test runner with virtual time taskora/telemetry — OpenTelemetry adapter (deferred) taskora/react — React hooks (deferred) @taskora/board — admin dashboard, separate fullstack package (peer deps: taskora, hono) ``` Always import from the correct subpath: ```typescript // Core import { createTaskora, chain, group, chord } from "taskora" import type { Taskora } from "taskora" // Redis adapter import { redisAdapter } from "taskora/redis" // Testing import { createTestRunner } from "taskora/test" // Admin dashboard — separate package, install with `bun add @taskora/board hono` import { createBoard } from "@taskora/board" ``` `ioredis` is an **optional peer dep** — only required when using `taskora/redis`. `@taskora/board` is a **separate package** with its own `hono` peer dep — no board code lands in the `taskora` bundle unless you install it. 
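For local development or tests without Redis, the in-memory adapter from `taskora/memory` drops in where `redisAdapter` would go. A hedged sketch — the export name `memoryAdapter` is assumed by analogy with `redisAdapter`; check the package's actual export:

```typescript
// Assumption: taskora/memory exposes a zero-config factory named `memoryAdapter`
import { createTaskora } from "taskora"
import { memoryAdapter } from "taskora/memory"

// Same API surface as the Redis-backed instance, no external services
const taskora = createTaskora({ adapter: memoryAdapter() })
```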
## Basic usage ### Creating an instance ```typescript import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), // or: redisAdapter({ host, port, password }) // or: redisAdapter(existingIORedisInstance) defaults: { retry: { attempts: 3, backoff: "exponential", delay: 1000 }, timeout: 30_000, concurrency: 5, }, }) ``` ### Defining tasks Minimal — name + function: ```typescript const sendEmailTask = taskora.task( "send-email", async (data: { to: string; subject: string }) => { await mailer.send(data) return { messageId: "abc" } }, ) // Infers: Task<{ to: string; subject: string }, { messageId: string }> ``` With options: ```typescript const processImageTask = taskora.task("process-image", { retry: { attempts: 5, backoff: "exponential", maxDelay: 60_000 }, timeout: 120_000, concurrency: 10, handler: async (data: { url: string; width: number }, ctx) => { ctx.progress(50) ctx.log.info("Processing", { url: data.url }) const result = await sharp(data.url).resize(data.width).toBuffer() return { size: result.byteLength } }, }) ``` ### Dispatching jobs ```typescript const handle = sendEmailTask.dispatch({ to: "user@example.com", subject: "Hello" }) handle.id // job UUID — available synchronously, immediately await handle // resolves to job ID string (thenable, backward-compatible) await handle.result // waits for actual result: { messageId: "abc" } await handle.getState() // "waiting" | "delayed" | "active" | "completed" | "failed" | "retrying" | "cancelled" | "expired" ``` Dispatch options: ```typescript sendEmailTask.dispatch(data, { delay: 5_000, // delay before processing priority: 1, // higher = processed first ttl: "5m", // expire if not started within 5 minutes concurrencyKey: "user:123", // limit concurrency per key concurrencyLimit: 2, debounce: { key: "user:123", delay: "2s" }, throttle: { key: "user:123", max: 3, window: "1m" }, deduplicate: { key: 
"sync:123", while: ["waiting", "delayed", "active"] }, }) ``` Ordering: **(priority desc, timestamp asc)** — best-effort, not strict FIFO. No global sequence counter. For strict per-key ordering use `concurrencyKey` + `concurrencyLimit: 1`. Bulk dispatch: ```typescript const handles = await sendEmailTask.dispatchMany([ { data: { to: "a@b.com", subject: "Hi" } }, { data: { to: "c@d.com", subject: "Hey" }, options: { delay: 5000 } }, ]) ``` ### Starting workers & shutdown ```typescript await taskora.start() // starts workers for all registered tasks process.on("SIGTERM", async () => { await taskora.close() // waits for active jobs to finish, then disconnects }) ``` ## Contracts — producer/consumer split A **task contract** is a pure declaration of a task (name + schemas + defaults) with no runtime dependency on `App`/`Worker`/`Adapter`. Contracts are NOT the default path — inline `taskora.task("name", { handler, ... })` is simpler and correct for most projects. Reach for contracts only when the producer physically cannot import the handler. **When to use contracts vs inline tasks:** Default: inline. Same type safety, fewer concepts. If producer and worker run in the same process, stop here. Use contracts only when one of these applies: * Worker has heavy runtime deps (`sharp`, `puppeteer`, `ffmpeg`, native bindings, large ML models) that you don't want in the producer bundle. * Producer runs on an edge runtime or browser that physically cannot execute the handler. * Multiple services dispatch to the same worker pool, and you want one source of truth for task names and types (monorepo `packages/contracts` workspace). Inline tasks force the producer to import the handler file, which drags the handler's transitive dependency graph into the producer bundle. Contracts split declaration from implementation so the producer imports only the contract file (plain data, no handler code). If none of the above applies, stay with inline tasks. 
Contracts add an extra layer you don't need. ### `defineTask()` — contract with runtime schemas ```typescript // contracts/tasks.ts — shared between producer and worker import { defineTask } from "taskora" import { z } from "zod" export const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string().email(), subject: z.string(), body: z.string(), }), output: z.object({ messageId: z.string() }), retry: { attempts: 3, backoff: "exponential" }, timeout: "30s", }) ``` Returns `TaskContract`. Types are inferred from any Standard Schema compatible library (Zod, Valibot, ArkType). No `dispatch()` method on the contract itself — you must bind it to an `App` first via `register()` or `implement()`. ### `staticContract()` — types only, no runtime schemas For producers that can't ship a schema library (edge runtimes, browsers, serverless). Worker still validates — validation always runs on the worker boundary regardless. ```typescript import { staticContract } from "taskora" export const sendEmailTask = staticContract< { to: string; subject: string; body: string }, { messageId: string } >({ name: "send-email" }) ``` ### `taskora.register()` — producer path ```typescript // api/server.ts — producer only, never runs the handler import { createTaskora } from "taskora" import { redisAdapter } from "taskora/redis" import { sendEmailTask } from "../contracts/tasks.js" const taskora = createTaskora({ adapter: redisAdapter(process.env.REDIS_URL!) }) const sendEmail = taskora.register(sendEmailTask) // Fully typed — TypeScript enforces { to, subject, body } const handle = sendEmail.dispatch({ to: "alice@example.com", subject: "Welcome", body: "...", }) const result = await handle.result // { messageId: string } ``` Key properties: * **Idempotent by task name** — calling `register(sameContract)` twice returns the same `BoundTask`. * **Producer-only processes can still call `taskora.start()`** — the worker loop simply skips contract-only tasks. 
Dispatch and event subscription still work. * Returns a `BoundTask` with `.dispatch()`, `.dispatchMany()`, `.on()`, `.s()`, `.map()`, `.chunk()`. ### `taskora.implement()` — worker path, 3 call forms ```typescript // Form 1: bare handler — most common taskora.implement(sendEmailTask, async (data, ctx) => { return { messageId: await mailer.send(data) } }) // Form 2: handler + worker-side options taskora.implement( processImageTask, async (data, ctx) => { /* ... */ }, { concurrency: 4, middleware: [withTracing()], version: 3, migrate: [(v1) => ({ ...v1, width: v1.w })], }, ) // Form 3: object form — required for collect tasks, preferred for larger configs taskora.implement(batchEmailTask, { collect: { key: "user-emails", delay: "5s", maxSize: 100 }, handler: async (items, ctx) => { await mailer.sendBatch(items) return { sent: items.length } }, }) ``` `ImplementOptions` accepts all worker-side config that isn't on the contract: `concurrency`, `middleware`, `onCancel`, `singleton`, `concurrencyLimit`, `ttl`, `version`, `since`, `migrate`. Fields that also exist on the contract (`retry`, `timeout`, `stall`) act as worker-side overrides when set. **Precedence**: `implement()` options → contract defaults → app defaults. **Double-implement throws**. Calling `implement()` after `register()` is NOT a double-implement — it's the intended upgrade path. The existing `BoundTask` returned by `register()` is updated in place and keeps working. 
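The `register()` → `implement()` upgrade path described above can be sketched in one process (assuming the `sendEmailTask` contract and a hypothetical `mailer` module):

```typescript
// Phase 1: producer-only — dispatching works immediately
const sendEmail = taskora.register(sendEmailTask)
await sendEmail.dispatch({ to: "a@b.com", subject: "Hi", body: "..." })

// Phase 2: the same process takes on the worker role.
// NOT a double-implement — this is the intended upgrade path.
taskora.implement(sendEmailTask, async (data) => {
  return { messageId: await mailer.send(data) }
})

// `sendEmail` was updated in place; existing references keep working
await sendEmail.dispatch({ to: "c@d.com", subject: "Hey", body: "..." })
```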
### Workflow composition from contracts Once registered or implemented, contract-based `BoundTask`s compose into chain/group/chord identically to inline tasks: ```typescript import { chain } from "taskora" import { fetchUserTask, renderTemplateTask, sendEmailTask } from "../contracts/tasks.js" const fetchUser = taskora.register(fetchUserTask) const renderTemplate = taskora.register(renderTemplateTask) const sendEmail = taskora.register(sendEmailTask) // Producer-side composition — no handler code needed await chain( fetchUser.s({ id: "42" }), renderTemplate.s(), sendEmail.s(), ).dispatch().result ``` Jobs run on whichever process has implemented each contract. A single workflow can span multiple worker deployments — routing is by task name. ### `validateOnDispatch` — producer-side validation knob Default-on: `dispatch()` validates input via Standard Schema before enqueueing. Disable when producer has already validated upstream or uses `staticContract()`: ```typescript // Global — for all dispatches in this app const taskora = createTaskora({ adapter: redisAdapter(url), validateOnDispatch: false, }) // Per-call — overrides global default sendEmail.dispatch(data, { skipValidation: true }) ``` **Worker-side validation is unaffected** — it always runs before the handler. The safety net stays at the worker boundary regardless of producer config. ### Distribution: where the contract file lives Two strategies, ranked by friction: 1. **Single package, two entrypoints** (recommended for 80% of cases). One repo, one `package.json`, `src/contracts/tasks.ts` imported by `src/api/index.ts` and `src/worker/index.ts`. Build two bundles, deploy as two containers from the same Docker image. 2. **Monorepo with `workspace:*` protocol**. `packages/contracts` workspace, `apps/api` and `apps/worker` depend on `"@acme/contracts": "workspace:*"`. No npm publish — bun/pnpm symlinks locally. 
**Do NOT** use private npm registries, git submodules, path aliases without workspaces, code generation, or copy-paste. Workspace protocol is free and native; everything else is enterprise ceremony. ## Task context (ctx) Second argument to every handler: ```typescript const myTask = taskora.task("my-task", { handler: async (data: { url: string }, ctx) => { ctx.id // job ID ctx.attempt // current attempt (1-based) ctx.signal // AbortSignal — fires on shutdown or cancellation ctx.timestamp // job creation time (epoch ms) ctx.heartbeat() // extend processing lock ctx.progress(50) // report progress (number or object) ctx.log.info("Fetching resource") ctx.log.warn("Rate limit approaching", { remaining: 3 }) ctx.log.error("Unexpected response", { status: 500 }) // Manual retry with custom delay try { return await fetchResource(data.url) } catch (err) { if (isRateLimited(err)) { throw ctx.retry({ delay: err.retryAfter * 1000 }) } throw err // regular error — uses configured retry policy } }, }) ``` ## Retry & backoff ```typescript const apiTask = taskora.task("call-api", { retry: { attempts: 5, // total attempts (not retries): 5 = 1 initial + 4 retries backoff: "exponential", // "fixed" | "exponential" | "linear" | ((attempt) => ms) delay: 1000, // base delay in ms maxDelay: 60_000, // cap jitter: true, // ±25% randomization (default: true) retryOn: [NetworkError, TimeoutError], // whitelist (if set, only these retry) noRetryOn: [ValidationError], // blacklist }, handler: async (data) => { /* ... */ }, }) ``` `TimeoutError` is **not retried by default** — add to `retryOn` explicitly if you want timeout retries. Manual retry from handler: `throw ctx.retry({ delay: 5000, reason: "rate limited" })` or `throw new RetryError()`. ## Schema validation (Standard Schema) Any library implementing [Standard Schema](https://standardschema.dev/) works — Zod, Valibot, ArkType, TypeBox. 
```typescript import { z } from "zod" const createUserTask = taskora.task("create-user", { input: z.object({ name: z.string().min(1), email: z.string().email(), }), output: z.object({ id: z.string().uuid(), }), handler: async (data) => { // data is { name: string; email: string } — inferred from schema const user = await db.users.create(data) return { id: user.id } }, }) ``` `@standard-schema/spec` is a peer dep (types only). The library never imports Zod/Valibot directly. ### Type inference — `InferInput` / `InferOutput` Pull `TInput` / `TOutput` out of anything that carries them. One pair of helpers works on every task-shaped type: ```typescript import { defineTask, type InferInput, type InferOutput } from "taskora" import { z } from "zod" const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string(), subject: z.string() }), output: z.object({ messageId: z.string() }), }) type EmailInput = InferInput<typeof sendEmailTask> // { to: string; subject: string } type EmailResult = InferOutput<typeof sendEmailTask> // { messageId: string } ``` Supported carriers: `Task`, `BoundTask`, `TaskContract`, `ResultHandle`, `WorkflowHandle`, and workflow `Signature` / `ChainSignature` / `GroupSignature` / `ChordSignature`. Output-only types (`ResultHandle`, `WorkflowHandle`, groups, chords) resolve to `never` under `InferInput` — a deliberate mismatch signal. **Name collisions** with Zod / ArkType / other schema libraries that export `InferInput`? Use the namespaced form: ```typescript import type { Taskora } from "taskora" type EmailInput = Taskora.InferInput<typeof sendEmailTask> type EmailResult = Taskora.InferOutput<typeof sendEmailTask> ``` Same types, collision-safe import. ## Schema versioning & migrations Three levels — pick what fits: ### Level 1: Bump version (schema defaults do the work) ```typescript const sendEmailTask = taskora.task("send-email", { version: 2, input: z.object({ to: z.string(), subject: z.string(), html: z.boolean().default(false), // old v1 jobs get false via schema }), handler: async (data) => { /* ... 
*/ }, }) ``` ### Level 2: Sparse migrate record (only breaking changes) ```typescript const sendEmailTask = taskora.task("send-email", { version: 4, input: emailSchemaV4, migrate: { 3: (data) => ({ ...(data as any), body: { text: "" } }), // only v3→v4 is breaking }, handler: async (data) => { /* ... */ }, }) ``` ### Level 3: Tuple migrate (strict, typed last element) ```typescript import { into } from "taskora" const sendEmailTask = taskora.task("send-email", { input: emailSchema, migrate: [ (data) => ({ ...(data as any), body: "" }), // v1→v2 into(emailSchema, (data) => ({ // v2→v3 (return type enforced) to: (data as any).to, subject: (data as any).subject, body: { text: String((data as any).body) }, })), ], // version = since + migrate.length = 1 + 2 = 3 handler: async (data) => { /* ... */ }, }) ``` Prune old migrations with `since`: ```typescript const sendEmailTask = taskora.task("send-email", { since: 3, migrate: [ (data) => ({ ...(data as any), priority: "normal" }), // v3→v4 ], // version = 3 + 1 = 4 handler: async (data) => { /* ... 
*/ }, }) ``` Inspect migration state: ```typescript const status = await taskora.inspect().migrations("send-email") // { version, since, queue: { oldest, byVersion }, canBumpSince } ``` ## Scheduling / Cron ### Inline schedule ```typescript const healthCheckTask = taskora.task("health-check", { schedule: { every: "30s" }, handler: async () => await pingServices(), }) ``` ### Standalone schedules ```typescript app.schedule("cleanup", { task: processImageTask, every: "5m", data: { url: "internal://cleanup", width: 0 }, }) app.schedule("daily-report", { task: sendEmailTask, cron: "0 9 * * MON-FRI", timezone: "America/New_York", data: { to: "team@company.com", subject: "Daily Report" }, }) app.schedule("invoice-generation", { task: generateInvoiceTask, cron: "0 0 1 * *", onMissed: "catch-up", // "skip" (default) | "catch-up" | "catch-up-limit:5" data: { type: "monthly" }, }) ``` Duration type: `number | "${number}s" | "${number}m" | "${number}h" | "${number}d"`. `cron-parser` is an optional peer dep — only needed if using `cron:` expressions. ### Runtime schedule management ```typescript await app.schedules.list() await app.schedules.pause("daily-report") await app.schedules.resume("daily-report") await app.schedules.update("cleanup", { every: "10m" }) await app.schedules.remove("cleanup") await app.schedules.trigger("daily-report") // fire now, outside schedule ``` Leader election ensures only one scheduler runs across multiple workers (SET NX PX). ## Workflows (Canvas) Type-safe task composition — chain, group, chord. Inspired by Celery's Canvas. 
### Signatures `.s()` creates a serializable, composable snapshot: ```typescript const sig = sendEmailTask.s({ to: "a@b.com", subject: "Welcome" }) // Type: Signature<{ to: string; subject: string }, { messageId: string }> ``` | Call | Behavior | |---|---| | `task.s(data)` | Bound — data is fixed, ignores pipeline input | | `task.s()` | Unbound — receives previous step's output | ### Chain — sequential pipeline ```typescript import { chain } from "taskora" const onboarding = chain( createUserTask.s({ name: "John", email: "john@example.com" }), sendWelcomeEmailTask.s(), // receives { id: string } notifySlackTask.s(), // receives { messageId: string } ) const handle = onboarding.dispatch() const result = await handle.result ``` Pipe syntax (unlimited chaining): ```typescript const result = await createUserTask .s({ name: "John", email: "john@example.com" }) .pipe(sendWelcomeEmailTask.s()) .pipe(notifySlackTask.s()) .dispatch() .result ``` `chain()` has type overloads for up to 10 steps. `.pipe()` has no limit. 
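The bound/unbound distinction from the table above also applies mid-chain: a bound step simply ignores the previous step's output. A sketch, reusing the onboarding tasks and assuming `notifySlackTask` accepts a fixed `{ channel }` input:

```typescript
import { chain } from "taskora"

const onboarding = chain(
  createUserTask.s({ name: "John", email: "john@example.com" }), // bound — fixed input
  sendWelcomeEmailTask.s(),                 // unbound — receives { id } from the previous step
  notifySlackTask.s({ channel: "#signups" }), // bound — ignores { messageId }, uses its own data
)

await onboarding.dispatch().result
```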
### Group — parallel execution ```typescript import { group } from "taskora" const handle = group( processImageTask.s({ url: "img1.jpg", width: 800 }), processImageTask.s({ url: "img2.jpg", width: 800 }), processImageTask.s({ url: "img3.jpg", width: 800 }), ).dispatch() const result = await handle.result // Type: [ImageResult, ImageResult, ImageResult] ``` ### Chord — parallel then callback ```typescript import { chord } from "taskora" const handle = chord( [ fetchPriceTask.s({ symbol: "AAPL" }), fetchPriceTask.s({ symbol: "GOOG" }), ], calculatePortfolioTask.s(), // ^ receives [PriceResult, PriceResult] ).dispatch() ``` ### Composability Compositions are themselves valid inputs: ```typescript const handle = chord( [ chain(fetchDataTask.s({ source: "api" }), transformTask.s()), chain(fetchDataTask.s({ source: "db" }), transformTask.s()), ], mergeTask.s(), ).dispatch() ``` ### Map & Chunk ```typescript const handle = processImageTask.map([ { url: "img1.jpg", width: 800 }, { url: "img2.jpg", width: 800 }, ]) // Equivalent to group(task.s(item1), task.s(item2), ...).dispatch() const handle = processImageTask.chunk(largeList, { size: 50 }) // 50 at a time, then next 50 ``` ### WorkflowHandle ```typescript const handle = chain(a.s(data), b.s()).dispatch() await handle // ensure dispatched const result = await handle.result // wait for final result const state = await handle.getState() // "running" | "completed" | "failed" | "cancelled" await handle.cancel({ reason: "no longer needed" }) // cascade cancel // Workflow-level TTL chain(a.s(data), b.s()).dispatch({ ttl: "5m" }) ``` ## Events ### Task-level ```typescript sendEmailTask.on("completed", (event) => { event.id; event.result; event.duration; event.attempt }) sendEmailTask.on("failed", (event) => { event.id; event.error; event.attempt; event.willRetry }) sendEmailTask.on("retrying", (event) => { event.id; event.attempt; event.nextAttempt; event.error }) sendEmailTask.on("progress", (event) => { event.id; 
event.progress }) sendEmailTask.on("active", (event) => { event.id; event.attempt }) sendEmailTask.on("stalled", (event) => { event.id; event.count; event.action // "recovered" | "failed" }) sendEmailTask.on("cancelled", (event) => { event.id; event.reason }) ``` ### App-level ```typescript taskora.on("task:completed", (event) => { /* event includes task name */ }) taskora.on("task:failed", (event) => {}) taskora.on("task:active", (event) => {}) taskora.on("task:stalled", (event) => {}) taskora.on("task:cancelled", (event) => {}) taskora.on("worker:ready", () => {}) taskora.on("worker:error", (error) => {}) taskora.on("worker:closing", () => {}) ``` Default error logging: when no `failed` listener is registered, taskora logs to `console.error`. Adding any `failed` listener (app or task level) suppresses the default. ## Middleware Koa-style onion model: ```typescript // App-level (before start()) taskora.use(async (ctx, next) => { const start = performance.now() await next() metrics.record(ctx.task.name, performance.now() - start) }) // Per-task const protectedTask = taskora.task("admin-action", { middleware: [requireRole("admin"), auditLog()], handler: async (data) => { /* ... */ }, }) ``` Middleware context extends `Taskora.Context` with `task: { name }`, mutable `data`, and mutable `result` (readable after `await next()`). Execution order: app middleware -> task middleware -> handler. 
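The mutable `data` / `result` fields on the middleware context allow enrichment on the way in and observation on the way out of the onion. A sketch (the `traceId` field is illustrative, not a taskora convention):

```typescript
// App-level middleware: stamp a trace ID onto the job data before the
// handler runs, then read the handler's result after `next()` resolves.
taskora.use(async (ctx, next) => {
  ctx.data = { ...ctx.data, traceId: crypto.randomUUID() } // mutable pre-handler
  await next()
  // ctx.result is populated once the handler (and inner middleware) finish
  ctx.log.info("handler finished", { task: ctx.task.name, result: ctx.result })
})
```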
## Flow control | Feature | Scope | Excess jobs | Configured on | |---|---|---|---| | **debounce** | per-key | replaced (last wins) | dispatch options | | **throttle** | per-key | dropped | dispatch options | | **deduplicate** | per-key | no-op (first wins) | dispatch options | | **ttl** | per-job | expired/failed | dispatch or task | | **singleton** | per-task | queued (wait) | task definition | | **concurrencyKey** | per-key | queued (wait) | dispatch options | | **overlap: false** | per-schedule | skipped | schedule definition | ### Debounce ```typescript await reindexTask.dispatch({ userId: "123" }, { debounce: { key: "user:123", delay: "2s" }, }) ``` ### Throttle ```typescript await notifyTask.dispatch({ userId: "123", msg: "New message" }, { throttle: { key: "user:123", max: 3, window: "1m" }, }) ``` ### Deduplicate ```typescript await syncUserTask.dispatch({ userId: "123" }, { deduplicate: { key: "sync:123" }, }) ``` ### TTL / Expiration ```typescript // Per-dispatch await sendOtpTask.dispatch(data, { ttl: "5m" }) // Per-task const sendOtpTask = taskora.task("send-otp", { ttl: { max: "5m", onExpire: "discard" }, // "fail" (default) | "discard" handler: async (data) => { /* ... */ }, }) ``` ### Singleton ```typescript const rebuildCacheTask = taskora.task("rebuild-cache", { singleton: true, // only one active globally across all workers handler: async () => { /* ... */ }, }) ``` ### Batch collection (collect) Accumulate items and flush as a batch: ```typescript const indexTask = taskora.task("index-batch", { collect: { key: "search-index", delay: "5s", maxSize: 100, maxWait: "30s" }, handler: async (items: SearchItem[]) => { // items is an array — all accumulated items flushed together await searchIndex.bulkIndex(items) }, }) // Each dispatch adds one item to the accumulator await indexTask.dispatch({ id: "1", title: "Hello" }) await indexTask.dispatch({ id: "2", title: "World" }) // After 5s (or 100 items, or 30s) → handler receives [item1, item2, ...] 
``` Three flush triggers (whichever comes first): debounce delay, maxSize, maxWait. **Peek the buffer (non-destructive read)** — for live-context use cases where unflushed items need to be surfaced in a read path without disturbing the flush cycle: ```typescript // Read current buffer as deserialized input items (oldest → newest) const pending = await indexTask.peekCollect("search-index") // Stats-only — cheaper (no payloads), returns null when no buffer exists const info = await indexTask.inspectCollect("search-index") // → { count: 12, oldestAt: 1712678400000, newestAt: 1712678520000 } | null ``` Semantics: * Non-destructive — never drains, never resets the debounce timer * Snapshot-consistent — single atomic `LRANGE` / `slice` * Returns `[]` / `null` once the handler has drained the batch (ownership boundary preserved — items belong to either the buffer or the handler, never both) * Throws `TaskoraError` on tasks without `collect` (silent `[]` would mask config bugs) * Dynamic `collect.key`: pass the already-resolved key string * No `retain` option is offered — flushed items belong to the handler's output storage Use it when you need to read unflushed data (e.g. a chat ingestion pipeline where a Q\&A path must surface messages still sitting in the collect buffer alongside data already written to long-term memory). 
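A minimal in-memory model of these semantics — `CollectBuffer`, `drain`, and friends are hypothetical names for illustration, not taskora's API:

```typescript
// Sketch of the collect-buffer ownership boundary: peek never drains,
// flush hands the whole batch to the handler at once.
class CollectBuffer<T> {
  private items: T[] = []

  push(item: T) { this.items.push(item) }

  // Non-destructive, snapshot-consistent read (oldest → newest)
  peek(): T[] { return this.items.slice() }

  // Stats-only view — null when no buffer exists
  inspect(): { count: number } | null {
    return this.items.length ? { count: this.items.length } : null
  }

  // Flush: the entire batch moves to the handler, buffer reports empty
  drain(): T[] {
    const batch = this.items
    this.items = []
    return batch
  }
}

const buf = new CollectBuffer<{ id: string }>()
buf.push({ id: "1" })
buf.push({ id: "2" })
buf.peek()    // → [{ id: "1" }, { id: "2" }] — buffer untouched
buf.drain()   // → both items; buffer is now empty
buf.inspect() // → null (items now belong to the handler)
```

The key property mirrors the semantics list above: `peek`/`inspect` never mutate, and after the drain the buffer reports nothing — items belong to either the buffer or the handler, never both.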
## Cancellation ```typescript const handle = longTask.dispatch(data) await handle.cancel({ reason: "no longer needed" }) // Waiting/delayed → cancelled immediately // Active → AbortSignal fires, onCancel hook runs ``` Task-level cancel hook: ```typescript const importTask = taskora.task("import", { onCancel: async (ctx) => { // ctx.signal already aborted await cleanupPartialImport(ctx.id) }, handler: async (data, ctx) => { // ctx.signal.aborted becomes true on cancel for (const chunk of chunks) { if (ctx.signal.aborted) break await processChunk(chunk) } }, }) ``` ## Inspector API ```typescript const inspector = taskora.inspect() await inspector.active() await inspector.waiting({ task: "send-email", limit: 50 }) await inspector.delayed() await inspector.completed() await inspector.failed() await inspector.cancelled() await inspector.expired() await inspector.stats() // { waiting, active, delayed, completed, failed, expired, cancelled } await inspector.stats({ task: "send-email" }) const job = await inspector.find("job-id-123") // { id, task, state, data, result, error, progress, logs, attempt, version, timeline } // Type-safe variant const job = await inspector.find(sendEmailTask, "job-id-123") job.data // { to: string; subject: string } job.result // { messageId: string } | undefined ``` ## Dead letter queue ```typescript await taskora.deadLetters.list({ task: "send-email", limit: 20 }) await taskora.deadLetters.retry("job-id-123") await taskora.deadLetters.retryAll({ task: "send-email" }) ``` DLQ is a view over the failed sorted set — no separate `:dead` key. ## Retention ON by default: ```typescript const taskora = createTaskora({ adapter: redisAdapter("redis://localhost:6379"), retention: { completed: { maxAge: "1h", maxItems: 100 }, // defaults failed: { maxAge: "7d", maxItems: 300 }, // defaults }, }) ``` Trim runs piggyback on stall check interval (zero extra timers). ## Stall detection Workers heartbeat via lock extension. 
Stalled jobs (no heartbeat) are auto-recovered or failed. ```typescript const myTask = taskora.task("my-task", { stall: { interval: 30_000, maxCount: 1 }, // maxCount: 1 = re-queue first stall, fail on second handler: async (data) => { /* ... */ }, }) ``` ## Testing ### Test runner ```typescript import { createTestRunner } from "taskora/test" const runner = createTestRunner() // Define tasks on the runner's app const addTask = runner.app.task("add", async (data: { x: number; y: number }) => data.x + data.y) ``` ### From existing instance ```typescript import { createTestRunner } from "taskora/test" import { taskora, sendEmailTask } from "../src/tasks" const runner = createTestRunner({ from: taskora }) // All tasks patched to use in-memory backend — inter-task dispatches work ``` ### Two execution modes **`runner.run(task, data)`** — direct handler call, inline retry loop, no queue: ```typescript const result = await runner.run(sendEmailTask, { to: "test@example.com", subject: "Test" }) ``` **`runner.execute(task, data)`** — full pipeline (dispatch -> process -> retries -> result): ```typescript const execution = await runner.execute(sendEmailTask, { to: "test@example.com", subject: "Test" }) execution.state // "completed" execution.result // { messageId: "..." 
} execution.attempts // 1 execution.logs // LogEntry[] execution.progress // number | object | null execution.error // string | undefined execution.handle // ResultHandle ``` ### Testing workflows ```typescript const handle = chain(addTask.s({ x: 3, y: 4 }), doubleTask.s()).dispatch() await handle for (let i = 0; i < 10; i++) { await runner.processAll() if (await handle.getState() === "completed") break } const result = await handle.result // 14 console.log(runner.steps) // workflow step history ``` ### Cleanup ```typescript afterEach(() => runner.clear()) // standalone mode afterEach(() => runner.dispose()) // from-instance mode (restores original adapters) ``` ## Admin dashboard (Board) `@taskora/board` is a **separate package** that ships a full-featured admin UI as a pre-built React SPA served by a Hono backend. No build step for users — install it alongside `hono` and mount the board. Separate-package layout keeps the main `taskora` bundle free of Hono + static assets when the board isn't used. ```bash bun add @taskora/board hono # or: npm install @taskora/board hono ``` ```typescript import { createBoard } from "@taskora/board" const board = createBoard(taskora, { basePath: "/board", // default "/board" readOnly: false, // hides mutation UI + rejects POST/PUT/DELETE auth: { // batteries-included session auth (see below) cookiePassword: process.env.BOARD_COOKIE_SECRET!, // min 32 chars authenticate: async ({ username, password }) => username === "admin" && password === process.env.BOARD_PASSWORD ? { id: "admin" } : null, }, title: "My Queue", theme: "auto", // "light" | "dark" | "auto" redact: ["password", "apiKey", "ssn"], // deep, case-insensitive key redaction refreshInterval: 2000, // stats polling fallback — SSE is primary formatters: { data: (data, taskName) => data, // per-task render preprocessing result: (result, taskName) => result, }, }) ``` ### Auth Two shapes, pick one — they're a discriminated union. 
The board detects which form you passed at `createBoard` time. **Session auth (recommended)** — drop in a config object, the board mounts a server-rendered login page, signs a session cookie, and guards everything (SPA HTML + API + SSE). Inspired by AdminJS. ```typescript createBoard(taskora, { auth: { // HMAC-SHA256 signing secret, min 32 chars // generate: openssl rand -base64 48 cookiePassword: process.env.BOARD_COOKIE_SECRET!, // Return a truthy user to accept, null to reject authenticate: async ({ username, password }, req) => { if (username === "admin" && password === process.env.BOARD_PASSWORD) { return { id: "admin" } } return null }, cookieName: "taskora_board_session", // default // sessionTtl defaults to `false` — no expiry, browser-session cookie. // Opt into rolling expiry with a Duration: "30s" | "5m" | "2h" | "1d" | ms number. // sessionTtl: "7d", }, }) ``` Routes mounted automatically under `${basePath}`: * `GET /login` — server-rendered login form (no SPA rebuild required) * `POST /auth/login` — verifies via `authenticate`, sets signed `HttpOnly SameSite=Lax` cookie, 302 redirect * `POST /auth/logout` — clears cookie, 302 to `/login`. Sidebar shows a `[ logout ]` button automatically when session auth is on. Unauthenticated requests get: * `401 {"error":"Unauthorized"}` on `/api/*` * `302 → /login?redirect=` on SPA paths `createBoard` throws synchronously if `cookiePassword` is shorter than 32 characters. The session is a stateless signed cookie — no Redis session store, works with every adapter including memory. Password hashing / rate limiting / lockout are the caller's responsibility inside `authenticate`. No separate CSRF token is needed — `SameSite=Lax` + `HttpOnly` cover the mutation endpoints. **Custom auth hook (legacy / BYO JWT)** — pass a function instead if you already ship JWT, OAuth, or your framework's session middleware. Runs per-request on `/api/*` only. 
The SPA HTML remains public in this mode, matching the pre-session-auth behavior:

```typescript
createBoard(taskora, {
  auth: async (req) => {
    const token = req.headers.get("authorization")?.replace("Bearer ", "")
    if (!token || !(await verifyJwt(token))) {
      return new Response("Unauthorized", { status: 401 })
    }
    // return undefined → request proceeds
  },
})
```

### Serving the board

The `Board` interface exposes four ways to serve the UI — pick whichever fits your host:

```typescript
interface Board {
  app: Hono                                             // raw Hono instance
  fetch: (req: Request) => Response | Promise<Response> // Web standard fetch handler
  handler: (req, res) => void                           // Node.js-style (requires @hono/node-server)
  listen: (port: number) => void                        // standalone server (Bun / Deno only)
}
```

```typescript
// Standalone (Bun / Deno only — throws on plain Node.js)
board.listen(3000)

// Bun.serve / Deno.serve / Cloudflare Workers / Vercel Edge
Bun.serve({ fetch: board.fetch, port: 3000 })
Deno.serve({ port: 3000 }, board.fetch)

// Hono — mount as sub-route on an existing Hono app
honoApp.route("/admin/taskora", board.app)

// Node.js (Express / Fastify / Koa) — wrap with @hono/node-server
import { serve } from "@hono/node-server"
serve({ fetch: board.fetch, port: 3000 })
```

Anything that speaks the Web `Request`/`Response` standard can mount `board.fetch` directly. For Node.js-native frameworks, always wrap with `@hono/node-server` — calling `board.handler` without it throws.
### What the UI provides * **Overview dashboard** — global stat cards (waiting/active/delayed/failed/completed/cancelled/expired), 24h throughput chart (Recharts), task table, Redis health (version, memory, uptime) * **Task detail** — state tabs, paginated job table, bulk retry-all and clean-by-age actions * **Job detail** — timeline (`ts` → `processedOn` → `finishedOn`), data/result/error/logs tabs, progress bar, attempt history, retry + cancel actions, workflow link * **Workflow DAG** — `@xyflow/react` with BFS auto-layout, state-colored nodes, animated edges, cascade cancel; renders chains/groups/chords including nested * **Schedule management** — list with next-run countdown, pause/resume/trigger-now/delete/update * **DLQ view** — failed jobs grouped by error-message frequency, per-job retry + atomic retry-all (`retryAllDLQ.lua` batched 100 at a time) * **Migrations view** — version distribution bar chart per task, `canBumpSince` indicator * **Real-time SSE** — `/api/events` streams `adapter.subscribe()` events live + periodic `stats:update`; `refreshInterval` polling only as fallback * **Global job search** — paste job ID → detail view * **Keyboard shortcuts** — `1`–`5` for top-level nav, `/` for global search * **Dark / light / auto theme** via CSS custom properties ### Throughput backing `ack.lua` and `fail.lua` `INCR` per-minute counters with a 24h TTL, so the throughput chart is accurate without any external time-series database. ### REST API (under `${basePath}/api`) The SPA talks to a public REST API — you can call these from your own tooling. 
Key endpoints: ``` GET /api/overview — global stats + tasks + Redis info GET /api/tasks/:task/jobs — paginated (query: state, limit, offset) GET /api/tasks/:task/stats — queue counts GET /api/tasks/:task/migrations — version distribution + canBumpSince POST /api/tasks/:task/retry-all — retry every failed job for a task POST /api/tasks/:task/clean — trim completed/failed GET /api/jobs/:jobId — full JobDetailResponse w/ timeline + workflow link POST /api/jobs/:jobId/retry POST /api/jobs/:jobId/cancel GET /api/schedules POST /api/schedules/:name/pause POST /api/schedules/:name/resume POST /api/schedules/:name/trigger PUT /api/schedules/:name DELETE /api/schedules/:name GET /api/workflows GET /api/workflows/:workflowId — DAG graph + per-node state POST /api/workflows/:workflowId/cancel GET /api/dlq — grouped by error-message frequency POST /api/dlq/:jobId/retry POST /api/dlq/retry-all GET /api/throughput — 24h per-minute buckets GET /api/events — Server-Sent Events stream GET /api/config — static config (title, logo, theme, readOnly, authEnabled) ``` All mutation endpoints honor `readOnly` and the `auth` hook. ### Field redaction `redact: ["password", "secret", ...]` walks every field in `data`, `result`, `error`, and `logs.meta` and masks keys matching (case-insensitive) recursively through nested objects and arrays. Redaction runs **on the server** — secrets never leave the process. For per-task control, use `formatters.data` / `formatters.result` instead. 
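As a sketch of those semantics (a hypothetical helper, not the board's actual code) — a deep walk that masks matching keys case-insensitively through nested objects and arrays:

```typescript
// Deep, case-insensitive key redaction, as described above.
function redact(value: unknown, keys: string[], mask = "[REDACTED]"): unknown {
  const lower = new Set(keys.map((k) => k.toLowerCase()))
  const walk = (v: unknown): unknown => {
    if (Array.isArray(v)) return v.map(walk)
    if (v !== null && typeof v === "object") {
      return Object.fromEntries(
        Object.entries(v as Record<string, unknown>).map(([k, val]) =>
          lower.has(k.toLowerCase()) ? [k, mask] : [k, walk(val)],
        ),
      )
    }
    return v // primitives pass through untouched
  }
  return walk(value)
}

redact({ user: { Password: "hunter2", tags: [{ apiKey: "xyz" }] } }, ["password", "apiKey"])
// → { user: { Password: "[REDACTED]", tags: [{ apiKey: "[REDACTED]" }] } }
```

Because the walk runs before serialization, this kind of masking happens server-side — consistent with the note above that secrets never leave the process.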
### Production checklist * **Always** set `auth` — never expose the board publicly without it * Configure `redact` for any sensitive payload fields * Consider `readOnly: true` for broad internal visibility * Mount behind HTTPS (the board has no TLS — that's the proxy's job) * If fronted by nginx, disable `proxy_buffering` on the board location so SSE works * Pin a stable `basePath` — changing it invalidates cached asset URLs in browsers ## Adapters ### Redis adapter ```typescript import { redisAdapter } from "taskora/redis" const adapter = redisAdapter("redis://localhost:6379") // or: redisAdapter({ host, port, password, db, tls }) // or: redisAdapter(existingIORedisInstance) ``` Redis 7.0+ required. All multi-step state transitions use Lua scripts for atomicity. Keys use `{hash tags}` for Redis Cluster compatibility. Key layout: `taskora:{task}:{key}` — no prefix by default, customizable via `prefix` option. ### Memory adapter ```typescript import { memoryAdapter } from "taskora/memory" const adapter = memoryAdapter() ``` Full adapter implementation using plain JS data structures. No Redis needed. Used internally by `taskora/test`. 
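The Redis key layout described above can be made concrete with a one-line builder — `buildKey` is hypothetical, shown only to illustrate where the hash tag sits:

```typescript
// Sketch of the documented `taskora:{task}:{key}` layout. The braces are
// Redis Cluster hash tags: keys sharing a tag hash to the same slot.
function buildKey(task: string, key: string, prefix = "taskora"): string {
  return `${prefix}:{${task}}:${key}`
}

buildKey("send-email", "wait")   // → "taskora:{send-email}:wait"
buildKey("send-email", "active") // → "taskora:{send-email}:active"
```

Because both keys carry the same `{send-email}` tag, Redis Cluster assigns them to one slot — which is what makes multi-key Lua scripts over a task's keys legal in cluster mode.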
## Types All public types under the `Taskora` namespace: ```typescript import type { Taskora } from "taskora" type State = Taskora.JobState type Config = Taskora.RetryConfig type Ctx = Taskora.Context type Mid = Taskora.Middleware type Opts = Taskora.DispatchOptions type Info = Taskora.JobInfo type Stats = Taskora.QueueStats type Log = Taskora.LogEntry ``` ## Conventions * Factory function: `createTaskora()` — returns App instance * Task variables: `*Task` suffix — `sendEmailTask`, `processImageTask` * Task string names: kebab-case — `"send-email"`, `"process-image"` * Property: `adapter` (not `backend`) — matches `Taskora.Adapter` interface * All keys for one job share a `{hash tag}` for Redis Cluster compatibility * `ioredis` is an optional peer dep — install only when using `taskora/redis` ## Common patterns ### Error handling with typed results ```typescript import { JobFailedError, TimeoutError, CancelledError } from "taskora" try { const result = await handle.result } catch (err) { if (err instanceof CancelledError) { /* job was cancelled */ } if (err instanceof TimeoutError) { /* waitFor() timed out */ } if (err instanceof JobFailedError) { /* job failed permanently */ } } ``` ### Metrics middleware ```typescript taskora.use(async (ctx, next) => { const start = performance.now() try { await next() metrics.recordSuccess(ctx.task.name, performance.now() - start) } catch (err) { metrics.recordFailure(ctx.task.name, err) throw err } }) ``` ### Multi-step processing with progress ```typescript const importTask = taskora.task("import-data", { timeout: 300_000, handler: async (data: { fileUrl: string }, ctx) => { ctx.log.info("Downloading file") const file = await download(data.fileUrl, { signal: ctx.signal }) ctx.progress({ step: "download", percent: 33 }) ctx.log.info("Parsing records") const records = await parse(file) ctx.progress({ step: "parse", percent: 66 }) ctx.log.info("Inserting into database", { count: records.length }) ctx.heartbeat() // extend lock 
for long insert
    await db.bulkInsert(records)
    ctx.progress({ step: "insert", percent: 100 })

    return { imported: records.length }
  },
})
```

### Rate-limited API calls

```typescript
const apiTask = taskora.task("call-external-api", {
  retry: {
    attempts: 5,
    backoff: "exponential",
    delay: 1000,
  },
  handler: async (data: { endpoint: string }, ctx) => {
    const res = await fetch(data.endpoint, { signal: ctx.signal })
    // fetch resolves (does not throw) on HTTP error statuses — check res.status
    if (res.status === 429) {
      const retryAfter = Number(res.headers.get("retry-after") ?? "1")
      throw ctx.retry({ delay: retryAfter * 1000 })
    }
    if (!res.ok) throw new Error(`API responded ${res.status}`)
    return res.json()
  },
})
```

***

## Internal flows

### Job lifecycle state machine

```
dispatch()
    │
    ├──(no delay)───→ WAITING ────(worker claims)────→ ACTIVE
    │                    ▲                               │
    └──(delay > 0)──→ DELAYED ──(timer fires)──→ WAITING ┘
                                                         │
            ┌──────────────┬──────────────┬──────────────┤
            ▼              ▼              ▼              ▼
        COMPLETED        FAILED       RETRYING       CANCELLED
                                          │
                                          ▼
                                      DELAYED ──→ WAITING ──→ ACTIVE ...

    WAITING / DELAYED ──(expired job during dequeue)──→ EXPIRED
```

State transitions and what triggers them:

| From | To | Trigger |
|---|---|---|
| — | `waiting` | `dispatch()` with no delay |
| — | `delayed` | `dispatch()` with `delay` option |
| `delayed` | `waiting` | Score time reached in sorted set, promoted by `moveToActive.lua` |
| `waiting` | `active` | Worker claims via `blockingDequeue` → `moveToActive.lua` |
| `active` | `completed` | Handler returns successfully → `ack.lua` |
| `active` | `failed` | Handler throws, no retries left → `fail.lua` |
| `active` | `retrying` | Handler throws, retries remaining → `fail.lua` (retry path) |
| `retrying` | `delayed` | `fail.lua` sets ZADD with backoff delay score |
| `active` | `cancelled` | `handle.cancel()` → `cancel.lua` + worker detects via pub/sub or `extendLock` |
| `waiting`/`delayed` | `cancelled` | `handle.cancel()` → `cancel.lua` (immediate) |
| `waiting`/`delayed` | `expired` | TTL exceeded, detected during `moveToActive.lua` promote/dequeue |

Every state transition is a **Lua script** — no partial states, no race conditions.
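For reference, the lifecycle table can be flattened into a lookup map — a reading aid under the assumption the table above is exhaustive, not taskora source:

```typescript
// Legal job-state transitions, transcribed from the lifecycle table.
type JobState =
  | "waiting" | "delayed" | "active" | "completed"
  | "failed" | "retrying" | "cancelled" | "expired"

const TRANSITIONS: Record<JobState, JobState[]> = {
  waiting:   ["active", "cancelled", "expired"],
  delayed:   ["waiting", "cancelled", "expired"],
  active:    ["completed", "failed", "retrying", "cancelled"],
  retrying:  ["delayed"],          // fail.lua re-delays with backoff score
  completed: [],                   // terminal
  failed:    [],                   // terminal (DLQ is a view over this set)
  cancelled: [],                   // terminal
  expired:   [],                   // terminal
}

const canTransition = (from: JobState, to: JobState) =>
  TRANSITIONS[from].includes(to)

canTransition("active", "retrying")  // → true
canTransition("completed", "active") // → false (terminal states never move)
```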
### Worker processing pipeline When a worker claims a job, this is the exact sequence: ``` blockingDequeue (BZPOPMIN on marker ZSET) │ ▼ moveToActive.lua ├── promote delayed jobs (ZRANGEBYSCORE → LPUSH) ├── check TTL expiration (expireAt < now → EXPIRED) ├── check singleton (LLEN active > 0 → re-queue with 1s delay) ├── check concurrency key limit ├── RPOPLPUSH wait → active ├── set lock token + processedOn timestamp └── return job data │ ▼ Version check ├── job._v > task.version → nack (future version, leave for newer worker) ├── job._v < task.since → fail permanently ("migration no longer available") └── job._v <= task.version → continue │ ▼ Deserialize (serializer.deserialize) │ ▼ Migration (if job._v < task.version) └── run migration chain: for each v from job._v to task.version, apply migrate[v] if exists │ ▼ Schema validation (if input schema + versioned task) └── standardSchema.validate(data) — applies .default() values │ ▼ Middleware pipeline (composed once at Worker construction) └── app middleware → task middleware → handler wrapper │ ▼ Handler execution (with timeout race if configured) ├── timeout fires → TimeoutError + controller.abort("timeout") └── handler completes → result │ ▼ Output validation (if output schema) │ ▼ Cancel check (signal.aborted && reason === "cancelled"?) 
├── yes → onCancel hook → finishCancel.lua └── no → continue │ ├──(success)──→ ack.lua (LREM active + store result + ZADD completed + XADD event) │ └── advance workflow if part of one │ └──(error)──→ Retry decision: ├── RetryError → always retry (unless attempts exhausted) ├── TimeoutError → NOT retried by default (must be in retryOn) ├── noRetryOn match → permanent fail ├── retryOn set + no match → permanent fail └── else → shouldRetry(attempt < max) → retry or fail │ ├── retry → fail.lua (retry path: HINCRBY attempt, state=retrying, │ ZADD delayed with backoff score, XADD retrying event) │ └── permanent fail → fail.lua (LREM active + ZADD failed + XADD event) └── failWorkflow if part of one ``` ### Retry decision flow ``` Handler throws error │ ▼ Is error a RetryError (manual ctx.retry())? ├── yes → Are attempts exhausted (attempt >= retry.attempts)? │ ├── yes → permanent fail │ └── no → RETRY with RetryError.delay or computed backoff │ └── no → Is error a TimeoutError? ├── yes → Is TimeoutError in retry.retryOn? │ ├── yes, and attempt < attempts → RETRY │ └── no → permanent fail │ └── no → Is error in retry.noRetryOn? ├── yes → permanent fail └── no → Is retry.retryOn set? ├── yes → Is error in retryOn? │ ├── yes, attempt < attempts → RETRY │ └── no → permanent fail └── no → attempt < retry.attempts? ├── yes → RETRY └── no → permanent fail ``` Backoff delay computation: ``` base = retry.delay (default: 1000ms) strategy: "fixed" → base "linear" → base * attempt "exponential" → base * 2^(attempt-1) [default] function → fn(attempt) cap: min(delay, retry.maxDelay) jitter (default on): delay * random(0.75, 1.25) ``` `retry.attempts` is **total attempts**, not retry count. `attempts: 3` means 1 initial + 2 retries. 
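The backoff pseudocode above translates directly into a runnable function — an illustrative sketch (the real implementation may differ); the random source is injectable so jitter can be disabled for deterministic tests:

```typescript
// Backoff delay computation, mirroring the pseudocode: strategy → cap → jitter.
type Backoff = "fixed" | "linear" | "exponential" | ((attempt: number) => number)

function backoffDelay(
  attempt: number, // 1-based attempt counter
  opts: { delay?: number; backoff?: Backoff; maxDelay?: number; jitter?: boolean },
  random: () => number = Math.random,
): number {
  const base = opts.delay ?? 1000
  const strategy = opts.backoff ?? "exponential"
  let delay =
    typeof strategy === "function" ? strategy(attempt)
    : strategy === "fixed" ? base
    : strategy === "linear" ? base * attempt
    : base * 2 ** (attempt - 1) // exponential (default)
  if (opts.maxDelay !== undefined) delay = Math.min(delay, opts.maxDelay)
  if (opts.jitter !== false) delay *= 0.75 + random() * 0.5 // 0.75×–1.25×
  return delay
}

backoffDelay(3, { delay: 1000, jitter: false })                   // → 4000
backoffDelay(5, { delay: 1000, maxDelay: 10_000, jitter: false }) // → 10000
```

Note the cap applies before jitter, so an effective delay can exceed `maxDelay` by up to 25% when jitter is on — consistent with the pseudocode's ordering above.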
### Workflow execution flow ``` dispatch(composition) │ ▼ flattenToDAG(composition) └── recursively flatten chain/group/chord into WorkflowGraph: { nodes: [{ taskName, data, deps, jobId }], terminal: [indices] } │ ▼ createWorkflow(workflowId, graph) └── store entire graph as single Redis hash: taskora:wf:{id} │ ▼ Enqueue root nodes (nodes with deps = []) └── each node gets a pre-generated jobId, enqueued as normal job with _wf/_wfNode fields │ ▼ Worker completes node job → ack.lua │ ▼ advanceWorkflow(workflowId, nodeIndex, result) └── Lua script: 1. Mark node as completed, store result 2. Find nodes whose ALL deps are now completed 3. For each ready node: ├── 1 dep → pass that dep's result as input └── N deps → pass array of all dep results as input 4. Return toDispatch list + whether workflow completed │ ├──(toDispatch not empty)──→ enqueue next batch of nodes │ ├──(completed = true)──→ workflow state = "completed" │ └── result = terminal nodes' results (single or array) │ └──(node failed permanently)──→ failWorkflow(workflowId, nodeIndex, error) └── Lua script: 1. Mark workflow as "failed" 2. Return list of active jobIds 3. Worker cancels all active/pending nodes (cascade) ``` Data flow through chains: `task.s(data)` = bound (ignores pipeline), `task.s()` = unbound (receives predecessor output). First chain step MUST have bound data. ### Scheduling flow ``` app.schedule() or task schedule option │ ▼ Store config in Redis hash + next run time in sorted set │ ▼ Scheduler loop (runs in ONE leader across all workers): │ ├── acquireSchedulerLock (SET NX PX 30s, token-based) │ ├── acquired → I am leader │ └── not acquired → skip tick (another leader owns it) │ └── tickScheduler (every pollInterval, default 1s): │ ▼ TICK_SCHEDULER Lua: 1. ZRANGEBYSCORE schedules:next (score <= now) 2. ZREM claimed entries (atomic — first worker wins) 3. HGET config for each 4. 
Return list of due schedules │ ▼ For each due schedule: ├── overlap check: if overlap=false, check lastJobId state │ ├── still active → skip this run │ └── done/not exists → dispatch │ ├── missed run policy: │ ├── "skip" → dispatch once, set next run │ ├── "catch-up" → dispatch for each missed interval │ └── "catch-up-limit:N" → dispatch up to N missed │ └── dispatch task → update lastJobId + next run time Leader failover: lock has 30s TTL, renewed every tick. If leader dies, another worker acquires within ~30s. ``` ### Cancellation flow ``` handle.cancel({ reason }) │ ▼ cancel.lua: ├── job in waiting/delayed/retrying? │ └── move to cancelled set immediately → return "cancelled" │ └── job in active? └── set cancelledAt flag in hash + PUBLISH to cancel channel → return "flagged" │ ▼ Worker detects cancel (two paths, whichever first): ├── Redis pub/sub: cancel channel message → controller.abort("cancelled") └── extendLock heartbeat: returns "cancelled" → controller.abort("cancelled") │ ▼ Handler observes ctx.signal.aborted = true ├── handler checks signal and stops → throws/returns └── handler ignores signal → continues until done │ ▼ After handler exits: └── worker detects signal.reason === "cancelled" └── onCancel hook runs (if defined) └── finishCancel.lua: LREM active → ZADD cancelled, clean dedup/concurrency keys ``` ### Stall detection flow ``` Every stallInterval (default 30s): │ ▼ stalledCheck.lua (two-phase): Phase 1: previousActiveSet ∩ currentActiveSet = stalled candidates └── for each candidate: does lock key exist? ├── yes → healthy (extendLock already SREMed from stalled set) └── no → truly stalled ├── stalledCount < maxCount → re-queue (LPUSH wait, state=waiting) └── stalledCount >= maxCount → fail permanently Phase 2: SADD all currently active IDs for next check cycle ``` ### Collect (batch accumulation) flow ``` dispatch(item) │ ▼ COLLECT_PUSH Lua: 1. RPUSH item to collect:{key}:items list 2. Count items in list 3. count >= maxSize? 
│ ├── yes → immediate flush: drain list → create real job in wait set │ └── no → update/create flush sentinel (delayed job with collectKey) │ └── debounce: each dispatch resets the delay timer │ ▼ (when sentinel fires — delay elapsed without new items) moveToActive.lua: sentinel job claimed └── detects collectKey → drains collect:{key}:items into :data └── worker receives items[] as handler data │ ▼ (or maxWait fires — absolute deadline reached) same as above — maxWait creates a hard deadline independent of debounce ``` Three flush triggers (whichever first): debounce delay reset per dispatch, maxSize immediate flush, maxWait absolute deadline. *** ## Best practices ### Production checklist ```typescript const taskora = createTaskora({ adapter: redisAdapter({ host: process.env.REDIS_HOST, port: Number(process.env.REDIS_PORT), password: process.env.REDIS_PASSWORD, lazyConnect: true, // connect on first use // Taskora does NOT require `maxRetriesPerRequest: null` or // `enableReadyCheck: false` — those are BullMQ-specific // workarounds. Taskora's worker/event-reader/job-waiter all // wrap their blocking commands in retry loops, so a transient // ioredis MaxRetriesPerRequestError is swallowed and retried // automatically. Using ioredis defaults is fine. }), defaults: { retry: { attempts: 3, backoff: "exponential", delay: 1000, maxDelay: 60_000 }, timeout: 30_000, concurrency: 5, }, retention: { completed: { maxAge: "24h", maxItems: 1_000 }, failed: { maxAge: "30d", maxItems: 5_000 }, }, }) ``` ### Always set timeouts Every task should have a timeout. Without one, a stuck handler holds the lock forever (until stall detection kicks in at 30s intervals). 
```typescript // Bad — no timeout, stuck handler blocks the slot const task = taskora.task("risky", { handler: async (data) => await externalApi.call(data), }) // Good — timeout + signal propagation const task = taskora.task("risky", { timeout: 30_000, handler: async (data, ctx) => { return await externalApi.call(data, { signal: ctx.signal }) }, }) ``` ### Propagate ctx.signal to all I/O The AbortSignal fires on shutdown AND cancellation. Pass it to every `fetch`, database call, or child process to ensure clean abort. ```typescript handler: async (data, ctx) => { const response = await fetch(url, { signal: ctx.signal }) await db.query(sql, { signal: ctx.signal }) const result = await childProcess.exec(cmd, { signal: ctx.signal }) } ``` ### Use ctx.heartbeat() for long operations Lock TTL is 30s, extended every 10s automatically. But if a single operation takes >30s (e.g., large file upload), extend the lock manually: ```typescript handler: async (data, ctx) => { for (const chunk of largeFile.chunks()) { ctx.heartbeat() // extend lock await uploadChunk(chunk) } } ``` ### Idempotent handlers Jobs can be delivered more than once (network partitions, stall recovery, lock expiry). Design handlers to be idempotent. 
```typescript // Bad — double-charge if job retried after ack failure handler: async (data) => { await chargeCustomer(data.customerId, data.amount) } // Good — idempotency key prevents double processing handler: async (data, ctx) => { await chargeCustomer(data.customerId, data.amount, { idempotencyKey: ctx.id, // job ID is stable across retries }) } ``` ### Choose the right flow control | Need | Use | Why | |---|---|---| | Only process the latest update | `debounce` | Replaces previous job, last dispatch wins | | Limit rate per user/key | `throttle` | Drops excess, per-key | | Don't queue duplicate work | `deduplicate` | No-op if existing job matches | | Limit rate for the whole task | `concurrency` | Queue excess, per-worker | | Only one active globally | `singleton: true` | Queue excess, global across workers | | Limit concurrent per group | `concurrencyKey + concurrencyLimit` | Queue excess, per-key | | Job is useless after timeout | `ttl` | Expires before processing starts | | Accumulate items then batch | `collect` | Flushes on debounce/size/maxWait | ### Retry anti-patterns ```typescript // Bad — retrying non-transient errors wastes resources const task = taskora.task("validate", { retry: { attempts: 5, backoff: "exponential" }, handler: async (data) => { // ValidationError will be retried 5 times for nothing if (!data.email.includes("@")) throw new ValidationError("bad email") }, }) // Good — exclude non-transient errors const task = taskora.task("validate", { retry: { attempts: 5, backoff: "exponential", noRetryOn: [ValidationError, AuthError, NotFoundError], }, handler: async (data) => { /* ... 
*/ }, }) ``` ### Structure task definitions consistently ```typescript // Recommended pattern for production tasks const processOrderTask = taskora.task("process-order", { // Schema validation input: orderSchema, output: orderResultSchema, // Resilience retry: { attempts: 3, backoff: "exponential", noRetryOn: [ValidationError] }, timeout: 60_000, // Concurrency concurrency: 10, // Versioning (bump when schema changes) version: 2, migrate: { 1: (data) => ({ ...(data as any), priority: "normal" }), }, // Middleware middleware: [auditLog(), validatePermissions()], // Handler handler: async (data, ctx) => { ctx.log.info("Processing order", { orderId: data.id }) ctx.progress(0) // ... process ctx.progress(100) return { status: "completed", processedAt: Date.now() } }, // Cancellation cleanup onCancel: async (data, ctx) => { await rollbackPartialOrder(ctx.id) }, }) ``` ### Testing strategy ```typescript // Unit test: handler logic only (fast, no queue) it("processes order correctly", async () => { const result = await runner.run(processOrderTask, validOrderData) expect(result.status).toBe("completed") }) // Integration test: full pipeline with retries it("retries on transient error then succeeds", async () => { let calls = 0 const flaky = runner.app.task("flaky", { retry: { attempts: 3 }, handler: async () => { calls++ if (calls < 3) throw new Error("transient") return "ok" }, }) const exec = await runner.execute(flaky, {}) expect(exec.state).toBe("completed") expect(exec.attempts).toBe(3) }) // Workflow test it("chain passes data between steps", async () => { const handle = chain(addTask.s({ x: 1, y: 2 }), doubleTask.s()).dispatch() await handle while (await handle.getState() !== "completed") { await runner.processAll() } expect(await handle.result).toBe(6) }) ``` ### Graceful shutdown ```typescript // Handle both SIGTERM (orchestrator stop) and SIGINT (Ctrl+C) const signals = ["SIGTERM", "SIGINT"] as const let shuttingDown = false for (const signal of signals) { 
process.on(signal, async () => { if (shuttingDown) return // prevent double-shutdown shuttingDown = true console.log(`Received ${signal}, shutting down...`) await taskora.close() // waits for active jobs, then disconnects process.exit(0) }) } await taskora.start() ``` ### Monitoring with events ```typescript // Global error tracking taskora.on("task:failed", (event) => { errorTracker.captureException(new Error(event.error), { tags: { task: event.task, jobId: event.id, attempt: event.attempt }, }) }) // Metrics taskora.on("task:completed", (event) => { metrics.histogram("task.duration", event.duration, { task: event.task }) }) taskora.on("task:stalled", (event) => { alerting.warn(`Job ${event.id} stalled (${event.action})`, { task: event.task }) }) ``` ### When to use workflows vs standalone dispatch ```typescript // Standalone: steps are independent, don't need result passing await sendEmailTask.dispatch(emailData) await logAnalyticsTask.dispatch(analyticsData) // Chain: output of step N is input of step N+1 const handle = chain( createUserTask.s(userData), sendWelcomeEmailTask.s(), // needs user ID from previous step notifySlackTask.s(), // needs email result ).dispatch() // Group: independent steps that should complete together const handle = group( resizeSmall.s(imgData), resizeMedium.s(imgData), resizeLarge.s(imgData), ).dispatch() // Chord: parallel work, then aggregate const handle = chord( [fetchA.s(), fetchB.s(), fetchC.s()], merge.s(), // receives [resultA, resultB, resultC] ).dispatch() ``` *** ## Upgrading & wire format Taskora versions its own Redis storage layout separately from user-facing task payload versioning (covered above). This is a library-internal safety net and you will almost never interact with it — the whole system is built so that upgrading taskora is a non-event. 
**Why it's safe to just upgrade**: * A frozen wire-format snapshot test (`tests/unit/wire-format-snapshot.test.ts`) runs on every CI build, pinning every Redis key builder output, every Lua script SHA, and both version constants. A PR that drifts the persistence surface without consciously updating those values fails CI — a drive-by edit cannot slip through. * `WIRE_VERSION` and `MIN_COMPAT_VERSION` are decoupled from `package.json`. A bug-fix release that doesn't touch storage doesn't move either, so the identifier stored in your Redis (`taskora-wire-`) stays stable across releases that don't change the format. * Additive changes (new optional fields, new event types old code ignores) don't bump `MIN_COMPAT_VERSION`, so rolling upgrades work without coordination — old and new workers can share the same Redis. * On `app.start()`, taskora does one atomic read-or-init against a tiny `taskora:meta` record and refuses to start if the running build genuinely can't read what's in Redis. No data is mutated, no jobs are lost — the process exits with `SchemaVersionMismatchError` before any worker runs. * Breaking wire changes are reserved for major releases, if we do them at all. A built-in wire-format upgrader is planned — the long-term goal is that even a major-version wire change migrates persisted data transparently on first connect. **Catching the error is optional** — the default message is already actionable: ```ts import { SchemaVersionMismatchError } from "taskora" try { await app.start() } catch (err) { if (err instanceof SchemaVersionMismatchError) { // err.code: "theirs_too_new" | "theirs_too_old" | "invalid_meta" // err.ours, err.theirs: { wireVersion, minCompat, writtenBy, writtenAt? 
} console.error(err.message) process.exit(1) } throw err } ``` **Do not confuse** with per-task `version` / `since` / `migrate`: | | Wire format (library) | Task payload (your code) | |---|---|---| | **Protects** | taskora's own Redis layout | user task input shape | | **Bumped by** | taskora maintainers, rarely | you, whenever task input changes | | **Configured in** | taskora source (internal) | `app.task({ version, since, migrate })` | | **How often it matters** | almost never — by design | every schema change | See `documentation/operations/upgrading.md` for the full user-facing treatment. ## Further reading Full taskora documentation in LLM-friendly formats (regenerated on every docs build, always matches the current version): * **Single-file full docs**: `https://kravetsone.github.io/taskora/llms-full.txt` — entire documentation concatenated, paste into context when you need the exhaustive reference * **Index**: `https://kravetsone.github.io/taskora/llms.txt` — table of contents with descriptions, use to decide which page to fetch * **Per-page markdown**: append `.md` to any doc URL (e.g. `https://kravetsone.github.io/taskora/features/workflows.md`) Prefer this SKILL.md for day-to-day work (it's the curated quick reference). Fall back to `llms-full.txt` when you need deeper detail on a specific subsystem not covered here (e.g. full inspector API surface, board internals, specific recipe walkthroughs). --- --- url: 'https://kravetsone.github.io/taskora/taskora/api.md' description: Auto-generated TypeScript API reference for all taskora entrypoints. --- # API Reference Auto-generated from TypeScript declarations. Browse classes, interfaces, types, and methods. 
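Each entrypoint is a subpath of the same package. As orientation, a typical project mixes several of them at once; a hedged sketch of the usual import set (every name here appears in examples elsewhere in these docs):

```ts
// Core app, result handles, and error types ship from the root entrypoint.
import { SchemaVersionMismatchError } from "taskora"
// Backend adapters live on dedicated subpaths.
import { redisAdapter } from "taskora/redis"
import { memoryAdapter } from "taskora/memory"
// Test utilities are a separate subpath with no Redis dependency.
import { createTestRunner } from "taskora/test"
```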
## Entrypoints | Package | Description | |---------|-------------| | [taskora](/api/taskora/) | Core — App, Task, ResultHandle, Inspector, types, errors | | [taskora/redis](/api/redis/) | Redis adapter factory | | [taskora/memory](/api/memory/) | In-memory adapter factory | | [taskora/test](/api/test/) | Test runner utilities | --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide/tasks.md' --- # Tasks A task is a named function with configuration. Taskora provides three ways to define tasks, from minimal to fully configured. ## Minimal Definition Pass a name and a handler. Types are inferred automatically. ```ts const greetTask = taskora.task("greet", async (data: { name: string }) => { return `Hello, ${data.name}!` }) // TypeScript knows: dispatch expects { name: string }, result is string const handle = greetTask.dispatch({ name: "Alice" }) const result = await handle.result // "Hello, Alice!" ``` ## With Options Pass an options object for retry, timeout, concurrency, and more. ```ts const sendEmailTask = taskora.task("send-email", { concurrency: 5, timeout: 30_000, retry: { attempts: 3, backoff: "exponential", delay: 1000, }, handler: async (data: { to: string; subject: string }, ctx) => { ctx.log.info("Sending email", { to: data.to }) return await mailer.send(data) }, }) ``` ## With Schema Validation Use any [Standard Schema](https://github.com/standard-schema/standard-schema) compatible library (Zod, Valibot, ArkType) for runtime validation. 
```ts
import { z } from "zod"

const processOrderTask = taskora.task("process-order", {
  input: z.object({
    orderId: z.string().uuid(),
    items: z.array(z.object({
      sku: z.string(),
      quantity: z.number().int().positive(),
    })),
  }),
  output: z.object({
    total: z.number(),
    status: z.enum(["confirmed", "pending"]),
  }),
  handler: async (data, ctx) => {
    // data is fully typed: { orderId: string, items: { sku: string, quantity: number }[] }
    const total = await calculateTotal(data.items)
    return { total, status: "confirmed" as const }
  },
})
```

Schema validation runs **after** migrations (if versioned) and provides a clear `ValidationError` with an `issues` array on failure.

### Reusing the inferred types

Reach for `InferInput` / `InferOutput` when you need the types outside the handler — in a controller, a test factory, a shared DTO, anywhere:

```ts
import type { InferInput, InferOutput } from "taskora"

type OrderPayload = InferInput<typeof processOrderTask>
// { orderId: string; items: { sku: string; quantity: number }[] }
type OrderResult = InferOutput<typeof processOrderTask>
// { total: number; status: "confirmed" | "pending" }
```

They also work on `BoundTask` (from contracts), `ResultHandle`, `WorkflowHandle`, and workflow `Signature`s. Collisions with a schema library that also ships `InferInput`? Use the namespaced form — `Taskora.InferInput` — see [Contracts → Type inference helpers](/guide/contracts#type-inference-helpers-inferinput-inferoutput) for the full list of supported carriers.
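As background on how a single helper can serve so many carriers: the pattern is one conditional-type branch per carrier, tried in order. A minimal self-contained illustration of that pattern (local `Task`/`Handle` stand-ins, not taskora's actual types):

```typescript
// Local stand-ins for illustration only; not taskora's real classes.
interface Task<I, O> { name: string; __input?: I; __output?: O }
interface Handle<O> { __output?: O }

// One helper, many carriers: each carrier gets a conditional-type branch.
type InputOf<T> = T extends Task<infer I, unknown> ? I : never
type OutputOf<T> =
  T extends Task<unknown, infer O> ? O :
  T extends Handle<infer O> ? O :
  never

declare const orderTask: Task<{ orderId: string }, { total: number }>

type OrderPayload = InputOf<typeof orderTask>  // { orderId: string }
type OrderResult = OutputOf<typeof orderTask>  // { total: number }

// Assigning concrete values proves the inference resolved as intended.
const payload: OrderPayload = { orderId: "o-1" }
const result: OrderResult = { total: 42 }
```

The same shape extends to handles and workflow signatures by adding branches, which is why one `InferOutput` can cover tasks, `ResultHandle`s, and `Signature`s at once.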
## Task Options Reference

| Option | Type | Default | Description |
|---|---|---|---|
| `concurrency` | `number` | `1` | Max parallel jobs per worker |
| `timeout` | `number` | `undefined` | Handler timeout in ms |
| `retry` | `RetryConfig` | `undefined` | Retry configuration |
| `stall` | `StallConfig` | `{ interval: 30000, maxCount: 1 }` | Stall detection config |
| `singleton` | `boolean` | `false` | Only one active job at a time |
| `concurrencyLimit` | `number` | `undefined` | Max active jobs per concurrency key |
| `ttl` | `TtlConfig` | `undefined` | Job time-to-live |
| `middleware` | `Middleware[]` | `[]` | Per-task middleware |
| `onCancel` | `(data, ctx) => void` | `undefined` | Cleanup on cancellation |
| `version` | `number` | `1` | Current task version |
| `since` | `number` | `1` | Minimum supported version |
| `migrate` | `MigrationFn[] \| Record<number, MigrationFn>` | `undefined` | Version migrations |
| `input` | `StandardSchema` | `undefined` | Input validation schema |
| `output` | `StandardSchema` | `undefined` | Output validation schema |
| `schedule` | `ScheduleConfig` | `undefined` | Inline schedule |
| `collect` | `CollectConfig` | `undefined` | Batch collection |

## Collect Tasks

Collect tasks accumulate items into batches before processing. The handler receives an array.

```ts
const batchInsertTask = taskora.task("batch-insert", {
  collect: {
    key: "db-inserts",
    delay: "2s",    // flush 2s after last item
    maxSize: 100,   // or when 100 items accumulated
    maxWait: "10s", // or 10s since first item (hard deadline)
  },
  handler: async (items: { table: string; row: Record<string, unknown> }[], ctx) => {
    ctx.log.info(`Inserting ${items.length} rows`)
    await db.batchInsert(items)
    return { inserted: items.length }
  },
})

// Dispatch individual items — they accumulate automatically
batchInsertTask.dispatch({ table: "users", row: { name: "Alice" } })
batchInsertTask.dispatch({ table: "users", row: { name: "Bob" } })
```

Three flush triggers (whichever fires first): 1.
**Debounce delay** — reset on each new item 2. **maxSize** — immediate flush when buffer is full 3. **maxWait** — hard deadline since first item after last flush --- --- url: 'https://kravetsone.github.io/taskora/taskora/api/test.md' --- [Taskora API Reference](../index) / test # test ## Classes * [TestRunner](classes/TestRunner) ## Interfaces * [ExecutionResult](interfaces/ExecutionResult) * [TestRunnerOptions](interfaces/TestRunnerOptions) ## Functions ### createTestRunner() > **createTestRunner**(`options?`): [`TestRunner`](classes/TestRunner) Defined in: packages/taskora/dist/test/index.d.mts:85 #### Parameters | Parameter | Type | | ------ | ------ | | `options?` | [`TestRunnerOptions`](interfaces/TestRunnerOptions) | #### Returns [`TestRunner`](classes/TestRunner) --- --- url: 'https://kravetsone.github.io/taskora/taskora/testing/patterns.md' --- # Test Patterns Common patterns for testing taskora tasks effectively. ## Testing Retry Behavior ```ts import { describe, it, expect, afterEach } from "vitest" import { createTestRunner } from "taskora/test" const runner = createTestRunner() afterEach(() => runner.clear()) it("retries on transient errors", async () => { let callCount = 0 const flakyTask = runner.app.task("flaky", { retry: { attempts: 3, backoff: "fixed", delay: 100 }, handler: async () => { callCount++ if (callCount < 3) throw new Error("Transient failure") return "success" }, }) const result = await runner.execute(flakyTask, {}) expect(result.state).toBe("completed") expect(result.attempts).toBe(3) expect(result.result).toBe("success") }) ``` ## Testing with `from: taskora` Patch all tasks from a production instance to test inter-task interactions: ```ts import { taskora, processOrderTask, sendConfirmationTask } from "../src/tasks" const runner = createTestRunner({ from: taskora }) afterEach(() => runner.dispose()) it("processes order and sends confirmation", async () => { // processOrder dispatches sendConfirmation internally const result = await 
runner.execute(processOrderTask, { orderId: "123", items: [{ sku: "ABC", quantity: 1 }], }) expect(result.state).toBe("completed") // The confirmation email was also dispatched and processed in-memory }) ``` ## Testing Middleware ```ts it("middleware transforms data", async () => { const logs: string[] = [] const mwTestTask = runner.app.task("mw-test", { middleware: [ async (ctx, next) => { logs.push("before") await next() logs.push("after") }, ], handler: async (data: string) => { logs.push("handler") return data.toUpperCase() }, }) const result = await runner.run(mwTestTask, "hello") expect(result).toBe("HELLO") expect(logs).toEqual(["before", "handler", "after"]) }) ``` ## Testing Progress and Logs ```ts it("reports progress and logs", async () => { const progressTask = runner.app.task("progress-task", async (data: {}, ctx) => { ctx.progress(25) ctx.log.info("Quarter done") ctx.progress(100) ctx.log.info("Complete") return "done" }) const result = await runner.execute(progressTask, {}) expect(result.progress).toBe(100) expect(result.logs).toHaveLength(2) expect(result.logs[0].message).toBe("Quarter done") }) ``` ## Testing Error Cases ```ts it("fails permanently after max attempts", async () => { const alwaysFailsTask = runner.app.task("always-fails", { retry: { attempts: 2 }, handler: async () => { throw new Error("Always fails") }, }) const result = await runner.execute(alwaysFailsTask, {}) expect(result.state).toBe("failed") expect(result.attempts).toBe(2) expect(result.error).toContain("Always fails") }) ``` ## Importing Production Tasks Selectively import tasks from an existing instance: ```ts const runner = createTestRunner() runner.importTask(sendEmailTask) runner.importTask(processImageTask) // Only these two tasks are available in the runner const result = await runner.execute(sendEmailTask, data) ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/integrations/nestjs/testing.md' --- # Testing `@taskora/nestjs/testing` is an opt-in subpath 
that ships two helpers for unit-testing Nest modules that use taskora: * **`TaskoraTestingModule.forRoot()`** — drop-in replacement for `TaskoraModule.forRoot` with memory-adapter defaults. Use it when you want to assert on DI wiring without running jobs. * **`createTaskoraTestHarness({ providers })`** — higher-level builder that compiles a testing module, boots the real App (memory adapter + real worker loop + real subscribe stream), and returns a harness with `dispatch` / `execute` / `inspect` / `close` methods. Both reuse the **production** `TaskoraExplorer` / consumer registration path. You're not testing a parallel fake — you're driving the real code over an in-memory adapter. ## Import ```ts import { createTaskoraTestHarness, TaskoraTestHarness, TaskoraTestingModule, type ExecuteResult, } from "@taskora/nestjs/testing" ``` ## DX comparison ### Without the harness (manual pattern) ```ts it("sends the welcome email", async () => { const moduleRef = await Test.createTestingModule({ imports: [ TaskoraModule.forRoot({ adapter: memoryAdapter(), autoStart: false }), ], providers: [SendEmailConsumer, MailerService], }).compile() await moduleRef.init() // To test anything useful you have to spy on App.prototype.implement // and invoke the captured handler by hand — or set autoStart: true // and manually wire a TaskoraRef dispatch with await handle.result. // Either way: boilerplate. await moduleRef.close() }) ``` ### With the harness ```ts import { createTaskoraTestHarness } from "@taskora/nestjs/testing" it("sends the welcome email", async () => { const harness = await createTaskoraTestHarness({ providers: [SendEmailConsumer, MailerService], }) const result = await harness.execute(sendEmailTask, { to: "alice@example.com", subject: "Welcome", }) expect(result.state).toBe("completed") expect(result.result?.messageId).toBeDefined() await harness.close() }) ``` ## What `createTaskoraTestHarness` does 1. 
Compiles a Nest `TestingModule` with `TaskoraTestingModule.forRoot({ autoStart: true })` pre-imported. 2. Calls `moduleRef.init()`, which runs `TaskoraExplorer.onApplicationBootstrap`: * Every `@TaskConsumer` in your providers gets `app.implement(contract, handler)` called with the DI-managed instance's `process` bound as the handler. * Every `@OnTaskEvent` method gets wired via `task.on(event, …)`. * Every class middleware gets resolved and registered via `app.use(...)`. * `app.start()` runs, spinning up the worker loop and subscribe stream. 3. Resolves `TaskoraRef` from the DI graph. 4. Returns a `TaskoraTestHarness` that routes `dispatch` / `execute` through the same running App. The worker loop runs against the memory adapter (taskora's memory backend implements proper blocking dequeue, so there's no busy-spin and `close()` is clean). Subscribe runs against the same adapter. **`@OnTaskEvent` bindings fire exactly like in production** — they aren't simulated. ## Harness API ### `dispatch(contract, data, options?)` Fire-and-forget dispatch. Returns a `ResultHandle` synchronously — await `handle.result` to wait for processing: ```ts const handle = harness.dispatch(processImageTask, { url: "..." 
})
const { width } = await handle.result
```

### `execute(contract, data, options?)`

Dispatch + wait for terminal state + return a compact `ExecuteResult`:

```ts
interface ExecuteResult<TOutput> {
  id: string
  state: Taskora.JobState // "completed" | "failed" | "cancelled" | "expired"
  result: TOutput | undefined
  error: string | undefined
  attempts: number
  logs: Taskora.LogEntry[]
  progress: number | Record<string, number> | undefined
  timeline: { dispatched: number; processed?: number; finished?: number }
}
```

**Errors are not re-thrown.** If the handler throws and retries are exhausted, `state === "failed"` and `error` carries the message:

```ts
it("retries a flaky handler up to the configured limit", async () => {
  harness = await createTaskoraTestHarness({
    providers: [FlakyConsumer],
  })

  const result = await harness.execute(flakyTask, { boom: true })

  expect(result.state).toBe("failed")
  expect(result.error).toMatch(/boom/)
  expect(result.attempts).toBe(3) // 1 initial + 2 retries
})
```

### `inspect(contract, jobId)`

Full `JobInfo` record from the inspector — data, result, logs, progress, timeline, attempt history:

```ts
const handle = harness.dispatch(sendEmailTask, { to: "bob@x" })
await handle.result

const info = await harness.inspect(sendEmailTask, handle.id)
expect(info?.logs).toContainEqual(expect.objectContaining({ message: "sending" }))
```

### `close()`

Tears down the Nest testing module, which runs `TaskoraExplorer.onApplicationShutdown` → `app.close()`. Workers drain, subscribe stops, Redis connections (or memory backend timers) clean up. Always call this in a test `afterEach` / `finally`.
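The close-always rule is easy to violate when an assertion throws before the closing line runs. A runnable sketch of the try/finally shape that avoids this (`FakeHarness` is a local stand-in for the real harness, so the pattern runs without Nest):

```typescript
// FakeHarness stands in for TaskoraTestHarness. The point: close() must run
// even when the test body throws, or worker loops leak between tests.
class FakeHarness {
  closed = false
  async close() { this.closed = true }
}

async function withHarness<T>(body: (h: FakeHarness) => Promise<T>): Promise<T> {
  const harness = new FakeHarness()
  try {
    return await body(harness)
  } finally {
    await harness.close() // runs on success and on assertion failure alike
  }
}

let leaked: FakeHarness | undefined
await withHarness(async (h) => {
  leaked = h
  throw new Error("assertion failed mid-test")
}).catch(() => { /* swallowed here; a real test runner would report it */ })
// leaked.closed is true: teardown ran despite the throw
```

With vitest, the equivalent discipline is a shared `let harness` assigned in each test and an `afterEach(() => harness?.close())`.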
## Accessing `moduleRef` and `app` The harness exposes the underlying testing module, the raw App, and a `TaskoraRef` for tests that need finer control: ```ts harness.moduleRef // Nest TestingModule — use .get(Class) to resolve providers harness.app // raw taskora App — for inspector, dlq, schedules access harness.tasks // TaskoraRef — same one your services get via DI ``` Use `harness.moduleRef.get(SomeService)` to assert on service state after a job runs: ```ts await harness.execute(sendEmailTask, { to: "alice@x", subject: "Hi" }) const mailer = harness.moduleRef.get(MailerService) expect(mailer.sent).toEqual(["alice@x"]) ``` ## Verifying `@OnTaskEvent` fires Because the harness runs the real subscribe stream, event bindings fire naturally: ```ts @TaskConsumer(sendEmailTask) class SendEmailConsumer { completedCount = 0 async process(data: { to: string }) { return { sent: true, messageId: "abc" } } @OnTaskEvent("completed") onDone() { this.completedCount += 1 } } it("fires @OnTaskEvent('completed') on success", async () => { const harness = await createTaskoraTestHarness({ providers: [SendEmailConsumer], }) await harness.execute(sendEmailTask, { to: "alice@x" }) const consumer = harness.moduleRef.get(SendEmailConsumer) expect(consumer.completedCount).toBeGreaterThanOrEqual(1) await harness.close() }) ``` ## `TaskoraTestingModule` — the lower-level primitive If you want a Nest testing module without the harness abstraction (e.g. 
for tests that don't dispatch jobs at all — just verify providers resolve, or sanity-check that your module imports `TaskoraModule` correctly): ```ts import { Test } from "@nestjs/testing" import { TaskoraTestingModule } from "@taskora/nestjs/testing" it("wires the EmailService with its TaskoraRef", async () => { const moduleRef = await Test.createTestingModule({ imports: [TaskoraTestingModule.forRoot()], // memory adapter + autoStart: false providers: [EmailService, MailerService], }).compile() await moduleRef.init() const svc = moduleRef.get(EmailService) expect(svc).toBeDefined() await moduleRef.close() }) ``` Defaults: * `adapter` → `memoryAdapter()` * `autoStart` → `false` (no worker, no subscribe — just DI wiring) Override either to run integration-style tests against a real Redis or to start the App explicitly. ## Custom `imports` and taskora options `createTaskoraTestHarness` accepts the full Nest `ModuleMetadata` plus a `taskora` field for the underlying `TaskoraTestingModule.forRoot` options: ```ts const harness = await createTaskoraTestHarness({ imports: [ ConfigModule.forRoot({ isGlobal: true }), TypeOrmModule.forRoot({ type: "sqlite", database: ":memory:" }), ], providers: [SendEmailConsumer, MailerService, UserRepository], taskora: { // Override taskora defaults for this test middleware: [LoggingMiddleware], defaults: { retry: { attempts: 1 } }, // fail fast in tests }, }) ``` This lets you compose the harness with any other Nest module the consumer depends on (config, TypeORM, Prisma, etc.) without losing the taskora wiring. ## Snapshot: vitest config for decorator metadata The harness drives real DI, which means the test runner must emit decorator metadata (`experimentalDecorators` + `emitDecoratorMetadata` equivalents). Vitest's default esbuild transform **does not** emit metadata — consumers inject as `undefined` and every job throws. The fix is one plugin. 
Install it: ::: pm-add -D unplugin-swc @swc/core ::: And wire it in `vitest.config.ts`: ```ts import swc from "unplugin-swc" import { defineConfig } from "vitest/config" export default defineConfig({ plugins: [ swc.vite({ jsc: { parser: { syntax: "typescript", decorators: true }, transform: { legacyDecorator: true, decoratorMetadata: true }, }, }), ], test: { setupFiles: ["./tests/setup.ts"], }, }) ``` Plus a one-liner setup file loading reflect-metadata: ```ts // tests/setup.ts import "reflect-metadata" ``` Without this, you'll see cryptic "cannot read property 'for' of undefined" errors inside services that inject `TaskoraRef`. Production builds (tsc, swc, nest build) emit metadata by default — only Vitest's default transform skips it. ## Virtual time The harness uses real time. If you need virtual time (fast-forwarding delayed jobs, testing schedules, deterministic retries), drop down to taskora's own [`taskora/test`](/testing/virtual-time) subpath directly — construct a fresh `App` and wrap it in `createTestRunner()`. The harness deliberately doesn't try to merge the two worlds because dual-backend setups (runner's backend vs subscribe's backend) are easy to get wrong. Typical split: * **Harness** — end-to-end DI tests, `@TaskConsumer` integration, `@OnTaskEvent` bindings, middleware chains. * **`taskora/test` `createTestRunner`** — schedule tests, retry backoff timing, flow-control (debounce/throttle/dedupe), workflow composition. ## Integration testing against real Redis If you want to test against a real Redis (e.g. in CI with a service container or testcontainers), pass the real adapter via `taskora.adapter`: ```ts import { redisAdapter } from "taskora/redis" import { Redis } from "ioredis" beforeAll(async () => { redis = new Redis(process.env.REDIS_URL!) 
}) afterAll(async () => { await redis.quit() }) it("runs against a real Redis container", async () => { const harness = await createTaskoraTestHarness({ providers: [SendEmailConsumer, MailerService], taskora: { adapter: redisAdapter({ client: redis }), }, }) const result = await harness.execute(sendEmailTask, { to: "alice@x" }) expect(result.state).toBe("completed") await harness.close() }) ``` The rest of the harness API is unchanged — only the adapter swap differs between unit and integration tests. --- --- url: 'https://kravetsone.github.io/taskora/taskora/testing.md' --- # Testing Taskora ships with first-class testing utilities via `taskora/test` — no Redis, no Docker, no flaky tests. ::: tip Looking for how Taskora itself is tested? See [Cross-runtime CI](/testing/cross-runtime). The library runs its complete 300-test integration suite against **Node 24, Node 20, Bun 1.3+ (with both ioredis and native `Bun.RedisClient`), and Deno 2.x** on every push. That is **1,500 live-Redis test runs per commit** — and the publish workflow is gated on every matrix cell being green, so no release can ship with a red runtime. ::: ## Installation The test utilities use the in-memory adapter internally. No additional dependencies required. 
```ts import { createTestRunner } from "taskora/test" ``` ## Creating a Test Runner ### Standalone Mode Create an isolated runner for unit testing individual tasks: ```ts import { describe, it, expect, afterEach } from "vitest" import { createTestRunner } from "taskora/test" const runner = createTestRunner() afterEach(() => runner.clear()) ``` ### From Instance Mode Patch all tasks from an existing instance to use the in-memory backend: ```ts import { createTestRunner } from "taskora/test" import { taskora, sendEmailTask, processImageTask } from "../src/tasks" const runner = createTestRunner({ from: taskora }) afterEach(() => runner.dispose()) // restores original adapters ``` This mode is powerful — multi-task chains and inter-task dispatches work in-memory without rewriting handlers. ## Two Execution Modes ### `runner.run(task, data)` — Direct Execution Calls the handler directly with an inline retry loop. No queue involved. ```ts const result = await runner.run(sendEmailTask, { to: "test@example.com", subject: "Test", }) expect(result).toEqual({ messageId: expect.any(String) }) ``` Best for: **unit testing** handler logic in isolation. ### `runner.execute(task, data)` — Full Pipeline Dispatches through the queue, processes, auto-advances retries, and returns a detailed result. ```ts const execution = await runner.execute(sendEmailTask, { to: "test@example.com", subject: "Test", }) expect(execution.state).toBe("completed") expect(execution.result).toEqual({ messageId: expect.any(String) }) expect(execution.attempts).toBe(1) expect(execution.logs).toHaveLength(1) ``` Best for: **integration testing** the full dispatch → process → result pipeline. See [run() vs execute()](/testing/run-vs-execute) for a detailed comparison. 
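The distinction can be pictured with a queue-less toy model. The `run`/`execute` functions below are local stand-ins that mirror the shape of the two modes, not the real `TestRunner` implementation:

```typescript
// Toy model of the two modes: `run` calls the handler directly; `execute`
// pushes through a retry loop and reports state + attempts.
type Handler<I, O> = (data: I) => Promise<O>

async function run<I, O>(handler: Handler<I, O>, data: I): Promise<O> {
  // No queue, no bookkeeping: just the handler logic.
  return handler(data)
}

async function execute<I, O>(
  handler: Handler<I, O>,
  data: I,
  maxAttempts = 3,
): Promise<{ state: "completed" | "failed"; result?: O; attempts: number }> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { state: "completed", result: await handler(data), attempts: attempt }
    } catch {
      // fall through to the next attempt, like the auto-advancing retry loop
    }
  }
  return { state: "failed", attempts: maxAttempts }
}

// A handler that fails twice, then succeeds:
let calls = 0
const flaky: Handler<object, string> = async () => {
  calls++
  if (calls < 3) throw new Error("transient")
  return "ok"
}

const direct = await run(async (d: { n: number }) => d.n * 2, { n: 21 }) // 42
const exec = await execute(flaky, {}) // state "completed" on the 3rd attempt
```

`run` gives you the raw return value (or the raw throw); `execute` never throws and instead surfaces the terminal state, which is why retry assertions belong in `execute`-style tests.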
## ExecutionResult

```ts
interface ExecutionResult<TOutput> {
  result: TOutput | undefined
  state: JobState
  attempts: number
  logs: LogEntry[]
  progress: number | Record<string, number> | null
  error: string | undefined
  handle: ResultHandle<TOutput>
}
```

---

--- url: 'https://kravetsone.github.io/taskora/taskora/features/ttl-expiration.md'
---

# TTL & Expiration

Control job lifetimes with TTL, singleton mode, and per-key concurrency limits.

## TTL (Time-to-Live)

Set a maximum lifetime for jobs. If a job isn't processed before its TTL expires, it's moved to the `expired` state.

### Task-Level TTL

```ts
taskora.task("time-sensitive", {
  ttl: {
    max: "1h",        // expire after 1 hour
    onExpire: "fail", // "fail" (default) or "discard"
  },
  handler: async (data, ctx) => { /* ... */ },
})
```

### Per-Job TTL

Override TTL at dispatch time:

```ts
timeSensitiveTask.dispatch(data, {
  ttl: "30m", // this specific job expires in 30 minutes
})
```

### Expiration Behavior

| `onExpire` | Behavior |
|---|---|
| `"fail"` | Job moves to `expired` state, stores `ExpiredError` |
| `"discard"` | Job is silently removed — no trace in failed/expired sets |

TTL is checked during dequeue (`moveToActive.lua`). Jobs that expire while `waiting` or `delayed` are caught when a worker tries to pick them up.

## Singleton

Ensure only **one job per task** is active at a time.

```ts
taskora.task("global-sync", {
  singleton: true,
  handler: async (data, ctx) => {
    await syncAllData() // only one instance runs at a time
  },
})
```

When a worker tries to dequeue and another job is already active, the dequeue is skipped (the job stays in the waiting queue with a 1s retry marker).

## Concurrency Per Key

Limit concurrent active jobs for a specific key:

```ts
processUserDataTask.dispatch(data, {
  concurrencyKey: `user:${userId}`,
  concurrencyLimit: 2, // max 2 concurrent jobs for this user
})
```

This uses an atomic counter in Redis (`INCR` on claim, `DECR` on ack/fail/nack/stall). If the limit is reached, the job waits in the queue.
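That INCR/DECR bookkeeping can be pictured with a runnable in-memory model (illustrative only: the real check happens atomically inside Redis, not in application code):

```typescript
// In-memory model of the per-key limiter. Redis does this atomically with
// INCR on claim and DECR on ack/fail/nack/stall; a Map shows the logic.
const active = new Map<string, number>()

function tryClaim(key: string, limit: number): boolean {
  const current = active.get(key) ?? 0
  if (current >= limit) return false // job stays in the waiting queue
  active.set(key, current + 1)       // INCR on claim
  return true
}

function release(key: string): void {
  active.set(key, Math.max(0, (active.get(key) ?? 0) - 1)) // DECR on ack/fail
}

// With limit 2, a third concurrent claim for the same user is refused:
tryClaim("user:42", 2) // true
tryClaim("user:42", 2) // true
tryClaim("user:42", 2) // false: both slots busy
release("user:42")     // one job finished
tryClaim("user:42", 2) // true again
```

Because claim and release bracket every job, a refused claim is never lost: the job simply stays queued and is retried on a later dequeue pass.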
### Use Cases

* Limit API calls per customer
* Prevent multiple concurrent writes to the same resource
* Rate-limit by tenant in a multi-tenant system

```ts
taskora.task("sync-account", {
  concurrencyLimit: 1, // task-level default
  handler: async (data: { accountId: string }, ctx) => {
    await syncAccount(data.accountId)
  },
})

// Per-dispatch override
syncAccountTask.dispatch(data, {
  concurrencyKey: `account:${data.accountId}`,
  concurrencyLimit: 1, // one sync per account at a time
})
```

---

--- url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/BackoffStrategy.md
---

[Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / BackoffStrategy

# Type Alias: BackoffStrategy

> **BackoffStrategy** = `"fixed"` | `"exponential"` | `"linear"` | ((`attempt`) => `number`)

Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:760

---

--- url: >-
  https://kravetsone.github.io/taskora/taskora/api/taskora/type-aliases/CompatResult.md
---

[Taskora API Reference](../../index) / [index](../index) / CompatResult

# Type Alias: CompatResult

> **CompatResult** = { `ok`: `true`; } | { `code`: `"theirs_too_new"` | `"theirs_too_old"` | `"invalid_meta"`; `message`: `string`; `ok`: `false`; }

Defined in: packages/taskora/dist/index.d.mts:260

Result of comparing our compiled meta against the meta persisted in the backend.

## Union Members

### Type Literal

{ `ok`: `true`; }

***

### Type Literal

{ `code`: `"theirs_too_new"` | `"theirs_too_old"` | `"invalid_meta"`; `message`: `string`; `ok`: `false`; }

| Name | Type | Description | Defined in |
| ------ | ------ | ------ | ------ |
| `code` | `"theirs_too_new"` \| `"theirs_too_old"` \| `"invalid_meta"` | Short machine-readable reason code. | packages/taskora/dist/index.d.mts:265 |
| `message` | `string` | Human-readable explanation safe to include in an error.
| packages/taskora/dist/index.d.mts:267 | | `ok` | `false` | - | packages/taskora/dist/index.d.mts:263 | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/Duration.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / Duration # Type Alias: Duration > **Duration** = `Duration` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1197 --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/type-aliases/Duration.md --- [Taskora API Reference](../../index) / [index](../index) / Duration # Type Alias: Duration > **Duration** = `number` | `` `${number}s` `` | `` `${number}m` `` | `` `${number}h` `` | `` `${number}d` `` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:78 --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/InferInput.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / InferInput # Type Alias: InferInput\<T\> > **InferInput**<`T`> = `InferInput`<`T`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:682 Namespaced alias of the top-level `InferInput` helper. Use this form when top-level `InferInput` would collide with another library's export (e.g. Zod): ```ts import type { Taskora } from "taskora" type Input = Taskora.InferInput<typeof myTask> // myTask: any task, contract, or signature ``` ## Type Parameters | Type Parameter | | ------ | | `T` | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/type-aliases/InferInput.md --- [Taskora API Reference](../../index) / [index](../index) / InferInput # Type Alias: InferInput\<T\> > **InferInput**<`T`> = `T` *extends* [`TaskContract`](../interfaces/TaskContract)<infer I, `any`> ? `I` : `T` *extends* [`Task`](../classes/Task)<infer I, `any`> ? `I` : `T` *extends* [`BoundTask`](../classes/BoundTask)<infer I, `any`> ? `I` : `T` *extends* [`Signature`](../classes/Signature)<infer I, `any`> ? `I` : `T` *extends* [`ChainSignature`](../classes/ChainSignature)<infer I, `any`> ?
`I` : `never` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:632 Infer the input type of anything that carries one: `Task`, `BoundTask`, `TaskContract`, or a `Signature` produced by `task.s()`. For handle types that only carry an output (`ResultHandle`, `WorkflowHandle`, group/chord signatures) this resolves to `never`. ## Type Parameters | Type Parameter | | ------ | | `T` | ## Example ```ts import type { InferInput } from "taskora" const sendEmailTask = defineTask({ name: "send-email", input: z.object({ to: z.string(), subject: z.string() }), }) type EmailInput = InferInput<typeof sendEmailTask> // { to: string; subject: string } ``` --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/InferOutput.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / InferOutput # Type Alias: InferOutput\<T\> > **InferOutput**<`T`> = `InferOutput`<`T`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:688 Namespaced alias of the top-level `InferOutput` helper. Use this form when top-level `InferOutput` would collide with another library's export. ## Type Parameters | Type Parameter | | ------ | | `T` | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/type-aliases/InferOutput.md --- [Taskora API Reference](../../index) / [index](../index) / InferOutput # Type Alias: InferOutput\<T\> > **InferOutput**<`T`> = `T` *extends* [`TaskContract`](../interfaces/TaskContract)<`any`, infer O> ? `O` : `T` *extends* [`Task`](../classes/Task)<`any`, infer O> ? `O` : `T` *extends* [`BoundTask`](../classes/BoundTask)<`any`, infer O> ? `O` : `T` *extends* [`ResultHandle`](../classes/ResultHandle)<infer O> ? `O` : `T` *extends* [`WorkflowHandle`](../classes/WorkflowHandle)<infer O> ? `O` : `T` *extends* [`Signature`](../classes/Signature)<`any`, infer O> ? `O` : `T` *extends* [`ChainSignature`](../classes/ChainSignature)<`any`, infer O> ?
`O` : `T` *extends* [`GroupSignature`](../classes/GroupSignature)<infer O> ? `O` : `T` *extends* [`ChordSignature`](../classes/ChordSignature)<infer O> ? `O` : `never` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:661 Infer the output type of anything that carries one: `Task`, `BoundTask`, `TaskContract`, `ResultHandle`, `WorkflowHandle`, or any workflow `Signature` (`Signature`, `ChainSignature`, `GroupSignature`, `ChordSignature`). This supersedes the previous workflow-only `InferOutput` — workflow code that already uses it keeps working since `Signature` variants are still handled. ## Type Parameters | Type Parameter | | ------ | | `T` | ## Example ```ts import type { InferOutput } from "taskora" const processImageTask = defineTask({ name: "process-image", input: z.object({ url: z.string() }), output: z.object({ width: z.number(), height: z.number() }), }) type ImageResult = InferOutput<typeof processImageTask> // { width: number; height: number } const handle = processImageTask.dispatch({ url: "..." }) type SameResult = InferOutput<typeof handle> // { width: number; height: number } ``` --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/JobState.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / JobState # Type Alias: JobState > **JobState** = `"waiting"` | `"delayed"` | `"active"` | `"completed"` | `"failed"` | `"retrying"` | `"cancelled"` | `"expired"` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:759 --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/Middleware.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / Middleware # Type Alias: Middleware > **Middleware** = (`ctx`, `next`) => `Promise`<`void`> | `void` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:904 ## Parameters | Parameter | Type | | ------ | ------ | | `ctx` | [`MiddlewareContext`](../interfaces/MiddlewareContext) | |
`next` | () => `Promise`<`void`> | ## Returns `Promise`<`void`> | `void` --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/MissedPolicy.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / MissedPolicy # Type Alias: MissedPolicy > **MissedPolicy** = `"skip"` | `"catch-up"` | `` `catch-up-limit:${number}` `` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1198 --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/TaskContract.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / TaskContract # Type Alias: TaskContract\<TInput, TOutput\> > **TaskContract**<`TInput`, `TOutput`> = `TaskContract`<`TInput`, `TOutput`> Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:671 A task contract — a pure declaration of a task's name, schemas, and defaults, with no runtime dependency on `App`/`Worker`/`Adapter`. Created via `defineTask()` or `staticContract()`. See the `TaskContract` export from `taskora` for the full documentation. ## Type Parameters | Type Parameter | Default type | | ------ | ------ | | `TInput` | `unknown` | | `TOutput` | `unknown` | --- --- url: >- https://kravetsone.github.io/taskora/taskora/api/taskora/namespaces/Taskora/type-aliases/WorkflowState.md --- [Taskora API Reference](../../../../index) / [index](../../../index) / [Taskora](../index) / WorkflowState # Type Alias: WorkflowState > **WorkflowState** = `"running"` | `"completed"` | `"failed"` | `"cancelled"` Defined in: packages/taskora/dist/types-KwmL9wkN.d.mts:1225 --- --- url: 'https://kravetsone.github.io/taskora/taskora/operations/upgrading.md' --- # Upgrading Taskora **Upgrade taskora the way you'd upgrade any other dependency — bump the version, redeploy, carry on.** We've thought hard about this and built several layers of machinery specifically so you don't have to worry about it.
This page explains what's in place, not what you have to do. ## Why you can just upgrade Taskora's storage layout (keys, Lua scripts, job hash fields, stream events) is treated as a load-bearing contract. We've put a belt-and-suspenders system in place to make sure a release that would silently break your production queues cannot land. Here's what's protecting you: **A frozen wire-format snapshot runs on every CI build.** `tests/unit/wire-format-snapshot.test.ts` pins down every Redis key the library constructs, the SHA-256 of every Lua script we ship, and both version constants. Any PR that drifts a single character of the persistence surface — a key rename, a comment tweak in a Lua script, a new stable hash field — fails CI with a clear "you are touching the wire format" reminder. A drive-by edit cannot sneak through review. **The version constants are decoupled from `package.json`.** A bug-fix release that never touches storage doesn't move `WIRE_VERSION` or `MIN_COMPAT_VERSION`. The identifier stored in your Redis (`taskora-wire-<version>`) stays stable across every release that doesn't change the format, so there is no "did I remember to bump the version on release day" failure mode. **Rolling upgrades work by default for the common case.** When a new version adds a field or a new event that older code simply ignores, both versions can run against the same Redis simultaneously — their compatibility windows overlap automatically. No coordination, no downtime, no staged rollout required. **A runtime handshake runs before workers touch anything.** On `app.start()`, taskora does one small atomic read-or-init against a `taskora:meta` record in your Redis, compares the stored wire version against the running build's own, and stops the process dead before any worker, scheduler, or dispatch runs if they don't line up. The worst case is a clean exit with a clear error — your data is never in an ambiguous state.
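The handshake's decision reduces to comparing two version windows. Here is a minimal sketch of that comparison under the semantics described above — the names and shapes are illustrative, not taskora's internal API, and the auto-migrator path (for stored formats the new build knows how to upgrade) is omitted:

```typescript
// Illustrative sketch of the startup handshake's compatibility check.
// The real logic lives inside taskora's storage backend.
interface WireMeta {
  wireVersion: number // the format this build writes
  minCompat: number   // the oldest format this build can still read
}

type HandshakeResult = "ok" | "theirs_too_new" | "theirs_too_old" | "invalid_meta"

function checkCompat(ours: WireMeta, theirs: WireMeta | null): HandshakeResult {
  if (theirs === null) return "invalid_meta" // taskora:meta missing or unreadable
  // The keyspace was written by a build whose format we cannot read.
  if (theirs.minCompat > ours.wireVersion) return "theirs_too_new"
  // The keyspace predates the oldest format we support.
  if (theirs.wireVersion < ours.minCompat) return "theirs_too_old"
  return "ok" // compatibility windows overlap — rolling upgrade is safe
}

// A v3 build whose minCompat is still 2 can share Redis with a v2 keyspace:
checkCompat({ wireVersion: 3, minCompat: 2 }, { wireVersion: 2, minCompat: 2 }) // "ok"
// A v1 worker against a keyspace already bumped to minCompat 2 refuses to start:
checkCompat({ wireVersion: 1, minCompat: 1 }, { wireVersion: 2, minCompat: 2 }) // "theirs_too_new"
```

The non-`ok` outcomes correspond to the `SchemaVersionMismatchError.code` values documented below.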
**Breaking wire changes are reserved for major releases, if we do them at all.** Cosmetic renames and "while I'm in here" cleanups will never change the format. The actual bar for a breaking change is "we couldn't fix a real correctness bug any other way," and even then we'd prefer to ship an additive escape hatch first. **A built-in wire-format upgrader is wired into the handshake.** When a taskora release genuinely has to change the persisted format, the new version ships with a Lua migrator that runs atomically as part of `app.start()`'s handshake against the storage backend. You deploy the new version, the first worker to reach `app.start()` notices the format needs upgrading, runs the in-place conversion, bumps the stored wire-version marker, and continues as normal. The migration is per-key atomic (via a single Lua script per shared data structure, so no concurrent writer can interleave), idempotent (safe to re-run if interrupted), and paced across the keyspace via `SCAN` so Redis is never blocked on one huge script. You should see a one-line log entry like `[taskora] migrated N :wait lists to wireVersion M` and nothing else. For the operator, the important thing is: **there is nothing to do but deploy.** **A cluster-wide coordination protocol keeps multiple workers safe during a migration.** When a wire-format change rolls out, it's normal to have a mix of old-version and new-version workers alive at the same time — that's just what deployment looks like. Taskora reserves two keys for exactly this case: * `taskora:<prefix>:migration:lock` — a short-lived marker set by whichever worker is running the migration, with a JSON payload naming the target version (`{ targetWireVersion, reason, startedAt, expectedDurationMs }`) and a finite TTL so a crashed migrator can't pin the cluster forever.
* `taskora:<prefix>:migration:broadcast` — a pub/sub channel used to signal "re-check the lock" to every subscribed worker, so the pause happens instantly instead of waiting for the next periodic poll. Every taskora release from wireVersion=2 onward subscribes to the broadcast channel on startup and watches the lock. When it sees a lock whose `targetWireVersion` covers its own wire version, it pauses all hot-path Redis operations (dispatches, dequeues, updates) until the lock clears — no writes, no reads of data that might be mid-rewrite. When the lock is released (or its TTL expires), workers resume automatically. A safety-net poller re-checks the lock every 30 seconds in case a broadcast message is missed during a reconnect. Cluster races are resolved by `SET NX` on the lock: if ten new-version workers start simultaneously against an old-version keyspace, exactly one acquires the lock and runs the migration, the other nine park until it finishes and then re-handshake cleanly. This is all automatic and invisible to your application code — `task.dispatch()` that happens to fall during a migration simply takes slightly longer to resolve, no errors, no retries required. The whole protocol exists so that the answer to "what do I need to do to upgrade taskora across a breaking wire change?" stays **"nothing — deploy the new version, keep working"**. Put together: the compatibility error documented below should not happen to you in practice. It's the last line of defense, present so that **if** something ever does slip through the combination of CI snapshot tests, bump policy, rolling-upgrade semantics, auto-migration, and code review, the failure mode is a clean fail-fast at startup — not silently corrupted queues. ## What it looks like if you do hit it `app.start()` throws `SchemaVersionMismatchError`. No data is mutated, no jobs are lost.
The error carries structured fields you can feed into logs and alerts: ```ts import { SchemaVersionMismatchError } from "taskora" try { await app.start() } catch (err) { if (err instanceof SchemaVersionMismatchError) { // err.code: "theirs_too_new" | "theirs_too_old" | "invalid_meta" // err.ours: { wireVersion, minCompat, writtenBy } // err.theirs: { wireVersion, minCompat, writtenBy, writtenAt } console.error(err.message) process.exit(1) } throw err } ``` The `code` field tells you what's going on and how to resolve it: | Code | Meaning | Resolution | |---|---|---| | `theirs_too_new` | A newer taskora build already wrote to this Redis | Upgrade this process to match, or point it at a different Redis keyspace | | `theirs_too_old` | This Redis was written by a much older taskora | Upgrade in smaller steps, or drain the old queues first | | `invalid_meta` | The `taskora:meta` hash is corrupt or written by something else | Investigate who wrote it; if it's garbage, delete the key and retry | You should never have to use this table, but it's here in case you do. ## Known breaking upgrades Most releases are additive and roll through without intervention. When a release changes the persisted format in a way that the previous version cannot read, we call it out here. In every case we aim to ship an automatic migrator so operators still just "upgrade taskora, redeploy, done" — you should only ever need to read these sections if something goes wrong. ### wireVersion 1 → 2 — priority-aware wait list **What changed.** The `:wait` list was promoted from a Redis `List` to a `Sorted set` so `DispatchOptions.priority` can actually order dequeues. Before this release, `priority` was a decorative field — stored on the job hash but completely ignored by the wait-list dequeue path. A wireVersion=2 worker now always dequeues a higher-priority waiting job before any lower-priority one. 
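Since a sorted set dequeues by lowest score first, "higher priority wins, FIFO within a priority" only needs a score that encodes both. The exact formula taskora uses is internal; this is one illustrative encoding, not the real one:

```typescript
// Hypothetical score encoding for a priority-aware :wait sorted set —
// not taskora's actual formula. Lower score = dequeued first.
function waitScore(priority: number, enqueuedAtMs: number): number {
  // Give each priority level its own band; within a band, earlier enqueue
  // timestamps sort first (FIFO). 2^42 comfortably exceeds millisecond epoch
  // timestamps, and the products stay inside the 2^53 safe-integer range
  // for small priority values.
  const BAND = 2 ** 42
  return -priority * BAND + enqueuedAtMs
}

const waiting = [
  { id: "a", score: waitScore(0, 1_000) },
  { id: "b", score: waitScore(5, 2_000) }, // high priority, enqueued later
  { id: "c", score: waitScore(5, 1_500) }, // high priority, enqueued earlier
]
waiting.sort((x, y) => x.score - y.score)
// Dequeue order: c, b (priority 5 in FIFO order), then a (priority 0)
```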
**Upgrade is automatic.** `RedisBackend.handshake()` detects stored wireVersion=1 meta on connect and runs an in-place migrator before any worker or scheduler touches the backend. The migrator walks the keyspace with `SCAN`, and for every `:wait` key of type `list` it reads all job IDs via chunked `LRANGE`, pipelines `HMGET priority,ts` lookups in batches, computes scores, and atomically swaps the list for a sorted set carrying the same members via a tiny `DEL + RENAME` Lua script. Redis is never blocked on a long script — chunk size is bounded to 500 IDs per batch — so even a keyspace with thousands of tasks and millions of waiting jobs migrates without triggering timeouts or failover. You should see a one-line log entry the first time a wireVersion=2 process starts against a wireVersion=1 Redis ("migrated N `:wait` lists to sorted sets") and nothing more. If the migration fails mid-run (e.g. Redis disconnects), the next `app.start()` retries from wherever it left off — individual `:wait` conversions are atomic per key, so partial state is safe to resume. **Why a rolling upgrade with mixed v1 and v2 workers is still not safe.** `DispatchOptions.priority` is implemented by switching the `:wait` key from a Redis `List` to a `Sorted set` — a wireVersion=1 worker reading a `ZADD`-created wait set hits `WRONGTYPE` on the first `LPUSH`/`RPOP`, and vice versa. v1 predates the migration coordination protocol: it doesn't subscribe to the broadcast channel and doesn't check the lock, so a running v1 worker can't be asked to pause. `MIN_COMPAT_VERSION` is bumped to 2 so if you do run both versions against one Redis during a staged rollout, v1's handshake refuses to start with a `theirs_too_new` error — you never get a half-converted keyspace, but v1 instances already past handshake will crash when they next touch `:wait`. Stop v1 workers before starting v2. 
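The shape of that migrator — scan, convert, skip what's already converted — can be sketched in memory. Everything here (key layout, score formula, types) is an illustrative assumption, not taskora's actual keyspace:

```typescript
// In-memory sketch of a list → sorted-set wait-list migration that is
// idempotent and safe to resume. Not taskora's implementation.
type WaitList = string[]                             // wireVersion 1 layout
type WaitSet = Array<{ id: string; score: number }>  // wireVersion 2 layout

interface JobMeta { priority: number; ts: number }

function migrateWaitKeys(
  keyspace: Map<string, WaitList | WaitSet>,
  jobs: Map<string, JobMeta>,
): number {
  let converted = 0
  for (const [key, value] of keyspace) {
    if (!key.endsWith(":wait")) continue
    // Idempotency: a key that already holds score-carrying members is a
    // no-op, so an interrupted run can simply be retried from the top.
    if (value.length > 0 && typeof value[0] !== "string") continue
    const members = (value as WaitList).map((id) => {
      const job = jobs.get(id) ?? { priority: 0, ts: 0 }
      return { id, score: -job.priority * 2 ** 42 + job.ts }
    })
    members.sort((a, b) => a.score - b.score)
    keyspace.set(key, members)
    converted++
  }
  return converted
}
```

In the real migrator the per-key swap is a single Lua script, so no concurrent writer can observe a half-converted key.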
**From wireVersion 2 onward this gets easier.** The coordination protocol described above is wired in as of v2, so future wire-format changes (v2 → v3 and beyond) can run with live workers on both sides. The new-version migrator will acquire the migration lock, broadcast the halt signal, and old-version workers will pause automatically while the rewrite runs. v1 → v2 is the one exception you'll have to think about — every upgrade after it should be invisible. **If you'd rather migrate manually.** The auto-migrator is idempotent and you can skip it by draining the `:wait` lists yourself before switching versions: stop dispatching, wait for in-flight work to finish, confirm `LLEN taskora:{<task>}:wait` is zero on every task, then deploy. Nothing breaks if you do — the new version just finds an empty keyspace and writes fresh sorted sets from scratch. ### wireVersion 5 → 6 — single-hash job storage **What changed.** Every job used to occupy four Redis keys: the metadata hash, a `:data` string sibling holding the serialized input, a `:result` string sibling holding the serialized output, and a `:lock` string (still separate — it needs `SET … PX` atomicity). wireVersion 6 collapses `:data` and `:result` into fields on the metadata hash itself, so the common path is one hash plus an optional lock, full stop. Hot-path wins — three fewer `redis.call()` invocations per job on the happy path: | Operation | wireVersion 5 (split) | wireVersion 6 (single hash) | |---|---|---| | Enqueue | `HSET fields` + `SET :data` | `HSET fields + data` | | Claim | `RPOPLPUSH` + `HMGET meta` + `GET :data` | `RPOPLPUSH` + `HMGET meta + data` | | Ack | `SET :result` + `HSET state + finishedOn` | `HSET state + finishedOn + result` | At c=100 that's roughly 16 % less Lua-script Redis CPU per job, stacked on top of Phase 3A's O(1) wait-list dequeue.
In enqueue benchmarks, memory per job drops ~40–50 % for payloads that fit inside the hash listpack encoding, because one keyspace slot now holds what three used to. **Upgrade is a hard gate.** A wireVersion 5 worker that issues `GET :data` against a wireVersion 6 job hits `nil` — the string sibling no longer exists — and would silently drop the payload. `MIN_COMPAT_VERSION` bumps to 6 so the handshake refuses the mismatched pair: a v5 worker starting against a v6 keyspace throws `SchemaVersionMismatchError` before any job touches disk. Drain queues or flush the keyspace before rolling workers — same protocol as 1 → 2 and 4 → 5. **Automatic migration.** The first wireVersion 6 process to connect acquires the shared migration lock and runs `MIGRATE_JOBS_V5_TO_V6`: it `SCAN`s the keyspace for `:data` and `:result` string siblings and invokes a per-key Lua script for each hit, copying the string value into the matching hash field and deleting the string. The script is idempotent — jobs whose hash already carries the field and orphaned string siblings are both no-ops — so a partial run followed by a retry picks up where it left off. You should see a one-line log entry the first time a wireVersion 6 process connects, and nothing more. **Redis tuning for large job payloads.** Redis 7 keeps a hash in compact `listpack` encoding as long as: * number of fields ≤ `hash-max-listpack-entries` (default `128`) * every value ≤ `hash-max-listpack-value` bytes (default `64`) If your jobs routinely carry `data` or `result` larger than 64 bytes, the hash flips to `hashtable` encoding at the first oversize value, and the per-field overhead jumps from ~2 B to ~80 B. Inside that narrow window (roughly 64 B–1 KB per field) the single-hash layout can end up ~20–30 % larger than the old split layout. The cure is one line in `redis.conf`: ``` hash-max-listpack-value 1024 ``` With that setting, payloads up to ~1 KB stay in listpack and wireVersion 6 is a memory win across the board.
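The two limits above make the encoding decision easy to sanity-check against your own payload sizes. A back-of-the-envelope helper — this mirrors the documented Redis rules, not any taskora API:

```typescript
// Whether a Redis 7 hash stays in the compact listpack encoding, given the
// byte length of each field value and the two config limits described above.
function staysListpack(
  fieldValueSizes: number[],
  maxEntries = 128, // hash-max-listpack-entries default
  maxValue = 64,    // hash-max-listpack-value default
): boolean {
  return (
    fieldValueSizes.length <= maxEntries &&
    fieldValueSizes.every((size) => size <= maxValue)
  )
}

// A job hash whose serialized data field is 300 bytes flips to hashtable
// under the defaults, but stays listpack with hash-max-listpack-value 1024:
staysListpack([16, 8, 300])            // false
staysListpack([16, 8, 300], 128, 1024) // true
```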
BullMQ — which has always used single-hash job storage — gives the same recommendation implicitly; we're calling it out explicitly because taskora's default (split storage) masked the need until now. See [Performance](/operations/performance#hash-max-listpack-value) for the full guide on memory-efficient Redis settings. **If you'd rather migrate manually.** Same as 1 → 2: drain your queues, confirm the task keyspace is empty, then deploy wireVersion 6. The new version finds a clean keyspace and writes fresh single-hash jobs from scratch. ## Not to be confused with task payload versioning There are two independent versioning systems in taskora. They solve different problems and you'll interact with them at very different frequencies. | | Wire format (this page) | Task payload ([Versioning & Migrations](../features/versioning)) | |---|---|---| | **Protects** | taskora's own Redis layout | your task's input data shape | | **Bumped by** | taskora maintainers, rarely | you, whenever you evolve a task schema | | **Configured in** | taskora source (internal) | `app.task({ version, since, migrate })` | | **How often it matters** | almost never — we made sure | every time you change task input | If you're looking for "how do I change my task's input schema without breaking in-flight jobs," that's the second column and it's covered in [Versioning & Migrations](../features/versioning). ## TL;DR * Upgrade taskora, redeploy, done. * The compatibility machinery is there so you don't have to think about it. * `SchemaVersionMismatchError` is the last-resort safety net — in practice our CI snapshot tests, bump policy, and rolling-upgrade semantics mean you shouldn't see it. * Wire-format versioning is the library's own internal safety net. Task payload versioning is the separate, user-facing mechanism for evolving your own tasks. 
--- --- url: 'https://kravetsone.github.io/taskora/taskora/features/versioning.md' --- # Versioning & Migrations Taskora supports task versioning with automatic data migration — deploy new task versions without losing in-flight jobs. ## The Problem When you change a task's input shape, jobs already in the queue have the old format. Without migration, they'll fail or produce wrong results. ## Three Levels of Versioning ### Level 1: Bump Version + Schema Defaults If the change is additive (new optional field), just bump the version and use schema `.default()`: ```ts const sendEmailTask = taskora.task("send-email", { version: 2, input: z.object({ to: z.string(), subject: z.string(), priority: z.enum(["low", "normal", "high"]).default("normal"), // new field }), handler: async (data, ctx) => { /* ... */ }, }) ``` Existing v1 jobs are validated — `.default("normal")` fills in the missing field. ### Level 2: Sparse Migrations For breaking changes, add migration functions: ```ts const sendEmailTask = taskora.task("send-email", { version: 3, since: 1, // oldest supported version migrate: { 2: (data) => ({ ...data, priority: "normal" }), // v1 → v2 3: (data) => ({ ...data, from: data.sender, sender: undefined }), // v2 → v3 }, handler: async (data, ctx) => { /* ... */ }, }) ``` Migrations run in order: v1 data goes through migration 2, then migration 3. ### Level 3: Tuple Migrations (Type-Safe) Strictest form — version is derived from `since + migrations.length`: ```ts import { into } from "taskora" const sendEmailTask = taskora.task("send-email", { since: 1, migrate: [ into(v2Schema, (data) => ({ ...data, priority: "normal" })), into(v3Schema, (data) => ({ ...data, from: data.sender })), ], // version is automatically 3 (since:1 + 2 migrations) input: v3Schema, handler: async (data, ctx) => { /* ... */ }, }) ``` The `into(schema, fn)` helper locks the return type to match the schema, catching migration errors at compile time. ## How It Works 1. 
`dispatch()` stamps every job with `_v = task.version` 2. Worker checks the job's version: * `_v > task.version` → **nack** (silently return to queue — wait for newer worker) * `_v < task.since` → **fail** (too old, no migration path) * `_v < task.version` → run migration chain, then validate * `_v === task.version` → validate directly 3. Schema validation runs **after** migration ## `since` — Minimum Supported Version `since` defines the oldest job version your task can process. Bump it when you're confident all old jobs have been drained: ```ts // Before: accepts v1, v2, v3 { version: 3, since: 1, migrate: { 2: fn, 3: fn } } // After draining old jobs: only v2+ supported { version: 3, since: 2, migrate: { 3: fn } } ``` ## Inspecting Version Distribution Use the inspector to check what versions are in your queues before bumping `since`: ```ts const status = await taskora.inspect().migrations("send-email") console.log(status) // { // version: 3, since: 1, migrations: 2, // queue: { oldest: 2, byVersion: { 2: 5, 3: 142 } }, // delayed: { oldest: 3, byVersion: { 3: 8 } }, // canBumpSince: 2 // safe to bump since to 2 // } ``` `canBumpSince` tells you the safe floor — the oldest version with jobs still in the queue. --- --- url: 'https://kravetsone.github.io/taskora/taskora/testing/virtual-time.md' --- # Virtual Time The test runner supports virtual time advancement — test delayed jobs, retries, and schedules without waiting. ## `runner.advanceTime(duration)` Fast-forward time to process delayed jobs and retries. 
```ts const runner = createTestRunner() // Dispatch a delayed job await runner.dispatch(sendEmailTask, data, { delay: 5000 }) // Still delayed — nothing in the waiting queue yet expect(runner.jobs.filter((j) => j.state === "waiting")).toHaveLength(0) // Advance time by 5 seconds await runner.advanceTime(5000) // The delayed job has moved to the waiting queue — process it await runner.processAll() ``` ## Auto-Advance in `execute()` When using `runner.execute()`, virtual time is **automatically advanced** for retry delays. You don't need to call `advanceTime()` manually. ```ts const result = await runner.execute(retryTask, data) // If the task retries with exponential backoff (1s, 2s, 4s), // execute() auto-advances time to trigger each retry attempt expect(result.attempts).toBe(3) expect(result.state).toBe("completed") ``` ## `runner.processAll()` Drain all waiting jobs in a single call: ```ts await runner.dispatch(taskA, data1) await runner.dispatch(taskA, data2) await runner.dispatch(taskB, data3) await runner.processAll() // processes all 3 jobs ``` ## `runner.flush(task, key?)` Force-flush collect task buffers: ```ts await runner.dispatch(batchInsert, { table: "users", row: { name: "Alice" } }) await runner.dispatch(batchInsert, { table: "users", row: { name: "Bob" } }) // Force flush without waiting for debounce timer await runner.flush(batchInsert, "db-inserts") await runner.processAll() ``` ## `runner.clear()` Reset all state between tests: ```ts afterEach(() => { runner.clear() }) ``` ## `runner.dispose()` Required when using `from: taskora` mode. Restores the original adapters on the instance's tasks. ```ts const runner = createTestRunner({ from: taskora }) afterEach(() => { runner.dispose() }) ``` ## Inspecting Jobs ```ts // Get all jobs with their states const jobs = runner.jobs // [{ id: "...", task: "send-email", state: "completed", ...
}] ``` --- --- url: 'https://kravetsone.github.io/taskora/taskora/recipes/webhook-delivery.md' --- # Webhook Delivery Reliable webhook delivery with exponential backoff and selective retry. ```ts import { createHmac } from "node:crypto" class Http4xxError extends Error { constructor(public status: number) { super(`HTTP ${status}`) } } const deliverWebhookTask = taskora.task("deliver-webhook", { retry: { attempts: 8, backoff: "exponential", delay: 1000, maxDelay: 3600_000, // cap at 1 hour noRetryOn: [Http4xxError], // don't retry client errors }, timeout: 30_000, handler: async (data: { url: string; payload: unknown; secret: string }, ctx) => { const body = JSON.stringify(data.payload) const signature = createHmac("sha256", data.secret).update(body).digest("hex") const res = await fetch(data.url, { method: "POST", headers: { "Content-Type": "application/json", "X-Webhook-Signature": signature, "X-Delivery-Id": ctx.id, "X-Delivery-Attempt": String(ctx.attempt), }, body, signal: ctx.signal, }) if (res.status >= 400 && res.status < 500) { throw new Http4xxError(res.status) // permanent failure } if (!res.ok) { throw new Error(`HTTP ${res.status}`) // retryable } return { status: res.status, deliveredAt: Date.now() } }, }) // Dispatch deliverWebhookTask.dispatch({ url: "https://api.partner.com/webhooks", payload: { event: "order.created", data: { id: "123" } }, secret: process.env.WEBHOOK_SECRET!, }) // Monitor DLQ for permanently failed deliveries const failures = await taskora.deadLetters.list({ task: "deliver-webhook" }) ``` **Retry schedule** with exponential backoff: 1s → 2s → 4s → 8s → 16s → 32s → 64s — with only 8 attempts the 1-hour `maxDelay` cap is never actually reached. Total window: ~2 minutes before giving up. --- --- url: 'https://kravetsone.github.io/taskora/taskora/guide.md' --- # Why Taskora Taskora is a distributed task queue for Node.js — designed from scratch in TypeScript with the developer experience that modern backends deserve. ## Philosophy 1. **Task-centric, not queue-centric** — You define tasks, not queues.
The queue is an implementation detail. 2. **Progressive disclosure** — `task.dispatch(data)` is enough to start. Retries, middleware, cron, monitoring unlock as you need them. 3. **Everything is composable** — Middleware chains, event listeners, and flow control options compose naturally. 4. **Type safety is DX** — If the types compile, the runtime works. No magic strings. No producer/consumer type drift. ## Features at a Glance | Capability | How | |---|---| | Type-safe end-to-end | Inferred types from handler to dispatch to result | | Schema validation | Any Standard Schema library (Zod, Valibot, ArkType) | | Workflows | Chain, group, chord — type-safe task pipelines | | Koa-style middleware | Onion model — app-level and per-task | | Versioning & migrations | Bump version, add migration, deploy without draining | | First-class testing | In-memory adapter + virtual time, no Docker needed | | Flow control | Debounce, throttle, deduplicate — all atomic Lua | | Batch collect | Accumulate items, flush on delay/maxSize/maxWait | | Graceful cancellation | Instant via Redis pub/sub, with `onCancel` hooks | | Distributed scheduling | Cron + intervals with leader election | | Inspector & DLQ | Query jobs, retry failures, auto-trim old entries | ## Package Structure ``` taskora — core engine, types, task API (zero DB deps) taskora/redis — Redis adapter (peer dep: ioredis) taskora/memory — in-memory adapter (zero DB deps) taskora/test — test runner (wraps memory adapter) ``` `ioredis` is an **optional peer dependency** — only required when using `taskora/redis`. The core package has zero database dependencies. --- --- url: 'https://kravetsone.github.io/taskora/taskora/features/workflows.md' --- # Workflows Compose tasks into pipelines — sequential chains, parallel groups, and fan-in chords. Inspired by Celery's Canvas, with full TypeScript type safety. ## Signatures A **Signature** is a snapshot of a task invocation — serializable and composable. 
```ts const sig = sendEmailTask.s({ to: "a@b.com", subject: "Welcome" }) // Type: Signature<{ to: string; subject: string }, { messageId: string }> ``` Two forms: | Call | Behavior | |---|---| | `task.s(data)` | Bound data — ignores pipeline input | | `task.s()` | Unbound — receives previous step's output | ## Chain Sequential pipeline. Each step's output flows as input to the next. TypeScript checks the entire chain at compile time. ```ts import { chain } from "taskora" const onboarding = chain( createUserTask.s({ name: "John", email: "john@example.com" }), // ^ returns { id: string } sendWelcomeEmailTask.s(), // ^ receives { id: string }, returns { messageId: string } notifySlackTask.s(), // ^ receives { messageId: string } ) const handle = onboarding.dispatch() const result = await handle.result ``` ### Pipe Syntax Fluent alternative with unlimited type-safe chaining: ```ts const result = await createUserTask .s({ name: "John", email: "john@example.com" }) .pipe(sendWelcomeEmailTask.s()) .pipe(notifySlackTask.s()) .dispatch() .result ``` `chain()` provides type overloads for up to 10 steps. `.pipe()` has no limit — each call is individually type-checked. ## Group Parallel execution. All signatures run concurrently, result is a typed tuple. ```ts import { group } from "taskora" const handle = group( processImageTask.s({ url: "img1.jpg", width: 800 }), processImageTask.s({ url: "img2.jpg", width: 800 }), processImageTask.s({ url: "img3.jpg", width: 800 }), ).dispatch() const result = await handle.result // Type: [ImageResult, ImageResult, ImageResult] ``` ## Chord Group + callback — parallel execution, then merge. The callback receives an array of all group results. 
```ts import { chord } from "taskora" const handle = chord( [ fetchPriceTask.s({ symbol: "AAPL" }), fetchPriceTask.s({ symbol: "GOOG" }), fetchPriceTask.s({ symbol: "MSFT" }), ], calculatePortfolioTask.s(), // ^ receives [PriceResult, PriceResult, PriceResult] ).dispatch() ``` ## Composability Compositions are themselves valid inputs to other compositions: ```ts const handle = chord( [ chain(fetchDataTask.s({ source: "api" }), transformTask.s()), chain(fetchDataTask.s({ source: "db" }), transformTask.s()), ], mergeTask.s(), ).dispatch() ``` Groups work as chain steps too: ```ts const handle = chain( fetchConfigTask.s({ env: "prod" }), group(buildFrontendTask.s(), buildBackendTask.s()), // ^ fans out config to both, collects results deployTask.s(), // ^ receives [FrontendResult, BackendResult] ).dispatch() ``` ## Map & Chunk Batch operations on a single task. ### Map Dispatch one job per item, all in parallel: ```ts const handle = processImageTask.map([ { url: "img1.jpg", width: 800 }, { url: "img2.jpg", width: 800 }, { url: "img3.jpg", width: 800 }, ]) const results = await handle.result // [ImageResult, ImageResult, ImageResult] ``` Equivalent to `group(task.s(item1), task.s(item2), ...).dispatch()`. ### Chunk Split into batches, process each batch as a parallel group, batches run sequentially: ```ts const handle = processImageTask.chunk(largeImageList, { size: 50 }) // Processes 50 at a time, then next 50, etc. 
``` ## WorkflowHandle All compositions return a `WorkflowHandle` on dispatch: ```ts const handle = chain(a.s(data), b.s()).dispatch() await handle // ensure dispatched (thenable) const result = await handle.result // wait for final result const state = await handle.getState() // "running" | "completed" | "failed" | "cancelled" await handle.cancel({ reason: "no longer needed" }) // cascade cancel ``` ### Workflow TTL Set a timeout on the entire workflow: ```ts const handle = chain(a.s(data), b.s(), c.s()).dispatch({ ttl: "5m", // auto-cancel if not completed within 5 minutes }) ``` Individual jobs still use their task-level TTL. The workflow TTL is an additional global timeout. ## How It Works All compositions flatten to a **DAG** (directed acyclic graph) of task nodes: ``` chain(a, b, c) → a → b → c group(a, b, c) → a, b, c (all parallel) chord([a, b], c) → a ─┐ b ─┤→ c chord([chain(a,b), chain(c,d)], e) → a → b ─┐ c → d ─┤→ e ``` At dispatch: 1. DAG is built, job IDs pre-generated for all nodes 2. Workflow state is stored in Redis as a single hash 3. Root nodes (no dependencies) are enqueued immediately 4. When a job completes, the worker advances the workflow — finds ready nodes and enqueues them 5. When all terminal nodes complete, the workflow is done Failures cascade: if any node fails permanently (no retries left), the entire workflow is marked failed and all active/pending nodes are cancelled. ## Bound Data vs Pipeline Taskora uses a **full-bind-or-pipe** model (not partial application): * `task.s(data)` — data is fixed, pipeline input is ignored * `task.s()` — receives entire previous step's output as input The first step in a chain must have bound data. Subsequent steps can either bind their own data (ignoring the pipeline) or receive from the previous step. This keeps type checking clean: each chain junction is a single constraint (`PrevOutput extends NextInput`). 
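Step 4 above — "the worker advances the workflow" — boils down to a readiness check over the DAG. A minimal model of that check, not taskora's internal representation:

```typescript
// A workflow node is ready once every one of its dependencies has completed.
interface Node { id: string; deps: string[] }

function readyNodes(dag: Node[], completed: Set<string>): string[] {
  return dag
    .filter((n) => !completed.has(n.id))                  // not yet run
    .filter((n) => n.deps.every((d) => completed.has(d))) // all deps done
    .map((n) => n.id)
}

// chord([a, b], c) flattens to two roots and one fan-in node:
const dag: Node[] = [
  { id: "a", deps: [] },
  { id: "b", deps: [] },
  { id: "c", deps: ["a", "b"] },
]
readyNodes(dag, new Set<string>())           // ["a", "b"] — roots enqueue immediately
readyNodes(dag, new Set<string>(["a"]))      // ["b"] — c still blocked on b
readyNodes(dag, new Set<string>(["a", "b"])) // ["c"] — fan-in fires
```

The failure cascade falls out of the same structure: once a node can never complete, every node that transitively depends on it can never become ready, so the workflow is marked failed.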
## Testing Workflows The test runner supports workflows out of the box: ```ts import { createTestRunner } from "taskora/test" import { chain } from "taskora" const runner = createTestRunner() const addTask = runner.app.task("add", async (data: { x: number; y: number }) => data.x + data.y) const doubleTask = runner.app.task("double", async (n: number) => n * 2) const handle = chain(addTask.s({ x: 3, y: 4 }), doubleTask.s()).dispatch() await handle // Process all workflow steps for (let i = 0; i < 10; i++) { await runner.processAll() if (await handle.getState() === "completed") break } const result = await handle.result // 14 // Track step execution console.log(runner.steps) // [ // { workflowId: "...", nodeIndex: 0, taskName: "add", state: "completed" }, // { workflowId: "...", nodeIndex: 1, taskName: "double", state: "completed" }, // ] ```