# Dispatching Jobs
Every task has a `dispatch()` method that enqueues a job and synchronously returns a `ResultHandle`.
## Basic Dispatch
```ts
const handle = sendEmailTask.dispatch({
  to: "user@example.com",
  subject: "Welcome!",
})
```

`dispatch()` returns immediately; it does not wait for the job to be enqueued. The handle is a thenable that resolves to the job ID once the job has been enqueued.
## ResultHandle
The `ResultHandle<TOutput>` is the primary way to interact with a dispatched job.
```ts
const handle = sendEmailTask.dispatch(data)

// Get the job ID (thenable; resolves once the job is enqueued)
const jobId = await handle // UUID string

// Wait for the result (push-based via Redis Streams)
const result = await handle.result // typed as TOutput

// Wait with a timeout (throws TimeoutError after 5 seconds)
const timedResult = await handle.waitFor(5000)

// Query current state
const state = await handle.getState() // "waiting" | "active" | "completed" | ...

// Query progress
const progress = await handle.getProgress() // number | object | null

// Get structured logs
const logs = await handle.getLogs() // LogEntry[]

// Cancel the job
await handle.cancel({ reason: "User requested cancellation" })
```

## How Result Waiting Works
`handle.result` and `handle.waitFor()` use push-based delivery via a shared Redis Streams `XREAD` connection (`JobWaiter`). Multiple handles share one connection, so there is no per-job polling.
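The shared-waiter pattern can be sketched in-process: many callers register interest in a job ID, and a single delivery path resolves all of them at once. This is an illustrative sketch, not Taskora's implementation; the real `JobWaiter` receives results over one Redis Streams `XREAD` connection rather than an in-memory map.

```ts
// Sketch of the shared-waiter pattern: many callers register interest
// in a job ID; one delivery call resolves every registered waiter.
// (Illustrative only; Taskora's JobWaiter is backed by Redis Streams.)
type Resolver = (result: unknown) => void

class JobWaiter {
  private waiters = new Map<string, Resolver[]>()

  // Called by handles: returns a promise for the job's result.
  waitFor(jobId: string): Promise<unknown> {
    return new Promise((resolve) => {
      const list = this.waiters.get(jobId) ?? []
      list.push(resolve)
      this.waiters.set(jobId, list)
    })
  }

  // Called once by the delivery loop when a result arrives.
  deliver(jobId: string, result: unknown): void {
    for (const resolve of this.waiters.get(jobId) ?? []) resolve(result)
    this.waiters.delete(jobId)
  }
}

const waiter = new JobWaiter()
const p1 = waiter.waitFor("job-1")
const p2 = waiter.waitFor("job-1") // shares the same delivery, no polling
waiter.deliver("job-1", { ok: true })
Promise.all([p1, p2]).then(([a, b]) => console.log(a, b)) // both resolve
```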
## Dispatch Options
```ts
sendEmailTask.dispatch(data, {
  delay: 5000, // delay processing by 5 seconds
  priority: 10, // higher priority = processed first
  ttl: "1h", // expire if not processed within 1 hour
  concurrencyKey: "user:42", // limit concurrency per key
  concurrencyLimit: 2, // max 2 concurrent jobs for this key
})
```

## Ordering Guarantees
Jobs are dequeued in (priority desc, timestamp asc) order. Higher-priority jobs always come first; within the same priority band, earlier dispatches come first.
This is best-effort, not a strict FIFO contract:
- Multiple workers dequeue concurrently — execution order may differ from dequeue order.
- Dispatches from separate processes that land in the same millisecond have no deterministic tiebreaker. Taskora uses UUID job IDs (generated client-side, no Redis round-trip), so there is no global sequence counter.
- Delayed, retried, and debounced jobs re-enter the queue with a fresh timestamp.
If your workload requires strict per-key ordering, use `concurrencyKey` with `concurrencyLimit: 1`; this serializes execution for that key while leaving other keys concurrent.
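The (priority desc, timestamp asc) dequeue order can be expressed as a comparator. The field names below are illustrative, not Taskora's internal schema:

```ts
// Sketch of the (priority desc, timestamp asc) dequeue order.
// Field names are illustrative, not Taskora's internal job schema.
interface QueuedJob {
  id: string
  priority: number // higher = processed first
  enqueuedAt: number // ms timestamp; earlier = processed first
}

function dequeueOrder(a: QueuedJob, b: QueuedJob): number {
  if (a.priority !== b.priority) return b.priority - a.priority // desc
  return a.enqueuedAt - b.enqueuedAt // asc
}

const queue: QueuedJob[] = [
  { id: "a", priority: 0, enqueuedAt: 100 },
  { id: "b", priority: 10, enqueuedAt: 200 },
  { id: "c", priority: 0, enqueuedAt: 50 },
]
queue.sort(dequeueOrder)
console.log(queue.map((j) => j.id)) // → ["b", "c", "a"]
```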
## Flow Control Options
### Debounce
Replace the previous pending job for the same key. Only the last dispatch within the delay window is processed.
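The replace-the-previous-pending-job behavior can be sketched with local timers. This is illustrative only, not Taskora's implementation, which does not rely on in-process timers:

```ts
// Sketch: per-key debounce. Each dispatch cancels the previous pending
// timer for the same key, so only the last dispatch in the window runs.
// (Illustrative only; not Taskora's actual mechanism.)
const pendingTimers = new Map<string, ReturnType<typeof setTimeout>>()

function debounceDispatch(key: string, delayMs: number, run: () => void): void {
  const prev = pendingTimers.get(key)
  if (prev !== undefined) clearTimeout(prev) // replace the pending job
  pendingTimers.set(
    key,
    setTimeout(() => {
      pendingTimers.delete(key)
      run()
    }, delayMs),
  )
}

// Three rapid dispatches for the same key: only the last one runs.
debounceDispatch("index:doc-1", 50, () => console.log("run 1"))
debounceDispatch("index:doc-1", 50, () => console.log("run 2"))
debounceDispatch("index:doc-1", 50, () => console.log("run 3")) // → "run 3"
```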
```ts
searchIndexTask.dispatch(data, {
  debounce: { key: `index:${docId}`, delay: "2s" },
})
```

### Throttle
Rate-limit dispatches per key. Excess dispatches are rejected.
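The accept-or-reject decision can be sketched as a per-key fixed-window counter. This is an illustrative sketch; whether Taskora uses fixed or sliding windows is not specified here, and the real check happens in Redis, not in process memory:

```ts
// Sketch: per-key fixed-window throttle. At most `max` dispatches are
// accepted per window; the rest are rejected (enqueued = false).
// (Illustrative only; the window semantics are an assumption.)
const throttleWindows = new Map<string, { start: number; count: number }>()

function throttleAccept(key: string, max: number, windowMs: number): boolean {
  const now = Date.now()
  const w = throttleWindows.get(key)
  if (w === undefined || now - w.start >= windowMs) {
    throttleWindows.set(key, { start: now, count: 1 }) // new window
    return true
  }
  if (w.count < max) {
    w.count++
    return true
  }
  return false // over the limit: reject
}

// With max = 2, the third dispatch in the same window is rejected.
console.log(throttleAccept("external-api", 2, 60_000)) // true
console.log(throttleAccept("external-api", 2, 60_000)) // true
console.log(throttleAccept("external-api", 2, 60_000)) // false
```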
```ts
const handle = apiCallTask.dispatch(data, {
  throttle: { key: "external-api", max: 10, window: "1m" },
})

console.log(handle.enqueued) // true | false
```

### Deduplicate
Skip dispatch if a job with the same key already exists in a matching state.
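The skip-if-already-present check can be sketched with a per-key state map. This is illustrative only; Taskora checks real job state, and the `while` option is renamed `whileStates` below because `while` is a reserved word in a bare function parameter:

```ts
// Sketch: per-key deduplication. A dispatch is skipped when a job with
// the same key is already in one of the listed states.
// (Illustrative only; the state store and job IDs are stand-ins.)
type JobState = "waiting" | "active" | "completed" | "failed"
const jobsByKey = new Map<string, { id: string; state: JobState }>()

function dedupeDispatch(
  key: string,
  whileStates: JobState[], // the `while` option from the API
): { enqueued: boolean; existingId?: string } {
  const existing = jobsByKey.get(key)
  if (existing !== undefined && whileStates.includes(existing.state)) {
    return { enqueued: false, existingId: existing.id } // skip duplicate
  }
  const id = `job-${jobsByKey.size + 1}` // stand-in for a UUID
  jobsByKey.set(key, { id, state: "waiting" })
  return { enqueued: true }
}

console.log(dedupeDispatch("report:42", ["waiting", "active"])) // enqueued
console.log(dedupeDispatch("report:42", ["waiting", "active"])) // skipped, returns existingId
```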
```ts
const handle = generateReportTask.dispatch(data, {
  deduplicate: { key: `report:${userId}`, while: ["waiting", "active"] },
})

if (!handle.enqueued) {
  console.log("Already running:", handle.existingId)
}
```

### Throwing on Rejection
By default, throttle and deduplicate rejections are silent: the job is not enqueued and `handle.enqueued` is set to `false`. To throw instead:
```ts
sendEmailTask.dispatch(data, {
  throttle: { key: "emails", max: 100, window: "1h" },
  throwOnReject: true, // throws ThrottledError or DuplicateJobError
})
```

## Bulk Dispatch
`dispatchMany()` enqueues multiple jobs in one call and returns an array of handles:

```ts
const handles = sendEmailTask.dispatchMany([
  { data: { to: "alice@example.com", subject: "Hi" } },
  { data: { to: "bob@example.com", subject: "Hello" }, options: { delay: 5000 } },
])
```
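Since each handle is a thenable that resolves to its job ID, a whole batch can be awaited with `Promise.all`. The mock handles below stand in for real `ResultHandle`s only so the snippet is self-contained and runnable:

```ts
// Sketch: awaiting a batch of dispatchMany() handles together.
// mockHandle stands in for Taskora's ResultHandle, which is a thenable
// resolving to the job ID once the job is enqueued.
const mockHandle = (id: string): PromiseLike<string> => Promise.resolve(id)

const mockHandles = [mockHandle("job-1"), mockHandle("job-2")]

async function collectIds(handles: PromiseLike<string>[]): Promise<string[]> {
  return Promise.all(handles) // resolves once every job ID is available
}

collectIds(mockHandles).then((ids) => console.log(ids)) // → ["job-1", "job-2"]
```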