
Performance

Taskora's performance depends on three axes: which Redis-compatible server you use, which Node.js runtime, and how Redis is configured. This page covers all three with real benchmark data and practical tuning advice.

All numbers below are from taskora's built-in benchmark suite (@taskora/bench), each run isolated (one library, one runtime, one fresh Redis container at a time). They reflect relative differences — absolute numbers depend on hardware, network, and payload size.

taskora vs BullMQ

Throughput (ops/sec, Redis 7, isolated runs, median of 3 iterations)

| Benchmark | taskora (Bun) | BullMQ (Bun) | taskora (Node) | BullMQ (Node) |
| --- | --- | --- | --- | --- |
| enqueue (single) | 9,307 | 9,025 | 9,250 | 8,228 |
| enqueue (bulk, batch=50) | 106,586 | 49,136 | 89,677 | 49,781 |
| process (c=1) | 6,964 | 6,873 | 8,273 | 6,464 |
| process (c=100) | 31,955 | 27,879 | 43,320 | 27,635 |
| latency throughput | 5,292 | 5,155 | 7,643 | 4,029 |

Latency (ms, Bun)

| Library | p50 | p95 | p99 |
| --- | --- | --- | --- |
| taskora | 0.27 | 0.83 | 1.39 |
| BullMQ | 0.56 | 1.85 | 2.80 |

Latency (ms, Node.js v22)

| Library | p50 | p95 | p99 |
| --- | --- | --- | --- |
| taskora | 0.23 | 0.40 | 0.72 |
| BullMQ | 0.49 | 1.85 | 3.50 |

Memory per job

| Library | B/job (single) | B/job (bulk) |
| --- | --- | --- |
| taskora | 355 | 360 |
| BullMQ | 279 | 254 |

Takeaways

  • Single enqueue is close — both libraries are limited by one Lua round trip per job. taskora is ~10% faster on Node.
  • Bulk enqueue: taskora is ~2x faster thanks to dispatchMany() batching multiple jobs into a single pipeline round trip, while BullMQ's addBulk() issues a separate EVALSHA per job.
  • Processing: taskora is 1.1–1.6x faster. The gap comes from taskora's fused ack+dequeue Lua script — one EVALSHA per job vs. BullMQ's separate ack and dequeue calls.
  • Latency: taskora has 2–5x lower tail latency (p95/p99) due to fewer Redis round trips in the hot path.
  • Memory: BullMQ uses ~20% less memory per job because it stores fewer metadata fields per job hash. The gap narrows with larger payloads, where the data field dominates.
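
The bulk-enqueue gap above comes down to round-trip counting. The sketch below is illustrative only — `chunk()` is a hypothetical helper, not taskora's actual implementation — but it shows the arithmetic behind dispatchMany()'s advantage:

```typescript
// Why bulk enqueue is ~2x faster: one round trip per batch instead of one
// per job. chunk() is an illustrative helper, not taskora's actual code.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const jobs = Array.from({ length: 1_000 }, (_, i) => ({ id: `job-${i}` }));

// One EVALSHA per job (addBulk-style): one round trip per job.
const perJobRoundTrips = jobs.length;

// One pipeline per batch of 50 (dispatchMany-style): one round trip per batch.
const batchedRoundTrips = chunk(jobs, 50).length;

console.log(perJobRoundTrips, batchedRoundTrips); // 1000 20
```

At 1,000 jobs with batch=50, that is 1,000 round trips versus 20 — the network round trip, not Redis itself, dominates bulk enqueue.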

Redis-Compatible Servers

Taskora works with any server that speaks the Redis protocol. The three main options:

| Server | Description |
| --- | --- |
| Redis 7 | The original. Single-threaded, battle-tested, widest ecosystem. |
| Valkey 8 | Redis fork (Linux Foundation). Drop-in compatible, same protocol. |
| Dragonfly | Multi-threaded reimplementation. Different storage engine. |

Throughput (ops/sec, Node.js v22, median of 3 runs)

| Benchmark | Redis 7 | Valkey 8 | Dragonfly |
| --- | --- | --- | --- |
| enqueue (single) | 5,275 | 5,402 | 1,229 |
| enqueue (bulk, batch=50) | 80,798 | 93,345 | 10,622 |
| process (c=1) | 5,306 | 5,563 | 1,404 |
| process (c=100) | 30,799 | 36,152 | 1,781 |
| latency throughput | 4,429 | 5,169 | 807 |

Latency (ms)

| Server | p50 | p95 | p99 |
| --- | --- | --- | --- |
| Redis 7 | 0.39 | 0.74 | 1.20 |
| Valkey 8 | 0.33 | 0.60 | 0.93 |
| Dragonfly | 2.09 | 2.97 | 4.14 |

Memory per job

| Server | B/job |
| --- | --- |
| Redis 7 | 355 |
| Valkey 8 | 324 |
| Dragonfly | 274 |

Takeaways

  • Redis and Valkey are both excellent. Valkey is slightly faster at high concurrency and has better tail latency. Switching between them is a Docker image swap — no code changes.
  • Dragonfly is 5–15x slower on taskora workloads. Dragonfly is multi-threaded, but its Lua engine serializes script execution. Since taskora uses atomic Lua scripts for every state transition (enqueue, dequeue, ack, fail), this becomes the bottleneck. Dragonfly also requires the --default_lua_flags=allow-undeclared-keys flag because taskora constructs keys inside scripts.
  • Memory differences are modest. Dragonfly's storage engine is the most compact (274 B/job), but the gap narrows with Redis tuning (see below).

Recommendation

Use Redis or Valkey — whichever your team is more comfortable operating. If you're starting fresh, Valkey is a strong default.

Runtimes

Taskora runs on Bun, Node.js, and Deno. The runtime affects client-side overhead — serialization, event loop scheduling, and ioredis internals.

Throughput (ops/sec, taskora, Redis 7, isolated runs, median of 3 iterations)

| Benchmark | Bun | Node.js v22 | Deno |
| --- | --- | --- | --- |
| enqueue (single) | 9,307 | 9,250 | 13,045 |
| enqueue (bulk, batch=50) | 106,586 | 89,677 | 123,870 |
| process (c=1) | 6,964 | 8,273 | 10,860 |
| process (c=100) | 31,955 | 43,320 | 47,311 |
| latency throughput | 5,292 | 7,643 | 8,483 |

Latency (ms, Redis 7)

| Runtime | p50 | p95 | p99 |
| --- | --- | --- | --- |
| Bun | 0.27 | 0.83 | 1.39 |
| Node.js v22 | 0.23 | 0.40 | 0.72 |
| Deno | 0.18 | 0.35 | 0.62 |

Takeaways

  • Deno is the fastest runtime across all benchmarks — 1.3–1.6x Bun, 1.1–1.4x Node — and has the best latency profile. Deno runs ioredis through its Node.js compatibility layer. Run it with deno run -A --unstable-sloppy-imports.
  • Node.js is competitive with Bun and beats it on concurrent processing (43k vs 32k ops/sec) and latency. Node's event loop scheduling handles high concurrency well.
  • Bun has the fastest single enqueue but trails on concurrent workloads. Bun's event loop overhead grows with many in-flight promises.

INFO

These benchmarks measure the full queue pipeline (serialize → Lua script → Redis → deserialize). The runtime difference is only the client-side overhead — Redis is the same in all cases.

Redis Drivers

Taskora ships two Redis drivers: ioredis (Node.js library, works on all runtimes) and a Bun native driver (taskora/redis/bun) that uses Bun.RedisClient — Bun's built-in Redis client with auto-pipelining.

Throughput (ops/sec, Bun runtime, Redis 7, median of 3 runs)

| Benchmark | ioredis | Bun native |
| --- | --- | --- |
| enqueue (single) | 11,928 | 11,391 |
| enqueue (bulk, batch=50) | 128,407 | 125,853 |
| process (c=1) | 10,333 | 10,310 |
| process (c=100) | 42,594 | 42,700 |
| latency throughput | 8,053 | 8,001 |

Latency (ms)

| Driver | p50 | p95 | p99 |
| --- | --- | --- | --- |
| ioredis | 0.21 | 0.31 | 0.93 |
| Bun native | 0.21 | 0.29 | 0.91 |

Takeaways

The two drivers are effectively identical in performance. Bun's auto-pipelining (batching same-tick .send() calls into one round trip) matches ioredis's pipeline behavior, and the overhead of RESP parsing is negligible vs. the Lua script execution time on the server.

Choose based on operational constraints:

|  | ioredis (taskora/redis) | Bun native (taskora/redis/bun) |
| --- | --- | --- |
| Runtimes | Bun, Node.js, Deno | Bun only |
| Cluster | Yes | No |
| Sentinel | Yes | No |
| Peer deps | ioredis | None (built-in) |

If you need Cluster or Sentinel, use ioredis. If you're on Bun and want zero peer deps, the Bun driver is a safe choice with no performance trade-off.

Redis Tuning

hash-max-listpack-value

This is the single most impactful Redis tuning knob for taskora.

Taskora stores every job as a single Redis hash. Redis 7 keeps a hash in compact listpack encoding as long as two conditions hold:

  • number of fields ≤ hash-max-listpack-entries (default 128 — taskora uses ~10–15 fields, well within the limit)
  • every field value ≤ hash-max-listpack-value bytes (default 64)

The second condition is the one that bites. If your job's serialized data or result exceeds 64 bytes — which most real-world payloads do — Redis promotes the entire hash from listpack to hashtable encoding. The per-field overhead jumps from ~2 bytes to ~80 bytes, and a 10-field hash that was using ~150 bytes suddenly costs ~900 bytes.

Fix: raise the threshold in redis.conf:

```
hash-max-listpack-value 1024
```

With this setting, payloads up to ~1 KB stay in the compact encoding. For most task queues where job payloads are a few hundred bytes of JSON, this keeps every job in listpack and gives you the best memory efficiency.
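
The encoding cliff can be sketched numerically. The model below uses the approximate overheads quoted above (~2 B/field in listpack vs ~80 B/field in hashtable); `encodingFor` and `approxBytesPerJob` are hypothetical helpers, not exact Redis accounting.

```typescript
// Rough per-job memory model for a job stored as a Redis hash, using the
// approximate overheads quoted above (~2 B/field listpack, ~80 B/field
// hashtable). Illustrative numbers only, not exact Redis internals.
function encodingFor(
  largestFieldBytes: number,
  threshold: number,
): "listpack" | "hashtable" {
  // A single field value over hash-max-listpack-value promotes the whole hash.
  return largestFieldBytes <= threshold ? "listpack" : "hashtable";
}

function approxBytesPerJob(
  fields: number,
  payloadBytes: number,
  threshold: number,
): number {
  const perField =
    encodingFor(payloadBytes, threshold) === "listpack" ? 2 : 80;
  return fields * perField + payloadBytes;
}

// A 10-field job with a 300-byte payload:
console.log(approxBytesPerJob(10, 300, 64));   // 1100 (hashtable, default threshold)
console.log(approxBytesPerJob(10, 300, 1024)); // 320  (listpack, tuned)
```

The same medium-sized payload costs roughly 3x more under the default threshold, which is where the table below gets its 2–4x middle row.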

Memory impact

| Payload size | Default (64) | Tuned (1024) |
| --- | --- | --- |
| < 64 B | ~150 B/job (listpack) | ~150 B/job (listpack) |
| 64–1024 B | ~900 B/job (hashtable) | ~200–400 B/job (listpack) |
| > 1024 B | ~900+ B/job (hashtable) | ~900+ B/job (hashtable) |

The sweet spot is the middle row — medium payloads where the tuning makes a 2–4x difference in memory per job. If you're running 1M concurrent jobs, that's the difference between 900 MB and 300 MB of Redis memory.

How to apply

redis.conf:

```
hash-max-listpack-value 1024
```

Redis CLI (runtime, non-persistent):

```
CONFIG SET hash-max-listpack-value 1024
```

Docker Compose:

```yaml
services:
  redis:
    image: redis:7-alpine
    command: redis-server --hash-max-listpack-value 1024
```

Connection pool sizing

Taskora uses a small number of Redis connections per process:

| Connection | Purpose | Count |
| --- | --- | --- |
| Main | Commands, Lua scripts, enqueue/ack | 1 |
| Blocking | BZPOPMIN per task (worker dequeue) | 1 per task |
| Subscriber | XREAD BLOCK for events + JobWaiter | 1 (lazy, shared) |
| Cancel | SUBSCRIBE for cancel pub/sub | 1 per worker |

A typical process with 3 tasks and a worker uses ~6 connections. Redis's default maxclients is 10,000, so connection limits are rarely an issue unless you're running hundreds of taskora processes against one Redis.

If you're behind a Redis proxy (e.g. Envoy, HAProxy) or using Redis Cluster with a connection pool, size the pool to at least 2 + number_of_tasks per taskora process.
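
For capacity planning, the table above reduces to simple arithmetic. The helpers below are hypothetical, not part of taskora's API:

```typescript
// Per-process connection estimate from the table above: 1 main +
// 1 blocking connection per task + 1 shared subscriber + 1 cancel
// connection per worker. Hypothetical helpers, not taskora's API.
function connectionsPerProcess(tasks: number, workers: number): number {
  const main = 1;
  const blocking = tasks;   // one BZPOPMIN connection per task
  const subscriber = 1;     // lazy, shared XREAD BLOCK connection
  const cancel = workers;   // one SUBSCRIBE connection per worker
  return main + blocking + subscriber + cancel;
}

// Minimum proxy/cluster pool size suggested above: 2 + number of tasks.
function minPoolSize(tasks: number): number {
  return 2 + tasks;
}

// The "typical process" above: 3 tasks + 1 worker.
console.log(connectionsPerProcess(3, 1)); // 6
console.log(minPoolSize(3)); // 5
```

Multiply by the number of taskora processes to check against your server's maxclients or proxy pool limit.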

maxmemory-policy

Taskora manages its own retention (completed/failed job cleanup via configurable retention options). Redis maxmemory-policy should be set to noeviction — if Redis starts evicting keys on its own, it may silently drop in-flight jobs or metadata hashes.

```
maxmemory-policy noeviction
```

This is the default in most Redis deployments, but verify it in production.

Running Benchmarks

Taskora ships a benchmark suite in packages/bench that you can run against any store and runtime.

Quick start

```bash
# Default: bun + redis, all benchmarks, taskora vs BullMQ
bun run bench

# Pick a store
bun run bench -- --store valkey
bun run bench -- --store dragonfly

# Pick a runtime
bun run bench:node               # Node.js (via tsx)
bun run bench:deno               # Deno

# Compare ioredis vs Bun native driver (Bun only)
bun run bench -- --libraries taskora,taskora-bun

# Filter libraries and benchmarks
bun run bench -- --libraries taskora --benchmarks enqueue-single,enqueue-bulk

# JSON output for CI / scripts
bun run bench -- --json
```

Options

| Flag | Default | Values |
| --- | --- | --- |
| --store | redis | redis, valkey, dragonfly |
| --libraries | taskora,bullmq | taskora, taskora-bun, bullmq (comma-separated) |
| --benchmarks | all | enqueue-single, enqueue-bulk, process-single, process-concurrent, latency |
| --iterations | 3 | number of measured runs per benchmark |
| --json | off | machine-readable output |

External Redis

By default the suite starts a Docker container via testcontainers. To benchmark against an existing server:

```bash
REDIS_URL=redis://your-host:6379 bun run bench
```

WARNING

The benchmark suite runs FLUSHDB between iterations. Don't point it at a production Redis.

Released under the MIT License.