Experiment

DynamoDB-style API on Durable Objects

Same PutItem/GetItem/UpdateItem semantics, one writer per partition, SQLite under the hood. I built it to see where the bottlenecks are.




v0.2.0. Experiment, not for production.

What it is

One partition key hashes deterministically to one Durable Object, which executes every request for that partition. No hidden caches, no control-plane RPC on the hot path: SQLite is the only source of truth. Table metadata lives in a single TableRegistryDO, so table names are global across the account.

Limitations

No LSI/GSI, no automatic partition splitting. Hot partitions hit a hard ceiling due to single-threaded DO execution limits. See roadmap for future plans.

import { PutItemCommand } from "@aws-sdk/client-dynamodb";

// `client` is a DynamoDBClient pointed at the deployed Worker endpoint;
// see the GitHub README for the exact setup.
await client.send(new PutItemCommand({
  TableName: "Users",
  Item: {
    PK: { S: "user_123" },
    SK: { S: "profile" },
    role: { S: "engineer" }
  }
}));

Deterministic Routing

One direct hop from Worker to partition-scoped DB:

Client → Worker Router → Partition DO → SQLite

Zero Control Plane Overhead

Table schemas are fetched from the registry once and cached in the edge isolate, so item reads and writes never wait on a registry metadata round trip.
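A minimal sketch of what isolate-level schema caching can look like. The cache shape, function names, and registry callback here are assumptions for illustration, not this project's actual code:

```typescript
// Hypothetical isolate-level schema cache: module scope outlives individual
// requests in the same isolate, so repeat lookups skip the registry entirely.
type TableSchema = { pk: string; sk: string };

const schemaCache = new Map<string, TableSchema>();

async function getSchema(
  table: string,
  fetchFromRegistry: (t: string) => Promise<TableSchema>,
): Promise<TableSchema> {
  const cached = schemaCache.get(table);
  if (cached) return cached; // hot path: no registry round trip
  const schema = await fetchFromRegistry(table);
  schemaCache.set(table, schema);
  return schema;
}
```

The first request per isolate pays the registry lookup; every request after that reads from module scope.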

Under the hood

This is the literal SQLite schema running inside every partition's Durable Object. Notice how simple it is: the sort key as the primary text key, the raw JSON payload as a blob, and an integer version for replication ordering. No magic, just standard SQL.

CREATE TABLE items (
  sk TEXT PRIMARY KEY,
  value BLOB,
  version INTEGER
);

CREATE INDEX idx_sk ON items(sk);
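Against this schema, a PutItem can be expressed as a single upsert that bumps the version counter on overwrite. The statement below is a plausible sketch under that assumption, not necessarily the exact statement the DO runs:

```sql
INSERT INTO items (sk, value, version)
VALUES (?1, ?2, 1)
ON CONFLICT(sk) DO UPDATE SET
  value   = excluded.value,
  version = items.version + 1;
```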

Why and how it evolved

DynamoDB gets much of its throughput from partitioning, routing, and operational discipline, not from a special storage engine. I wanted to see how much of that I could rebuild from pieces I could run and measure, and find where the bottlenecks are: one writer per partition, SQLite behind it, no hidden caches. Why Durable Objects? Each DO comes with a single-writer guarantee, which maps cleanly onto one partition = one writer; I didn't have to build coordination because the runtime provides it. The goal is to see how far DOs can scale for this workload, not to sell a product.

I started with a manual replication engine to scale out reads over Queues (v2), but this added significant end-to-end latency. In architecture v3, I stripped out the control plane lookups from the hot path entirely. Reads and writes now go straight from the edge via deterministic SHA-256 hash routing into the single authoritative `PartitionDO` instance.
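The routing step can be sketched as follows. This is an illustration of deterministic SHA-256 routing, not the project's actual code; the function name and the `table#pk` key format are assumptions:

```typescript
import { createHash } from "node:crypto";

// Hypothetical routing sketch: hash the table name and partition key with
// SHA-256 and use the hex digest as the Durable Object name. Equal inputs
// always yield the same digest, so every request for a given partition key
// lands on the same single-writer DO.
function partitionId(table: string, pk: string): string {
  return createHash("sha256").update(`${table}#${pk}`).digest("hex");
}

// Inside a Worker, the name would then resolve to the authoritative DO, e.g.:
//   const id = env.PARTITION_DO.idFromName(partitionId(table, pk));
//   const stub = env.PARTITION_DO.get(id);
```

Because the hash is computed at the edge from the request itself, no lookup service sits between the Worker and the partition.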

What I learned is that embracing the DO boundaries drastically lowers p99 request latency and raises throughput. A single partition holds thousands of operations per second until its SQLite thread saturates, a ceiling comparable to DynamoDB's deliberate 1,000 WCU per-partition cap. Read scaling will later be introduced at the edge tier rather than through manual Raft-like log replication.

What this is not: not a DynamoDB replacement, not tuned for production. No LSI/GSI, no partition splitting. Check the roadmap if you're interested.

Performance (latest run)

YCSB, run against local dev. Three variants: hot partition (variant_a), sustained mixed (variant_b), read-heavy (variant_c). Full history and charts are on the Benchmark Logs page.

Sustained mixed: 210 ops/s (variant_b, local)

p99 read: 141 ms (variant_b)

Hot partition: 265 ops/s (variant_a, local)

How the numbers are produced

YCSB runs live in benchmarks/workloads/. Run npm run benchmark:run (local) or npm run benchmark:live. Raw output goes to benchmarks/results/run_<timestamp>/. benchmarks/process_results.py parses the run files and writes public/benchmarks/<timestamp>/results.json and updates public/benchmarks/index.json. The Benchmark Logs page and the cards above read from that. So the source of truth is whatever is in public/benchmarks/ after your last run.

Theoretical Global Metrics

While individual partitions bottleneck, the overall system scales linearly across the Cloudflare edge.

Max partitions: no hard limit (global Durable Objects)

System throughput: ~15k ops/s measured so far; theoretically unbounded

Single-DO bottleneck: ~1,000 ops/s hard ceiling per partition; reads scale horizontally

Traces

Every request has an ID. In the Playground, send a request and use “View trace”, or open the Trace page and enter a request ID. You get a timeline of events for that request.

Try it

Use the Playground to run PutItem, GetItem, CreateTable, etc. API is DynamoDB JSON over HTTP; full details (cURL, SDK) are in the GitHub README.