Why and how it evolved
DynamoDB gets a lot of its throughput from partitioning, routing, and discipline, not from a special storage engine. I wanted to see how much of that I could rebuild from pieces I could run and measure myself, so I could see where the bottlenecks are: one writer per partition, SQLite behind it, no hidden caches. Why Durable Objects? Each DO comes with a single-writer guarantee, which maps directly onto "one partition = one writer". I didn't have to build coordination; the runtime provides it. This project is an attempt to see how far DOs can scale for this workload, not to sell a product.
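To make the "one partition = one writer" property concrete outside the DO runtime, here is a minimal sketch of what the guarantee buys you: all writes to a partition funnel through one serial queue and never interleave. The `SerialWriter` class and its names are illustrative, not the project's actual code; inside a Durable Object the runtime enforces this for you.

```typescript
// Illustrative sketch: serialize async writes the way a single-writer
// partition does. Each write waits for the previous one to finish.
class SerialWriter {
  private tail: Promise<unknown> = Promise.resolve();

  // Enqueue a write; ops run strictly one after another, in arrival order.
  write<T>(op: () => Promise<T>): Promise<T> {
    const result = this.tail.then(op);
    // Keep the chain alive even if an op rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}
```

Even if a later write is ready first, it still observes every earlier write's effects, which is what makes per-partition reasoning simple.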
In the v2 architecture, I scaled out reads with a manual replication engine over Queues, but it added significant end-to-end latency. In v3, I stripped the control-plane lookups out of the hot path entirely: reads and writes now go straight from the edge, via deterministic SHA-256 hash routing, to the single authoritative `PartitionDO` instance.
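The routing step can be sketched in a few lines: hash the key with SHA-256, reduce it to a partition index, and derive a stable name for the owning DO. The function names and the `NUM_PARTITIONS` constant here are my own illustration, not the project's actual identifiers.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of deterministic SHA-256 hash routing: the same key
// always lands on the same partition, with no control-plane lookup.
const NUM_PARTITIONS = 16;

function partitionIndexFor(key: string): number {
  const digest = createHash("sha256").update(key).digest();
  // Interpret the first 4 bytes of the digest as an unsigned integer.
  return digest.readUInt32BE(0) % NUM_PARTITIONS;
}

function partitionNameFor(key: string): string {
  return `partition-${partitionIndexFor(key)}`;
}
```

In a Worker, the resulting name would feed something like `env.PARTITION.idFromName(partitionNameFor(key))` to obtain the DO stub, so the edge can route without consulting any shared state.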
What I learned is that embracing the DO boundaries drastically lowers P99 request latency and increases throughput. A single partition sustains thousands of operations per second before its SQLite thread saturates; writes are deliberately capped at 1000 WCU per partition, mirroring DynamoDB's limit. Future read scaling will happen at the edge tier rather than through manual Raft-like log replication.
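A per-partition write cap like that is commonly enforced with a token bucket that refills at the capacity rate. The sketch below is one way to model a 1000 WCU/s cap; it is an assumption for illustration, not the project's actual throttling code.

```typescript
// Illustrative token-bucket model of a per-partition write cap
// (1000 WCU/s, mirroring DynamoDB's per-partition limit).
class WriteCapacityLimiter {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacityPerSec = 1000,
    now = Date.now(),
  ) {
    this.tokens = capacityPerSec;
    this.lastRefill = now;
  }

  // Returns true if the write is admitted, false if it should be throttled.
  tryConsume(units = 1, now = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at burst capacity.
    this.tokens = Math.min(
      this.capacityPerSec,
      this.tokens + elapsedSec * this.capacityPerSec,
    );
    this.lastRefill = now;
    if (this.tokens < units) return false;
    this.tokens -= units;
    return true;
  }
}
```

Because the DO is the single writer for its partition, a limiter like this can live as plain in-memory state with no cross-instance coordination.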
What this is not: not a DynamoDB replacement, not tuned for production. No LSI/GSI, no partition splitting. Check the roadmap if you're interested.