GitHub’s response was a masterclass in the two primary scaling techniques: replication and sharding.

First, they used read replicas to offload read queries. The main production database (the leader) handled all writes, while a constellation of read-only replicas served SELECT queries for the web interface, API calls, and analytics. This follows Kleppmann’s principle of separating read paths from write paths. However, replication introduced its own classic problem: replication lag. A user might comment on an issue (a write to the leader) and then immediately refresh the page, only to read from a replica that hasn’t yet applied the change. GitHub solved this with application-level logic: for a short “critical consistency” window after a write, the application forced reads to go to the leader.
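To make the “critical consistency” window concrete, here is a minimal sketch of that kind of application-level read routing. The leader/replica connection objects and the 30-second pin window are illustrative assumptions, not GitHub’s actual implementation.

```python
import time


class PinnedRouter:
    """Route a user's reads to the leader for a short window after that
    user writes; otherwise read from a replica.

    A hedged sketch: `leader` and `replicas` are assumed to be connection
    objects exposing an execute(sql, params) method, and 30 seconds is an
    arbitrary illustrative window."""

    def __init__(self, leader, replicas, pin_seconds=30):
        self.leader = leader
        self.replicas = replicas
        self.pin_seconds = pin_seconds
        self._last_write = {}  # user_id -> monotonic timestamp of last write

    def execute_write(self, user_id, sql, params=()):
        # All writes go to the leader; remember when this user last wrote.
        self._last_write[user_id] = time.monotonic()
        return self.leader.execute(sql, params)

    def execute_read(self, user_id, sql, params=()):
        pinned_until = self._last_write.get(user_id, 0) + self.pin_seconds
        if time.monotonic() < pinned_until:
            # Inside the critical consistency window: replicas may still lag,
            # so read from the leader to guarantee read-your-writes.
            return self.leader.execute(sql, params)
        # Outside the window: any replica is an acceptable source.
        replica = self.replicas[hash(user_id) % len(self.replicas)]
        return replica.execute(sql, params)
```

The key point is that the pinning decision lives in the application, not the database: the replicas never need to know which reads are consistency-critical.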
Second, and more radically, GitHub implemented sharding (horizontal partitioning), supported by custom tooling such as gh-ost (GitHub’s online schema migration tool) and, later, their Vitess-inspired system. They split the massive issues and pull_requests tables by repository ID, so the data for a single repository always lived on one shard. This is a thoughtful choice: most queries (e.g., “list all issues in this repo”) are naturally local to a shard, avoiding costly distributed joins. The downside, as Kleppmann warns, is the loss of cross-shard transactional guarantees. For example, moving an issue from one repository to another becomes a cross-shard operation that cannot be wrapped in a single transaction, something GitHub handles with asynchronous workflows and idempotent retries (a sketch of such a retryable move appears below).

Reliability and the Chaos of Large Scale

Designing a reliable system at GitHub’s scale means accepting that components will fail: not just servers, but also network partitions, clock skew, and software bugs. Kleppmann emphasizes that reliability is not about preventing failure, but about building systems that tolerate it.
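The cross-shard issue move mentioned above is a concrete case of tolerating failure rather than preventing it. The sketch below shows how such a move could be structured as an idempotent, resumable workflow; the shard API (get, insert_if_absent, mark_moved) and the move_id key are hypothetical illustrations, not GitHub’s internals.

```python
import uuid


def move_issue(source_shard, dest_shard, issue_id, move_id=None):
    """Move an issue between repositories living on different shards.

    A hedged sketch of an idempotent workflow: every step is keyed by
    move_id and safe to repeat, so a worker can re-run the whole function
    after a crash or timeout without duplicating or losing the issue."""
    move_id = move_id or str(uuid.uuid4())

    issue = source_shard.get("issues", issue_id)
    if issue is None:
        # Already moved and cleaned up by a previous attempt: nothing to do.
        return move_id

    # Step 1: copy the row to the destination shard. insert_if_absent makes
    # the write idempotent; a retry with the same move_id is a no-op.
    dest_shard.insert_if_absent("issues", issue, idempotency_key=move_id)

    # Step 2: tombstone the source row, recording where the issue went.
    # Marking an already-moved row again changes nothing, so retries are safe.
    source_shard.mark_moved("issues", issue_id, moved_to=dest_shard.name,
                            idempotency_key=move_id)
    return move_id
```

Because each step checks whether its effect already happened, a background worker can simply retry the entire function until it succeeds, which is precisely the property that asynchronous, retry-driven workflows rely on in place of a distributed transaction.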