The Mirage of CAP Theorem

And the oversimplification of distributed systems.

Well, the title is meant to be catchy and I hope that I have your attention now. In this article, I want to discuss the ‘oversimplification’ of distributed systems and the mirage of either achieving CA, AP, or CP in a distributed system.

To lose something is an illusion because everything we own is just a mirage!

CAP Theorem

It was the year 2000, back in the days when people roamed around without masks and without mobile phones. Dr. Eric Brewer gave a keynote at the ACM Symposium on Principles of Distributed Computing (PODC) and introduced the CAP conjecture, or Brewer’s conjecture, to the world. Later, in 2002, Seth Gilbert and Nancy Lynch published a proof of the conjecture, establishing it as the CAP theorem.

The CAP theorem can be defined as follows:

It states that it is impossible for a shared data store to simultaneously provide more than two of the following three guarantees:

  1. Consistency - This refers to linearizability. It means that if two requests are not concurrent, then the later request must see data at least as recent as what the earlier request saw. It is a guarantee of a total order on all operations.
  2. Availability - Every request received by a non-failing node (read: database instance) must result in a non-error response, without the guarantee that it contains the most recent write.
  3. Partition Tolerance - The system must continue to operate even when an arbitrary number of messages between two nodes is lost or delayed.
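These guarantees only come into conflict when a partition actually occurs. As a minimal sketch (all names here, such as `Replica` and `PartitionError`, are hypothetical and not from any real database), here is a single replica’s read path choosing between CP and AP behavior during a partition:

```python
# Hypothetical sketch of one replica's read path during a network partition.
# A CP replica refuses to answer rather than risk a stale read; an AP
# replica answers with whatever it has, possibly stale.

class PartitionError(Exception):
    """Raised when a CP replica cannot confirm it has the latest value."""

class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" or "AP"
        self.value = "v1"         # last value this replica has seen
        self.partitioned = False  # can we reach the other replicas?

    def read(self):
        if self.partitioned and self.mode == "CP":
            # Consistency over availability: error rather than risk staleness.
            raise PartitionError("cannot confirm latest value")
        # Availability over consistency: respond, possibly with stale data.
        return self.value

replica = Replica(mode="AP")
replica.partitioned = True
print(replica.read())  # responds with "v1", which may be stale
```

With no partition in progress, both modes behave identically; the tradeoff exists only while messages are being lost.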

Later, Brewer himself argued that this notion is misleading because one cannot trade off partition tolerance: a system can choose only between consistency and availability, and partition tolerance must always be present.

Notes for a new engineer

Distributed systems are different because they fail more often.

All working distributed systems are alike; each failing distributed system fails in its own way.

As a new distributed systems engineer a few years ago, I used to worry a lot about the latency between my two nodes. With the passage of time and the scars of software engineering, I have learned that distributed systems are complex because of the probability of partial failure.

When you build a distributed system, you design for failures. The sooner you accept that, the better your life will become. If your system can handle failures, give yourself a pat on the back; you have achieved something that we mere mortals rarely do.

Should you rely on CAP theorem for analyzing distributed systems?

We have to understand that the world is gray, and so are distributed systems. Let’s take a case where a rockstar team claims to have written a distributed system that is CP (consistent and partition tolerant). Now, according to the CAP theorem, this system shouldn’t be “available,” right? Doesn’t this definition look problematic? If I sell you a reliable database with CP properties but it is not available, is it even useful? In reality, availability is sacrificed only when a network partition occurs and you still want to offer consistency over availability.

Another problem I have with the CAP theorem is that it over-emphasizes network partition faults. Firstly, with today’s infrastructure, network partitions are not that common. Secondly, what about other kinds of issues? Disks running out of space, bugs in the latest deployment, and so on. I simply want to point out that there are a lot of failures to keep in mind while designing a distributed data store.

CAP theorem also doesn’t say anything about latency. What is the point of having an “available” system if you get delayed responses?

I would conclude this debate by saying:

While designing your distributed data store, you need to make sure that you comply with the exact definitions of consistency and availability if you want to apply the CAP theorem to your system.


In the context of the CAP theorem, consistency essentially refers to linearizability, which is a guarantee about single operations on single objects. This is another issue I have with the CAP theorem: it doesn’t talk about transactions, and it doesn’t talk about operations that touch multiple objects.

Consistency talks about ordering. It means that if Operation 2 occurs after Operation 1, then Operation 2 must see the system in the state it was in upon completion of Operation 1. This guarantee is quite expensive to provide, and it is also very hard to test whether a system actually satisfies it. Fun fact: your CPU doesn’t provide linearizability without memory barrier instructions!
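The ordering rule above can be sketched as a toy check (purely illustrative; real linearizability checkers have to search over all valid histories, which is far more involved). Here, operations are modeled as tuples of start time, end time, and the state version they observed:

```python
# Toy check for the ordering rule: if operation B starts after operation A
# completes (they are not concurrent), B must observe a state at least as
# new as the one A left behind. Versions stand in for "state".

def is_ordered(op_a, op_b):
    """Each op is (start, end, observed_version)."""
    a_start, a_end, a_version = op_a
    b_start, b_end, b_version = op_b
    if b_start <= a_end:
        return True  # concurrent operations: the rule imposes no order here
    return b_version >= a_version

write = (0, 2, 5)        # completed at t=2, left the state at version 5
good_read = (3, 4, 5)    # started after the write, observed version 5
stale_read = (3, 4, 4)   # started after the write, observed version 4

print(is_ordered(write, good_read))   # True
print(is_ordered(write, stale_read))  # False: linearizability violated
```

The expensive part in a real system is not this check but making sure every node agrees on the version ordering in the first place.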

In case you are wondering, the C in the CAP theorem is not related to the C in ACID! This means that if your database is ACID, it isn’t necessarily CP.

A lot of modern data stores don’t provide linearizability; they provide serializability instead. Check out Postgres SSI. Did you know that even for NoSQL databases like MongoDB, consistency is broken by design? Check more on that here.

Providing a consistency guarantee both locally and globally is quite hard; the universe won’t allow it. The key to solving this problem is to implement your time resolution in such a way that no one notices that your consistency is actually breaking.


MongoDB puts more emphasis on “durability” than “availability” because there are limits to availability. In the context of SLAs (service level agreements), the term “availability” describes a continuum rather than a binary condition.

100% availability is generally regarded as unrealistic. CAP-consistent systems might be unavailable for some time due to a network partition (or tons of other types of failure which the CAP theorem doesn’t address). But that is true for all systems, even CAP-available ones! CAP-available systems might also be down for all kinds of predictable and unpredictable reasons.
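To make the continuum concrete, here is a small calculation (assuming a 365-day year) of how much downtime a given number of “nines” actually permits:

```python
# Availability in an SLA is a continuum, not a binary. This converts an
# availability percentage into the downtime it permits per 365-day year.

def allowed_downtime_hours(availability_pct):
    return 365 * 24 * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {allowed_downtime_hours(pct):.2f} hours/year")
```

Even the much-advertised “five nines” still permits roughly five minutes of downtime a year, which is why availability targets are negotiated rather than assumed to be 100%.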

CP/CA Mirage

I hope that you have followed my arguments so far and agree with me that there is no clear distinction between CAP-consistent and CAP-available systems.

Dr. Brewer has said of Google Spanner that the database is “technically CP,” but network outages are so rare that it is “effectively CA.” This just means that whenever a network partition actually happens, the system chooses C over A. Additionally, Spanner uses two-phase commit to achieve serializability, but it uses TrueTime for external consistency, consistent reads without locking, and consistent snapshots (reference).

The more strictly we use CAP theorem to analyze distributed systems, the more real the mirage of distinction becomes. The road to designing a distributed system is full of tradeoffs, and the problem with CAP theorem is that it too narrowly focuses on one single type of failure.

Let me lay down some different types of outages that the CAP theorem misses:

  1. Human Error
  2. Reprovisioning - Can your distributed data store be reprovisioned without downtime?

Blockchain is a unique technology that can be both CP and AP. (Read more here.)

Space and Time and CAP theorem

At Space and Time, we are striving to build highly available, consistent, and massively fault-tolerant systems. The CAP theorem is a critical consideration for any distributed data store, and Space and Time is a massively distributed HTAP data warehouse, which presents some challenges when it comes to the CAP theorem.

One challenge we’ve faced is deciding whether to prioritize a quick response or serving the most up-to-date information. The latest value held by the server a user happens to connect to isn’t necessarily the latest information overall. But Space and Time operates with a single source of access to our architecture: a gateway built on top of a consensus protocol. This allows us to serve users the latest information from both indexed blockchain data and their own data.

Parting thoughts

Most distributed systems work well without a perfect availability guarantee or without strong consistency.

Hence, when designing a system, more focus should be placed on handling failures than on choosing between consistency and availability. We have to remember that we are designing fault-tolerant systems, not failure-avoiding systems; avoiding failure is impossible in the context of distributed systems. We should refer to another paper, written by Dr. Eric Brewer himself along with Armando Fox, and think about availability in terms of yield (the percentage of requests answered successfully) and harvest (the percentage of the required data actually included in the responses).
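As a rough sketch of the yield/harvest framing (the numbers and helper names are illustrative, not from the paper):

```python
# Fox & Brewer's yield/harvest framing: yield is the fraction of requests
# answered at all; harvest is the fraction of the required data actually
# present in the answers that were given.

def yield_fraction(answered, total_requests):
    return answered / total_requests

def harvest_fraction(data_returned, data_required):
    return data_returned / data_required

# e.g. a search over 100 shards with 2 shards down: every query is still
# answered (yield = 1.0), but each answer covers only 98% of the index.
print(yield_fraction(1000, 1000))  # 1.0
print(harvest_fraction(98, 100))   # 0.98
```

The appeal of this framing is that it turns “available or not” into two measurable dials: a system can degrade harvest to protect yield, or refuse some requests to keep harvest at 100%.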

There are various flavors of consistency and various levels of eventual consistency available.

We should come up with our own framework for analyzing our distributed systems rather than over-relying on the CP/AP framework. We also need to remember that proving the correctness of a distributed system is different from testing or verifying one.