Decoherence is one of the most pervasive processes in the universe. Indeed, it is precisely because decoherence is so powerful that the quantum fault-tolerance theorem came as a shock to physicists.^{9} Decoherence is one more manifestation of the second law of thermodynamics. (Quantum states are very easy to destroy and very hard to put back together.) Decoherence times depend on the physical properties of the qubit, the size of the memory register, the temperature of the environment, the rate of collisions with ambient gas molecules, etc. A very rough estimate of decoherence time can be made on the basis of Heisenberg’s uncertainty principle in energy and time

Taking Δ*E* ≈ *kT* for a thermal environment, the relation Δ*E*·Δ*t* ≳ *h*/4π gives

Δ*t* ≈ *h*/(4π*kT*)

where *h* is the Planck constant (*h* = 6.626070040(81)×10^{-34} joule-secs), *k* is the Boltzmann constant (*k* = 1.38064852(79)×10^{−23} joules/K) and *T* is the absolute temperature of the environment.^{10} At room temperature (≈300 K), this gives a typical decoherence time of about 10^{-14} sec. At lower environmental temperatures, decoherence times are longer; at liquid helium temperatures, for example, the decoherence time is about 100 times longer than at room temperature. The obvious ways of increasing decoherence times are to chill the computer and seal it in the best vacuum we can, apart from choosing qubit materials which provide longer decoherence times. It is imperative that any quantum computation be completed before decoherence sets in and destroys the superposition of states. Current decoherence times are typically a few microseconds.^{11} For an excellent tutorial on decoherence, see Marquardt & Puttmann (2008).
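This order-of-magnitude estimate is easy to reproduce numerically. The sketch below (using the current CODATA exact values of *h* and *k*) evaluates Δ*t* ≈ *h*/(4π*kT*) at room temperature and at liquid-helium temperature:

```python
import math

h = 6.62607015e-34   # Planck constant, joule-secs
k = 1.380649e-23     # Boltzmann constant, joules/K

def decoherence_time(T):
    """Order-of-magnitude decoherence time from dE*dt >~ h/(4*pi), with dE ~ kT."""
    return h / (4 * math.pi * k * T)

print(f"{decoherence_time(300):.1e} s")  # room temperature: ~1.3e-14 s
print(f"{decoherence_time(4.2):.1e} s")  # liquid helium:    ~9.1e-13 s
```

Since the estimate scales as 1/*T*, cooling from 300 K to 4.2 K lengthens the decoherence time by a factor of roughly 70, consistent with the "about 100 times longer" figure quoted above.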

**Algorithmic error correction is possible**

Presently available computing times before decoherence are very short and hence permit only a few computational steps. However, computing times can improve if suitable error-correction algorithms are found. In 1996, A. M. Steane,^{12} and independently A. R. Calderbank and P. W. Shor,^{13} found that some ideas used in the construction of classical linear codes can be used to correct errors in quantum computing by the clever use of quantum entanglement. The class of quantum error-correction codes they devised is known as the Calderbank-Shor-Steane (CSS) codes. These codes are limited to correcting a set of unitary errors (bit flips and phase flips) that can be described by Pauli matrices; they are called *depolarization errors*. Such errors are large and discrete and hence the most tractable of all quantum errors.
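The Pauli error set mentioned above can be made concrete in a few lines. A minimal NumPy sketch (the state amplitudes 0.6 and 0.8 are arbitrary illustrative values):

```python
import numpy as np

# Pauli matrices: the discrete error set that CSS codes are built to correct.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
Y = 1j * X @ Z                                  # combined bit-and-phase flip

psi = np.array([0.6, 0.8], dtype=complex)       # a|0> + b|1>

print(X @ psi)   # bit flip swaps the amplitudes of |0> and |1>
print(Z @ psi)   # phase flip negates the amplitude of |1>

# Each Pauli error is unitary and self-inverse, so once an error is
# diagnosed, applying the same operator again undoes it exactly.
assert np.allclose(X @ (X @ psi), psi)
```

Discreteness is what makes these errors tractable: correcting just this finite set (rather than a continuum of small rotations) turns out to suffice for full quantum error correction.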

**12.2 Calderbank-Shor-Steane error correction**

The idea behind the CSS codes is relatively obvious, namely that quantum states can be encoded redundantly. In classical information theory, coding simply refers to the use of a string of bits to stand in for the value of one bit (or perhaps a smaller block of bits). Embedding redundancy into the encoding allows at least some errors to be caught and repaired. This form of encoding is standard practice in digital communications. However, it was not at all obvious how redundancy could be used in quantum computation: the no-cloning theorem seemed to say that even the simplest kind of redundancy was impossible in principle. Amidst skepticism, Shor^{14} and Steane^{15} independently discovered an ingenious way to use entanglement in the service of redundancy and error correction. In fact, they *used entanglement to fight entanglement*!
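The simplest illustration of redundancy-without-cloning is the three-qubit bit-flip code (a building block of, not the full, CSS construction). The sketch below encodes *a*|0⟩ + *b*|1⟩ as the entangled state *a*|000⟩ + *b*|111⟩, which is not three copies of the original state, then diagnoses and undoes a bit flip. For brevity it reads syndrome values as expectation values rather than simulating ancilla measurements, which is valid here because the corrupted state is an eigenstate of both syndrome operators:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product; leftmost operator acts on the leftmost qubit."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode a|0> + b|1>  ->  a|000> + b|111>: entangled, not cloned.
a, b = 0.6, 0.8
encoded = np.zeros(8, dtype=complex)
encoded[0b000] = a
encoded[0b111] = b

# A bit-flip error strikes the middle qubit.
corrupted = kron(I, X, I) @ encoded

# Parity checks Z1.Z2 and Z2.Z3 locate the flip without disturbing
# the encoded amplitudes a and b.
s1 = np.real(corrupted.conj() @ kron(Z, Z, I) @ corrupted)
s2 = np.real(corrupted.conj() @ kron(I, Z, Z) @ corrupted)

# Syndrome (-1, -1) points at the middle qubit; re-apply X to undo it.
recovery = kron(I, X, I) if (s1 < 0 and s2 < 0) else np.eye(8)
recovered = recovery @ corrupted

print(np.allclose(recovered, encoded))  # True: the error is corrected
```

The key point is that the parity checks reveal only *where* the error occurred, never the values of *a* and *b*, so no cloning is required. CSS codes extend this trick to phase flips as well, by running a second, complementary set of parity checks.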

^{9}The fault-tolerance theorem says roughly that, if the rate of decoherence per qubit per gate operation is below a constant threshold, then it is possible, in principle, to correct errors faster than they occur and thereby perform an arbitrarily long quantum computation.

^{10}NIST reference on Constants, Units, and Uncertainty: see https://physics.nist.gov/cgi-bin/cuu/Value?h for the Planck Constant, and https://physics.nist.gov/cgi-bin/cuu/Value?k for the Boltzmann Constant.

^{11}Ball (2018).

^{12}Steane (1996). See also: Steane (1996a), Steane (1998), Raussendorf (2012).

^{13}Calderbank & Shor (1996). See also: Shor (1995).

^{14}Shor (1995).

^{15}Steane (1996a).