
## The Essence of Quantum Computing

*Part 2 of a 3-part series. To be continued in Part III – The Measurement Conundrum.*

## Appendix A: Linear Algebra

### A.1 Introduction

The basic objects in the study of linear algebra are vector spaces, in particular the space Cn of all n-tuples of complex numbers, (z1, …, zn). The elements of a vector space are called vectors. Physicists sometimes refer to complex numbers as c-numbers. The most familiar notation for a state vector describing a quantum system is |Ψ〉. (Physicists can be rather flippant in their choice of symbols other than Ψ, e.g., +, −, ↑, ↓, etc.) The standard vector-matrix notation is used when the state vector is expressed in terms of its components. For example, a two-component state |Ψ〉 = a1|v1〉 + a2|v2〉 has the column-vector form [a1  a2]T.

Like ordinary vectors, state vectors are specified by a particular choice of basis vectors (eigenstates) and a particular set of complex numbers, the amplitudes with which each eigenstate contributes to the complete state vector. Once the state vector |Ψ〉 of a quantum system is known, the expected value of any observable attribute of the system can be calculated, since |Ψ〉 contains the complete information about the system. This is similar to descriptions of systems in classical physics, in which the complete state of the system is known once the time-dependent functions for position and momentum are determined.
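Since expectation values follow directly from |Ψ〉, a small numerical sketch may help make this concrete. The two-level state and the Pauli-Z observable below are illustrative choices, not taken from the paper.

```python
import numpy as np

# An illustrative two-level state |Psi> = a|0> + b|1> as a column vector,
# with amplitudes chosen so that |a|^2 + |b|^2 = 1.
psi = np.array([[np.sqrt(0.7)], [np.sqrt(0.3) * 1j]])

# An observable is represented by a Hermitian matrix; here, Pauli-Z.
Z = np.array([[1, 0], [0, -1]])

# The expected value of the observable is <Psi| Z |Psi>.
expectation = (psi.conj().T @ Z @ psi).item().real
print(expectation)  # 0.7 * (+1) + 0.3 * (-1) = 0.4
```

The conjugate transpose `psi.conj().T` plays the role of the bra 〈Ψ|, so the whole expression is the inner product 〈Ψ|Z|Ψ〉.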

In what follows, it is advisable to pay attention to minute details of notation and syntax. Heed von Neumann’s observation and get used to them! Pay particular attention to all those mathematical properties which remain invariant under a given transformation, especially unitary transformations: under unitary transformations, state vectors only rotate; they do not change length. We have generally stayed with the notation and manner of presentation of the excellent textbook by Nielsen and Chuang39, which should also make it easier for readers to go back and forth between this paper and that book. The notation has a certain elegance.
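The invariance of length under a unitary transformation can be checked numerically. The Hadamard gate and the sample state below are illustrative assumptions, not specific to this paper.

```python
import numpy as np

# The Hadamard gate, a standard 2x2 unitary transformation.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi = np.array([0.6, 0.8j])    # an illustrative normalized state
rotated = H @ psi              # the state after the unitary "rotation"

# Both norms equal 1 (up to floating-point rounding):
print(np.linalg.norm(psi))
print(np.linalg.norm(rotated))
```

The state vector changes direction under H, but its length is preserved, which is exactly the invariance noted above.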

### A.2 Various representations of a state vector

A vector |v〉 having n vector components |v1〉, …, |vn〉 is generally written as the linear summation |v〉 = a1|v1〉 + a2|v2〉 + … + an|vn〉, where a1, a2, …, an are n complex constants. It is customary to represent |v〉 in the alternative matrix form [a1  a2  …  an]T if it is apparent from the context that |v〉 has the components |v1〉, …, |vn〉. Note that in this notation, the matrix representation of |vi〉 will have all ak = 0 for k = 1, …, n except for k = i, for which ai = 1. For example, |v3〉 has the matrix representation [0  0  1  0  …  0]T. Note that when we use matrix notation to describe state transformations, the ordering of the basis vectors in the matrix representation must be settled a priori, so that we can keep track of how each basis vector transforms when it undergoes a transformation. Finally, when the context is clear in this paper, the abstract index form is also used for an abstract linear transformation or a set of basis vectors. For example, |i〉 may stand either for itself or for the basis set of which it is a member.
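As a quick sketch of the column-vector form of a basis vector, here is the matrix representation of |v3〉 in a hypothetical 4-dimensional space (the dimension is an illustrative choice):

```python
import numpy as np

# Matrix representation of the basis vector |v3> in a 4-dimensional space:
# all components a_k are 0 except a_3 = 1 (1-based indexing, as in the text).
n, i = 4, 3
v3 = np.zeros((n, 1))
v3[i - 1] = 1
print(v3.ravel())  # [0. 0. 1. 0.]
```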

### A.3 Bases and linear independence

A set of vectors |v1〉, …, |vn〉 is said to be a spanning set for a vector space V if any vector |v〉 in V can be written as a linear combination |v〉 = a1|v1〉 + … + an|vn〉 for some complex coefficients ai. (When the spanning set is also linearly independent, as defined below, these coefficients are unique for each given |v〉.) A space spanned by n linearly independent vectors is said to have n dimensions; for the space of n-tuples of complex numbers this is stated symbolically as Cn.

An example of a spanning set for the vector space C2 is the set {|v1〉 ≡ [1  0]T, |v2〉 ≡ [0  1]T}, since any vector |v〉 ≡ [a1  a2]T in C2 can be written as the linear combination |v〉 = a1|v1〉 + a2|v2〉. A vector space may have many different spanning sets. For example, a second spanning set for C2 is the set {|w1〉 ≡ (1/√2)[1  1]T, |w2〉 ≡ (1/√2)[1  −1]T}, as once again any vector |v〉 ≡ [a1  a2]T in C2 can be written as the linear combination |v〉 = ((a1 + a2)/√2)|w1〉 + ((a1 − a2)/√2)|w2〉. A set of non-zero vectors |v1〉, …, |vn〉 is said to be linearly dependent if there exists a set of complex numbers a1, …, an, with ai ≠ 0 for at least one value of i, such that a1|v1〉 + a2|v2〉 + … + an|vn〉 = 0; otherwise it is linearly independent. A linearly independent set that spans V is called a basis for V, and such a basis set always exists. The number of elements n in the basis is defined to be the dimension of V. Any two sets of linearly independent vectors which span a vector space V contain the same number n of elements.
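One practical way to test linear independence numerically is to stack the vectors as columns of a matrix and check its rank: the set is linearly independent exactly when the matrix has full column rank. A sketch using the two C2 spanning sets discussed above, plus a deliberately dependent set for contrast:

```python
import numpy as np

# Columns of each matrix are the candidate basis vectors.
set_one = np.array([[1, 0],
                    [0, 1]], dtype=float)                 # {[1 0]^T, [0 1]^T}
set_two = np.array([[1, 1],
                    [1, -1]], dtype=float) / np.sqrt(2)   # second spanning set

print(np.linalg.matrix_rank(set_one))  # 2: linearly independent, a basis for C^2
print(np.linalg.matrix_rank(set_two))  # 2: also a basis for C^2

# A dependent set: the second column is a multiple of the first.
dependent = np.array([[1, 2],
                      [1, 2]], dtype=float)
print(np.linalg.matrix_rank(dependent))  # 1: linearly dependent
```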

### A.4 Linear operators and matrices

A linear operator between vector spaces V and W, where |v1〉, …, |vm〉 is a basis for V and |w1〉, …, |wn〉 is a basis for W (note that m and n may be different), is defined to be any function A: V → W which is linear in its inputs40, A(Σi ai|vi〉) = Σi ai A(|vi〉). A linear operator A is said to be defined on a vector space V if A is a linear operator from V to V. The identity operator IV on a vector space V is defined by the equation IV|v〉 ≡ |v〉 for all vectors |v〉. If the context is clear, IV is often abbreviated to I. In addition, there is a zero operator, denoted by 0, which maps all vectors to the zero vector, i.e., 0|v〉 ≡ 0. Note that the ket notation is not used for the zero vector since, by convention, it is reserved for the vector |0〉 in quantum computing, where it means something entirely different.

Sometimes it is easier to see linear operators in terms of their equivalent matrix representation. The claim that an n × m matrix A is a linear operator simply means that A(Σi ai|vi〉) = Σi ai A|vi〉 is true as an equation, where the operation is matrix multiplication of A by column vectors. The linear operator’s matrix representation is given by A|vj〉 = Σi Aij|wi〉 for each j in the range 1, …, m, where Aij is an element of A when represented in matrix form; that is, the jth column of the matrix holds the coefficients of A|vj〉 in the output basis. The matrix representation of A is completely equivalent to the operator A. However, to make the connection between matrices and linear operators we must specify a set of input and output basis states for the input and output vector spaces, namely V and W respectively, of the linear operator A.
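The connection between an operator and its matrix can be made concrete: each column of the matrix is the image of one input basis vector. A minimal sketch assuming standard bases for V = W = C2; the names `matrix_of` and `swap` are illustrative, not from the paper.

```python
import numpy as np

def matrix_of(operator, dim):
    """Build the matrix of `operator` by applying it to each standard basis
    vector and stacking the results as columns."""
    columns = []
    for j in range(dim):
        e_j = np.zeros(dim)
        e_j[j] = 1.0               # the j-th standard basis vector |v_j>
        columns.append(operator(e_j))
    return np.column_stack(columns)

# Illustrative operator: swap the two components of a vector in C^2.
swap = lambda v: v[::-1].copy()
A = matrix_of(swap, 2)
print(A)  # [[0. 1.]
          #  [1. 0.]]
```

Note that the matrix depends on the chosen bases: the same operator has a different matrix if the input or output basis is changed, which is exactly the point made above.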

40 A: V → W means that A is a mapping from V to W, i.e., the input to A is V and the output of A is W. The space V is called the domain of A, and W the codomain of A. The range of A is the space Y = {y | y ∈ W and y = Ax for some x ∈ V}.
