
In linear algebra, the **trace** of a square matrix **A**, denoted tr(**A**),^{[1]} is defined to be the sum of elements on the main diagonal (from the upper left to the lower right) of **A**. The trace is only defined for a square matrix (*n* × *n*).

It can be proved that the trace of a matrix is the sum of its (complex) eigenvalues, counted with multiplicities. It can also be proved that tr(**A**) = tr(**C**^{−1}**AC**) for any invertible matrix **C**; the trace is therefore invariant under similarity and independent of the choice of basis, which makes it possible to define the trace of a linear operator mapping a finite-dimensional vector space into itself.

The trace is related to the derivative of the determinant (see Jacobi's formula).

The **trace** of an *n* × *n* square matrix **A** is defined as^{[1]}^{[2]}^{[3]}^{: 34 }

$$\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn},$$

where *a*_{ii} denotes the entry on the *i*th row and *i*th column of **A**.

Expressions like tr(exp(**A**)), where **A** is a square matrix, occur so often in some fields (e.g. multivariate statistical theory), that a shorthand notation has become common:

$$\operatorname{tre}(\mathbf{A}) := \operatorname{tr}(\exp(\mathbf{A})).$$

tre is sometimes referred to as the **exponential trace** function; it is used in the Golden–Thompson inequality.
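A quick numerical illustration of the shorthand and of the Golden–Thompson inequality, sketched with NumPy and SciPy's `expm` (the helper name `tre` and the random Hermitian test matrices are illustrative choices, not from the original text):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)

# Two random Hermitian matrices.
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2
B = (Y + Y.conj().T) / 2

def tre(M):
    """Exponential trace: tre(M) = tr(exp(M)); real for Hermitian M."""
    return np.trace(expm(M)).real

# Golden-Thompson inequality: tr(exp(A + B)) <= tr(exp(A) exp(B)).
lhs = tre(A + B)
rhs = np.trace(expm(A) @ expm(B)).real
print(lhs, rhs)
assert lhs <= rhs + 1e-9
```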

For example, let **A** be the matrix

$$\mathbf{A} = \begin{pmatrix} 1 & 0 & 3 \\ 11 & 5 & 2 \\ 6 & 12 & -5 \end{pmatrix}$$

Then

$$\operatorname{tr}(\mathbf{A}) = 1 + 5 + (-5) = 1$$
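This arithmetic is easy to check numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[ 1,  0,  3],
              [11,  5,  2],
              [ 6, 12, -5]])

# Sum of the main-diagonal entries: 1 + 5 + (-5) = 1.
print(np.trace(A))         # 1
print(A.diagonal().sum())  # equivalent, spelled out
```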

The trace is a linear mapping. That is,^{[1]}^{[2]}

$$\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}), \qquad \operatorname{tr}(c\mathbf{A}) = c \operatorname{tr}(\mathbf{A})$$

for all square matrices **A** and **B** of the same size, and all scalars *c*.

A matrix and its transpose have the same trace:^{[1]}^{[2]}^{[3]}^{: 34 }

$$\operatorname{tr}(\mathbf{A}) = \operatorname{tr}\left(\mathbf{A}^{\mathsf{T}}\right).$$

This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal.

The trace of a square matrix which is the product of two real matrices can be rewritten as the sum of entry-wise products of their elements, i.e. as the sum of all elements of their Hadamard product. Phrased directly, if **A** and **B** are two *m* × *n* real matrices, then:

$$\operatorname{tr}\left(\mathbf{A}^{\mathsf{T}}\mathbf{B}\right) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij} b_{ij}.$$

If one views any *m* × *n* real matrix as a vector of length *mn* (an operation called vectorization) then the above operation on **A** and **B** coincides with the standard dot product. According to the above expression, tr(**A**^{⊤}**A**) is a sum of squares and hence is nonnegative, equal to zero if and only if **A** is zero.^{[4]}^{: 7 } Furthermore, as noted in the above formula, tr(**A**^{⊤}**B**) = tr(**B**^{⊤}**A**). These demonstrate the positive-definiteness and symmetry required of an inner product; it is common to call tr(**A**^{⊤}**B**) the Frobenius inner product of **A** and **B**. This is a natural inner product on the vector space of all real matrices of fixed dimensions. The norm derived from this inner product is called the Frobenius norm, and it satisfies a submultiplicative property, as can be proven with the Cauchy–Schwarz inequality:

$$\|\mathbf{A}\mathbf{B}\|_F \le \|\mathbf{A}\|_F \, \|\mathbf{B}\|_F.$$
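A short NumPy sketch of these identities (the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Frobenius inner product: tr(A^T B) = sum of the Hadamard product,
# i.e. the dot product of the vectorized matrices.
lhs = np.trace(A.T @ B)
assert np.isclose(lhs, (A * B).sum())
assert np.isclose(lhs, A.ravel() @ B.ravel())

# tr(A^T A) is a sum of squares, hence nonnegative.
assert np.trace(A.T @ A) >= 0

# Submultiplicativity of the Frobenius norm (np.linalg.norm's default).
C = rng.standard_normal((4, 5))
assert np.linalg.norm(A @ C) <= np.linalg.norm(A) * np.linalg.norm(C)
```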

The Frobenius inner product may be extended to a Hermitian inner product on the complex vector space of all complex matrices of a fixed size, by replacing **B** by its complex conjugate.

The symmetry of the Frobenius inner product may be phrased more directly as follows: the matrices in the trace of a product can be switched without changing the result. If **A** and **B** are *m* × *n* and *n* × *m* real or complex matrices, respectively, then^{[1]}^{[2]}^{[3]}^{: 34 }^{[note 1]}

$$\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}).$$

This is notable both for the fact that **AB** does not usually equal **BA**, and also since the trace of either does not usually equal tr(**A**)tr(**B**).^{[note 2]} The similarity-invariance of the trace, meaning that tr(**A**) = tr(**P**^{−1}**AP**) for any square matrix **A** and any invertible matrix **P** of the same dimensions, is a fundamental consequence. This is proved by

$$\operatorname{tr}\left(\mathbf{P}^{-1}(\mathbf{A}\mathbf{P})\right) = \operatorname{tr}\left((\mathbf{A}\mathbf{P})\mathbf{P}^{-1}\right) = \operatorname{tr}(\mathbf{A}).$$

Additionally, for real column vectors **a** and **b**, the trace of the outer product is equivalent to the inner product:

$$\operatorname{tr}\left(\mathbf{b}\mathbf{a}^{\mathsf{T}}\right) = \mathbf{a}^{\mathsf{T}}\mathbf{b}.$$

More generally, the trace is *invariant under cyclic permutations*, that is,

$$\operatorname{tr}(\mathbf{ABCD}) = \operatorname{tr}(\mathbf{BCDA}) = \operatorname{tr}(\mathbf{CDAB}) = \operatorname{tr}(\mathbf{DABC}).$$

This is known as the *cyclic property*.

Arbitrary permutations are not allowed: in general,

$$\operatorname{tr}(\mathbf{ABC}) \ne \operatorname{tr}(\mathbf{ACB}).$$

However, if products of *three* symmetric matrices are considered, any permutation is allowed, since:

$$\operatorname{tr}(\mathbf{ABC}) = \operatorname{tr}\left((\mathbf{ABC})^{\mathsf{T}}\right) = \operatorname{tr}(\mathbf{CBA}) = \operatorname{tr}(\mathbf{ACB}),$$

where the middle equality uses the symmetry of **A**, **B**, **C** and the last uses the cyclic property.
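The cyclic property, its failure for arbitrary permutations, and the symmetric special case can all be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
tr = np.trace

# Cyclic permutations preserve the trace ...
assert np.isclose(tr(A @ B @ C), tr(B @ C @ A))
assert np.isclose(tr(A @ B @ C), tr(C @ A @ B))

# ... but swapping two factors generally does not.
print(tr(A @ B @ C), tr(A @ C @ B))  # different in general

# For three symmetric matrices, every permutation agrees.
S, T, U = ((M + M.T) / 2 for M in (A, B, C))
assert np.isclose(tr(S @ T @ U), tr(S @ U @ T))
```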

The trace of the Kronecker product of two matrices is the product of their traces:

$$\operatorname{tr}(\mathbf{A} \otimes \mathbf{B}) = \operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B}).$$

The following three properties:

$$\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}), \qquad \operatorname{tr}(c\mathbf{A}) = c \operatorname{tr}(\mathbf{A}), \qquad \operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}),$$

characterize the trace up to a scalar multiple in the following sense: if *f* is a linear functional on the space of square matrices that satisfies *f*(**xy**) = *f*(**yx**), then *f* and tr are proportional.^{[note 3]}

If **A** is symmetric and **B** is skew-symmetric, then

$$\operatorname{tr}(\mathbf{A}\mathbf{B}) = 0.$$

Given any *n* × *n* real or complex matrix **A**, there is

$$\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} \lambda_i,$$

where λ_{1}, ..., λ_{n} are the eigenvalues of **A** counted with multiplicity. This holds true even if **A** is a real matrix and some (or all) of the eigenvalues are complex numbers. This may be regarded as a consequence of the existence of the Jordan canonical form, together with the similarity-invariance of the trace discussed above.
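A numerical illustration, using a real matrix with a pair of complex-conjugate eigenvalues:

```python
import numpy as np

# A 2x2 rotation block (eigenvalues +/- i) plus a real eigenvalue 2.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

lam = np.linalg.eigvals(A)      # contains i, -i, and 2
# The complex eigenvalues sum (with multiplicity) to the real trace.
assert np.isclose(lam.sum(), np.trace(A))
print(lam.sum(), np.trace(A))   # (2+0j) 2.0
```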

When both **A** and **B** are *n* × *n* matrices, the trace of the (ring-theoretic) commutator of **A** and **B** vanishes: tr([**A**, **B**]) = 0, because tr(**AB**) = tr(**BA**) and tr is linear. One can state this as "the trace is a map of Lie algebras gl_{n} → *k* from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices: every commutator has trace zero, the trace is similarity-invariant, and tr(**1**_{n}) = *n* ≠ 0.

Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.^{[note 4]} Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.

- The trace of the *n* × *n* identity matrix is the dimension of the space, namely *n*. This leads to generalizations of dimension using trace.
- The trace of a Hermitian matrix is real, because the elements on the diagonal are real.
- The trace of a permutation matrix is the number of fixed points of the corresponding permutation, because the diagonal term *a*_{ii} is 1 if the *i*th point is fixed and 0 otherwise.
- The trace of a projection matrix is the dimension of the target space. For example, the matrix **P**_{X} = **X**(**X**^{⊤}**X**)^{−1}**X**^{⊤}, the orthogonal projection onto the column space of **X**, is idempotent.
- More generally, the trace of any idempotent matrix, i.e. one with **A**^{2} = **A**, equals its own rank; see the numerical sketch after this list.
- The trace of a nilpotent matrix is zero. When the characteristic of the base field is zero, the converse also holds: if tr(**A**^{k}) = 0 for all *k*, then **A** is nilpotent. When the characteristic *n* > 0 is positive, the identity in *n* dimensions is a counterexample, as tr(**1**_{n}^{k}) = tr(**1**_{n}) = *n* ≡ 0 (mod *n*), but the identity is not nilpotent.
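Several of the examples above can be verified numerically; a sketch with NumPy (the particular matrices are illustrative):

```python
import numpy as np

# Identity: trace equals the dimension of the space.
assert np.trace(np.eye(4)) == 4

# Permutation matrix: trace counts fixed points.
# The permutation swapping 0 and 1 fixes the two points 2 and 3.
P = np.eye(4)[[1, 0, 2, 3]]
assert np.trace(P) == 2

# Idempotent projection P_X = X (X^T X)^{-1} X^T: trace equals rank.
X = np.random.default_rng(3).standard_normal((5, 2))
Px = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(Px @ Px, Px)                             # idempotent
assert np.isclose(np.trace(Px), np.linalg.matrix_rank(Px))  # rank 2

# Nilpotent matrix: trace is zero.
N = np.triu(np.ones((3, 3)), k=1)   # strictly upper triangular
assert np.allclose(np.linalg.matrix_power(N, 3), 0)
assert np.trace(N) == 0
```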

In general, given some linear map *f* : *V* → *V* (where *V* is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of *f*, that is, choosing a basis for *V*, describing *f* as a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise to similar matrices; this allows for a basis-independent definition of the trace of a linear map.

Such a definition can be given using the canonical isomorphism between the space End(*V*) of linear maps on V and *V* ⊗ *V**, where *V** is the dual space of V. Let v be in V and let f be in *V**. Then the trace of the indecomposable element *v* ⊗ *f* is defined to be *f* (*v*); the trace of a general element is defined by linearity. Using an explicit basis for V and the corresponding dual basis for *V**, one can show that this gives the same definition of the trace as given above.

If **A** is a linear operator represented by a square matrix with real or complex entries and if *λ*_{1}, …, *λ*_{n} are the eigenvalues of **A** (listed according to their algebraic multiplicities), then

$$\operatorname{tr}(\mathbf{A}) = \sum_i \lambda_i$$

This follows from the fact that **A** is always similar to its Jordan form, an upper triangular matrix having *λ*_{1}, …, *λ*_{n} on the main diagonal. In contrast, the determinant of **A** is the *product* of its eigenvalues; that is,

$$\det(\mathbf{A}) = \prod_i \lambda_i$$

More generally, for any positive integer *k*,

$$\operatorname{tr}\left(\mathbf{A}^{k}\right) = \sum_i \lambda_i^{k}$$

The trace corresponds to the derivative of the determinant: it is the Lie algebra analog of the (Lie group) map of the determinant. This is made precise in Jacobi's formula for the derivative of the determinant.

As a particular case, *at the identity*, the derivative of the determinant actually amounts to the trace: tr = det′_{I}. From this (or from the connection between the trace and the eigenvalues), one can derive a connection between the trace function, the exponential map between a Lie algebra and its Lie group (or concretely, the matrix exponential function), and the determinant:

$$\det(\exp(\mathbf{A})) = \exp(\operatorname{tr}(\mathbf{A})).$$
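A one-line numerical check of this identity, sketched with SciPy's `expm` on a random matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(4).standard_normal((4, 4))

# det(exp(A)) = exp(tr(A))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```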

For example, consider the one-parameter family of linear transformations given by rotation through angle *θ*,

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

These transformations all have determinant 1, so they preserve area. The derivative of this family at *θ* = 0, the identity rotation, is the antisymmetric matrix

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area.

A related characterization of the trace applies to linear vector fields. Given a matrix **A**, define a vector field **F** on **R**^{n} by **F**(**x**) = **Ax**. The components of this vector field are linear functions (given by the rows of **A**). Its divergence div **F** is a constant function, whose value is equal to tr(**A**).

By the divergence theorem, one can interpret this in terms of flows: if **F**(**x**) represents the velocity of a fluid at location **x** and U is a region in **R**^{n}, the net flow of the fluid out of U is given by tr(**A**) · vol(*U*), where vol(*U*) is the volume of U.
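The identity div **F** = tr(**A**) can be verified symbolically; a minimal sketch with SymPy (the particular matrix is illustrative):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, 2, 0],
               [0, 3, 4],
               [5, 0, 6]])

# Linear vector field F(x) = A x.
F = A * sp.Matrix([x, y, z])

# Divergence: sum of dF_i / dx_i -- a constant, equal to tr(A) = 10.
div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
assert sp.simplify(div_F - A.trace()) == 0
```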

The trace is a linear operator, hence it commutes with the derivative:

$$\operatorname{d}\operatorname{tr}(\mathbf{X}) = \operatorname{tr}(\operatorname{d}\mathbf{X}).$$

The trace of a 2 × 2 complex matrix is used to classify Möbius transformations. First, the matrix is normalized to make its determinant equal to one. Then, if the square of the trace is 4, the corresponding transformation is *parabolic*. If the square is in the interval [0,4), it is *elliptic*. Finally, if the square is greater than 4, the transformation is *loxodromic*. See classification of Möbius transformations.
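A sketch of this classification as a small Python function (the function name and the tolerance handling are illustrative; it assumes the input matrix is invertible):

```python
import numpy as np

def classify_mobius(M):
    """Classify the Mobius transformation induced by an invertible 2x2 matrix."""
    M = M / np.sqrt(np.linalg.det(M).astype(complex))  # normalize det to 1
    t2 = np.trace(M) ** 2                              # square of the trace
    if np.isclose(t2.imag, 0.0):
        if np.isclose(t2.real, 4.0):
            return "parabolic"
        if 0 <= t2.real < 4:
            return "elliptic"
    return "loxodromic"   # includes the case tr^2 > 4

print(classify_mobius(np.array([[1, 1], [0, 1]])))    # parabolic
print(classify_mobius(np.array([[0, -1], [1, 0]])))   # elliptic
print(classify_mobius(np.array([[2, 0], [0, 0.5]])))  # loxodromic
```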

The trace is used to define characters of group representations. Two representations **A**, **B** : *G* → *GL*(*V*) of a group G are equivalent (up to change of basis on V) if tr(**A**(*g*)) = tr(**B**(*g*)) for all *g* ∈ *G*.

The trace also plays a central role in the distribution of quadratic forms.

The trace is a map of Lie algebras tr : gl_{n} → *K* from the Lie algebra of linear operators on an *n*-dimensional space (*n* × *n* matrices with entries in *K*) to the Lie algebra *K* of scalars; as *K* is Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes:

$$\operatorname{tr}([\mathbf{A}, \mathbf{B}]) = 0 \quad \text{for all } \mathbf{A}, \mathbf{B} \in \mathfrak{gl}_n.$$

The kernel of this map consists of the matrices whose trace is zero; such a matrix is often said to be **traceless** or **trace free**, and these matrices form the simple Lie algebra sl_{n}, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra is the matrices which do not alter volume of *infinitesimal* sets.

In fact, there is an internal direct sum decomposition gl_{n} = sl_{n} ⊕ *K* of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as:

$$\mathbf{A} \mapsto \frac{1}{n}\operatorname{tr}(\mathbf{A})\,\mathbf{1}_n.$$

Formally, one can compose the trace (the counit map) with the unit map *K* → gl_{n} of "inclusion of scalars" to obtain a map gl_{n} → gl_{n} mapping onto scalars; on the scalar subalgebra this composition is multiplication by *n*, so dividing by *n* makes it a projection, yielding the formula above.

In terms of short exact sequences, one has

$$0 \to \mathfrak{sl}_n \to \mathfrak{gl}_n \xrightarrow{\operatorname{tr}} K \to 0,$$

which is analogous to the sequence 1 → SL_{n} → GL_{n} → *K*^{×} → 1 (where the last map is the determinant) for Lie groups.
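The decomposition into a scalar part and a traceless part is easy to exhibit numerically; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
n = A.shape[0]

# Projection onto scalar matrices: A -> (tr(A)/n) I.
scalar_part = (np.trace(A) / n) * np.eye(n)
traceless_part = A - scalar_part            # lands in sl_n

assert np.isclose(np.trace(traceless_part), 0)
assert np.allclose(scalar_part + traceless_part, A)  # direct sum
```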

The bilinear form (where **X**, **Y** are square matrices)

$$B(\mathbf{X}, \mathbf{Y}) = \operatorname{tr}(\operatorname{ad}(\mathbf{X})\operatorname{ad}(\mathbf{Y})), \quad \text{where } \operatorname{ad}(\mathbf{X})\mathbf{Y} = [\mathbf{X}, \mathbf{Y}] = \mathbf{X}\mathbf{Y} - \mathbf{Y}\mathbf{X},$$

is called the Killing form, which is used for the classification of Lie algebras.

The trace defines a bilinear form:

$$(\mathbf{X}, \mathbf{Y}) \mapsto \operatorname{tr}(\mathbf{X}\mathbf{Y}).$$

The form is symmetric, non-degenerate^{[note 5]} and associative in the sense that:

$$\operatorname{tr}(\mathbf{X}[\mathbf{Y}, \mathbf{Z}]) = \operatorname{tr}([\mathbf{X}, \mathbf{Y}]\mathbf{Z}).$$

For a complex simple Lie algebra (such as sl_{n}), all such bilinear forms are proportional to one another; in particular, to the Killing form.

Two matrices **X** and **Y** are said to be *trace orthogonal* if

$$\operatorname{tr}(\mathbf{X}\mathbf{Y}) = 0.$$

The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.

If *K* is trace-class, then for any orthonormal basis (*e*_{n})_{n}, the trace is given by

$$\operatorname{tr}(K) = \sum_n \langle e_n, K e_n \rangle,$$

and is finite and independent of the orthonormal basis.^{[5]}

The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator *Z* which lives on a product space *A* ⊗ *B* is equal to the partial traces over *A* and *B*:

$$\operatorname{tr}(Z) = \operatorname{tr}_A\left(\operatorname{tr}_B(Z)\right) = \operatorname{tr}_B\left(\operatorname{tr}_A(Z)\right).$$
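For matrices, the partial trace can be computed by reshaping the operator into a four-index tensor and contracting one pair of indices; a NumPy sketch (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
dA, dB = 2, 3

# An operator Z on A (x) B, stored as Z[i, j, k, l] = <ij| Z |kl>.
Z = rng.standard_normal((dA * dB, dA * dB)).reshape(dA, dB, dA, dB)

tr_B = np.einsum('ijkj->ik', Z)   # partial trace over B: dA x dA
tr_A = np.einsum('ijil->jl', Z)   # partial trace over A: dB x dB
full = np.einsum('ijij->', Z)     # full trace

# The full trace equals the trace of either partial trace.
assert np.isclose(np.trace(tr_B), full)
assert np.isclose(np.trace(tr_A), full)
```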

For more properties and a generalization of the partial trace, see traced monoidal categories.

If *A* is a general associative algebra over a field *k*, then a trace on *A* is often defined to be any map tr : *A* → *k* which vanishes on commutators: tr([*a*, *b*]) = 0 for all *a*, *b* ∈ *A*. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.

A supertrace is the generalization of a trace to the setting of superalgebras.

The operation of tensor contraction generalizes the trace to arbitrary tensors.

Given a vector space V, there is a natural bilinear map *V* × *V*^{∗} → *F* given by sending (*v*, φ) to the scalar φ(*v*). The universal property of the tensor product *V* ⊗ *V*^{∗} automatically implies that this bilinear map is induced by a linear functional on *V* ⊗ *V*^{∗}.^{[6]}

Similarly, there is a natural bilinear map *V* × *V*^{∗} → Hom(*V*, *V*) given by sending (*v*, φ) to the linear map *w* ↦ φ(*w*)*v*. The universal property of the tensor product, just as used previously, says that this bilinear map is induced by a linear map *V* ⊗ *V*^{∗} → Hom(*V*, *V*). If V is finite-dimensional, then this linear map is a linear isomorphism.^{[6]} This fundamental fact is a straightforward consequence of the existence of a (finite) basis of V, and can also be phrased as saying that any linear map *V* → *V* can be written as the sum of (finitely many) rank-one linear maps. Composing the inverse of the isomorphism with the linear functional obtained above results in a linear functional on Hom(*V*, *V*). This linear functional is exactly the same as the trace.

Using the definition of trace as the sum of diagonal elements, the matrix formula tr(**AB**) = tr(**BA**) is straightforward to prove, and was given above. In the present perspective, one is considering linear maps S and T, and viewing them as sums of rank-one maps, so that there are linear functionals *φ*_{i} and *ψ*_{j} and nonzero vectors *v*_{i} and *w*_{j} such that *S*(u) = ∑*φ*_{i}(*u*)*v*_{i} and *T*(u) = ∑*ψ*_{j}(*u*)*w*_{j} for any u in V. Then

$$(S \circ T)(u) = \sum_{i,j} \psi_j(u)\,\varphi_i(w_j)\,v_i$$

for any *u* in *V*. The rank-one linear map *u* ↦ *ψ*_{j}(*u*)*φ*_{i}(*w*_{j})*v*_{i} has trace *ψ*_{j}(*v*_{i})*φ*_{i}(*w*_{j}) and so

$$\operatorname{tr}(S \circ T) = \sum_{i,j} \psi_j(v_i)\,\varphi_i(w_j).$$

Following the same procedure with S and T reversed, one finds exactly the same formula, proving that tr(*S* ∘ *T*) equals tr(*T* ∘ *S*).

The above proof can be regarded as being based upon tensor products, given that the fundamental identity of End(*V*) with *V* ⊗ *V*^{∗} is equivalent to the expressibility of any linear map as the sum of rank-one linear maps. As such, the proof may be written in the notation of tensor products. Then one may consider the multilinear map *V* × *V*^{∗} × *V* × *V*^{∗} → *V* ⊗ *V*^{∗} given by sending (*v*, *φ*, *w*, *ψ*) to *φ*(*w*)*v* ⊗ *ψ*. Further composition with the trace map then results in *φ*(*w*)*ψ*(*v*), and this is unchanged if one were to have started with (*w*, *ψ*, *v*, *φ*) instead. One may also consider the bilinear map End(*V*) × End(*V*) → End(*V*) given by sending (*f*, *g*) to the composition *f* ∘ *g*, which is then induced by a linear map End(*V*) ⊗ End(*V*) → End(*V*). It can be seen that this coincides with the linear map *V* ⊗ *V*^{∗} ⊗ *V* ⊗ *V*^{∗} → *V* ⊗ *V*^{∗}. The established symmetry upon composition with the trace map then establishes the equality of the two traces.^{[6]}

For any finite-dimensional vector space *V*, there is a natural linear map *F* → *V* ⊗ *V*^{∗}; in the language of linear maps, it assigns to a scalar *c* the linear map *c*⋅id_{V}. Sometimes this is called the *coevaluation map*, and the trace *V* ⊗ *V*^{∗} → *F* is called the *evaluation map*.^{[6]} These structures can be axiomatized to define categorical traces in the abstract setting of category theory.

**^** This is immediate from the definition of the matrix product: tr(**AB**) = ∑_{i=1}^{m} (**AB**)_{ii} = ∑_{i=1}^{m} ∑_{j=1}^{n} *a*_{ij}*b*_{ji} = ∑_{j=1}^{n} ∑_{i=1}^{m} *b*_{ji}*a*_{ij} = ∑_{j=1}^{n} (**BA**)_{jj} = tr(**BA**).

**^** For example, if **A** = ( 0 1 ; 0 0 ) and **B** = ( 0 0 ; 1 0 ), then **AB** = ( 1 0 ; 0 0 ), so that tr(**AB**) = 1 ≠ 0 ⋅ 0 = tr(**A**)tr(**B**).

**^** Proof: Let **E**_{ij} denote the standard basis matrices (so that **E**_{ij}**E**_{kl} = δ_{jk}**E**_{il}) and note that *f*(**E**_{ij}) = 0 if *i* ≠ *j* and *f*(**E**_{jj}) = *f*(**E**_{11}) for all *j*; hence *f*(**A**) = ∑_{i} *a*_{ii} *f*(**E**_{11}) = *f*(**E**_{11}) tr(**A**).

**^** Proof: sl_{n} is a semisimple Lie algebra and thus every element in it is a linear combination of commutators of some pairs of elements, otherwise the derived algebra would be a proper ideal.

**^** This follows from the fact that tr(**A*****A**) = 0 if and only if **A** = **0**.

1. ^ *a* *b* *c* *d* *e* "Rank, trace, determinant, transpose, and inverse of matrices". *fourier.eng.hmc.edu*. Retrieved 2020-09-09.
2. ^ *a* *b* *c* *d* Weisstein, Eric W. (2003). "Trace (matrix)". *CRC Concise Encyclopedia of Mathematics* (Second edition of 1999 original ed.). Boca Raton, FL: Chapman & Hall. doi:10.1201/9781420035223. ISBN 1-58488-347-2. MR 1944431. Zbl 1079.00009. Retrieved 2020-09-09.
3. ^ *a* *b* *c* *d* Lipschutz, Seymour; Lipson, Marc (September 2005). *Schaum's Outline of Theory and Problems of Linear Algebra*. McGraw-Hill. ISBN 9780070605022.
4. ^ Horn, Roger A.; Johnson, Charles R. (2013). *Matrix Analysis* (2nd ed.). Cambridge University Press. ISBN 9780521839402.
5. ^ Teschl, G. (30 October 2014). *Mathematical Methods in Quantum Mechanics*. Graduate Studies in Mathematics. Vol. 157 (2nd ed.). American Mathematical Society. ISBN 978-1470417048.
6. ^ *a* *b* *c* *d* Kassel, Christian (1995). *Quantum Groups*. Graduate Texts in Mathematics. Vol. 155. New York: Springer-Verlag. doi:10.1007/978-1-4612-0783-2. ISBN 0-387-94370-6. MR 1321145. Zbl 0808.17003.
