Inner product space

Summary

Geometric interpretation of the angle between two vectors defined using an inner product
Scalar product spaces, inner product spaces, Hermitian product spaces.
Scalar product spaces, over any field, have "scalar products" that are symmetric and linear in the first argument. Hermitian product spaces are restricted to the field of complex numbers and have "Hermitian products" that are conjugate-symmetric and linear in the first argument. Inner product spaces, defined over the real or complex numbers, have "inner products" that are linear in the first argument, conjugate-symmetric, and positive-definite. Unlike inner products, scalar products and Hermitian products need not be positive-definite.

In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space[1][2]) is a real vector space or a complex vector space with a binary operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets, as in $\langle a, b \rangle$. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates.[3] Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.[4]

An inner product naturally induces an associated norm (denoted $|x|$ and $|y|$ in the picture); so, every inner product space is a normed vector space. If this normed space is also complete (that is, a Banach space) then the inner product space is a Hilbert space.[1] If an inner product space $H$ is not a Hilbert space, it can be extended by completion to a Hilbert space $\overline{H}$. This means that $H$ is a linear subspace of $\overline{H}$, the inner product of $H$ is the restriction of that of $\overline{H}$, and $H$ is dense in $\overline{H}$ for the topology defined by the norm.[1][5]

Definition

In this article, the field of scalars denoted $F$ is either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$.

Formally, an inner product space is a vector space $V$ over the field $F$ together with a map

$\langle \cdot, \cdot \rangle : V \times V \to F$

called an inner product (linear in its first argument) that satisfies the following conditions (1), (2), and (3) for all vectors $x, y, z \in V$ and all scalars $a \in F$:[1][6][7]

  1. Linearity in the first argument:[note 1]

     $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$     (Additivity in the 1st argument)

     $\langle a x, y \rangle = a \, \langle x, y \rangle$     (Homogeneity in the 1st argument)

    • Each of the above two properties implies $\langle \mathbf{0}, x \rangle = 0$ for every vector $x$.[proof 1]
    • This map is called a sesquilinear form if condition (1) holds and if $\langle \cdot, \cdot \rangle$ is also antilinear (sometimes called conjugate linear) in its second argument,[1] which by definition means that the following always hold:

       $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$     (Additivity in the 2nd argument)

       $\langle x, a y \rangle = \overline{a} \, \langle x, y \rangle$     (Conjugate homogeneity in the 2nd argument)

      (note the complex conjugation of the scalar $a$ in the conjugate homogeneity property above). Every inner product is a special type of sesquilinear form.

  2. Hermitian symmetry or conjugate symmetry:[note 2]

     $\langle x, y \rangle = \overline{\langle y, x \rangle}$     (Conjugate symmetry)

    • Conditions (1) and (2) are the defining properties of a Hermitian form, which is a special type of sesquilinear form.[1] A complex sesquilinear form is Hermitian if and only if $\langle x, x \rangle$ is real for all $x$.[1][proof 2]
    • This condition (Hermitian symmetry) implies that $\langle x, x \rangle$ is a real number for all $x$.[proof 3]
    • If $F = \mathbb{R}$ then this condition holds if and only if $\langle \cdot, \cdot \rangle$ is a symmetric map, meaning that $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y$. If $\langle \cdot, \cdot \rangle$ is symmetric and condition (1) holds then $\langle \cdot, \cdot \rangle$ is a bilinear map (although there exist bilinear maps that are not symmetric). If $F = \mathbb{C}$ and $\langle \cdot, \cdot \rangle$ takes a non-real scalar as a value then it cannot be both symmetric and conjugate symmetric.
  3. Positive definiteness:

     if $x \neq \mathbf{0}$ then $\langle x, x \rangle > 0$     (Positive definite)

The above three conditions are the defining properties of an inner product, which is why an inner product is sometimes (equivalently) defined as being a positive-definite Hermitian form. An inner product can equivalently be defined as a positive-definite sesquilinear form.[1][note 3] If $F = \mathbb{R}$ (respectively, if $F = \mathbb{C}$) then an inner product is called a real inner product (respectively, a complex inner product).

Assuming that condition (1) holds, condition (3) will hold if and only if both conditions (4) and (5) below hold:[1][5]

  4. Point-separating or definiteness:

     if $\langle x, x \rangle = 0$ then $x = \mathbf{0}$     (Point-separating)

    • If the assignment $\| \cdot \|$ given by $\|x\| := \sqrt{\langle x, x \rangle}$ defines a seminorm on $V$ then this seminorm will be a norm if and only if condition (4) is satisfied.
  5. Positive semi-definiteness or nonnegative-definiteness:

     $\langle x, x \rangle \geq 0$ for every $x$     (Positive semi-definiteness)

    • This condition holds if and only if the assignment $\| \cdot \|$ defined by $\|x\| := \sqrt{\langle x, x \rangle}$ is well-defined and valued in $[0, \infty)$. If this condition does not hold then this assignment does not define a seminorm (nor a norm) on $V$ (because by definition, seminorms and norms must be non-negative real-valued).

Conditions (1) through (5) are satisfied by every inner product.
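
To make these conditions concrete, here is a minimal numerical sketch (our own illustration, not part of the formal treatment) that checks conditions (1), (2), and (3) for the standard inner product $\langle x, y \rangle = \sum_k x_k \overline{y_k}$ on $\mathbb{C}^3$; the helper name inner is ours.

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n: linear in the first argument,
    # conjugate-linear in the second (the convention of this article).
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
a = 2.0 - 1.5j

# (1) Linearity in the first argument
assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))
assert np.isclose(inner(a * x, y), a * inner(x, y))
# (2) Conjugate symmetry
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# (3) Positive definiteness: <x, x> is real and positive for x != 0
assert np.isclose(inner(x, x).imag, 0.0) and inner(x, x).real > 0
```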

Positive semi-definite Hermitian form:

Conditions (1), (2), and (5) are the defining properties of a positive semi-definite Hermitian form, which allows for the definition of a canonical seminorm on $V$ given by $\|x\| := \sqrt{\langle x, x \rangle}$.

This seminorm will be a norm if and only if condition (4) is satisfied. The constant map $(x, y) \mapsto 0$ is a positive semi-definite Hermitian form. Indeed, this constant map is bilinear, sesquilinear, symmetric, conjugate symmetric, and nonnegative-definite; but it is positive definite if and only if $V = \{\mathbf{0}\}$.

Conjugate symmetry versus bilinearity and symmetry:

Reasons why complex inner products are required to be conjugate symmetric, rather than bilinear or symmetric, are now given. Assume that $F = \mathbb{C}$ and that $\langle \cdot, \cdot \rangle$ satisfies $\langle x, x \rangle \neq 0$ for some $x \in V$. If $\langle \cdot, \cdot \rangle$ is bilinear then, whenever $s$ is a scalar such that $s^2 \notin \mathbb{R}$, it is impossible for both $\langle x, x \rangle$ and $\langle s x, s x \rangle = s^2 \langle x, x \rangle$ to be real numbers;[proof 4] consequently, in this case the bilinear map $\langle \cdot, \cdot \rangle$ cannot be nonnegative-definite and thus the assignment $x \mapsto \sqrt{\langle x, x \rangle}$ will not define a seminorm (nor a norm) on $V$. If, rather than being bilinear, the map $\langle \cdot, \cdot \rangle$ is instead sesquilinear (which will be true when conditions (1) and (2) are satisfied), then for any scalar $s$, $\langle s x, s x \rangle = |s|^2 \langle x, x \rangle$ is a real number if and only if $\langle x, x \rangle$ is a real number. This shows that when $F = \mathbb{C}$, it is possible for a sesquilinear map to be nonnegative-definite (and thus to induce a seminorm or norm), but this is never possible for a non-zero bilinear map. This is one reason for requiring inner products to be conjugate symmetric instead of bilinear. If $\langle \cdot, \cdot \rangle$ satisfies (1) and is symmetric (instead of conjugate symmetric) then it is necessarily bilinear, which is why complex inner products (unlike real inner products) are not required to be symmetric. In particular, if $V = \mathbb{C}$ then defining $\langle x, y \rangle := x \overline{y}$ (with complex conjugation) will induce a norm on $\mathbb{C}$, but defining it by $\langle x, y \rangle := x y$ (without complex conjugation) will not.
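
A two-line numerical illustration of the last point, on $V = \mathbb{C}$ with the sample value $x = 1 + i$ (our own sketch):

```python
import numpy as np

x = 1 + 1j

# Conjugate-symmetric form <x, y> = x * conj(y): <x, x> = |x|^2 >= 0,
# so sqrt(<x, x>) is a genuine norm.
print(x * np.conj(x))   # (2+0j) -> a nonnegative real number

# Bilinear form <x, y> = x * y (no conjugation): <x, x> = x^2 can be
# non-real, so sqrt(<x, x>) cannot serve as a norm.
print(x * x)            # 2j -> not a nonnegative real number
```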

Alternative definitions, notations and remarks

A common special case of the inner product, the scalar product or dot product, is written with a centered dot $a \cdot b$.

Some authors, especially in physics and matrix algebra, prefer to define the inner product and the sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. In those disciplines, we would write the inner product as $\langle y \mid x \rangle$ (the bra–ket notation of quantum mechanics), respectively $y^{\dagger} x$ (dot product as a case of the convention of forming the matrix product $A B$ as the dot products of rows of $A$ with columns of $B$). Here, the kets and columns are identified with the vectors of $V$, and the bras and rows with the linear functionals (covectors) of the dual space $V^{*}$, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature,[8] taking $\langle x, y \rangle$ to be conjugate linear in $x$ rather than $y$. A few instead find a middle ground by recognizing both $\langle \cdot, \cdot \rangle$ and $\langle \cdot \mid \cdot \rangle$ as distinct notations, differing only in which argument is conjugate linear.
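
For readers using NumPy, numpy.vdot follows the physics convention just described (it conjugates its first argument), whereas this article's convention conjugates the second; the two results are complex conjugates of each other. A small sketch:

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 4j])

# This article's convention: linear in the first argument,
# conjugate-linear in the second.
article_ip = np.sum(x * np.conj(y))

# numpy.vdot conjugates its FIRST argument (the physics convention).
physics_ip = np.vdot(x, y)

# The two conventions differ by complex conjugation.
assert np.isclose(article_ip, np.conj(physics_ip))
```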

There are various technical reasons why it is necessary to restrict the base field to $\mathbb{R}$ and $\mathbb{C}$ in the definition. Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense,[9] and therefore has to have characteristic equal to $0$ (since any ordered field has to have such characteristic). This immediately excludes finite fields. The base field has to have additional structure, such as a distinguished automorphism for conjugation. More generally, any quadratically closed subfield of $\mathbb{R}$ or $\mathbb{C}$ will suffice for this purpose (for example, algebraic numbers, constructible numbers). However, in the cases where it is a proper subfield (that is, neither $\mathbb{R}$ nor $\mathbb{C}$), even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over $\mathbb{R}$ or $\mathbb{C}$, such as those used in quantum computation, are automatically metrically complete (and hence Hilbert spaces).

In some cases, one needs to consider non-negative semi-definite sesquilinear forms. This means that $\langle x, x \rangle$ is only required to be non-negative. Treatment for these cases is illustrated below.

Motivation for the definition

Inner products are generalizations of the dot product from the real vector space $\mathbb{R}^n$ to an arbitrary vector space $V$ over a real or complex field; it will be assumed in this section that $F$ is $\mathbb{R}$ or $\mathbb{C}$. There are many different ways to explain how the definition of an inner product arises from generalizing the dot product, and this section gives just one of these. Specifically, this section will detail how the properties (A) and (B) listed below motivate the defining properties of inner products (that is, properties (1), (2), and (3) mentioned above).

A generalization of the dot product should be a scalar-valued map of the form $\langle \cdot, \cdot \rangle : V \times V \to F$. It should also, at a minimum, distribute over addition (that is, be additive in each argument) and induce a norm. That is, $\langle \cdot, \cdot \rangle$ should have the following properties:

  A. Additive in its 1st argument and additive in its 2nd argument.
    • This generalizes the following property of the dot product, which is often stated as "the dot product distributes over addition": $(x + y) \cdot z = x \cdot z + y \cdot z$ and $x \cdot (y + z) = x \cdot y + x \cdot z$.
  B. There exists a norm $\| \cdot \|$ on $V$ such that $\|x\|^2 = \langle x, x \rangle$ for every vector $x$.
    • For the dot product on $\mathbb{R}^n$, this norm is the Euclidean/$\ell^2$-norm because $\|x\|^2 = x \cdot x$ holds for all $x \in \mathbb{R}^n$.

Properties (A) and (B) imply that the norm satisfies the parallelogram law $\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$.[proof 5] Consequently, the polarization identity can be used with $\| \cdot \|$ to define a unique inner product $\langle \cdot, \cdot \rangle_P$ on $V$ such that $\|x\|^2 = \langle x, x \rangle_P$ for all $x$[10] (see this footnote[note 4] for its definition; note that $\langle \cdot, \cdot \rangle_P$ might be different from $\langle \cdot, \cdot \rangle$). Thus merely requiring that $\langle \cdot, \cdot \rangle$ satisfy (A) and (B) has already led to the appearance of an inner product (although this inner product $\langle \cdot, \cdot \rangle_P$ will no longer be discussed). In addition, properties (A) and (B) imply that the sum $\langle x, y \rangle + \langle y, x \rangle$ is always a real number[proof 5] and that if $\langle x, y \rangle = \langle y, x \rangle$ then $\langle x, y \rangle$ is a real number for all $x, y$,[proof 6] which indicates that $\langle \cdot, \cdot \rangle$ should not be expected to be a symmetric map in the complex case.

Some motivation for why $\langle \cdot, \cdot \rangle$ should be homogeneous in one argument (say the first) and conjugate homogeneous in the other will now be given. Additivity in each argument guarantees that $\langle \cdot, \cdot \rangle$ is homogeneous over the rational numbers in each argument, meaning that $\langle q x, y \rangle = q \, \langle x, y \rangle$ and $\langle x, q y \rangle = q \, \langle x, y \rangle$ for all vectors $x, y$ and all rational $q$.

For these equalities to hold for all real $q$ (that is, for $\langle \cdot, \cdot \rangle$ to be real homogeneous in each argument), it suffices for there to exist some norm-induced topology on $V$ (or even just some vector topology) that makes the map $\langle \cdot, \cdot \rangle$ continuous (or even just separately continuous),[proof 7] which is a relatively mild condition. These observations suggest that it is not too much to ask that $\langle \cdot, \cdot \rangle$ be real homogeneous in each argument, which if $F = \mathbb{R}$ is the same as being homogeneous (and thus linear) in each argument. Given real homogeneity in both arguments, if $F = \mathbb{C}$ then it is reasonable to demand that $\langle \cdot, \cdot \rangle$ be (fully) homogeneous in at least one argument, say in its first argument, which will be true if and only if $\langle i x, y \rangle = i \, \langle x, y \rangle$ holds for all $x, y$.

Property (B) and homogeneity in the 1st argument imply that for any scalar $s$ and vector $x$:

$|s|^2 \, \|x\|^2 = \|s x\|^2 = \langle s x, s x \rangle = s \, \langle x, s x \rangle,$

where solving for $\langle x, s x \rangle$ gives (for $s \neq 0$):

$\langle x, s x \rangle = \frac{|s|^2}{s} \, \|x\|^2 = \overline{s} \, \|x\|^2 = \overline{s} \, \langle x, x \rangle.$

Thus when $F = \mathbb{C}$, property (B) makes it impossible for $\langle \cdot, \cdot \rangle$ to be homogeneous in both arguments. Moreover, the formula $\langle x, s x \rangle = \overline{s} \, \langle x, x \rangle$ suggests[note 5] that $\langle \cdot, \cdot \rangle$ should be conjugate homogeneous in its second argument. The above derivation of the formula shows that the appearance of the complex conjugation of the scalar may be viewed as a consequence of the fact that (1) absolute homogeneity introduces an absolute value around $s$ while homogeneity does not, and (2) the equation $s \, \langle x, s x \rangle = |s|^2 \, \|x\|^2$ is solved by the complex conjugate $\langle x, s x \rangle = \overline{s} \, \|x\|^2$. Said differently, the complex conjugate arises from solving for $\langle x, s x \rangle$ in $s \, \langle x, s x \rangle = |s|^2 \, \|x\|^2$, where (1) absolute homogeneity leads to the factor $|s|^2$ in $\|s x\|^2 = |s|^2 \, \|x\|^2$ and (2) this newest equation happens to be solved by $\langle x, s x \rangle = \overline{s} \, \|x\|^2$.

Property (B) implies that the map $x \mapsto \langle x, x \rangle = \|x\|^2$ must be positive definite (condition (3) above), which explains why the definition of an inner product requires positive definiteness. Because it is positive definite, $\langle \cdot, \cdot \rangle$ is an inner product if and only if it satisfies (1) and (2) above (that is, linearity in the first argument and Hermitian symmetry), which are the two defining properties of a Hermitian form. Since $\langle x, x \rangle = \|x\|^2$ is real for all $x$ when (B) holds, $\langle \cdot, \cdot \rangle$ is a Hermitian form if and only if it is a sesquilinear form.[1] Because $\langle \cdot, \cdot \rangle$ is assumed to be additive in each argument, it is a sesquilinear form if and only if it is homogeneous in one argument and conjugate homogeneous in the other. Thus to motivate why inner products should satisfy properties (1) and (2) above, it is enough to give motivation for why $\langle \cdot, \cdot \rangle$ should be a sesquilinear form, which has already been discussed.

Elementary properties

Positive definiteness ensures that:

$\langle x, x \rangle > 0 \quad \text{whenever } x \neq \mathbf{0},$

while $\langle \mathbf{0}, \mathbf{0} \rangle = 0$ is guaranteed by both homogeneity in the 1st argument and also by additivity in the 1st argument.[proof 1]

For every vector $x$, conjugate symmetry guarantees $\langle x, x \rangle = \overline{\langle x, x \rangle}$, which implies that $\langle x, x \rangle$ is a real number. It also guarantees that for all vectors $x$ and $y$,

$\langle x, y \rangle + \langle y, x \rangle = 2 \operatorname{Re} \langle x, y \rangle,$

where $\operatorname{Re}$ denotes the real part of a scalar.

Conjugate symmetry and linearity in the first variable imply[proof 8] conjugate linearity, also known as antilinearity, in the second argument; explicitly, this means that for any vectors $x, y, z$ and any scalar $a$,

$\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle \qquad \text{and} \qquad \langle x, a y \rangle = \overline{a} \, \langle x, y \rangle.$     (Antilinearity in the 2nd argument)

This shows that every inner product is also a sesquilinear form and that inner products are additive in each argument, meaning that for all vectors $x, y, z$:

$\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle \qquad \text{and} \qquad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle.$

Additivity in each argument implies the following important generalization of the familiar square expansion:

$\langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle,$

where $x, y \in V$.

In the case of $F = \mathbb{R}$, conjugate symmetry reduces to symmetry, and so sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a positive-definite symmetric bilinear form. That is, when $F = \mathbb{R}$ then

$\langle x, y \rangle = \langle y, x \rangle$     (Symmetry)

and the binomial expansion becomes:

$\langle x + y, x + y \rangle = \langle x, x \rangle + 2 \langle x, y \rangle + \langle y, y \rangle.$

Every inner product is an L-semi-inner product although not all L-semi-inner products are inner products.

One can define an inner product on every finite-dimensional vector space with a basis by taking the dot product of the coordinate vectors (unique with respect to that basis) of its elements. Conversely, on an inner product space with an orthonormal basis, the inner product can always be expressed as a dot product of coordinates, by Parseval's identity.
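
As an illustration of the first sentence (our own sketch, not from the article's sources): the matrix B below holds a hypothetical basis of $\mathbb{R}^2$ in its columns, and coord_inner is the inner product obtained from dot products of coordinate vectors.

```python
import numpy as np

# Columns of B form a (hypothetical) basis of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B_inv = np.linalg.inv(B)

def coord_inner(x, y):
    # Dot product of the coordinate vectors of x and y in the basis B.
    cx, cy = B_inv @ x, B_inv @ y
    return np.dot(cx, cy)

# The basis vectors themselves become orthonormal for this inner product.
e1, e2 = B[:, 0], B[:, 1]
assert np.isclose(coord_inner(e1, e1), 1.0)
assert np.isclose(coord_inner(e1, e2), 0.0)
```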

Some examples

Real and complex numbers

Among the simplest examples of inner product spaces are $\mathbb{R}$ and $\mathbb{C}$. The real numbers $\mathbb{R}$ are a vector space over $\mathbb{R}$ that becomes a real inner product space when endowed with standard multiplication as its real inner product:[3]

$\langle x, y \rangle := x y \qquad \text{for } x, y \in \mathbb{R}.$

The complex numbers $\mathbb{C}$ are a vector space over $\mathbb{C}$ that becomes a complex inner product space when endowed with the complex inner product

$\langle x, y \rangle := x \overline{y} \qquad \text{for } x, y \in \mathbb{C}.$

Unlike with the real numbers, the assignment $(x, y) \mapsto x y$ does not define a complex inner product on $\mathbb{C}$.

Euclidean vector space

More generally, the real $n$-space $\mathbb{R}^n$ with the dot product is an inner product space,[3] an example of a Euclidean vector space:

$\langle x, y \rangle = x \cdot y = x_1 y_1 + \cdots + x_n y_n = x^{\operatorname{T}} y,$

where $x^{\operatorname{T}}$ is the transpose of $x$.

A function $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is an inner product on $\mathbb{R}^n$ if and only if there exists a symmetric positive-definite matrix $\mathbf{M}$ such that $\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y$ for all $x, y \in \mathbb{R}^n$. If $\mathbf{M}$ is the identity matrix then $\langle x, y \rangle = x^{\operatorname{T}} y$ is the dot product. For another example, if $n = 2$ and $\mathbf{M} = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$ is positive-definite (which happens if and only if $\det \mathbf{M} = a d - b^2 > 0$ and one/both diagonal elements are positive) then for any $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \in \mathbb{R}^2$,

$\langle x, y \rangle := x^{\operatorname{T}} \mathbf{M} y = a x_1 y_1 + b x_1 y_2 + b x_2 y_1 + d x_2 y_2.$

As mentioned earlier, every inner product on $\mathbb{R}^2$ is of this form (where $b \in \mathbb{R}$, $a > 0$ and $d > 0$ satisfy $a d > b^2$).
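
A minimal numerical sketch of this characterization, using an illustrative symmetric positive-definite matrix M of our own choosing; the helper name ip is ours.

```python
import numpy as np

# A symmetric positive-definite matrix (det = 5 > 0, positive diagonal).
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def ip(x, y):
    # Inner product <x, y> = x^T M y on R^2.
    return x @ M @ y

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

assert np.isclose(ip(x, y), ip(y, x))   # symmetry
assert ip(x, x) > 0                     # positive definiteness (x != 0)
```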

Complex coordinate space

The general form of an inner product on $\mathbb{C}^n$ is known as the Hermitian form and is given by

$\langle x, y \rangle = y^{\dagger} \mathbf{M} x = \overline{x^{\dagger} \mathbf{M} y},$

where $\mathbf{M}$ is any Hermitian positive-definite matrix and $y^{\dagger}$ is the conjugate transpose of $y$. For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights—up to an orthogonal transformation.
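
A short Python/NumPy sketch of such a Hermitian form, with an illustrative Hermitian positive-definite matrix M of our own choosing and the article's convention of linearity in the first argument:

```python
import numpy as np

# A Hermitian positive-definite matrix (trace 5, det 4 > 0).
M = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

def herm_ip(x, y):
    # <x, y> = y^dagger M x: linear in x, conjugate-linear in y.
    return np.conj(y) @ M @ x

x = np.array([1.0 + 1.0j, 2.0])
y = np.array([0.0 - 1.0j, 1.0 + 3.0j])

# Conjugate symmetry and positive definiteness.
assert np.isclose(herm_ip(x, y), np.conj(herm_ip(y, x)))
assert np.isclose(herm_ip(x, x).imag, 0.0) and herm_ip(x, x).real > 0
```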

Hilbert space

The article on Hilbert spaces has several examples of inner product spaces, wherein the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space $C[a, b]$ of continuous complex valued functions $f$ and $g$ on the interval $[a, b]$. The inner product is

$\langle f, g \rangle = \int_a^b f(t) \, \overline{g(t)} \, \mathrm{d}t.$

This space is not complete; consider for example, for the interval $[-1, 1]$, the sequence of continuous "step" functions $\{f_k\}_k$, defined by:

$f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ k t & t \in (0, \tfrac{1}{k}) \\ 1 & t \in [\tfrac{1}{k}, 1] \end{cases}$
This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a continuous function.
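
The following is a rough numerical illustration (not a proof) of this behaviour, implementing the functions $f_k$ above as clipped ramps and approximating the $L^2$ norm by a Riemann sum:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)
dt = t[1] - t[0]

def f(k):
    # f_k(t): 0 on [-1, 0], k*t on (0, 1/k), 1 on [1/k, 1]
    return np.clip(k * t, 0.0, 1.0)

def l2_dist(u, v):
    # Riemann-sum approximation of the L^2 distance on [-1, 1]
    return np.sqrt(np.sum(np.abs(u - v) ** 2) * dt)

print(l2_dist(f(10), f(20)))    # roughly 0.09
print(l2_dist(f(100), f(200)))  # roughly 0.03: successive distances shrink,
                                # yet the pointwise limit is a discontinuous step
```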

Random variables

For real random variables $X$ and $Y$, the expected value of their product

$\langle X, Y \rangle = \mathbb{E}[X Y]$

is an inner product.[11][12][13] In this case, $\langle X, X \rangle = 0$ if and only if $\mathbb{P}[X = 0] = 1$ (that is, $X = 0$ almost surely), where $\mathbb{P}$ denotes the probability of the event. This definition of expectation as inner product can be extended to random vectors as well.
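
A Monte Carlo sketch (our own illustration) of this inner product, approximating expectations by sample averages and checking the Cauchy–Schwarz inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)
Y = 2 * X + rng.normal(size=n)

ip_XY = np.mean(X * Y)            # approximates E[XY]
norm_X = np.sqrt(np.mean(X * X))  # approximates sqrt(E[X^2])
norm_Y = np.sqrt(np.mean(Y * Y))  # approximates sqrt(E[Y^2])

# Cauchy-Schwarz: |E[XY]| <= sqrt(E[X^2]) * sqrt(E[Y^2])
assert abs(ip_XY) <= norm_X * norm_Y + 1e-9
```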

Complex matrices

The inner product for complex square matrices of the same size is the Frobenius inner product $\langle A, B \rangle := \operatorname{tr}\left(A B^{\dagger}\right)$. Since trace and transposition are linear and the conjugation is on the second matrix, it is a sesquilinear operator. We further get Hermitian symmetry by

$\langle A, B \rangle = \operatorname{tr}\left(A B^{\dagger}\right) = \overline{\operatorname{tr}\left(B A^{\dagger}\right)} = \overline{\langle B, A \rangle}.$

Finally, since for nonzero $A$, $\langle A, A \rangle = \sum_{i,j} \left|A_{ij}\right|^2 > 0$, we get that the Frobenius inner product is positive definite too, and so it is an inner product.
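
A quick numerical check of the Frobenius inner product in Python/NumPy; the function name frobenius_ip is ours.

```python
import numpy as np

def frobenius_ip(A, B):
    # Frobenius inner product <A, B> = tr(A B^dagger)
    #                               = sum over i, j of A_ij * conj(B_ij).
    return np.trace(A @ B.conj().T)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

assert np.isclose(frobenius_ip(A, B), np.sum(A * np.conj(B)))
assert np.isclose(frobenius_ip(A, B), np.conj(frobenius_ip(B, A)))
assert frobenius_ip(A, A).real > 0
```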

Vector spaces with forms

On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism $V \to V^{*}$), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors, not simply of a vector and a covector.

Basic results, terminology, and definitions

Norm

Every inner product space induces a norm, called its canonical norm, that is defined by[3]

$\|x\| = \sqrt{\langle x, x \rangle}.$

With this norm, every inner product space becomes a normed vector space.

As for every normed vector space, an inner product space is a metric space, for the distance defined by

$d(x, y) = \|y - x\|.$

The axioms of the inner product guarantee that the map above forms a norm, which will have the following properties; a short numerical check of several of them follows the list.

Homogeneity
For a vector $x$ and a scalar $a$: $\|a x\| = |a| \, \|x\|$.
Triangle inequality
For vectors $x$ and $y$: $\|x + y\| \leq \|x\| + \|y\|$.
These two properties show that one has indeed a norm.
Cauchy–Schwarz inequality
For vectors $x$ and $y$: $|\langle x, y \rangle| \leq \|x\| \, \|y\|,$
with equality if and only if $x$ and $y$ are linearly dependent. In the Russian mathematical literature, this inequality is also known as the Cauchy–Bunyakovsky inequality or the Cauchy–Bunyakovsky–Schwarz inequality.
Cosine similarity
When $\langle x, y \rangle$ is a real number then the Cauchy–Schwarz inequality guarantees that $\tfrac{\langle x, y \rangle}{\|x\| \, \|y\|}$ lies in the domain of the inverse trigonometric function $\arccos : [-1, 1] \to [0, \pi]$ and so the (non-oriented) angle between $x$ and $y$ can be defined as:
$\angle(x, y) := \arccos \dfrac{\langle x, y \rangle}{\|x\| \, \|y\|},$
where $x$ and $y$ are both nonzero.
Polarization identity
The inner product can be retrieved from the norm by the polarization identity
$\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2 \operatorname{Re} \langle x, y \rangle,$
which is a form of the law of cosines.
Orthogonality
Two vectors $x$ and $y$ are called orthogonal, written $x \perp y$, if their inner product is zero: $\langle x, y \rangle = 0$. This happens if and only if $\|x\| \leq \|x + s y\|$ for all scalars $s$.[14] Moreover, for $y \neq \mathbf{0}$, the scalar $s_0 := -\tfrac{\langle x, y \rangle}{\|y\|^2}$ minimizes $s \mapsto \|x + s y\|^2$, with value $\|x\|^2 - \tfrac{|\langle x, y \rangle|^2}{\|y\|^2}$. For a complex (but not real) inner product space, a linear operator $T : V \to V$ is identically $0$ if and only if $\langle T x, x \rangle = 0$ for every $x \in V$.[14]
Orthogonal complement
The orthogonal complement of a subset $C \subseteq V$ is the set $C^{\perp}$ of all vectors $y \in V$ such that $y$ and $c$ are orthogonal for all $c \in C$; that is, it is the set
$C^{\perp} := \{\, y \in V : \langle y, c \rangle = 0 \text{ for all } c \in C \,\}.$
This set $C^{\perp}$ is always a closed vector subspace of $V$ and if the closure $\operatorname{cl}_V C$ of $C$ in $V$ is a vector subspace then $\operatorname{cl}_V C = \left(C^{\perp}\right)^{\perp}$.
Pythagorean theorem
Whenever $x, y \in V$ and $\langle x, y \rangle = 0$ then
$\|x\|^2 + \|y\|^2 = \|x + y\|^2.$
The proof of the identity requires only expressing the definition of norm in terms of the inner product and multiplying out, using the property of additivity of each component. The name Pythagorean theorem arises from the geometric interpretation in Euclidean geometry.
Parseval's identity
An induction on the Pythagorean theorem yields: if $x_1, \ldots, x_n$ are orthogonal vectors (meaning that $\langle x_j, x_k \rangle = 0$ for distinct indices $j \neq k$) then
$\left\| \sum_{i=1}^{n} x_i \right\|^2 = \sum_{i=1}^{n} \|x_i\|^2.$
Parallelogram law
For all $x, y \in V$:
$\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2.$
The parallelogram law is, in fact, a necessary and sufficient condition for the existence of an inner product corresponding to a given norm.
Ptolemy's inequality
For all $x, y, z \in V$:
$\|x - y\| \, \|z\| + \|y - z\| \, \|x\| \geq \|x - z\| \, \|y\|.$
Ptolemy's inequality is, in fact, a necessary and sufficient condition for the existence of an inner product corresponding to a given norm. In detail, Isaac Jacob Schoenberg proved in 1952 that, given any real, seminormed space, if its seminorm is ptolemaic, then the seminorm is the norm associated with an inner product.[15]
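
As promised above, here is a brief numerical check (our own sketch, using the standard inner product on $\mathbb{C}^4$) of the Cauchy–Schwarz inequality, the parallelogram law, and the recovery of the inner product from the norm via the complex polarization identity:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

def ip(u, v):
    # <u, v>: linear in u, conjugate-linear in v
    return np.sum(u * np.conj(v))

def norm(u):
    return np.sqrt(ip(u, u).real)

# Cauchy-Schwarz inequality
assert abs(ip(x, y)) <= norm(x) * norm(y)

# Parallelogram law
assert np.isclose(norm(x + y)**2 + norm(x - y)**2,
                  2 * norm(x)**2 + 2 * norm(y)**2)

# Complex polarization identity recovers <x, y> from the norm alone
recovered = (norm(x + y)**2 - norm(x - y)**2
             + 1j * (norm(x + 1j*y)**2 - norm(x - 1j*y)**2)) / 4
assert np.isclose(recovered, ip(x, y))
```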

Real and complex parts of inner products

Suppose that $\langle \cdot, \cdot \rangle$ is an inner product on $V$ (so it is antilinear in its second argument). The polarization identity shows that the real part of the inner product is

$\operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right).$

If $V$ is a real vector space then

$\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right)$

and the imaginary part (also called the complex part) of $\langle x, y \rangle$ is always $0$.

Assume for the rest of this section that $V$ is a complex vector space. The polarization identity for complex vector spaces shows that

$\langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 + i \|x + i y\|^2 - i \|x - i y\|^2 \right) = \operatorname{Re} \langle x, y \rangle + i \operatorname{Re} \langle x, i y \rangle.$

The map defined by $\langle x \mid y \rangle := \langle y, x \rangle$ for all $x, y \in V$ satisfies the axioms of the inner product except that it is antilinear in its first, rather than its second, argument. The real parts of both $\langle x \mid y \rangle$ and $\langle x, y \rangle$ are equal to $\operatorname{Re} \langle x, y \rangle$ but the inner products differ in their complex part:

$\langle x \mid y \rangle = \operatorname{Re} \langle x, y \rangle - i \operatorname{Re} \langle x, i y \rangle = \operatorname{Re} \langle x \mid y \rangle - i \operatorname{Re} \langle x \mid i y \rangle.$

The last equality is similar to the formula expressing a linear functional in terms of its real part.

These formulas show that every complex inner product is completely determined by its real part. There is thus a one-to-one correspondence between complex inner products and real inner products. For example, suppose that $V = \mathbb{C}^n$ for some integer $n > 0$. When $\mathbb{C}^n$ is considered as a real vector space in the usual way (meaning that it is identified with the $2n$-dimensional real vector space $\mathbb{R}^{2n}$, with each $\left(a_1 + i b_1, \ldots, a_n + i b_n\right) \in \mathbb{C}^n$ identified with $\left(a_1, b_1, \ldots, a_n, b_n\right) \in \mathbb{R}^{2n}$), then the dot product defines a real inner product on this space. The unique complex inner product $\langle \cdot, \cdot \rangle$ on $V = \mathbb{C}^n$ induced by the dot product is the map that sends $c = \left(c_1, \ldots, c_n\right), d = \left(d_1, \ldots, d_n\right) \in \mathbb{C}^n$ to $\langle c, d \rangle := c_1 \overline{d_1} + \cdots + c_n \overline{d_n}$ (because the real part of this map $\langle \cdot, \cdot \rangle$ is equal to the dot product).
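
A small numerical sketch of this correspondence for $n = 2$: under the identification described above, the real part of the complex inner product on $\mathbb{C}^2$ agrees with the dot product on $\mathbb{R}^4$ (the helper name as_real is ours).

```python
import numpy as np

rng = np.random.default_rng(3)
c = rng.normal(size=2) + 1j * rng.normal(size=2)
d = rng.normal(size=2) + 1j * rng.normal(size=2)

# Complex inner product on C^2 (conjugate-linear in the second argument).
complex_ip = np.sum(c * np.conj(d))

def as_real(z):
    # Identify C^2 with R^4 by interleaving real and imaginary parts.
    return np.column_stack([z.real, z.imag]).ravel()

# The real part of the complex inner product is the dot product on R^4.
assert np.isclose(complex_ip.real, np.dot(as_real(c), as_real(d)))
```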

Real vs. complex inner products

Let $V_{\mathbb{R}}$ denote $V$ considered as a vector space over the real numbers rather than complex numbers. The real part of the complex inner product $\langle x, y \rangle$ is the map $\langle x, y \rangle_{\mathbb{R}} = \operatorname{Re} \langle x, y \rangle : V_{\mathbb{R}} \times V_{\mathbb{R}} \to \mathbb{R}$, which necessarily forms a real inner product on the real vector space $V_{\mathbb{R}}$. Every inner product on a real vector space is a bilinear and symmetric map.

For example, if $V = \mathbb{C}$ with inner product $\langle x, y \rangle = x \overline{y}$, where $V$ is a vector space over the field $\mathbb{C}$, then $V_{\mathbb{R}} = \mathbb{R}^2$ is a vector space over $\mathbb{R}$ and $\langle x, y \rangle_{\mathbb{R}}$ is the dot product $x \cdot y$, where $x = a + i b \in V = \mathbb{C}$ is identified with the point $(a, b) \in \mathbb{R}^2$ (and similarly for $y$); thus the standard inner product $\langle x, y \rangle = x \overline{y}$ on $\mathbb{C}$ is an "extension" of the dot product. Also, had $\langle x, y \rangle$ been instead defined to be the symmetric map $\langle x, y \rangle = x y$ (rather than the usual conjugate symmetric map $\langle x, y \rangle = x \overline{y}$) then its real part would not be the dot product; furthermore, without the complex conjugate, if $x \in \mathbb{C}$ but $x \notin \mathbb{R}$ then $\langle x, x \rangle = x^2 \notin [0, \infty)$ so the assignment $x \mapsto \sqrt{\langle x, x \rangle}$ would not define a norm.

The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable. For instance, if $\langle x, y \rangle = 0$ then $\langle x, y \rangle_{\mathbb{R}} = 0$, but the next example shows that the converse is in general not true. Given any $x \in V$, the vector $i x$ (which is the vector $x$ rotated by 90°) belongs to $V$ and so also belongs to $V_{\mathbb{R}}$ (although scalar multiplication of $x$ by $i$ is not defined in $V_{\mathbb{R}}$, the vector in $V$ denoted by $i x$ is nevertheless still also an element of $V_{\mathbb{R}}$). For the complex inner product, $\langle x, i x \rangle = -i \, \|x\|^2$, whereas for the real inner product the value is always $\langle x, i x \rangle_{\mathbb{R}} = 0$.

If $\langle \cdot, \cdot \rangle$ is a complex inner product and $A : V \to V$ is a continuous linear operator that satisfies $\langle x, A x \rangle = 0$ for all $x \in V$, then $A = 0$. This statement is no longer true if $\langle \cdot, \cdot \rangle$ is instead a real inner product, as this next example shows. Suppose that $V = \mathbb{C}$ has the inner product $\langle x, y \rangle := x \overline{y}$ mentioned above. Then the map $A : V \to V$ defined by $A x = i x$ is a linear map (linear for both $V$ and $V_{\mathbb{R}}$) that denotes rotation by 90° in the plane. Because $x$ and $A x$ are perpendicular vectors and $\langle x, A x \rangle_{\mathbb{R}}$ is just the dot product, $\langle x, A x \rangle_{\mathbb{R}} = 0$ for all vectors $x$; nevertheless, this rotation map $A$ is certainly not identically $0$. In contrast, using the complex inner product gives $\langle x, A x \rangle = -i \, \|x\|^2$, which (as expected) is not identically zero.
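
A compact numerical illustration of this contrast on $V = \mathbb{C}$ (the function names below are ours): the rotation $x \mapsto i x$ is invisible to the real inner product but not to the complex one.

```python
import numpy as np

def cip(x, y):
    # complex inner product on C: x * conj(y)
    return x * np.conj(y)

def rip(x, y):
    # its real part, i.e. the dot product on R^2
    return cip(x, y).real

A = lambda x: 1j * x    # rotation by 90 degrees in the plane

for x in (1 + 2j, -3 + 0.5j, 2j):
    assert np.isclose(rip(x, A(x)), 0.0)      # real inner product vanishes
    assert not np.isclose(cip(x, A(x)), 0.0)  # complex inner product does not
```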

Orthonormal sequences

Let $V$ be a finite dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis, that is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{e_1, \ldots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for every $i \neq j$ and $\langle e_i, e_i \rangle = \|e_i\|^2 = 1$ for each index $i$.
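
The following is a minimal Python/NumPy sketch of the classical Gram–Schmidt process for the standard inner product on $\mathbb{R}^3$; the function gram_schmidt is our own illustration, not a library routine.

```python
import numpy as np

def gram_schmidt(basis):
    # Orthonormalize linearly independent vectors (classical Gram-Schmidt)
    # with respect to the standard inner product.
    ortho = []
    for v in basis:
        w = v.astype(complex)
        for e in ortho:
            w -= np.vdot(e, w) * e        # subtract the projection onto e
        ortho.append(w / np.linalg.norm(w))
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
ons = gram_schmidt(basis)

# Orthonormality: the Gram matrix of the new vectors is the identity.
G = np.array([[np.vdot(a, b) for b in ons] for a in ons])
assert np.allclose(G, np.eye(3))
```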

This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection

$E = \left\{ e_a \right\}_{a \in A}$

is a basis for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by the inner product). Say that $E$ is an orthonormal basis for $V$ if it is a basis and
$\langle e_a, e_b \rangle = 0$ if $a \neq b$ and $\langle e_a, e_a \rangle = \|e_a\|^2 = 1$ for all $a, b \in A$.

Using an infinite-dimensional analog of the Gram-Schmidt process one may show:

Theorem. Any separable inner product space has an orthonormal basis.

Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that

Theorem. Any complete inner product space has an orthonormal basis.

The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references).[citation needed]

Parseval's identity leads immediately to the following theorem:

Theorem. Let $V$ be a separable inner product space and $\left(e_k\right)_k$ an orthonormal basis of $V$. Then the map

$x \mapsto \left( \langle x, e_k \rangle \right)_{k \in \mathbb{N}}$

is an isometric linear map $V \to \ell^2$ with a dense image.

This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided $\ell^2$ is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:

Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$ of continuous complex-valued functions on $[-\pi, \pi]$, with the inner product defined above. Then the sequence (indexed on the set of all integers) of continuous functions

$e_k(t) = \frac{e^{i k t}}{\sqrt{2 \pi}}$

is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping

$f \mapsto \frac{1}{\sqrt{2 \pi}} \left\{ \int_{-\pi}^{\pi} f(t) \, e^{-i k t} \, \mathrm{d}t \right\}_{k \in \mathbb{Z}}$

is an isometric linear map with dense image.

Orthogonality of the sequence $\left(e_k\right)_k$ follows immediately from the fact that if $k \neq j$ then

$\int_{-\pi}^{\pi} e^{i (k - j) t} \, \mathrm{d}t = 0.$

Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span in the inner product norm follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on $[-\pi, \pi]$ with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
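
A crude numerical check (our own sketch, approximating the integral by a Riemann sum) of the orthonormality just described:

```python
import numpy as np

# Check orthonormality of e_k(t) = exp(i*k*t) / sqrt(2*pi) on [-pi, pi]
# with the inner product <f, g> = integral of f * conj(g).
t = np.linspace(-np.pi, np.pi, 200001)
dt = t[1] - t[0]

def e(k):
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

def ip(f, g):
    # Riemann-sum approximation of the integral over [-pi, pi]
    return np.sum(f * np.conj(g)) * dt

assert np.isclose(ip(e(3), e(3)), 1.0, atol=1e-3)   # unit norm
assert np.isclose(ip(e(3), e(5)), 0.0, atol=1e-3)   # orthogonality
```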

Operators on inner product spaces

Several types of linear maps between inner product spaces and are of relevance:

  • Continuous linear maps: $T : V \to W$ is linear and continuous with respect to the metric defined above, or equivalently, $T$ is linear and the set of non-negative reals $\{\|T x\| : \|x\| \leq 1\}$, where $x$ ranges over the closed unit ball of $V$, is bounded.
  • Symmetric linear operators: $T : V \to V$ is linear and $\langle T x, y \rangle = \langle x, T y \rangle$ for all $x, y \in V$.
  • Isometries: $T : V \to W$ satisfies $\|T x\| = \|x\|$ for all $x \in V$. A linear isometry (resp. an antilinear isometry) is an isometry that is also a linear map (resp. an antilinear map). For inner product spaces, the polarization identity can be used to show that a linear map $T$ is an isometry if and only if $\langle T x, T y \rangle = \langle x, y \rangle$ for all $x, y \in V$. All isometries are injective. The Mazur–Ulam theorem establishes that every surjective isometry between two real normed spaces is an affine transformation. Consequently, an isometry $T$ between real inner product spaces is a linear map if and only if $T(0) = 0$. Isometries are morphisms between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with orthogonal matrix).
  • Isometrical isomorphisms: is an isometry which is surjective (and hence bijective). Isometrical isomorphisms are also known as unitary operators (compare with unitary matrix).
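
As a small illustration of the last two items (our own sketch): a unitary matrix (here a real orthogonal rotation) acting on $\mathbb{C}^2$ preserves inner products, and hence norms.

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]).astype(complex)

rng = np.random.default_rng(4)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=2) + 1j * rng.normal(size=2)

def ip(u, v):
    return np.sum(u * np.conj(v))

assert np.isclose(ip(U @ x, U @ y), ip(x, y))                 # preserves inner products
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))   # hence preserves norms
```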

From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.

Generalizations

Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.

Degenerate inner products

If $V$ is a vector space and $\langle \cdot, \cdot \rangle$ a semi-definite sesquilinear form, then the function:

$\|x\| = \sqrt{\langle x, x \rangle}$

makes sense and satisfies all the properties of norm except that $\|x\| = 0$ does not imply $x = 0$ (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient $W = V / \{ x : \|x\| = 0 \}$. The sesquilinear form $\langle \cdot, \cdot \rangle$ factors through $W$.

This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.

Nondegenerate conjugate symmetric forms

Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero $x \neq 0$ there exists some $y$ such that $\langle x, y \rangle \neq 0$, though $y$ need not equal $x$; in other words, the induced map to the dual space $V \to V^{*}$ is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is related to a nondegenerate conjugate symmetric form the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (assignment of "+" and "−" to them differs depending on conventions).

Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injective homomorphism $V \to V^{*}$) and thus hold more generally.

Related products

The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a covector with an vector, yielding a matrix (a scalar), while the outer product is the product of an vector with a covector, yielding an matrix. The outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices). In an informal summary: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".

More abstractly, the outer product is the bilinear map sending a vector and a covector to a rank 1 linear transformation (simple tensor of type (1, 1)), while the inner product is the bilinear evaluation map given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction.

The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra.

As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the outer product (alternatively, wedge product). The inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).

See also

Notes

  1. ^ By combining the linear in the first argument property with the conjugate symmetry property you get conjugate-linear in the second argument: $\langle x, a y \rangle = \overline{a} \, \langle x, y \rangle$. This is how the inner product was originally defined and is still used in some old-school math communities. However, all of engineering and computer science, and most of physics and modern mathematics now define the inner product to be linear in the second argument and conjugate-linear in the first argument because this is more compatible with several other conventions in mathematics. Notably, for any inner product, there is some Hermitian, positive-definite matrix $\mathbf{M}$ such that $\langle x, y \rangle = x^{\dagger} \mathbf{M} y$. (Here, $x^{\dagger}$ is the conjugate transpose of $x$.)
  2. ^ A line over an expression or symbol, such as $\overline{\langle y, x \rangle}$ or $\overline{s}$, denotes complex conjugation. A scalar $s$ is real if and only if $s = \overline{s}$.
  3. ^ This is because condition (1) (that is, linearity in the first argument) and positive definiteness imply that $\langle x, x \rangle$ is always a real number. And as mentioned before, a sesquilinear form is Hermitian if and only if $\langle x, x \rangle$ is real for all $x$.
  4. ^ Let $x, y \in V$. If $F = \mathbb{R}$ then let $\langle x, y \rangle_P := \tfrac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right)$, while if $F = \mathbb{C}$ then let $\langle x, y \rangle_P := \tfrac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i \|x + i y\|^2 - i \|x - i y\|^2\right)$. See the polarization identity article for more details.
  5. ^ If $\langle x, s y \rangle$ can be written as $f(s) \, \langle x, y \rangle$ for some function $f$ (in particular, this assumes that the scalar in front of $\langle x, y \rangle$ that results from trying to "pull $s$ out of $\langle x, s y \rangle$" does not depend on $x$ or $y$) then $\langle x, s x \rangle = \overline{s} \, \|x\|^2$ implies that $f(s) = \overline{s}$ (when $x \neq \mathbf{0}$) and consequently, $\langle x, s y \rangle = \overline{s} \, \langle x, y \rangle$ will hold for all $x, y$.

Proofs

  1. ^ a b Homogeneity in the 1st argument implies $\langle \mathbf{0}, x \rangle = \langle 0 \, \mathbf{0}, x \rangle = 0 \, \langle \mathbf{0}, x \rangle = 0$. Additivity in the 1st argument implies $\langle \mathbf{0}, x \rangle = \langle \mathbf{0} + \mathbf{0}, x \rangle = \langle \mathbf{0}, x \rangle + \langle \mathbf{0}, x \rangle$, so adding $-\langle \mathbf{0}, x \rangle$ to both sides proves $\langle \mathbf{0}, x \rangle = 0$.
  2. ^ Assume that it is a sesquilinear form that satisfies $\langle x, x \rangle \in \mathbb{R}$ for all $x$. To conclude that $\langle x, y \rangle = \overline{\langle y, x \rangle}$, it is necessary and sufficient to show that the real parts of $\langle x, y \rangle$ and $\langle y, x \rangle$ are equal and that their imaginary parts are negatives of each other. For all $x, y$: because $\langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle$ and the left hand side is real, $\langle x, y \rangle + \langle y, x \rangle$ is also real, which implies that $\operatorname{Im} \langle x, y \rangle = -\operatorname{Im} \langle y, x \rangle$. Similarly, $\langle x + i y, x + i y \rangle$ is real. But sesquilinearity implies $\langle x + i y, x + i y \rangle = \langle x, x \rangle + \langle y, y \rangle + i \left(\langle y, x \rangle - \langle x, y \rangle\right)$, which is only possible if the real parts of $\langle x, y \rangle$ and $\langle y, x \rangle$ are equal.
  3. ^ A complex number $z$ is a real number if and only if $z = \overline{z}$. Using $y = x$ in condition (2) gives $\langle x, x \rangle = \overline{\langle x, x \rangle}$, which implies that $\langle x, x \rangle$ is a real number.
  4. ^ Assume that $\langle \cdot, \cdot \rangle$ is a bilinear map and that $x$ satisfies $\langle x, x \rangle \neq 0$. Let $f$ be defined by $f(s) := \langle s x, s x \rangle$, where bilinearity implies that $f(s) = s^2 \, \langle x, x \rangle$ holds for all scalars $s$. If $s$ is a scalar such that $s^2 \notin \mathbb{R}$, then $\langle x, x \rangle \in \mathbb{R}$ implies $\langle s x, s x \rangle = s^2 \, \langle x, x \rangle \notin \mathbb{R}$ and, similarly, $\langle s x, s x \rangle \in \mathbb{R}$ implies $\langle x, x \rangle \notin \mathbb{R}$; this shows that for such an $s$, at most one of $\langle x, x \rangle$ and $\langle s x, s x \rangle$ can be real. In particular, if $\langle x, x \rangle > 0$ then $\langle i x, i x \rangle = -\langle x, x \rangle < 0$, so a non-zero bilinear map can never be nonnegative-definite.
  5. ^ a b Note that $\|x + y\|^2 = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle$ and $\|x - y\|^2 = \langle x, x \rangle - \langle x, y \rangle - \langle y, x \rangle + \langle y, y \rangle$, which implies that $\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2$. This proves that $\| \cdot \|$ satisfies the parallelogram law. This also shows that $\langle x, y \rangle + \langle y, x \rangle = \|x + y\|^2 - \|x\|^2 - \|y\|^2$, which proves that $\langle x, y \rangle + \langle y, x \rangle$ is a real number.
  6. ^ Combining the fact that $\langle x, y \rangle + \langle y, x \rangle$ is always a real number with the assumption $\langle x, y \rangle = \langle y, x \rangle$ proves that $2 \langle x, y \rangle$, and hence $\langle x, y \rangle$, is a real number.
  7. ^ Fix $x, y \in V$. The equality $\langle q x, y \rangle = q \, \langle x, y \rangle$ for all real $q$ will be discussed first. Define $f, g : \mathbb{R} \to F$ by $f(q) := \langle q x, y \rangle$ and $g(q) := q \, \langle x, y \rangle$. Because $f(q) = g(q)$ for all rational $q$, $f$ and $g$ are equal on a dense subset of $\mathbb{R}$. Since $\langle x, y \rangle$ is constant, the map $g$ is continuous (where the Hausdorff space $F$, which is either $\mathbb{R}$ or $\mathbb{C}$, has its usual Euclidean topology). Consequently, if $f$ is also continuous then $f$ and $g$ will necessarily be equal on all of $\mathbb{R}$; that is, $\langle q x, y \rangle = q \, \langle x, y \rangle$ will hold for all real $q$. If $h : \mathbb{R} \to V$ and $k : V \to F$ are defined by $h(q) := q x$ and $k(v) := \langle v, y \rangle$ then $f = k \circ h$. So for $f$ to be continuous, it suffices for there to exist some topology on $V$ that makes both $h$ and $k$ continuous (or even just sequentially continuous). The map $h$ will automatically be continuous if the topology on $V$ is a topological vector space topology, such as a topology induced by a norm. The map $k$ will be continuous if $\langle \cdot, \cdot \rangle$ is separately continuous (which will be true if $\langle \cdot, \cdot \rangle$ is continuous). The discussion of the equality $\langle x, q y \rangle = q \, \langle x, y \rangle$ is nearly identical, with the main difference being that $h$ must be redefined as $h(q) := q y$ and $k$ as $k(v) := \langle x, v \rangle$.
  8. ^ Let $x, y, z$ be vectors and let $a$ be a scalar. Then $\langle x, y + z \rangle = \overline{\langle y + z, x \rangle} = \overline{\langle y, x \rangle} + \overline{\langle z, x \rangle} = \langle x, y \rangle + \langle x, z \rangle$ and $\langle x, a y \rangle = \overline{\langle a y, x \rangle} = \overline{a \, \langle y, x \rangle} = \overline{a} \, \overline{\langle y, x \rangle} = \overline{a} \, \langle x, y \rangle$.

References

  1. ^ a b c d e f g h i j Trèves 2006, pp. 112–125.
  2. ^ Schaefer & Wolff 1999, pp. 40–45.
  3. ^ a b c d Weisstein, Eric W. "Inner Product". mathworld.wolfram.com. Retrieved 2020-08-25.
  4. ^ Moore, Gregory H. (1995). "The axiomatization of linear algebra: 1875-1940". Historia Mathematica. 22 (3): 262–303. doi:10.1006/hmat.1995.1025.
  5. ^ a b Schaefer & Wolff 1999, pp. 36–72.
  6. ^ Jain, P. K.; Ahmad, Khalil (1995). "5.1 Definitions and basic properties of inner product spaces and Hilbert spaces". Functional Analysis (2nd ed.). New Age International. p. 203. ISBN 81-224-0801-X.
  7. ^ Prugovec̆ki, Eduard (1981). "Definition 2.1". Quantum Mechanics in Hilbert Space (2nd ed.). Academic Press. pp. 18ff. ISBN 0-12-566060-X.
  8. ^ Emch, Gerard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. New York: Wiley-Interscience. ISBN 978-0-471-23900-0.
  9. ^ Finkbeiner, Daniel T. (2013), Introduction to Matrices and Linear Transformations, Dover Books on Mathematics (3rd ed.), Courier Dover Publications, p. 242, ISBN 9780486279664.
  10. ^ Schechter 1996, pp. 601–603.
  11. ^ Ouwehand, Peter (November 2010). "Spaces of Random Variables" (PDF). AIMS. Retrieved 2017-09-05.
  12. ^ Siegrist, Kyle (1997). "Vector Spaces of Random Variables". Random: Probability, Mathematical Statistics, Stochastic Processes. Retrieved 2017-09-05.
  13. ^ Bigoni, Daniele (2015). "Appendix B: Probability theory and functional spaces" (PDF). Uncertainty Quantification with Applications to Engineering Problems (PhD). Technical University of Denmark. Retrieved 2017-09-05.
  14. ^ a b Rudin 1991, pp. 306–312.
  15. ^ Apostol, Tom M. (1967). "Ptolemy's Inequality and the Chordal Metric". Mathematics Magazine. 40 (5): 233–235. doi:10.2307/2688275. JSTOR 2688275.

Bibliography

  • Axler, Sheldon (1997). Linear Algebra Done Right (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-98258-8.
  • Emch, Gerard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. Wiley-Interscience. ISBN 978-0-471-23900-0.
  • Halmos, Paul R. (8 November 1982). A Hilbert Space Problem Book. Graduate Texts in Mathematics. 19 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90685-0. OCLC 8169781.
  • Lax, Peter D. (2002). Functional Analysis (PDF). Pure and Applied Mathematics. New York: Wiley-Interscience. ISBN 978-0-471-55604-6. OCLC 47767143. Retrieved July 22, 2020.
  • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
  • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
  • Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
  • Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
  • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
  • Young, Nicholas (1988). An Introduction to Hilbert Space. Cambridge University Press. ISBN 978-0-521-33717-5.