Lie's theorem

## Summary

In mathematics, specifically the theory of Lie algebras, Lie's theorem states that,[1] over an algebraically closed field of characteristic zero, if ${\displaystyle \pi :{\mathfrak {g}}\to {\mathfrak {gl}}(V)}$ is a finite-dimensional representation of a solvable Lie algebra, then there exists a flag ${\displaystyle V=V_{0}\supset V_{1}\supset \cdots \supset V_{n}=0}$ of subspaces invariant under ${\displaystyle \pi ({\mathfrak {g}})}$ with ${\displaystyle \operatorname {codim} V_{i}=i}$; that is, ${\displaystyle \pi (X)(V_{i})\subseteq V_{i}}$ for each ${\displaystyle X\in {\mathfrak {g}}}$ and each i.

Put another way, the theorem says there is a basis for V such that all linear transformations in ${\displaystyle \pi ({\mathfrak {g}})}$ are represented by upper triangular matrices.[2] This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices generate an abelian Lie algebra, which is a fortiori solvable.
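As a small illustration of the Frobenius result mentioned above, the following plain-Python sketch (the matrices and helper functions are our own choices, not from any source) checks that two commuting matrices share an eigenvector:

```python
# B is a polynomial in A, so A and B commute; they generate an
# abelian (hence solvable) Lie algebra, and Lie's theorem predicts
# a common eigenvector. Here both are already upper triangular in
# the standard basis, and (1, 0) is a shared eigenvector.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = matmul(A, A)                      # B = A^2 = [[1, 2], [0, 1]]

assert matmul(A, B) == matmul(B, A)   # A and B commute

v = [1, 0]
assert matvec(A, v) == [1, 0]         # A v = 1·v
assert matvec(B, v) == [1, 0]         # B v = 1·v
```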

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see § Consequences). Also, to each flag in a finite-dimensional vector space V there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that ${\displaystyle \pi ({\mathfrak {g}})}$ is contained in some Borel subalgebra of ${\displaystyle {\mathfrak {gl}}(V)}$.[1]

## Counter-example

For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space ${\displaystyle k[x]/(x^{p})}$, which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
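The counterexample can be verified by direct computation for a small prime. The sketch below (our own encoding; all arithmetic is mod p) takes p = 3, writes multiplication by x and d/dx as matrices on ${\displaystyle k[x]/(x^{3})}$ in the basis (1, x, x²), checks the commutation relation [d/dx, x·] = 1, and confirms by brute force that no nonzero vector over GF(3) is an eigenvector of both operators:

```python
from itertools import product

p = 3
# column j = image of the basis vector x^j
X = [[0, 0, 0],    # multiplication by x: 1 -> x, x -> x^2, x^2 -> 0
     [1, 0, 0],
     [0, 1, 0]]
D = [[0, 1, 0],    # d/dx: 1 -> 0, x -> 1, x^2 -> 2x
     [0, 0, 2],
     [0, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p for j in range(p)]
            for i in range(p)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(p)) % p for i in range(p)]

# [D, X] = DX - XD is the identity, so 1, x, d/dx span a
# 3-dimensional nilpotent (Heisenberg-type) Lie algebra.
DX, XD = matmul(D, X), matmul(X, D)
comm = [[(DX[i][j] - XD[i][j]) % p for j in range(p)] for i in range(p)]
assert comm == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Both X and D are nilpotent in characteristic 3, so 0 is their
# only eigenvalue; an eigenvector must lie in the kernel.
def is_eigen(A, v):
    return all(c == 0 for c in matvec(A, list(v)))

common = [v for v in product(range(p), repeat=p)
          if any(v) and is_eigen(X, v) and is_eigen(D, v)]
assert common == []    # no common eigenvector exists
```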

## Proof

The proof is by induction on the dimension of ${\displaystyle {\mathfrak {g}}}$ and consists of several steps. (Note: the structure of the proof is very similar to that of Engel's theorem.) The base case is trivial, so we assume the dimension of ${\displaystyle {\mathfrak {g}}}$ is positive. We also assume V is not zero. For simplicity, we write ${\displaystyle X\cdot v=\pi (X)(v)}$.

Step 1: Observe that the theorem is equivalent to the statement:[3]

• There exists a vector in V that is an eigenvector for each linear transformation in ${\displaystyle \pi ({\mathfrak {g}})}$ .

Indeed, the theorem says in particular that a nonzero vector spanning ${\displaystyle V_{n-1}}$ is a common eigenvector for all the linear transformations in ${\displaystyle \pi ({\mathfrak {g}})}$. Conversely, if v is a common eigenvector, take ${\displaystyle V_{n-1}}$ to be its span; then ${\displaystyle \pi ({\mathfrak {g}})}$ admits a common eigenvector in the quotient ${\displaystyle V/V_{n-1}}$, and one repeats the argument.

Step 2: Find an ideal ${\displaystyle {\mathfrak {h}}}$  of codimension one in ${\displaystyle {\mathfrak {g}}}$ .

Let ${\displaystyle D{\mathfrak {g}}=[{\mathfrak {g}},{\mathfrak {g}}]}$ be the derived algebra. Since ${\displaystyle {\mathfrak {g}}}$ is solvable and has positive dimension, ${\displaystyle D{\mathfrak {g}}\neq {\mathfrak {g}}}$, and so the quotient ${\displaystyle {\mathfrak {g}}/D{\mathfrak {g}}}$ is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, it corresponds to an ideal of codimension one in ${\displaystyle {\mathfrak {g}}}$.

Step 3: There exists some linear functional ${\displaystyle \lambda }$  in ${\displaystyle {\mathfrak {h}}^{*}}$  such that

${\displaystyle V_{\lambda }=\{v\in V|X\cdot v=\lambda (X)v,X\in {\mathfrak {h}}\}}$

is nonzero. This follows by applying the inductive hypothesis to ${\displaystyle {\mathfrak {h}}}$, which is solvable (being a subalgebra of a solvable Lie algebra) and of strictly smaller dimension: a common eigenvector for ${\displaystyle {\mathfrak {h}}}$ exists, and it is easy to check that the eigenvalues determine a linear functional.

Step 4: ${\displaystyle V_{\lambda }}$  is a ${\displaystyle {\mathfrak {g}}}$ -invariant subspace. (Note this step proves a general fact and does not involve solvability.)

Let ${\displaystyle Y\in {\mathfrak {g}}}$ and ${\displaystyle v\in V_{\lambda }}$; we need to prove ${\displaystyle Y\cdot v\in V_{\lambda }}$. If ${\displaystyle v=0}$ this is obvious, so assume ${\displaystyle v\neq 0}$ and set recursively ${\displaystyle v_{0}=v,\,v_{i+1}=Y\cdot v_{i}}$. Let ${\displaystyle U=\operatorname {span} \{v_{i}|i\geq 0\}}$ and let ${\displaystyle \ell \in \mathbb {N} _{0}}$ be the largest index such that ${\displaystyle v_{0},\ldots ,v_{\ell }}$ are linearly independent.

We first show that ${\displaystyle v_{0},\ldots ,v_{\ell }}$ generate U, so that ${\displaystyle \alpha =(v_{0},\ldots ,v_{\ell })}$ is a basis of U. Indeed, assume by contradiction that this is not the case and let ${\displaystyle m\in \mathbb {N} _{0}}$ be the smallest index with ${\displaystyle v_{m}\notin \langle v_{0},\ldots ,v_{\ell }\rangle }$; then clearly ${\displaystyle m\geq \ell +1}$. Since ${\displaystyle v_{0},\ldots ,v_{\ell +1}}$ are linearly dependent, ${\displaystyle v_{\ell +1}}$ is a linear combination of ${\displaystyle v_{0},\ldots ,v_{\ell }}$. Applying the map ${\displaystyle Y^{m-\ell -1}}$, it follows that ${\displaystyle v_{m}}$ is a linear combination of ${\displaystyle v_{m-\ell -1},\ldots ,v_{m-1}}$. By the minimality of m, each of these vectors is a linear combination of ${\displaystyle v_{0},\ldots ,v_{\ell }}$, hence so is ${\displaystyle v_{m}}$, and we get the desired contradiction.

Next we prove by induction that for every ${\displaystyle n\in \mathbb {N} _{0}}$ and ${\displaystyle X\in {\mathfrak {h}}}$ there exist elements ${\displaystyle a_{0,n,X},\ldots ,a_{n,n,X}}$ of the base field such that ${\displaystyle a_{n,n,X}=\lambda (X)}$ and

${\displaystyle X\cdot v_{n}=\sum _{i=0}^{n}a_{i,n,X}v_{i}.}$

The ${\displaystyle n=0}$ case is straightforward since ${\displaystyle X\cdot v_{0}=\lambda (X)v_{0}}$. Now assume that we have proved the claim for some ${\displaystyle n\in \mathbb {N} _{0}}$ and all elements of ${\displaystyle {\mathfrak {h}}}$, and let ${\displaystyle X\in {\mathfrak {h}}}$. Since ${\displaystyle {\mathfrak {h}}}$ is an ideal, we have ${\displaystyle [X,Y]\in {\mathfrak {h}}}$, and thus

${\displaystyle X\cdot v_{n+1}=Y\cdot (X\cdot v_{n})+[X,Y]\cdot v_{n}=Y\cdot \sum _{i=0}^{n}a_{i,n,X}v_{i}+\sum _{i=0}^{n}a_{i,n,[X,Y]}v_{i}=a_{0,n,[X,Y]}v_{0}+\sum _{i=1}^{n}(a_{i-1,n,X}+a_{i,n,[X,Y]})v_{i}+\lambda (X)v_{n+1},}$

and the induction step follows. This implies that for every ${\displaystyle X\in {\mathfrak {h}}}$  the subspace U is an invariant subspace of X and the matrix of the restricted map ${\displaystyle \pi (X)|_{U}}$  in the basis ${\displaystyle \alpha }$  is upper triangular with diagonal elements equal to ${\displaystyle \lambda (X)}$ , hence ${\displaystyle \operatorname {tr} (\pi (X)|_{U})=\dim(U)\lambda (X)}$ . Applying this with ${\displaystyle [X,Y]\in {\mathfrak {h}}}$  instead of X gives ${\displaystyle \operatorname {tr} (\pi ([X,Y])|_{U})=\dim(U)\lambda ([X,Y])}$ . On the other hand, U is also obviously an invariant subspace of Y, and so

${\displaystyle \operatorname {tr} (\pi ([X,Y])|_{U})=\operatorname {tr} ([\pi (X),\pi (Y)]|_{U})=\operatorname {tr} ([\pi (X)|_{U},\pi (Y)|_{U}])=0}$

since commutators have zero trace, and thus ${\displaystyle \dim(U)\lambda ([X,Y])=0}$. Since ${\displaystyle \dim(U)>0}$ and ${\displaystyle \dim(U)}$ is invertible in the base field (by the assumption on its characteristic), ${\displaystyle \lambda ([X,Y])=0}$ and

${\displaystyle X\cdot (Y\cdot v)=Y\cdot (X\cdot v)+[X,Y]\cdot v=Y\cdot (\lambda (X)v)+\lambda ([X,Y])v=\lambda (X)(Y\cdot v),}$

and so ${\displaystyle Y\cdot v\in V_{\lambda }}$ .

Step 5: Finish up the proof by finding a common eigenvector.

Write ${\displaystyle {\mathfrak {g}}={\mathfrak {h}}+L}$ where L is a one-dimensional vector subspace. Since the base field is algebraically closed, there exists an eigenvector in ${\displaystyle V_{\lambda }}$ for some (thus every) nonzero element of L. Since that vector is also an eigenvector for each element of ${\displaystyle {\mathfrak {h}}}$, the proof is complete. ${\displaystyle \square }$
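A concrete instance of the theorem can be checked by hand. The sketch below (our own example) takes the non-abelian 2-dimensional solvable Lie algebra with bracket [x, y] = y in the representation x ↦ [[1,0],[0,0]], y ↦ [[0,1],[0,0]]: both matrices are upper triangular, and e₁ = (1, 0) is a common eigenvector, as Step 1 predicts.

```python
# Verify the bracket relation [X, Y] = Y (so span{X, Y} is a
# solvable, non-nilpotent Lie algebra) and exhibit the common
# eigenvector e1 = (1, 0).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

X = [[1, 0], [0, 0]]
Y = [[0, 1], [0, 0]]

XY, YX = matmul(X, Y), matmul(Y, X)
bracket = [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]
assert bracket == Y            # [X, Y] = Y

e1 = [1, 0]
assert matvec(X, e1) == [1, 0]  # X e1 = 1·e1
assert matvec(Y, e1) == [0, 0]  # Y e1 = 0·e1
```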

## Consequences

The theorem applies in particular to the adjoint representation ${\displaystyle \operatorname {ad} :{\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})}$ of a (finite-dimensional) solvable Lie algebra ${\displaystyle {\mathfrak {g}}}$ over an algebraically closed field of characteristic zero; thus, one can choose a basis of ${\displaystyle {\mathfrak {g}}}$ with respect to which ${\displaystyle \operatorname {ad} ({\mathfrak {g}})}$ consists of upper triangular matrices. It follows easily that for each ${\displaystyle x,y\in {\mathfrak {g}}}$, ${\displaystyle \operatorname {ad} ([x,y])=[\operatorname {ad} (x),\operatorname {ad} (y)]}$ has diagonal consisting of zeros; i.e., ${\displaystyle \operatorname {ad} ([x,y])}$ is a strictly upper triangular matrix. This implies that ${\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]}$ is a nilpotent Lie algebra. Moreover, if the base field is not algebraically closed, then solvability and nilpotency of a Lie algebra are unaffected by extending the base field to its algebraic closure. Hence, one concludes the statement (the other implication is obvious):[4]

A finite-dimensional Lie algebra ${\displaystyle {\mathfrak {g}}}$  over a field of characteristic zero is solvable if and only if the derived algebra ${\displaystyle D{\mathfrak {g}}=[{\mathfrak {g}},{\mathfrak {g}}]}$  is nilpotent.
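The matrix facts underlying this corollary are easy to test numerically. The sketch below (example matrices are our own) checks that the commutator of two upper triangular matrices is strictly upper triangular, and that a strictly upper triangular n × n matrix is nilpotent:

```python
n = 3
A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]   # upper triangular
B = [[7, 8, 9], [0, 1, 2], [0, 0, 3]]   # upper triangular

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# C = [A, B] = AB - BA
AB, BA = matmul(A, B), matmul(B, A)
C = [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

# C is strictly upper triangular: zero on and below the diagonal
assert all(C[i][j] == 0 for i in range(n) for j in range(n) if i >= j)

# strictly upper triangular implies nilpotent: C^n = 0
C3 = matmul(matmul(C, C), C)
assert C3 == [[0] * n for _ in range(n)]
```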

Lie's theorem also establishes one direction in Cartan's criterion for solvability:

If V is a finite-dimensional vector space over a field of characteristic zero and ${\displaystyle {\mathfrak {g}}\subseteq {\mathfrak {gl}}(V)}$  a Lie subalgebra, then ${\displaystyle {\mathfrak {g}}}$  is solvable if and only if ${\displaystyle \operatorname {tr} (XY)=0}$  for every ${\displaystyle X\in {\mathfrak {g}}}$  and ${\displaystyle Y\in [{\mathfrak {g}},{\mathfrak {g}}]}$ .[5]

Indeed, as above, after extending the base field, the implication ${\displaystyle \Rightarrow }$  is seen easily. (The converse is more difficult to prove.)
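In the upper-triangular model, the trace condition is transparent. The sketch below (basis and ranges of our choosing) brute-forces it for the solvable algebra ${\displaystyle {\mathfrak {g}}}$ of upper triangular 2 × 2 matrices, whose derived algebra consists of the strictly upper triangular matrices:

```python
from itertools import product

def tr_prod(X, Y):
    # trace of the 2x2 matrix product XY
    return sum(sum(X[i][k] * Y[k][i] for k in range(2)) for i in range(2))

# X ranges over upper triangular matrices (elements of g),
# Y over strictly upper triangular ones (elements of [g, g]);
# tr(XY) vanishes in every case, as the criterion requires.
for a, b, c, d in product(range(-2, 3), repeat=4):
    X = [[a, b], [0, c]]
    Y = [[0, d], [0, 0]]
    assert tr_prod(X, Y) == 0
```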

Lie's theorem (for various V) is equivalent to the statement:[6]

For a solvable Lie algebra ${\displaystyle {\mathfrak {g}}}$  over an algebraically closed field of characteristic zero, each finite-dimensional simple ${\displaystyle {\mathfrak {g}}}$ -module (i.e., irreducible as a representation) has dimension one.

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional ${\displaystyle {\mathfrak {g}}}$-module V, let ${\displaystyle V_{1}}$ be a maximal ${\displaystyle {\mathfrak {g}}}$-submodule (which exists by finiteness of the dimension). Then, by maximality, ${\displaystyle V/V_{1}}$ is simple and thus one-dimensional. Induction on the dimension of V now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true over any base field since in this case every vector subspace is a Lie subalgebra.[7]

Here is another quite useful application:[8]

Let ${\displaystyle {\mathfrak {g}}}$  be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical ${\displaystyle \operatorname {rad} ({\mathfrak {g}})}$ . Then each finite-dimensional simple representation ${\displaystyle \pi :{\mathfrak {g}}\to {\mathfrak {gl}}(V)}$  is the tensor product of a simple representation of ${\displaystyle {\mathfrak {g}}/\operatorname {rad} ({\mathfrak {g}})}$  with a one-dimensional representation of ${\displaystyle {\mathfrak {g}}}$  (i.e., a linear functional vanishing on Lie brackets).

By Lie's theorem, we can find a linear functional ${\displaystyle \lambda }$ of ${\displaystyle \operatorname {rad} ({\mathfrak {g}})}$ such that the weight space ${\displaystyle V_{\lambda }}$ of ${\displaystyle \operatorname {rad} ({\mathfrak {g}})}$ is nonzero. By Step 4 of the proof of Lie's theorem, ${\displaystyle V_{\lambda }}$ is also a ${\displaystyle {\mathfrak {g}}}$-module; since V is simple, ${\displaystyle V=V_{\lambda }}$. In particular, for each ${\displaystyle X\in \operatorname {rad} ({\mathfrak {g}})}$, ${\displaystyle \operatorname {tr} (\pi (X))=\dim(V)\lambda (X)}$. Extend ${\displaystyle \lambda }$ to a linear functional on ${\displaystyle {\mathfrak {g}}}$ that vanishes on ${\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]}$; ${\displaystyle \lambda }$ is then a one-dimensional representation of ${\displaystyle {\mathfrak {g}}}$. Now, ${\displaystyle (\pi ,V)\simeq (\pi ,V)\otimes (-\lambda )\otimes \lambda }$. Since ${\displaystyle \pi }$ coincides with ${\displaystyle \lambda }$ on ${\displaystyle \operatorname {rad} ({\mathfrak {g}})}$, we have that ${\displaystyle V\otimes (-\lambda )}$ is trivial on ${\displaystyle \operatorname {rad} ({\mathfrak {g}})}$ and thus is the restriction of a (simple) representation of ${\displaystyle {\mathfrak {g}}/\operatorname {rad} ({\mathfrak {g}})}$. ${\displaystyle \square }$

## References

1. ^ a b Serre 2001, Theorem 3
2. ^ Humphreys 1972, Ch. II, § 4.1., Corollary A.
3. ^ Serre 2001, Theorem 3″
4. ^ Humphreys 1972, Ch. II, § 4.1., Corollary C.
5. ^ Serre 2001, Theorem 4
6. ^ Serre 2001, Theorem 3'
7. ^ Jacobson 1979, Ch. II, § 6, Lemma 5.
8. ^ Fulton & Harris 1991, Proposition 9.17.

## Sources

• Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
• Humphreys, James E. (1972), Introduction to Lie Algebras and Representation Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90053-7.
• Jacobson, Nathan (1979), Lie algebras (Republication of the 1962 original ed.), New York: Dover Publications, Inc., ISBN 0-486-63832-4, MR 0559927
• Serre, Jean-Pierre (2001), Complex Semisimple Lie Algebras, Berlin: Springer, doi:10.1007/978-3-642-56884-8, ISBN 3-5406-7827-1, MR 1808366