Augmented matrix

## Summary

In linear algebra, an augmented matrix ${\displaystyle (A\vert B)}$ is a ${\displaystyle k\times (n+1)}$ matrix obtained by appending a ${\displaystyle k}$-dimensional column vector ${\displaystyle B}$, on the right, as a further column to a ${\displaystyle k\times n}$-dimensional matrix ${\displaystyle A}$. This is usually done for the purpose of performing the same elementary row operations on the augmented matrix ${\displaystyle (A\vert B)}$ as is done on the original one ${\displaystyle A}$ when solving a system of linear equations by Gaussian elimination.

For example, given the matrices ${\displaystyle A}$ and column vector ${\displaystyle B}$, where

${\displaystyle A={\begin{bmatrix}1&3&2\\2&0&1\\5&2&2\end{bmatrix}},\quad B={\begin{bmatrix}4\\3\\1\end{bmatrix}},}$
the augmented matrix ${\displaystyle (A\vert B)}$ is
${\displaystyle (A|B)=\left[{\begin{array}{ccc|c}1&3&2&4\\2&0&1&3\\5&2&2&1\end{array}}\right].}$
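As a quick sketch, the augmented matrix above can be built in NumPy by stacking $B$ as an extra column of $A$ (assuming NumPy is available; the variable names are illustrative):

```python
import numpy as np

# Coefficient matrix A and column vector B from the example above
A = np.array([[1, 3, 2],
              [2, 0, 1],
              [5, 2, 2]])
B = np.array([[4],
              [3],
              [1]])

# Append B on the right as a further column to form (A|B)
AB = np.hstack([A, B])
print(AB)
```

Any row operation applied to `AB` then acts on the coefficients and the right-hand sides simultaneously, which is the point of forming the augmented matrix.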

For a given number ${\displaystyle n}$ of unknowns, the number of solutions to a system of ${\displaystyle k}$ linear equations depends only on the rank of the matrix of coefficients ${\displaystyle A}$ representing the system and the rank of the corresponding augmented matrix ${\displaystyle (A\vert B)}$ where the components of ${\displaystyle B}$ consist of the right hand sides of the ${\displaystyle k}$ successive linear equations. According to the Rouché–Capelli theorem, any system of linear equations

${\displaystyle AX=B}$

where ${\displaystyle X=(x_{1},\dots ,x_{n})^{T}}$ is the ${\displaystyle n}$-component column vector whose entries are the unknowns of the system, is inconsistent (has no solutions) if the rank of the augmented matrix ${\displaystyle (A\vert B)}$ is greater than the rank of the coefficient matrix ${\displaystyle A}$. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables ${\displaystyle n}$. Otherwise the general solution has ${\displaystyle j}$ free parameters, where ${\displaystyle j}$ is the difference between the number of variables ${\displaystyle n}$ and the rank; in such a case the solutions form an affine space of dimension equal to this difference.
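The Rouché–Capelli classification can be sketched in NumPy by comparing the two ranks (a minimal illustration; `classify_system` is a hypothetical helper, not a library function):

```python
import numpy as np

def classify_system(A, B):
    """Classify A x = B by comparing rank(A) with rank(A|B)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float).reshape(-1, 1)
    n = A.shape[1]                       # number of unknowns
    rank_A = np.linalg.matrix_rank(A)
    rank_AB = np.linalg.matrix_rank(np.hstack([A, B]))
    if rank_AB > rank_A:
        return "inconsistent"            # no solutions
    if rank_A == n:
        return "unique solution"
    # Consistent but rank-deficient: an affine space of solutions
    return f"{n - rank_A} free parameters"
```

Note that `matrix_rank` computes a numerical rank from a singular-value decomposition, so for matrices with very small singular values the result depends on a tolerance.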

The inverse of a nonsingular square matrix ${\displaystyle A}$ of dimension ${\displaystyle n\times n}$ may be found by appending the ${\displaystyle n\times n}$ identity matrix ${\displaystyle \mathbf {I} }$ to the right of ${\displaystyle A}$ to form the ${\displaystyle n\times 2n}$ dimensional augmented matrix ${\displaystyle (A\vert \mathbf {I} )}$. If elementary row operations are applied to transform the left-hand ${\displaystyle n\times n}$ block into the identity matrix ${\displaystyle \mathbf {I} }$, the right-hand ${\displaystyle n\times n}$ block is then the inverse matrix ${\displaystyle A^{-1}}$.

## Example of finding the inverse of a matrix

Let ${\displaystyle A}$  be the square 2×2 matrix

${\displaystyle A={\begin{bmatrix}1&3\\-5&0\end{bmatrix}}.}$

To find the inverse of ${\displaystyle A}$ we form the augmented matrix ${\displaystyle (A\vert \mathbf {I} _{2})}$, where ${\displaystyle \mathbf {I} _{2}}$ is the ${\displaystyle 2\times 2}$ identity matrix. We then reduce the part of ${\displaystyle (A\vert \mathbf {I} _{2})}$ corresponding to ${\displaystyle A}$ to the identity matrix using elementary row operations on ${\displaystyle (A\vert \mathbf {I} _{2})}$.

${\displaystyle (A\vert \mathbf {I} _{2})=\left[{\begin{array}{cc|cc}1&3&1&0\\-5&0&0&1\end{array}}\right]}$

Carrying out the row reduction gives

${\displaystyle (I|A^{-1})=\left[{\begin{array}{cc|cc}1&0&0&-{\frac {1}{5}}\\0&1&{\frac {1}{3}}&{\frac {1}{15}}\end{array}}\right],}$

the right part of which is the inverse matrix ${\displaystyle A^{-1}}$.
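The same Gauss–Jordan procedure can be sketched in NumPy; `invert_via_augmentation` is an illustrative helper, not a library routine, and in practice one would call `np.linalg.inv` instead:

```python
import numpy as np

def invert_via_augmentation(A):
    """Row-reduce (A|I) until the left block is I; the right block is A^-1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])        # the augmented matrix (A|I)
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]            # scale so the pivot entry is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # eliminate the column elsewhere
    return M[:, n:]                      # right-hand block is the inverse

A = np.array([[1, 3], [-5, 0]])
print(invert_via_augmentation(A))
```

For the 2×2 matrix above this reproduces the inverse shown in the worked example.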

## Existence and number of solutions

Consider the system of equations

${\displaystyle {\begin{aligned}x+y+2z&=2\\x+y+z&=3\\2x+2y+2z&=6.\end{aligned}}}$

The coefficient matrix is

${\displaystyle A={\begin{bmatrix}1&1&2\\1&1&1\\2&2&2\\\end{bmatrix}},}$

and the augmented matrix is
${\displaystyle (A|B)=\left[{\begin{array}{ccc|c}1&1&2&2\\1&1&1&3\\2&2&2&6\end{array}}\right].}$

Since both of these matrices have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, namely 3, there are infinitely many solutions.
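The two ranks can be checked numerically, as a sketch with NumPy:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 1, 1],
              [2, 2, 2]])
B = np.array([[2], [3], [6]])
AB = np.hstack([A, B])

# Both ranks are 2, and 2 < 3 unknowns: consistent, infinitely many solutions
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(AB))   # 2
```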

In contrast, consider the system

${\displaystyle {\begin{aligned}x+y+2z&=3\\x+y+z&=1\\2x+2y+2z&=5.\end{aligned}}}$

The coefficient matrix is

${\displaystyle A={\begin{bmatrix}1&1&2\\1&1&1\\2&2&2\\\end{bmatrix}},}$

and the augmented matrix is
${\displaystyle (A|B)=\left[{\begin{array}{ccc|c}1&1&2&3\\1&1&1&1\\2&2&2&5\end{array}}\right].}$

In this example the coefficient matrix has rank 2 while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent rows has made the system of equations inconsistent.
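The rank gap that makes this system inconsistent can likewise be verified, as a NumPy sketch:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 1, 1],
              [2, 2, 2]])
B = np.array([[3], [1], [5]])
AB = np.hstack([A, B])

# rank(A|B) exceeds rank(A), so by Rouché–Capelli the system has no solution
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(AB))   # 3
```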

## Solution of a linear system

In linear algebra, an augmented matrix represents the coefficients and the constant terms of a system of linear equations in a single array. For the set of equations

${\displaystyle {\begin{aligned}x+2y+3z&=0\\3x+4y+7z&=2\\6x+5y+9z&=11\end{aligned}}}$

the coefficients and constant terms give the matrices
${\displaystyle A={\begin{bmatrix}1&2&3\\3&4&7\\6&5&9\end{bmatrix}},\quad B={\begin{bmatrix}0\\2\\11\end{bmatrix}},}$

and hence give the augmented matrix
${\displaystyle (A|B)=\left[{\begin{array}{ccc|c}1&2&3&0\\3&4&7&2\\6&5&9&11\end{array}}\right].}$

Note that the rank of the coefficient matrix, which is 3, equals the rank of the augmented matrix, so at least one solution exists; and since this rank equals the number of unknowns, there is exactly one solution.

To obtain the solution, row operations can be performed on the augmented matrix to obtain the identity matrix on the left side, yielding

${\displaystyle \left[{\begin{array}{ccc|r}1&0&0&4\\0&1&0&1\\0&0&1&-2\\\end{array}}\right],}$

so the solution of the system is (x, y, z) = (4, 1, −2).
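Since the coefficient matrix has full rank, the unique solution can also be obtained directly, as a NumPy sketch:

```python
import numpy as np

# rank(A) = 3 = number of unknowns, so np.linalg.solve
# returns the unique solution of A x = B
A = np.array([[1, 2, 3],
              [3, 4, 7],
              [6, 5, 9]])
B = np.array([0, 2, 11])
x = np.linalg.solve(A, B)
print(x)   # [ 4.  1. -2.]
```

Internally `np.linalg.solve` uses an LU factorization with partial pivoting, which is the matrix form of the row reduction carried out above.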
