Block matrix

In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.[1][2]

Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices.[3][2] For example, the 3×4 matrix presented below is divided by horizontal and vertical lines into four blocks: the top-left 2×3 block, the top-right 2×1 block, the bottom-left 1×3 block, and the bottom-right 1×1 block.

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ \hline a_{31} & a_{32} & a_{33} & a_{34} \end{array}\right]$$

Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

This notion can be made more precise for an $n$ by $m$ matrix $M$ by partitioning $n$ into a collection $\text{rowgroups}$, and then partitioning $m$ into a collection $\text{colgroups}$. The original matrix is then considered as the "total" of these groups, in the sense that the $(i, j)$ entry of the original matrix corresponds in a 1-to-1 way with some $(s, t)$ offset entry of some $(x, y)$, where $x \in \text{rowgroups}$ and $y \in \text{colgroups}$.[4]

Block matrix algebra arises in general from biproducts in categories of matrices.[5]

A 168×168 element block matrix with 12×12, 12×24, 24×12, and 24×24 sub-matrices. Non-zero elements are in blue, zero elements are grayed.

Example

The matrix

$$P = \begin{bmatrix} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{bmatrix}$$

can be visualized as divided into four blocks, as

$$P = \left[\begin{array}{cc|cc} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ \hline 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{array}\right].$$

The horizontal and vertical lines have no special mathematical meaning,[6][7] but are a common way to visualize a partition.[6][7] By this partition, $P$ is partitioned into four 2×2 blocks, as

$$P_{11} = \begin{bmatrix} 1 & 2 \\ 1 & 5 \end{bmatrix}, \quad P_{12} = \begin{bmatrix} 2 & 7 \\ 6 & 2 \end{bmatrix}, \quad P_{21} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix}, \quad P_{22} = \begin{bmatrix} 4 & 5 \\ 6 & 7 \end{bmatrix}.$$

The partitioned matrix can then be written as

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.$$[8]
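
For illustration, the partition above can be reproduced with array slicing. The following is a minimal sketch assuming NumPy, with illustrative variable names:

    import numpy as np

    P = np.array([[1, 2, 2, 7],
                  [1, 5, 6, 2],
                  [3, 3, 4, 5],
                  [3, 3, 6, 7]])

    # Slice out the four 2x2 blocks of the partition.
    P11, P12 = P[:2, :2], P[:2, 2:]
    P21, P22 = P[2:, :2], P[2:, 2:]

    # Reassembling the blocks recovers the original matrix.
    assert (np.block([[P11, P12], [P21, P22]]) == P).all()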

Formal definition

Let $A \in \mathbb{C}^{m \times n}$. A partitioning of $A$ is a representation of $A$ in the form

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},$$

where $A_{ij} \in \mathbb{C}^{m_i \times n_j}$ are contiguous submatrices, $\sum_{i=1}^{p} m_i = m$, and $\sum_{j=1}^{q} n_j = n$.[9] The elements $A_{ij}$ of the partition are called blocks.[9]

By this definition, the blocks in any one column must all have the same number of columns.[9] Similarly, the blocks in any one row must have the same number of rows.[9]

Partitioning methods

A matrix can be partitioned in many ways.[9] For example, a matrix $A$ is said to be partitioned by columns if it is written as

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix},$$

where $a_j$ is the $j$th column of $A$.[9] A matrix can also be partitioned by rows:

$$A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix},$$

where $a_i^T$ is the $i$th row of $A$.[9]
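
In NumPy, for instance, these two partitions correspond to numpy.hsplit and numpy.vsplit. A minimal sketch (the matrix is illustrative):

    import numpy as np

    A = np.arange(12).reshape(3, 4)

    cols = np.hsplit(A, 4)  # partition by columns: four 3x1 blocks
    rows = np.vsplit(A, 3)  # partition by rows: three 1x4 blocks

    # Stacking the pieces back together recovers A.
    assert (np.hstack(cols) == A).all()
    assert (np.vstack(rows) == A).all()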

Common partitions

Often we encounter the 2×2 partition

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$[9]

particularly in the form where $A_{11}$ is a scalar:

$$A = \begin{bmatrix} a_{11} & a_{12}^T \\ a_{21} & A_{22} \end{bmatrix}.$$[9]

Block matrix operations

Transpose

Let

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix}$$

where $A_{ij} \in \mathbb{C}^{k_i \times \ell_j}$. (This matrix $A$ will be reused in § Addition and § Multiplication.) Then its transpose is

$$A^T = \begin{bmatrix} A_{11}^T & A_{21}^T & \cdots & A_{p1}^T \\ A_{12}^T & A_{22}^T & \cdots & A_{p2}^T \\ \vdots & \vdots & \ddots & \vdots \\ A_{1q}^T & A_{2q}^T & \cdots & A_{pq}^T \end{bmatrix},$$[9][10]

and the same equation holds with the transpose replaced by the conjugate transpose.[9]

Block transpose

A special form of matrix transpose can also be defined for block matrices, where individual blocks are reordered but not transposed. Let $A$ be a $k \times l$ block matrix with $m \times n$ blocks $A_{ij}$; the block transpose of $A$ is the $l \times k$ block matrix $A^{\mathcal{B}}$ with $m \times n$ blocks $\left(A^{\mathcal{B}}\right)_{ij} = A_{ji}$.[11] As with the conventional transpose operator, the block transpose is a linear mapping such that $(A + C)^{\mathcal{B}} = A^{\mathcal{B}} + C^{\mathcal{B}}$.[10] However, in general the property $(AC)^{\mathcal{B}} = C^{\mathcal{B}} A^{\mathcal{B}}$ does not hold unless the blocks of $A$ and $C$ commute.
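
The difference between the two operations can be made concrete in code. The sketch below (assuming NumPy; block_transpose is a hypothetical helper, not a library function) reorders equally sized blocks without transposing them:

    import numpy as np

    def block_transpose(A, m, n):
        """Block transpose of A viewed as a grid of m-by-n blocks:
        block (i, j) moves to position (j, i), but each block is
        left untransposed."""
        p, q = A.shape[0] // m, A.shape[1] // n
        B = np.empty((q * m, p * n), dtype=A.dtype)
        for i in range(p):
            for j in range(q):
                B[j*m:(j+1)*m, i*n:(i+1)*n] = A[i*m:(i+1)*m, j*n:(j+1)*n]
        return B

    A = np.arange(24).reshape(4, 6)   # a 2-by-3 grid of 2-by-2 blocks
    B = block_transpose(A, 2, 2)      # a 3-by-2 grid; blocks unchanged
    assert (B[2:4, 0:2] == A[0:2, 2:4]).all()  # block A_12 now sits at (2, 1)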

Addition

Let

$$B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1s} \\ B_{21} & B_{22} & \cdots & B_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ B_{r1} & B_{r2} & \cdots & B_{rs} \end{bmatrix},$$

where $B_{ij} \in \mathbb{C}^{m_i \times n_j}$, and let $A$ be the matrix defined in § Transpose. (This matrix $B$ will be reused in § Multiplication.) If $p = r$, $q = s$, $k_i = m_i$ for all $i$, and $\ell_j = n_j$ for all $j$, then

$$A + B = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1q} + B_{1q} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2q} + B_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} + B_{p1} & A_{p2} + B_{p2} & \cdots & A_{pq} + B_{pq} \end{bmatrix}.$$[9]

Multiplication

It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "conformable partitions"[12] between two matrices $A$ and $B$ such that all submatrix products that will be used are defined.[13]

Two matrices $A$ and $B$ are said to be partitioned conformally for the product $AB$, when $A$ and $B$ are partitioned into submatrices and if the multiplication $AB$ is carried out treating the submatrices as if they are scalars, but keeping the order, and when all products and sums of submatrices involved are defined.

— Arak M. Mathai and Hans J. Haubold, Linear Algebra: A Course for Physicists and Engineers[14]

Let $A$ be the matrix defined in § Transpose, and let $B$ be the matrix defined in § Addition, with $q = r$ and $\ell_k = m_k$ for all $k$, so that the partitions are conformable. Then the matrix product

$$C = AB$$

can be performed blockwise, yielding $C$ as a $p \times s$ block matrix. The blocks $C_{ij}$ of the resulting matrix $C$ are calculated by multiplying:

$$C_{ij} = \sum_{k=1}^{q} A_{ik} B_{kj}.$$[6]

Or, using the Einstein notation that implicitly sums over repeated indices:

$$C_{ij} = A_{ik} B_{kj}.$$

Depicting $C$ as a matrix, we have

$$C = \begin{bmatrix} \sum_{k=1}^{q} A_{1k} B_{k1} & \cdots & \sum_{k=1}^{q} A_{1k} B_{ks} \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{q} A_{pk} B_{k1} & \cdots & \sum_{k=1}^{q} A_{pk} B_{ks} \end{bmatrix}.$$[9]
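
A minimal sketch of the blockwise product, assuming NumPy (a uniform block size is used for simplicity, although the formula only requires conformable partitions; the blocks helper is hypothetical):

    import numpy as np

    def blocks(M, h, w):
        """View M as a grid of h-by-w blocks (sizes must divide evenly)."""
        p, q = M.shape[0] // h, M.shape[1] // w
        return [[M[i*h:(i+1)*h, j*w:(j+1)*w] for j in range(q)]
                for i in range(p)]

    rng = np.random.default_rng(0)
    A, B = rng.random((4, 6)), rng.random((6, 8))

    Ab = blocks(A, 2, 3)  # 2-by-2 grid of 2x3 blocks
    Bb = blocks(B, 3, 4)  # 2-by-2 grid of 3x4 blocks, conformable with Ab

    # C_ij = sum_k A_ik B_kj, computed purely on submatrices.
    C = np.block([[sum(Ab[i][k] @ Bb[k][j] for k in range(2))
                   for j in range(2)] for i in range(2)])

    assert np.allclose(C, A @ B)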

Inversion

If a matrix is partitioned into four blocks, it can be inverted blockwise as follows:

$$P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1} B (P/A)^{-1} C A^{-1} & -A^{-1} B (P/A)^{-1} \\ -(P/A)^{-1} C A^{-1} & (P/A)^{-1} \end{bmatrix},$$

where $A$ and $D$ are square blocks of arbitrary size, and $B$ and $C$ are conformable with them for partitioning. Furthermore, $A$ and the Schur complement of $A$ in $P$, $P/A = D - C A^{-1} B$, must be invertible.[15]

Equivalently, by permuting the blocks:

$$P^{-1} = \begin{bmatrix} (P/D)^{-1} & -(P/D)^{-1} B D^{-1} \\ -D^{-1} C (P/D)^{-1} & D^{-1} + D^{-1} C (P/D)^{-1} B D^{-1} \end{bmatrix}.$$[16]

Here, $D$ and the Schur complement of $D$ in $P$, $P/D = A - B D^{-1} C$, must be invertible.

If A and D are both invertible, then:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - B D^{-1} C\right)^{-1} & 0 \\ 0 & \left(D - C A^{-1} B\right)^{-1} \end{bmatrix} \begin{bmatrix} I & -B D^{-1} \\ -C A^{-1} & I \end{bmatrix}.$$

By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.
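
The first of these formulas can be checked numerically. A sketch assuming NumPy, with an illustrative, deliberately well-conditioned test matrix:

    import numpy as np
    from numpy.linalg import inv

    rng = np.random.default_rng(1)
    P = rng.random((5, 5)) + 5 * np.eye(5)  # keeps A and P/A invertible here
    A, B = P[:2, :2], P[:2, 2:]
    C, D = P[2:, :2], P[2:, 2:]

    Ai = inv(A)
    S = D - C @ Ai @ B   # Schur complement P/A
    Si = inv(S)

    P_inv = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                      [-Si @ C @ Ai,              Si]])

    assert np.allclose(P_inv, inv(P))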

Determinant

The determinant formula for a $2 \times 2$ matrix continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices $A, B, C, D$ with $A$ and $D$ square. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is

$$\det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A) \det(D) = \det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}.$$[16]

Using this formula, we can derive that the characteristic polynomials of $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}$ and $\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}$ are the same and are equal to the product of the characteristic polynomials of $A$ and $D$.[citation needed] Furthermore, if $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}$ or $\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}$ is diagonalizable, then $A$ and $D$ are diagonalizable too. The converse is false; simply check $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$.[citation needed]

If $A$ is invertible, one has

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \det\!\left(D - C A^{-1} B\right),$$[16]

and if $D$ is invertible, one has

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(D) \det\!\left(A - B D^{-1} C\right).$$[17][16]

If the blocks are square matrices of the same size, further formulas hold. For example, if $C$ and $D$ commute (i.e., $CD = DC$), then

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - BC).$$[18]

This formula has been generalized to matrices composed of more than $2 \times 2$ blocks, again under appropriate commutativity conditions among the individual blocks.[19]

For $A = D$ and $B = C$, the following formula holds (even if $A$ and $B$ do not commute):

$$\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = \det(A - B) \det(A + B).$$[16]
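
These identities are easy to verify numerically. A sketch assuming NumPy, with illustrative random blocks:

    import numpy as np
    from numpy.linalg import det, inv

    rng = np.random.default_rng(2)
    A, B, C, D = (rng.random((3, 3)) for _ in range(4))
    P = np.block([[A, B], [C, D]])

    # Schur-complement forms (A and D are invertible here).
    assert np.isclose(det(P), det(A) * det(D - C @ inv(A) @ B))
    assert np.isclose(det(P), det(D) * det(A - B @ inv(D) @ C))

    # det(AD - BC) needs CD = DC; taking D = I forces commutativity.
    D = np.eye(3)
    P = np.block([[A, B], [C, D]])
    assert np.isclose(det(P), det(A @ D - B @ C))

    # The A = D, B = C case needs no commutativity at all.
    P = np.block([[A, B], [B, A]])
    assert np.isclose(det(P), det(A - B) * det(A + B))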

Special types of block matrices

Direct sums and block diagonal matrices

Direct sum

For any arbitrary matrices $A$ (of size $m \times n$) and $B$ (of size $p \times q$), we have the direct sum of $A$ and $B$, denoted by $A \oplus B$ and defined as

$$A \oplus B = \begin{bmatrix} a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & b_{p1} & \cdots & b_{pq} \end{bmatrix}.$$[10]

For instance,

$$\begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \end{bmatrix} \oplus \begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2 & 0 & 0 \\ 2 & 3 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

This operation generalizes naturally to arbitrarily dimensioned arrays (provided that $A$ and $B$ have the same number of dimensions).

Note that any element in the direct sum of two vector spaces of matrices could be represented as a direct sum of two matrices.
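
In code, the direct sum of two matrices is what scipy.linalg.block_diag computes for the two-argument case. A NumPy-only sketch with a hypothetical helper:

    import numpy as np

    def direct_sum(A, B):
        """A (+) B: place A and B on the diagonal and zeros elsewhere."""
        m, n = A.shape
        p, q = B.shape
        out = np.zeros((m + p, n + q), dtype=np.result_type(A, B))
        out[:m, :n] = A
        out[m:, n:] = B
        return out

    A = np.array([[1, 3, 2], [2, 3, 1]])
    B = np.array([[1, 6], [0, 1]])
    print(direct_sum(A, B))  # the 4x5 matrix from the example above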

Block diagonal matrices

A block diagonal matrix is a block matrix that is a square matrix such that the main-diagonal blocks are square matrices and all off-diagonal blocks are zero matrices.[16] That is, a block diagonal matrix A has the form

$$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix},$$

where $A_k$ is a square matrix for all $k = 1, \ldots, n$. In other words, matrix $A$ is the direct sum of $A_1, \ldots, A_n$.[16] It can also be indicated as $A_1 \oplus A_2 \oplus \cdots \oplus A_n$[10] or $\operatorname{diag}(A_1, A_2, \ldots, A_n)$[10] (the latter being the same formalism used for a diagonal matrix). Any square matrix can trivially be considered a block diagonal matrix with only one block.

For the determinant and trace, the following properties hold:

$$\det A = \det A_1 \times \cdots \times \det A_n$$[20][21] and
$$\operatorname{tr} A = \operatorname{tr} A_1 + \cdots + \operatorname{tr} A_n.$$[16][21]

A block diagonal matrix is invertible if and only if each of its main-diagonal blocks is invertible, and in this case its inverse is another block diagonal matrix given by

$$\begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix}^{-1} = \begin{bmatrix} A_1^{-1} & 0 & \cdots & 0 \\ 0 & A_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n^{-1} \end{bmatrix}.$$[22]

The eigenvalues[23] and eigenvectors of $A$ are simply those of the blocks $A_1, \ldots, A_n$ combined.[21]
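
All of these properties follow directly from the block structure and can be checked in a few lines. A sketch assuming NumPy and SciPy:

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(3)
    A1 = rng.random((2, 2)) + 2 * np.eye(2)
    A2 = rng.random((3, 3)) + 2 * np.eye(3)
    A = block_diag(A1, A2)

    assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2))
    assert np.isclose(np.trace(A), np.trace(A1) + np.trace(A2))
    assert np.allclose(np.linalg.inv(A),
                       block_diag(np.linalg.inv(A1), np.linalg.inv(A2)))

    # The eigenvalues of A are those of A1 and A2 combined.
    eigs = np.sort_complex(np.linalg.eigvals(A))
    eigs_blocks = np.sort_complex(np.concatenate(
        [np.linalg.eigvals(A1), np.linalg.eigvals(A2)]))
    assert np.allclose(eigs, eigs_blocks)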

Block tridiagonal matrices

A block tridiagonal matrix is another special block matrix, which, like the block diagonal matrix, is a square matrix, having square blocks on its lower diagonal, main diagonal and upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix, but with submatrices in place of scalars. A block tridiagonal matrix $A$ has the form

$$A = \begin{bmatrix} B_1 & C_1 & & & \\ A_2 & B_2 & C_2 & & \\ & \ddots & \ddots & \ddots & \\ & & A_{n-1} & B_{n-1} & C_{n-1} \\ & & & A_n & B_n \end{bmatrix},$$

where $A_k$, $B_k$ and $C_k$ are square sub-matrices of the lower, main and upper diagonal respectively.[24][25]

Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational fluid dynamics). Optimized numerical methods for LU factorization are available,[26] and hence efficient solution algorithms for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices (see also Block LU decomposition); a sketch of this block variant is given below.
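
A minimal sketch of that block variant, assuming NumPy (block_thomas is a hypothetical helper; no pivoting is attempted, so the eliminated diagonal blocks are assumed to stay invertible):

    import numpy as np

    def block_thomas(A, B, C, d):
        """Solve M x = d for a block tridiagonal M with lower blocks
        A[0..n-2], diagonal blocks B[0..n-1], upper blocks C[0..n-2],
        and right-hand-side vectors d[0..n-1]."""
        n = len(B)
        Bp, dp = [B[0]], [d[0]]
        for k in range(1, n):           # forward elimination
            W = A[k-1] @ np.linalg.inv(Bp[k-1])
            Bp.append(B[k] - W @ C[k-1])
            dp.append(d[k] - W @ dp[k-1])
        x = [None] * n                  # back substitution
        x[-1] = np.linalg.solve(Bp[-1], dp[-1])
        for k in range(n - 2, -1, -1):
            x[k] = np.linalg.solve(Bp[k], dp[k] - C[k] @ x[k+1])
        return np.concatenate(x)

    rng = np.random.default_rng(4)
    B = [rng.random((2, 2)) + 3 * np.eye(2) for _ in range(3)]
    A = [rng.random((2, 2)) for _ in range(2)]
    C = [rng.random((2, 2)) for _ in range(2)]
    d = [rng.random(2) for _ in range(3)]
    Z = np.zeros((2, 2))
    M = np.block([[B[0], C[0], Z],
                  [A[0], B[1], C[1]],
                  [Z,    A[1], B[2]]])
    assert np.allclose(block_thomas(A, B, C, d),
                       np.linalg.solve(M, np.concatenate(d)))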

Block triangular matrices

Upper block triangular

A matrix $A$ is upper block triangular (or block upper triangular[27]) if

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ 0 & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} = 0$ for all $i > j$.[23][27]

Lower block triangular

A matrix $A$ is lower block triangular if

$$A = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} = 0$ for all $i < j$.[23]

Block Toeplitz matrices

A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix, as a Toeplitz matrix has elements repeated down the diagonal.

A matrix $A$ is block Toeplitz if $A_{(i,j)} = A_{(k,l)}$ for all $k - i = l - j$, that is,

$$A = \begin{bmatrix} A_{(1,1)} & A_{(1,2)} & A_{(1,3)} & \cdots \\ A_{(2,1)} & A_{(1,1)} & A_{(1,2)} & \cdots \\ A_{(3,1)} & A_{(2,1)} & A_{(1,1)} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$

where $A_{(i,j)} \in \mathbb{C}^{p \times q}$.[23]

Block Hankel matrices

A matrix $A$ is block Hankel if $A_{(i,j)} = A_{(k,l)}$ for all $i + j = k + l$, that is,

$$A = \begin{bmatrix} A_{(1,1)} & A_{(1,2)} & A_{(1,3)} & \cdots \\ A_{(1,2)} & A_{(1,3)} & A_{(1,4)} & \cdots \\ A_{(1,3)} & A_{(1,4)} & A_{(1,5)} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$

where $A_{(i,j)} \in \mathbb{C}^{p \times q}$.[23]
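
Both patterns are determined by one generating block per block diagonal or anti-diagonal, so they are easy to assemble. A sketch assuming NumPy, with hypothetical helpers:

    import numpy as np

    def block_toeplitz(gen, n):
        """n-by-n block Toeplitz matrix with block (i, j) = gen[i - j]."""
        return np.block([[gen[i - j] for j in range(n)] for i in range(n)])

    def block_hankel(gen, n):
        """n-by-n block Hankel matrix with block (i, j) = gen[i + j]."""
        return np.block([[gen[i + j] for j in range(n)] for i in range(n)])

    gen = {k: k * np.ones((2, 2)) for k in range(-2, 5)}  # toy 2x2 blocks
    T = block_toeplitz(gen, 3)  # constant along block diagonals
    H = block_hankel(gen, 3)    # constant along block anti-diagonals
    assert (T[0:2, 0:2] == T[2:4, 2:4]).all()  # T_(1,1) == T_(2,2)
    assert (H[0:2, 2:4] == H[2:4, 0:2]).all()  # H_(1,2) == H_(2,1)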

See also

  • Kronecker product (matrix direct product resulting in a block matrix)
  • Jordan normal form (canonical form of a linear operator on a finite-dimensional complex vector space)
  • Strassen algorithm (algorithm for matrix multiplication that is faster than the conventional matrix multiplication algorithm)

Notes

  1. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. "We shall find that it is sometimes convenient to subdivide a matrix into rectangular blocks of elements. This leads us to consider so-called partitioned, or block, matrices."
  2. Dobrushkin, Vladimir. "Partition Matrices". Linear Algebra with Mathematica. Retrieved 2024-03-24.
  3. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 30. ISBN 0-471-58742-7. "A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns."
  4. Indhumathi, D.; Sarala, S. (2014-05-16). "Fragment Analysis and Test Case Generation using F-Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing" (PDF). International Journal of Computer Applications. 93 (6): 13. doi:10.5120/16218-5662.
  5. Macedo, H.D.; Oliveira, J.N. (2013). "Typing linear algebra: A biproduct-oriented approach". Science of Computer Programming. 78 (11): 2160–2191. arXiv:1312.4818. doi:10.1016/j.scico.2012.07.012.
  6. Johnston, Nathaniel (2021). Introduction to Linear and Matrix Algebra. Cham, Switzerland: Springer Nature. pp. 30, 425. ISBN 978-3-030-52811-9.
  7. Johnston, Nathaniel (2021). Advanced Linear and Matrix Algebra. Cham, Switzerland: Springer Nature. p. 298. ISBN 978-3-030-52814-0.
  8. Jeffrey, Alan (2010). Matrix Operations for Engineers and Scientists: An Essential Guide in Linear Algebra. Dordrecht; New York: Springer. p. 54. ISBN 978-90-481-9273-1. OCLC 639165077.
  9. Stewart, Gilbert W. (1998). Matrix Algorithms. 1: Basic Decompositions. Philadelphia, PA: Society for Industrial and Applied Mathematics. pp. 18–20. ISBN 978-0-89871-414-2.
  10. Gentle, James E. (2007). Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer Texts in Statistics. New York, NY: Springer. pp. 47, 487. ISBN 978-0-387-70873-7.
  11. Mackey, D. Steven (2006). Structured Linearizations for Matrix Polynomials (PDF) (Thesis). University of Manchester. ISSN 1749-9097. OCLC 930686781.
  12. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. "A partitioning as in Theorem 1.9.4 is called a conformable partition of A and B."
  13. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 36. ISBN 0-471-58742-7. "...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed."
  14. Mathai, Arakaparampil M.; Haubold, Hans J. (2017). Linear Algebra: A Course for Physicists and Engineers. De Gruyter Textbook. Berlin; Boston: De Gruyter. p. 162. ISBN 978-3-11-056259-0.
  15. Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 0-691-11802-7.
  16. Abadir, Karim M.; Magnus, Jan R. (2005). Matrix Algebra. Cambridge University Press. pp. 97, 100, 106, 111, 114, 118. ISBN 9781139443647.
  17. Taboga, Marco (2021). "Determinant of a block matrix". Lectures on Matrix Algebra.
  18. Silvester, J. R. (2000). "Determinants of Block Matrices" (PDF). Math. Gaz. 84 (501): 460–467. doi:10.2307/3620776. JSTOR 3620776. Archived from the original (PDF) on 2015-03-18. Retrieved 2021-06-25.
  19. Sothanaphan, Nat (January 2017). "Determinants of block matrices with noncommuting blocks". Linear Algebra and Its Applications. 512: 202–218. arXiv:1805.06027. doi:10.1016/j.laa.2016.10.004. S2CID 119272194.
  20. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000). Numerical Mathematics. Texts in Applied Mathematics. New York: Springer. pp. 10, 13. ISBN 978-0-387-98959-4.
  21. George, Raju K.; Ajayakumar, Abhijith (2024). "A Course in Linear Algebra". University Texts in the Mathematical Sciences: 35, 407. doi:10.1007/978-981-99-8680-4. ISSN 2731-9318.
  22. Prince, Simon J. D. (2012). Computer Vision: Models, Learning, and Inference. New York: Cambridge University Press. p. 531. ISBN 978-1-107-01179-3.
  23. Bernstein, Dennis S. (2009). Matrix Mathematics: Theory, Facts, and Formulas (2nd ed.). Princeton, NJ: Princeton University Press. pp. 168, 298. ISBN 978-0-691-14039-1.
  24. Dietl, Guido K. E. (2007). Linear Estimation and Detection in Krylov Subspaces. Foundations in Signal Processing, Communications and Networking. Berlin; New York: Springer. pp. 85, 87. ISBN 978-3-540-68478-7. OCLC 85898525.
  25. Horn, Roger A.; Johnson, Charles R. (2017). Matrix Analysis (2nd ed., corrected reprint). New York, NY: Cambridge University Press. p. 36. ISBN 978-0-521-83940-2.
  26. Datta, Biswa Nath (2010). Numerical Linear Algebra and Applications (2nd ed.). Philadelphia, PA: SIAM. p. 168. ISBN 978-0-89871-685-6.
  27. Stewart, Gilbert W. (2001). Matrix Algorithms. 2: Eigensystems. Philadelphia, PA: Society for Industrial and Applied Mathematics. p. 5. ISBN 978-0-89871-503-3.
