Invertible matrix

In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular, nondegenerate or, rarely, regular) if there exists an n-by-n square matrix B such that

AB = BA = In,

where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication.[1] If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A−1. Matrix inversion is the process of finding the inverse matrix of an invertible matrix.[citation needed]

Over a field, a square matrix that is not invertible is called singular or degenerate. A square matrix with entries in a field is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any bounded region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices, i.e. m-by-n matrices for which m ≠ n, do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n (n ≤ m), then A has a left inverse, an n-by-m matrix B such that BA = In. If A has rank m (m ≤ n), then it has a right inverse, an n-by-m matrix B such that AB = Im.

While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any algebraic structure equipped with addition and multiplication (i.e. rings). However, in the case of a ring being commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than it being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for existence of left-inverse or right-inverse are more complicated, since a notion of rank does not exist over rings.

The set of n × n invertible matrices together with the operation of matrix multiplication and entries from ring R form a group, the general linear group of degree n, denoted GLn(R).

Properties

The invertible matrix theorem

Let A be a square n-by-n matrix over a field K (e.g., the field ℝ of real numbers). The following statements are equivalent, i.e., they are either all true or all false for any given matrix:[2]

  • A is invertible, i.e. it has an inverse under matrix multiplication, i.e., there exists a B such that AB = In = BA. (In this statement, "invertible" can equivalently be replaced with "left-invertible" or "right-invertible", in which one-sided inverses are considered.)
  • The linear transformation mapping x to Ax is invertible, i.e., has an inverse under function composition. (Here, again, "invertible" can equivalently be replaced with either "left-invertible" or "right-invertible")
  • The transpose AT is an invertible matrix.
  • A is row-equivalent to the n-by-n identity matrix In.
  • A is column-equivalent to the n-by-n identity matrix In.
  • A has n pivot positions.
  • A has full rank: rank A = n.
  • A has a trivial kernel: ker(A) = {0}.
  • The linear transformation mapping x to Ax is bijective; that is, the equation Ax = b has exactly one solution for each b in Kn. (Here, "bijective" can equivalently be replaced with "injective" or "surjective")
  • The columns of A form a basis of Kn. (In this statement, "basis" can equivalently be replaced with either "linearly independent set" or "spanning set")
  • The rows of A form a basis of Kn. (Similarly, here, "basis" can equivalently be replaced with either "linearly independent set" or "spanning set")
  • The determinant of A is nonzero: det A ≠ 0. (In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit (i.e. multiplicatively invertible element) of that ring.)
  • The number 0 is not an eigenvalue of A. (More generally, a number λ is an eigenvalue of A if the matrix A − λI is singular, where I is the identity matrix.)
  • The matrix A can be expressed as a finite product of elementary matrices.
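
The following is a minimal numerical illustration of a few of these equivalences using NumPy (the choice of library is an assumption; the article itself is software-agnostic): the determinant is nonzero, the rank is full, and Ax = b has exactly one solution.

```python
import numpy as np

A = np.array([[-1.0, 1.5],
              [ 1.0, -1.0]])   # an invertible matrix, also used in the examples below
b = np.array([1.0, 2.0])

print(not np.isclose(np.linalg.det(A), 0.0))     # det A != 0
print(np.linalg.matrix_rank(A) == A.shape[0])    # rank A = n
print(np.linalg.solve(A, b))                     # the unique solution of Ax = b
```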

Other properties

Furthermore, the following properties hold for an invertible matrix A:

  • (A−1)−1 = A
  • (kA)−1 = k−1A−1 for nonzero scalar k
  • (Ax)+ = x+A−1 if A has orthonormal columns, where + denotes the Moore–Penrose inverse and x is a vector
  • (AT)−1 = (A−1)T
  • For any invertible n-by-n matrices A and B, (AB)−1 = B−1A−1. More generally, if A1, ..., Ak are invertible n-by-n matrices, then (A1A2⋯Ak)−1 = Ak−1⋯A2−1A1−1
  • det(A−1) = (det A)−1

The rows of the inverse matrix V of a matrix U are orthonormal to the columns of U (and vice versa interchanging rows for columns). To see this, suppose that UV = VU = I, where the rows of V are denoted as v_i^T and the columns of U as u_j for 1 ≤ i, j ≤ n. Then clearly, the Euclidean inner product of any such pair satisfies v_i^T u_j = δ_ij. This property can also be useful in constructing the inverse of a square matrix in some instances, where a set of vectors orthogonal (but not necessarily orthonormal) to the columns of U is known. In that case, one can apply the iterative Gram–Schmidt process to this initial set to determine the rows of the inverse V.

A matrix that is its own inverse (i.e., a matrix A such that A = A−1, and consequently A2 = I), is called an involutory matrix.

In relation to its adjugate

The adjugate of a matrix A can be used to find the inverse of A as follows:

If A is an invertible matrix, then

A−1 = (1 / det A) adj(A)

In relation to the identity matrix

It follows from the associativity of matrix multiplication that if

AB = I

for finite square matrices A and B, then also

BA = I.[3]

Density

Over the field of real numbers, the set of singular n-by-n matrices, considered as a subset of ℝ^(n×n), is a null set, that is, has Lebesgue measure zero. This is true because singular matrices are the roots of the determinant function. This is a continuous function because it is a polynomial in the entries of the matrix. Thus in the language of measure theory, almost all n-by-n matrices are invertible.

Furthermore, the n-by-n invertible matrices are a dense open set in the topological space of all n-by-n matrices. Equivalently, the set of singular matrices is closed and nowhere dense in the space of n-by-n matrices.

In practice however, one may encounter non-invertible matrices. And in numerical calculations, matrices which are invertible, but close to a non-invertible matrix, can still be problematic; such matrices are said to be ill-conditioned.

Examples

An example with rank of n − 1 is a non-invertible matrix

A = \begin{pmatrix} 2 & 4 \\ 2 & 4 \end{pmatrix}

We can see the rank of this 2-by-2 matrix is 1, which is n − 1 ≠ n, so it is non-invertible.

Consider the following 2-by-2 matrix:

B = \begin{pmatrix} -1 & \tfrac{3}{2} \\ 1 & -1 \end{pmatrix}

The matrix B is invertible. To check this, one can compute that det B = −1/2, which is non-zero.

As an example of a non-invertible, or singular, matrix, consider the matrix

C = \begin{pmatrix} -1 & \tfrac{3}{2} \\ \tfrac{2}{3} & -1 \end{pmatrix}

The determinant of C is 0, which is a necessary and sufficient condition for a matrix to be non-invertible.

Methods of matrix inversion

Gaussian elimination

Gaussian elimination is a useful and easy way to compute the inverse of a matrix. To compute a matrix inverse using this method, an augmented matrix is first created with the left side being the matrix to invert and the right side being the identity matrix. Then, Gaussian elimination is used to convert the left side into the identity matrix, which causes the right side to become the inverse of the input matrix.

For example, take the following matrix:

A = \begin{pmatrix} -1 & \tfrac{3}{2} \\ 1 & -1 \end{pmatrix}

The first step to compute its inverse is to create the augmented matrix

\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right)

Call the first row of this matrix R1 and the second row R2. Then, add row 1 to row 2 (R1 + R2 → R2). This yields

\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right)

Next, subtract row 2, multiplied by 3, from row 1 (R1 − 3R2 → R1), which yields

\left(\begin{array}{cc|cc} -1 & 0 & -2 & -3 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right)

Finally, multiply row 1 by −1 (−R1 → R1) and row 2 by 2 (2R2 → R2). This yields the identity matrix on the left side and the inverse matrix on the right:

\left(\begin{array}{cc|cc} 1 & 0 & 2 & 3 \\ 0 & 1 & 2 & 2 \end{array}\right)

Thus,

A^{-1} = \begin{pmatrix} 2 & 3 \\ 2 & 2 \end{pmatrix}

The reason this works is that the process of Gaussian elimination can be viewed as a sequence of left multiplications by elementary matrices E1, E2, ..., En, one for each elementary row operation, such that

E_n E_{n-1} \cdots E_2 E_1 A = I

Applying right-multiplication by A−1 to both sides gives

E_n E_{n-1} \cdots E_2 E_1 I = I A^{-1} = A^{-1}

so the same sequence of elementary matrices, applied to I, produces the inverse we want.

To obtain E_n E_{n-1} \cdots E_2 E_1 I, we create the augmented matrix by combining A with I and applying Gaussian elimination. The two portions are transformed by the same sequence of elementary row operations. When the left portion becomes I, the right portion, having undergone that same sequence, becomes A−1.
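
Below is a short sketch of this procedure in NumPy (an assumed choice of language and library): it builds the augmented matrix [A | I], runs Gauss–Jordan elimination with partial pivoting (the pivoting is an added implementation detail, not part of the worked example above), and reads the inverse off the right half.

```python
import numpy as np

def invert_gauss_jordan(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry of this column up.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                      # scale the pivot row to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]   # clear the rest of the column
    return aug[:, n:]                                  # right half is now A^-1

A = np.array([[-1.0, 1.5],
              [ 1.0, -1.0]])
print(invert_gauss_jordan(A))   # [[2. 3.]
                                #  [2. 2.]]
```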

Newton's method

A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient if a suitable starting seed can be found:

X_{k+1} = 2X_k − X_k A X_k

Victor Pan and John Reif have done work that includes ways of generating a starting seed.[4][5]

Newton's method is particularly useful when dealing with families of related matrices that behave enough like the sequence manufactured for the homotopy above: sometimes a good starting point for refining an approximation for the new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix, for example, the pair of sequences of inverse matrices used in obtaining matrix square roots by Denman–Beavers iteration; this may need more than one pass of the iteration at each new matrix, if they are not close enough together for just one to be enough. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm which has been contaminated by small errors due to imperfect computer arithmetic.
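
A minimal sketch of this iteration in NumPy follows (assumed tooling); the starting seed X0 = A^T / (‖A‖1 ‖A‖∞) is one commonly used choice that makes the iteration convergent for any invertible A, but it is an assumption made here rather than something prescribed by the text above.

```python
import numpy as np

def newton_inverse(A, iters=30):
    """Refine an approximate inverse with the iteration X_{k+1} = X_k (2I - A X_k)."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # starting seed
    for _ in range(iters):
        X = X @ (2 * I - A @ X)   # quadratically convergent once X is close enough
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(np.allclose(newton_inverse(A) @ A, np.eye(2)))   # True
```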

Cayley–Hamilton method

The Cayley–Hamilton theorem allows the inverse of A to be expressed in terms of det(A), traces and powers of A:[6]

A^{-1} = \frac{1}{\det A} \sum_{s=0}^{n-1} A^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{l=1}^{n-1} \frac{(-1)^{k_l + 1}}{l^{k_l} \, k_l!} \operatorname{tr}\!\left(A^{l}\right)^{k_l}

where n is the size of A, and tr(A) is the trace of the matrix A, given by the sum of its main diagonal entries. The sum is taken over s and the sets of all k_l ≥ 0 satisfying the linear Diophantine equation

s + \sum_{l=1}^{n-1} l k_l = n - 1.

The formula can be rewritten in terms of complete Bell polynomials of arguments t_l = -(l-1)! \operatorname{tr}(A^l) as

A^{-1} = \frac{1}{\det A} \sum_{s=1}^{n} A^{s-1} \frac{(-1)^{n-1}}{(n-s)!} B_{n-s}(t_1, t_2, \ldots, t_{n-s})

This is described in more detail under Cayley–Hamilton method.

Eigendecomposition

If matrix A can be eigendecomposed, and if none of its eigenvalues are zero, then A is invertible and its inverse is given by

A^{-1} = Q \Lambda^{-1} Q^{-1}

where Q is the square (N × N) matrix whose i-th column is the eigenvector q_i of A, and Λ is the diagonal matrix whose diagonal entries are the corresponding eigenvalues, that is, Λ_ii = λ_i. If A is symmetric, Q is guaranteed to be an orthogonal matrix, therefore Q^{-1} = Q^T. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

\left(\Lambda^{-1}\right)_{ii} = \frac{1}{\lambda_i}
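
As a sketch in NumPy (assumed tooling), the inverse can be assembled directly from the eigendecomposition, provided no eigenvalue is zero:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(A)               # A = Q diag(eigvals) Q^-1
assert not np.any(np.isclose(eigvals, 0))   # invertibility requires no zero eigenvalue
A_inv = Q @ np.diag(1.0 / eigvals) @ np.linalg.inv(Q)

print(np.allclose(A_inv @ A, np.eye(2)))    # True
```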

Cholesky decomposition

If matrix A is positive definite, then its inverse can be obtained as

A^{-1} = \left(L^{*}\right)^{-1} L^{-1}

where L is the lower triangular Cholesky decomposition of A, and L* denotes the conjugate transpose of L.
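
A minimal NumPy sketch (assumed tooling) of this route, valid only for positive definite A:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                 # symmetric positive definite

L = np.linalg.cholesky(A)                  # lower triangular factor, A = L L*
L_inv = np.linalg.inv(L)                   # a triangular solve would also work
A_inv = L_inv.conj().T @ L_inv             # (L*)^-1 L^-1

print(np.allclose(A_inv @ A, np.eye(2)))   # True
```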

Analytic solution

Writing the transpose of the matrix of cofactors, known as an adjugate matrix, can also be an efficient way to calculate the inverse of small matrices, but this recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:

A^{-1} = \frac{1}{|A|} C^{\mathrm T} = \frac{1}{|A|} \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix}

so that

\left(A^{-1}\right)_{ij} = \frac{1}{|A|} \left(C^{\mathrm T}\right)_{ij} = \frac{1}{|A|} C_{ji}

where |A| is the determinant of A, C is the matrix of cofactors, and CT represents the matrix transpose.
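
The following NumPy sketch (assumed tooling) spells the cofactor construction out explicitly; as noted above, it is only sensible for small matrices.

```python
import numpy as np

def inverse_by_cofactors(A):
    """A^-1 = C^T / det(A), where C[i, j] is the (i, j) cofactor of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
print(np.allclose(inverse_by_cofactors(A) @ A, np.eye(3)))   # True
```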

Inversion of 2 × 2 matrices

The cofactor equation listed above yields the following result for 2 × 2 matrices. Inversion of these matrices can be done as follows:[7]

A^{-1} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{\det A} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

This is possible because 1/(ad − bc) is the reciprocal of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes.
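
A direct transcription of the 2 × 2 formula in NumPy (assumed tooling; the function name is illustrative only):

```python
import numpy as np

def inverse_2x2(M):
    """Invert a 2x2 matrix with the cofactor formula above."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[ d, -b],
                     [-c,  a]]) / det

print(inverse_2x2(np.array([[1.0, 2.0],
                            [3.0, 4.0]])))
# [[-2.   1. ]
#  [ 1.5 -0.5]]
```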

The Cayley–Hamilton method gives

A^{-1} = \frac{1}{\det A} \left[ \left(\operatorname{tr} A\right) I - A \right]

Inversion of 3 × 3 matrices

A computationally efficient 3 × 3 matrix inversion is given by

A^{-1} = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}^{-1} = \frac{1}{\det A} \begin{pmatrix} A & B & C \\ D & E & F \\ G & H & I \end{pmatrix}^{\mathrm T} = \frac{1}{\det A} \begin{pmatrix} A & D & G \\ B & E & H \\ C & F & I \end{pmatrix}

(where the scalar A is not to be confused with the matrix A).

If the determinant is non-zero, the matrix is invertible, with the entries of the intermediary matrix on the right side above given by

A = (ei − fh),   D = −(bi − ch),   G = (bf − ce),
B = −(di − fg),   E = (ai − cg),   H = −(af − cd),
C = (dh − eg),   F = −(ah − bg),   I = (ae − bd).

The determinant of A can be computed by applying the rule of Sarrus as follows:

\det A = aA + bB + cC

The Cayley–Hamilton decomposition gives

A^{-1} = \frac{1}{\det A} \left( \tfrac{1}{2} \left[ (\operatorname{tr} A)^2 - \operatorname{tr}(A^2) \right] I - A \operatorname{tr} A + A^2 \right)

The general 3 × 3 inverse can be expressed concisely in terms of the cross product and triple product. If a matrix A = [x_0  x_1  x_2] (consisting of three column vectors x_0, x_1, and x_2) is invertible, its inverse is given by

A^{-1} = \frac{1}{\det A} \begin{pmatrix} (x_1 \times x_2)^{\mathrm T} \\ (x_2 \times x_0)^{\mathrm T} \\ (x_0 \times x_1)^{\mathrm T} \end{pmatrix}

The determinant of A, det(A), is equal to the triple product of x_0, x_1, and x_2, the volume of the parallelepiped formed by the rows or columns:

\det A = x_0 \cdot (x_1 \times x_2)

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of A−1 is orthogonal to the non-corresponding two columns of A (causing the off-diagonal terms of I = A−1A to be zero). Dividing by

\det A = x_0 \cdot (x_1 \times x_2)

causes the diagonal entries of I = A−1A to be unity. For example, the first diagonal is:

1 = \frac{1}{x_0 \cdot (x_1 \times x_2)} \, x_0 \cdot (x_1 \times x_2)
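
A NumPy sketch (assumed tooling) of the cross-product construction; the rows of the result are the three cross products divided by the triple product:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])

x0, x1, x2 = A[:, 0], A[:, 1], A[:, 2]      # columns of A
det = x0 @ np.cross(x1, x2)                 # triple product = det(A)
A_inv = np.vstack([np.cross(x1, x2),
                   np.cross(x2, x0),
                   np.cross(x0, x1)]) / det

print(np.allclose(A_inv @ A, np.eye(3)))    # True
```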

Inversion of 4 × 4 matrices

With increasing dimension, expressions for the inverse of A get complicated. For n = 4, the Cayley–Hamilton method leads to an expression that is still tractable:

A^{-1} = \frac{1}{\det A} \left( \tfrac{1}{6} \left[ (\operatorname{tr} A)^3 - 3 \operatorname{tr} A \operatorname{tr}(A^2) + 2 \operatorname{tr}(A^3) \right] I - \tfrac{1}{2} A \left[ (\operatorname{tr} A)^2 - \operatorname{tr}(A^2) \right] + A^2 \operatorname{tr} A - A^3 \right)

Blockwise inversion

Matrices can also be inverted blockwise by using the following analytic inversion formula:[8]

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B \left(D - C A^{-1} B\right)^{-1} C A^{-1} & -A^{-1} B \left(D - C A^{-1} B\right)^{-1} \\ -\left(D - C A^{-1} B\right)^{-1} C A^{-1} & \left(D - C A^{-1} B\right)^{-1} \end{pmatrix}

(1)

where A, B, C and D are matrix sub-blocks of arbitrary size. (A must be square, so that it can be inverted. Furthermore, A and D − CA−1B must be nonsingular.[9]) This strategy is particularly advantageous if A is diagonal and D − CA−1B (the Schur complement of A) is a small matrix, since they are the only matrices requiring inversion.
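
A NumPy sketch (assumed tooling) of formula (1), assuming A and the Schur complement D − CA−1B are both invertible:

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Invert [[A, B], [C, D]] via formula (1), using the Schur complement of A."""
    A_inv = np.linalg.inv(A)
    S_inv = np.linalg.inv(D - C @ A_inv @ B)        # inverse of the Schur complement
    return np.block([
        [A_inv + A_inv @ B @ S_inv @ C @ A_inv, -A_inv @ B @ S_inv],
        [-S_inv @ C @ A_inv,                     S_inv],
    ])

M = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
A, B, C, D = M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]
print(np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M)))   # True
```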

This technique was reinvented several times and is due to Hans Boltz (1923),[citation needed] who used it for the inversion of geodetic matrices, and Tadeusz Banachiewicz (1937), who generalized it and proved its correctness.

The nullity theorem says that the nullity of A equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of B equals the nullity of the sub-block in the upper right of the inverse matrix.

The inversion procedure that led to Equation (1) performed matrix block operations that operated on C and D first. Instead, if A and B are operated on first, and provided D and A − BD−1C are nonsingular,[10] the result is

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} \left(A - B D^{-1} C\right)^{-1} & -\left(A - B D^{-1} C\right)^{-1} B D^{-1} \\ -D^{-1} C \left(A - B D^{-1} C\right)^{-1} & D^{-1} + D^{-1} C \left(A - B D^{-1} C\right)^{-1} B D^{-1} \end{pmatrix}

(2)

Equating Equations (1) and (2) leads to

\begin{aligned} \left(A - B D^{-1} C\right)^{-1} &= A^{-1} + A^{-1} B \left(D - C A^{-1} B\right)^{-1} C A^{-1} \\ \left(A - B D^{-1} C\right)^{-1} B D^{-1} &= A^{-1} B \left(D - C A^{-1} B\right)^{-1} \\ D^{-1} C \left(A - B D^{-1} C\right)^{-1} &= \left(D - C A^{-1} B\right)^{-1} C A^{-1} \\ D^{-1} + D^{-1} C \left(A - B D^{-1} C\right)^{-1} B D^{-1} &= \left(D - C A^{-1} B\right)^{-1} \end{aligned}

(3)

where Equation (3) is the Woodbury matrix identity, which is equivalent to the binomial inverse theorem.

If A and D are both invertible, then the above two block matrix inverses can be combined to provide the simple factorization

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} \left(A - B D^{-1} C\right)^{-1} & 0 \\ 0 & \left(D - C A^{-1} B\right)^{-1} \end{pmatrix} \begin{pmatrix} I & -B D^{-1} \\ -C A^{-1} & I \end{pmatrix}

(2)

By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.

This formula simplifies significantly when the upper right block matrix B is the zero matrix. This formulation is useful when the matrices A and D have relatively simple inverse formulas (or pseudoinverses in the case where the blocks are not all square). In this special case, the block matrix inversion formula stated in full generality above becomes

\begin{pmatrix} A & 0 \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & 0 \\ -D^{-1} C A^{-1} & D^{-1} \end{pmatrix}

If the given invertible matrix is a symmetric matrix with invertible block A, the following block inverse formula holds[11]

\begin{pmatrix} A & C^{\mathrm T} \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} C^{\mathrm T} S^{-1} C A^{-1} & -A^{-1} C^{\mathrm T} S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}

(4)

where S = D − CA−1CT. This requires 2 inversions of the half-sized matrices A and S and only 4 multiplications of half-sized matrices, if organized properly (as W1 = CA−1, W2 = W1CT, W3 = S−1W1, W4 = W1TW3), together with some additions, subtractions, negations and transpositions of negligible complexity. Any matrix M has an associated positive semidefinite, symmetric matrix MTM, which is exactly invertible (and positive definite) if and only if M is invertible. By writing M−1 = (MTM)−1MT, matrix inversion can be reduced to inverting symmetric matrices and 2 additional matrix multiplications, because the positive definite matrix MTM satisfies the invertibility condition for its left upper block A.

These formulas together allow one to construct a divide-and-conquer algorithm that uses blockwise inversion of associated symmetric matrices to invert a matrix with the same time complexity as the matrix multiplication algorithm that is used internally.[11] Research into matrix multiplication complexity shows that there exist matrix multiplication algorithms with a complexity of O(n^2.3727) operations, while the best proven lower bound is Ω(n^2 log n).[12]

By Neumann series

If a matrix A has the property that

\lim_{n \to \infty} (I - A)^n = 0

then A is nonsingular and its inverse may be expressed by a Neumann series:[13]

A^{-1} = \sum_{n=0}^{\infty} (I - A)^n

Truncating the sum results in an "approximate" inverse which may be useful as a preconditioner. Note that a truncated series can be accelerated exponentially by noting that the Neumann series is a geometric sum. As such, it satisfies

\sum_{n=0}^{2^L - 1} (I - A)^n = \prod_{l=0}^{L-1} \left( I + (I - A)^{2^l} \right).

Therefore, only 2L − 2 matrix multiplications are needed to compute 2^L terms of the sum.
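
A NumPy sketch (assumed tooling) of the plainly truncated series; the matrix is chosen so that the convergence condition (I − A)^n → 0 holds:

```python
import numpy as np

def neumann_inverse(A, terms=50):
    """Approximate A^-1 by the truncated Neumann series sum_{k=0}^{terms-1} (I - A)^k."""
    n = A.shape[0]
    E = np.eye(n) - A
    approx = np.eye(n)
    power = np.eye(n)
    for _ in range(1, terms):
        power = power @ E        # (I - A)^k
        approx += power
    return approx

A = np.array([[1.0, 0.1],
              [0.2, 0.9]])       # spectral radius of I - A is 0.2 < 1
print(np.allclose(neumann_inverse(A) @ A, np.eye(2)))   # True
```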

More generally, if A is "near" the invertible matrix X in the sense that

 

then A is nonsingular and its inverse is

 

If it is also the case that A − X has rank 1 then this simplifies to

A^{-1} = X^{-1} - \frac{X^{-1} (A - X) X^{-1}}{1 + \operatorname{tr}\left(X^{-1} (A - X)\right)}

p-adic approximation

If A is a matrix with integer or rational entries and we seek a solution in arbitrary-precision rationals, then a p-adic approximation method converges to an exact solution in O(n^4 log^2 n), assuming standard O(n^3) matrix multiplication is used.[14] The method relies on solving n linear systems via Dixon's method of p-adic approximation (each in O(n^3 log^2 n)) and is available as such in software specialized in arbitrary-precision matrix operations, for example, in IML.[15]

Reciprocal basis vectors method

Given an n × n square matrix X = [x_{ij}], 1 ≤ i, j ≤ n, with n rows interpreted as n vectors x_i = x_{ij} e_j (Einstein summation assumed), where the e_j are a standard orthonormal basis of Euclidean space ℝ^n (e_i = e^i, e_i · e_j = δ_i^j), then using Clifford algebra (or geometric algebra) we compute the reciprocal (sometimes called dual) column vectors:

x^i = x^{ji} e_j = (-1)^{i-1} \left( x_1 \wedge \cdots \wedge ()_i \wedge \cdots \wedge x_n \right) \cdot \left( x_1 \wedge x_2 \wedge \cdots \wedge x_n \right)^{-1}

as the columns of the inverse matrix X^{-1} = [x^{ji}]. Note that the place "()_i" indicates that x_i is removed from that place in the above expression for x^i. We then have X X^{-1} = [x_i · x^j] = [δ_i^j] = I_n, where δ_i^j is the Kronecker delta. We also have X^{-1} X = [(e_i · x^k)(e_j · x_k)] = [e_i · e^j] = [δ_i^j] = I_n, as required. If the vectors x_i are not linearly independent, then x_1 ∧ x_2 ∧ ⋯ ∧ x_n = 0 and the matrix X is not invertible (has no inverse).

Derivative of the matrix inverse

Suppose that the invertible matrix A depends on a parameter t. Then the derivative of the inverse of A with respect to t is given by[16]

\frac{\mathrm{d} A^{-1}}{\mathrm{d} t} = - A^{-1} \frac{\mathrm{d} A}{\mathrm{d} t} A^{-1}

To derive the above expression for the derivative of the inverse of A, one can differentiate the definition of the matrix inverse A^{-1} A = I and then solve for the derivative of the inverse of A:

\frac{\mathrm{d} (A^{-1} A)}{\mathrm{d} t} = \frac{\mathrm{d} A^{-1}}{\mathrm{d} t} A + A^{-1} \frac{\mathrm{d} A}{\mathrm{d} t} = \frac{\mathrm{d} I}{\mathrm{d} t} = 0

Subtracting A^{-1} \frac{\mathrm{d} A}{\mathrm{d} t} from both sides of the above and multiplying on the right by A^{-1} gives the correct expression for the derivative of the inverse:

\frac{\mathrm{d} A^{-1}}{\mathrm{d} t} = - A^{-1} \frac{\mathrm{d} A}{\mathrm{d} t} A^{-1}
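
A quick finite-difference check of this identity in NumPy (assumed tooling), using the illustrative one-parameter family A(t) = A0 + tB:

```python
import numpy as np

A0 = np.array([[2.0, 1.0],
               [0.0, 3.0]])
B  = np.array([[0.5, 0.0],
               [1.0, 0.2]])
A  = lambda t: A0 + t * B            # dA/dt = B

t, h = 0.3, 1e-6
numeric  = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
analytic = -np.linalg.inv(A(t)) @ B @ np.linalg.inv(A(t))
print(np.allclose(numeric, analytic, atol=1e-6))   # True
```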

Similarly, if ε is a small number then

\left(A + \varepsilon X\right)^{-1} = A^{-1} - \varepsilon A^{-1} X A^{-1} + \mathcal{O}(\varepsilon^2)

More generally, if

\frac{\mathrm{d} f(A)}{\mathrm{d} t} = \sum_i g_i(A) \frac{\mathrm{d} A}{\mathrm{d} t} h_i(A)

then,

f(A + \varepsilon X) = f(A) + \varepsilon \sum_i g_i(A) X h_i(A) + \mathcal{O}\left(\varepsilon^2\right)

Given a positive integer n,

\frac{\mathrm{d} A^{n}}{\mathrm{d} t} = \sum_{i=1}^{n} A^{i-1} \frac{\mathrm{d} A}{\mathrm{d} t} A^{n-i}, \qquad \frac{\mathrm{d} A^{-n}}{\mathrm{d} t} = - \sum_{i=1}^{n} A^{-i} \frac{\mathrm{d} A}{\mathrm{d} t} A^{-(n+1-i)}

Therefore,

(A + \varepsilon X)^{n} = A^{n} + \varepsilon \sum_{i=1}^{n} A^{i-1} X A^{n-i} + \mathcal{O}\left(\varepsilon^2\right), \qquad (A + \varepsilon X)^{-n} = A^{-n} - \varepsilon \sum_{i=1}^{n} A^{-i} X A^{-(n+1-i)} + \mathcal{O}\left(\varepsilon^2\right)

Generalized inverse

Some of the properties of inverse matrices are shared by generalized inverses (for example, the Moore–Penrose inverse), which can be defined for any m-by-n matrix.[17]

Applications

For most practical applications, it is not necessary to invert a matrix to solve a system of linear equations; however, for a unique solution, it is necessary that the matrix involved be invertible.

Decomposition techniques like LU decomposition are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed.

Regression/least squares

Although an explicit inverse is not necessary to estimate the vector of unknowns, it is the easiest way to estimate their accuracy, found in the diagonal of a matrix inverse (the posterior covariance matrix of the vector of unknowns). However, faster algorithms to compute only the diagonal entries of a matrix inverse are known in many cases.[18]
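
A short NumPy sketch (assumed tooling, with purely synthetic data) of ordinary least squares, where the diagonal of (X^T X)^{-1}, scaled by the residual variance, estimates the accuracy of the fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])   # design matrix
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)              # noisy observations

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                                # least-squares estimate
resid_var = np.sum((y - X @ beta_hat) ** 2) / (len(y) - X.shape[1])
std_err = np.sqrt(resid_var * np.diag(XtX_inv))             # accuracy of each coefficient

print(beta_hat)   # close to [1.0, 2.0]
print(std_err)
```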

Matrix inverses in real-time simulations

Matrix inversion plays a significant role in computer graphics, particularly in 3D graphics rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations.

Matrix inverses in MIMO wireless communication

Matrix inversion also plays a significant role in the MIMO (Multiple-Input, Multiple-Output) technology in wireless communications. The MIMO system consists of N transmit and M receive antennas. Unique signals, occupying the same frequency band, are sent via N transmit antennas and are received via M receive antennas. The signal arriving at each receive antenna will be a linear combination of the N transmitted signals forming an N × M transmission matrix H. It is crucial for the matrix H to be invertible for the receiver to be able to figure out the transmitted information.

See also

  • Binomial inverse theorem
  • LU decomposition
  • Matrix decomposition
  • Matrix square root
  • Minor (linear algebra)
  • Partial inverse of a matrix
  • Pseudoinverse
  • Rybicki–Press algorithm
  • Singular value decomposition
  • Woodbury matrix identity

References

  1. ^ Axler, Sheldon (18 December 2014). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer Publishing (published 2015). p. 296. ISBN 978-3-319-11079-0.
  2. ^ Weisstein, Eric W. "Invertible Matrix Theorem". mathworld.wolfram.com. Retrieved 2020-09-08.
  3. ^ Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 14. ISBN 978-0-521-38632-6.
  4. ^ Pan, Victor; Reif, John (1985), Efficient Parallel Solution of Linear Systems, Proceedings of the 17th Annual ACM Symposium on Theory of Computing, Providence: ACM
  5. ^ Pan, Victor; Reif, John (1985), Harvard University Center for Research in Computing Technology Report TR-02-85, Cambridge, MA: Aiken Computation Laboratory
  6. ^ A proof can be found in the Appendix B of Kondratyuk, L. A.; Krivoruchenko, M. I. (1992). "Superconducting quark matter in SU(2) color group". Zeitschrift für Physik A. 344 (1): 99–115. Bibcode:1992ZPhyA.344...99K. doi:10.1007/BF01291027. S2CID 120467300.
  7. ^ Strang, Gilbert (2003). Introduction to linear algebra (3rd ed.). SIAM. p. 71. ISBN 978-0-9614088-9-3., Chapter 2, page 71
  8. ^ Tzon-Tzer, Lu; Sheng-Hua, Shiou (2002). "Inverses of 2 × 2 block matrices". Computers & Mathematics with Applications. 43 (1–2): 119–129. doi:10.1016/S0898-1221(01)00278-4.
  9. ^ Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 978-0-691-11802-4.
  10. ^ Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 45. ISBN 978-0-691-11802-4.
  11. ^ a b T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 3rd ed., MIT Press, Cambridge, MA, 2009, §28.2.
  12. ^ Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. doi:10.1145/509907.509932.
  13. ^ Stewart, Gilbert (1998). Matrix Algorithms: Basic decompositions. SIAM. p. 55. ISBN 978-0-89871-414-2.
  14. ^ Haramoto, H.; Matsumoto, M. (2009). "A p-adic algorithm for computing the inverse of integer matrices". Journal of Computational and Applied Mathematics. 225 (1): 320–322. Bibcode:2009JCoAM.225..320H. doi:10.1016/j.cam.2008.07.044.
  15. ^ "IML - Integer Matrix Library". cs.uwaterloo.ca. Retrieved 14 April 2018.
  16. ^ Magnus, Jan R.; Neudecker, Heinz (1999). Matrix Differential Calculus : with Applications in Statistics and Econometrics (Revised ed.). New York: John Wiley & Sons. pp. 151–152. ISBN 0-471-98633-X.
  17. ^ Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, p. 446, ISBN 978-0-387-72828-5.
  18. ^ Lin, Lin; Lu, Jianfeng; Ying, Lexing; Car, Roberto; E, Weinan (2009). "Fast algorithm for extracting the diagonal of the inverse matrix with application to the electronic structure analysis of metallic systems". Communications in Mathematical Sciences. 7 (3): 755–777. doi:10.4310/CMS.2009.v7.n3.a12.

Further reading

  • "Inversion of a matrix". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
  • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "28.4: Inverting matrices". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 755–760. ISBN 0-262-03293-7.
  • Bernstein, Dennis S. (2009). Matrix Mathematics: Theory, Facts, and Formulas (2nd ed.). Princeton University Press. ISBN 978-0691140391.
  • Petersen, Kaare Brandt; Pedersen, Michael Syskind (November 15, 2012). The Matrix Cookbook (PDF). pp. 17–23.

External links

  • Sanderson, Grant (August 15, 2016). "Inverse Matrices, Column Space and Null Space". Essence of Linear Algebra. Archived from the original on 2021-11-03 – via YouTube.
  • Strang, Gilbert. "Linear Algebra Lecture on Inverse Matrices". MIT OpenCourseWare.
  • Moore-Penrose Inverse Matrix
