
Matrix exponential

In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.

Let X be an n×n real or complex matrix. The exponential of X, denoted by e^X or exp(X), is the n×n matrix given by the power series

e^X = \sum_{k=0}^{\infty} \frac{1}{k!} X^k

where X^0 is defined to be the identity matrix I with the same dimensions as X.[1] The series always converges, so the exponential of X is well-defined.

Equivalently,

e^X = \lim_{k \to \infty} \left( I + \frac{X}{k} \right)^k

where I is the n×n identity matrix.

When X is an n×n diagonal matrix then exp(X) will be an n×n diagonal matrix with each diagonal element equal to the ordinary exponential applied to the corresponding diagonal element of X.
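
This elementwise behaviour in the diagonal case is easy to spot-check numerically. A minimal Python sketch, using SciPy's scipy.linalg.expm (SciPy is mentioned later in this article as one of the libraries that compute matrix exponentials); the diagonal entries are arbitrary illustrative values:

    import numpy as np
    from scipy.linalg import expm

    d = [1.0, 2.0, -0.5]              # arbitrary diagonal entries
    X = np.diag(d)
    # exp of a diagonal matrix: apply the scalar exponential entry by entry
    assert np.allclose(expm(X), np.diag(np.exp(d)))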

Properties

Elementary properties

Let X and Y be n×n complex matrices and let a and b be arbitrary complex numbers. We denote the n×n identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following properties.[2]

We begin with the properties that are immediate consequences of the definition as a power series:

  • e^0 = I
  • exp(X^T) = (exp X)^T, where X^T denotes the transpose of X
  • exp(X^*) = (exp X)^*, where X^* denotes the conjugate transpose of X
  • If Y is invertible then e^{YXY^{-1}} = Y e^X Y^{-1}

The next key result is this one:

  • If XY = YX, then e^X e^Y = e^{X+Y}.

The proof of this identity is the same as the standard power-series argument for the corresponding identity for the exponential of real numbers. That is to say, as long as X and Y commute, it makes no difference to the argument whether X and Y are numbers or matrices. It is important to note that this identity typically does not hold if X and Y do not commute (see Golden–Thompson inequality below).

Consequences of the preceding identity are the following:

  • e^{aX} e^{bX} = e^{(a + b)X}
  • e^X e^{-X} = I

Using the above results, we can easily verify the following claims. If X is symmetric then eX is also symmetric, and if X is skew-symmetric then eX is orthogonal. If X is Hermitian then eX is also Hermitian, and if X is skew-Hermitian then eX is unitary.
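
These four claims can be verified in a few lines. A sketch with random test matrices (the matrices themselves are made-up examples, not from the original text):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    S = M + M.T                              # symmetric
    K = M - M.T                              # skew-symmetric
    assert np.allclose(expm(S), expm(S).T)   # e^S is symmetric
    Q = expm(K)
    assert np.allclose(Q.T @ Q, np.eye(4))   # e^K is orthogonal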

Finally, a Laplace transform of matrix exponentials amounts to the resolvent,

\int_0^\infty e^{-ts} e^{tX} \, dt = (sI - X)^{-1}

for all sufficiently large positive values of s.

Linear differential equation systems

One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of

\frac{d}{dt} y(t) = A y(t), \qquad y(0) = y_0,

where A is a constant matrix, is given by

y(t) = e^{At} y_0.

The matrix exponential can also be used to solve the inhomogeneous equation

\frac{d}{dt} y(t) = A y(t) + z(t), \qquad y(0) = y_0.
See the section on applications below for examples.

There is no closed-form solution for differential equations of the form

\frac{d}{dt} y(t) = A(t) \, y(t), \qquad y(0) = y_0,
where A is not constant, but the Magnus series gives the solution as an infinite sum.
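
For the constant-coefficient case, the solution recipe above translates directly into code. A sketch (the matrix A and initial condition are made-up examples); the solution y(t) = e^{At} y_0 is checked against a finite-difference approximation of y':

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])        # example constant matrix
    y0 = np.array([1.0, 0.0])

    def y(t):
        return expm(A * t) @ y0        # y(t) = e^{At} y(0)

    t, h = 1.0, 1e-6
    # y'(t) should equal A y(t)
    assert np.allclose((y(t + h) - y(t)) / h, A @ y(t), atol=1e-4)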

The determinant of the matrix exponential

By Jacobi's formula, for any complex square matrix the following trace identity holds:[3]

\det\left(e^A\right) = e^{\operatorname{tr}(A)}

In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right hand side of the above equation is always non-zero, and so det(eA) ≠ 0, which implies that eA must be invertible.
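
A quick numerical check of the identity, on an arbitrary random matrix:

    import numpy as np
    from scipy.linalg import expm

    A = np.random.default_rng(1).standard_normal((5, 5))
    # Jacobi's formula: det(e^A) = e^{tr(A)}, so e^A is always invertible
    assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))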

In the real-valued case, the formula also exhibits the map

\exp \colon M_n(\mathbb{R}) \to \mathrm{GL}_n(\mathbb{R})
to not be surjective, in contrast to the complex case mentioned earlier. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.

Real symmetric matrices

The matrix exponential of a real symmetric matrix is positive definite. Let S be an n×n real symmetric matrix and x ∈ R^n a column vector. Using the elementary properties of the matrix exponential and of symmetric matrices, we have:

x^T e^S x = x^T e^{S/2} e^{S/2} x = x^T \left(e^{S/2}\right)^T e^{S/2} x = \left(e^{S/2} x\right)^T e^{S/2} x = \left\| e^{S/2} x \right\|^2 \geq 0.

Since e^{S/2} is invertible, the equality only holds for x = 0, and we have x^T e^S x > 0 for all non-zero x. Hence e^S is positive definite.

The exponential of sums

For any real numbers (scalars) x and y we know that the exponential function satisfies e^{x+y} = e^x e^y. The same is true for commuting matrices. If matrices X and Y commute (meaning that XY = YX), then

e^{X+Y} = e^X e^Y.

However, for matrices that do not commute the above equality does not necessarily hold.

The Lie product formula

Even if X and Y do not commute, the exponential e^{X+Y} can be computed by the Lie product formula[4]

e^{X+Y} = \lim_{k \to \infty} \left( e^{X/k} e^{Y/k} \right)^k.

Using a large finite k to approximate the above is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
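
The formula is easy to test on a pair of non-commuting matrices; the values of k below are arbitrary, and the error of this first-order product formula shrinks roughly like 1/k:

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0.0, 1.0], [0.0, 0.0]])   # XY != YX
    Y = np.array([[0.0, 0.0], [1.0, 0.0]])
    for k in (10, 100, 1000):
        approx = np.linalg.matrix_power(expm(X / k) @ expm(Y / k), k)
        print(k, np.linalg.norm(approx - expm(X + Y)))   # error decays ~ 1/k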

The Baker–Campbell–Hausdorff formula

In the other direction, if X and Y are sufficiently small (but not necessarily commuting) matrices, we have

e^X e^Y = e^Z,

where Z may be computed as a series in commutators of X and Y by means of the Baker–Campbell–Hausdorff formula:[5]

Z = X + Y + \frac{1}{2}[X, Y] + \frac{1}{12}[X, [X, Y]] - \frac{1}{12}[Y, [X, Y]] + \cdots,

where the remaining terms are all iterated commutators involving X and Y. If X and Y commute, then all the commutators are zero and we have simply Z = X + Y.

Inequalities for exponentials of Hermitian matrices

For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials.

If A and B are Hermitian matrices, then[6]

\operatorname{tr} \exp(A + B) \leq \operatorname{tr}\left[\exp(A)\exp(B)\right].

There is no requirement of commutativity. There are counterexamples to show that the Golden–Thompson inequality cannot be extended to three matrices – and, in any event, tr(exp(A)exp(B)exp(C)) is not guaranteed to be real for Hermitian A, B, C. However, Lieb proved[7][8] that it can be generalized to three matrices if we modify the expression as follows:

\operatorname{tr} \exp(A + B + C) \leq \int_0^\infty \mathrm{d}t \, \operatorname{tr}\left[ e^A \left( e^{-B} + t \right)^{-1} e^C \left( e^{-B} + t \right)^{-1} \right].

The exponential map

The exponential of a matrix is always an invertible matrix. The inverse matrix of e^X is given by e^{-X}. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map

\exp \colon M_n(\mathbb{C}) \to \mathrm{GL}_n(\mathbb{C})

from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible matrices. In fact, this map is surjective, which means that every invertible matrix can be written as the exponential of some other matrix[9] (for this, it is essential to consider the field C of complex numbers and not R).

For any two matrices X and Y,

\left\| e^{X+Y} - e^X \right\| \leq \|Y\| \, e^{\|X\|} e^{\|Y\|},

where ‖ · ‖ denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of Mn(C).

The map

t \mapsto e^{tX}, \qquad t \in \mathbb{R}
defines a smooth curve in the general linear group which passes through the identity element at t = 0.

In fact, this gives a one-parameter subgroup of the general linear group since

e^{tX} e^{sX} = e^{(t+s)X}.

The derivative of this curve (or tangent vector) at a point t is given by

\frac{d}{dt} e^{tX} = X e^{tX} = e^{tX} X.    (1)

The derivative at t = 0 is just the matrix X, which is to say that X generates this one-parameter subgroup.

More generally,[10] for a generic t-dependent exponent, X(t),

\frac{d}{dt} e^{X(t)} = \int_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1 - \alpha) X(t)} \, d\alpha.

Taking the above expression e^{X(t)} outside the integral sign and expanding the integrand with the help of the Hadamard lemma one can obtain the following useful expression for the derivative of the matrix exponent,[11]

\left( \frac{d}{dt} e^{X(t)} \right) e^{-X(t)} = \frac{d}{dt} X(t) + \frac{1}{2!} \left[ X(t), \frac{d}{dt} X(t) \right] + \frac{1}{3!} \left[ X(t), \left[ X(t), \frac{d}{dt} X(t) \right] \right] + \cdots

The coefficients in the expression above are different from what appears in the exponential. For a closed form, see derivative of the exponential map.

Directional derivatives when restricted to Hermitian matrices

Let X be an n×n Hermitian matrix with distinct eigenvalues. Let X = E \operatorname{diag}(\Lambda) E^* be its eigen-decomposition, where E is a unitary matrix whose columns are the eigenvectors of X, E^* is its conjugate transpose, and \Lambda = (\lambda_1, \ldots, \lambda_n) the vector of corresponding eigenvalues. Then, for any n×n Hermitian matrix V, the directional derivative of \exp \colon X \to e^X at X in the direction V is[12][13]

D \exp(X)[V] \triangleq \lim_{\epsilon \to 0} \frac{1}{\epsilon} \left( e^{X + \epsilon V} - e^X \right) = E \left( G \odot \bar{V} \right) E^*,

where \bar{V} = E^* V E, the operator \odot denotes the Hadamard product, and, for all 1 \leq i, j \leq n, the matrix G is defined as

G_{i,j} = \begin{cases} \dfrac{e^{\lambda_i} - e^{\lambda_j}}{\lambda_i - \lambda_j} & \text{if } i \neq j, \\ e^{\lambda_i} & \text{otherwise.} \end{cases}

In addition, for any n×n Hermitian matrix U, the second directional derivative in directions U and V is[13]

D^2 \exp(X)[U, V] \triangleq \lim_{\epsilon_u \to 0} \lim_{\epsilon_v \to 0} \frac{1}{4 \epsilon_u \epsilon_v} \left( e^{X + \epsilon_u U + \epsilon_v V} - e^{X - \epsilon_u U + \epsilon_v V} - e^{X + \epsilon_u U - \epsilon_v V} + e^{X - \epsilon_u U - \epsilon_v V} \right) = E \, F(U, V) \, E^*,

where the matrix-valued function F is defined, for all 1 \leq i, j \leq n, as

F(U, V)_{i,j} = \sum_{k=1}^n \phi_{i,j,k} \left( \bar{U}_{ik} \bar{V}_{jk} + \bar{V}_{ik} \bar{U}_{jk} \right)

with

\phi_{i,j,k} = \begin{cases} \dfrac{G_{ik} - G_{jk}}{\lambda_i - \lambda_j} & \text{if } i \neq j, \\ \dfrac{G_{ii} - G_{ik}}{\lambda_i - \lambda_k} & \text{if } i = j \text{ and } k \neq i, \\ \dfrac{G_{ii}}{2} & \text{if } i = j = k. \end{cases}

Computing the matrix exponential

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Matlab, GNU Octave, R, and SciPy all use the Padé approximant.[14][15][16][17] In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices.[18] Subsequent sections describe methods suitable for numerical evaluation on large matrices.
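
In practice one rarely implements these schemes by hand; the library routines the paragraph mentions can be called directly. For example, with SciPy (the matrix below is an arbitrary example):

    import numpy as np
    from scipy.linalg import expm   # Pade-approximant based, per reference [17]

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    print(expm(A))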

Diagonalizable case

If a matrix is diagonal:

A = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix},

then its exponential can be obtained by exponentiating each entry on the main diagonal:

e^A = \begin{bmatrix} e^{a_1} & 0 & \cdots & 0 \\ 0 & e^{a_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{a_n} \end{bmatrix}.

This result also allows one to exponentiate diagonalizable matrices. If

A = U D U^{-1}

and D is diagonal, then

e^A = U e^D U^{-1}.

Application of Sylvester's formula yields the same result. (To see this, note that addition and multiplication, hence also exponentiation, of diagonal matrices is equivalent to element-wise addition and multiplication, and hence exponentiation; in particular, the "one-dimensional" exponentiation is felt element-wise for the diagonal case.)

Example: Diagonalizable

For example, the matrix

A = \begin{bmatrix} 1 & 4 \\ 1 & 1 \end{bmatrix}

can be diagonalized as

A = \begin{bmatrix} -2 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -2 & 2 \\ 1 & 1 \end{bmatrix}^{-1}.

Thus,

e^A = \begin{bmatrix} -2 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-1} & 0 \\ 0 & e^3 \end{bmatrix} \begin{bmatrix} -2 & 2 \\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{e^4 + 1}{2e} & \frac{e^4 - 1}{e} \\ \frac{e^4 - 1}{4e} & \frac{e^4 + 1}{2e} \end{bmatrix}.
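
The example can be verified numerically; this sketch reproduces the diagonalization and compares U e^D U^{-1} against SciPy's expm:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 4.0], [1.0, 1.0]])
    U = np.array([[-2.0, 2.0], [1.0, 1.0]])   # eigenvectors for -1 and 3
    D = np.diag([-1.0, 3.0])
    assert np.allclose(A, U @ D @ np.linalg.inv(U))
    assert np.allclose(expm(A), U @ expm(D) @ np.linalg.inv(U))
    e = np.e
    assert np.allclose(expm(A)[0, 0], (e**4 + 1) / (2 * e))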

Nilpotent case

A matrix N is nilpotent if N^q = 0 for some integer q. In this case, the matrix exponential e^N can be computed directly from the series expansion, as the series terminates after a finite number of terms:

e^N = I + N + \frac{1}{2} N^2 + \frac{1}{6} N^3 + \cdots + \frac{1}{(q-1)!} N^{q-1}.

Since the series has a finite number of terms, it is a matrix polynomial, which can be computed efficiently.
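
Since the series terminates, only q terms are needed. A sketch with a strictly upper-triangular example (so N^3 = 0):

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    N = np.array([[0.0, 1.0, 2.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])          # N^3 = 0
    q = 3
    series = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(q))
    assert np.allclose(series, expm(N))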

General case

Using the Jordan–Chevalley decomposition

By the Jordan–Chevalley decomposition, any n×n matrix X with complex entries can be expressed as

X = A + N,

where
  • A is diagonalizable
  • N is nilpotent
  • A commutes with N

This means that we can compute the exponential of X by reducing to the previous two cases:

e^X = e^{A+N} = e^A e^N.

Note that we need the commutativity of A and N for the last step to work.

Using the Jordan canonical form

A closely related method is, if the field is algebraically closed, to work with the Jordan form of X. Suppose that X = PJP^{-1} where J is the Jordan form of X. Then

e^X = P e^J P^{-1}.

Also, since

J = J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n),

it follows that

e^J = \exp\big( J_{a_1}(\lambda_1) \oplus \cdots \oplus J_{a_n}(\lambda_n) \big) = \exp\big(J_{a_1}(\lambda_1)\big) \oplus \exp\big(J_{a_2}(\lambda_2)\big) \oplus \cdots \oplus \exp\big(J_{a_n}(\lambda_n)\big).

Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form

J_a(\lambda) = \lambda I + N \quad \Rightarrow \quad e^{J_a(\lambda)} = e^{\lambda I + N} = e^{\lambda} e^{N},

where N is a special nilpotent matrix. The matrix exponential of J is then given by

e^J = e^{\lambda_1} e^{N_{a_1}} \oplus e^{\lambda_2} e^{N_{a_2}} \oplus \cdots \oplus e^{\lambda_n} e^{N_{a_n}}.

Projection case

If P is a projection matrix (i.e. is idempotent: P^2 = P), its matrix exponential is:

e^P = I + (e - 1)P.

Deriving this by expansion of the exponential function, each power of P reduces to P, which becomes a common factor of the sum:

e^P = \sum_{k=0}^{\infty} \frac{P^k}{k!} = I + \left( \sum_{k=1}^{\infty} \frac{1}{k!} \right) P = I + (e - 1)P.
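
A one-line check on a rank-one orthogonal projection (the vector below is an arbitrary unit vector):

    import numpy as np
    from scipy.linalg import expm

    v = np.array([1.0, 2.0, 2.0]) / 3.0        # unit vector
    P = np.outer(v, v)                          # idempotent: P @ P == P
    assert np.allclose(P @ P, P)
    assert np.allclose(expm(P), np.eye(3) + (np.e - 1.0) * P)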

Rotation case

For a simple rotation in which the perpendicular unit vectors a and b specify a plane,[19] the rotation matrix R can be expressed in terms of a similar exponential function involving a generator G and angle θ.[20][21]

G = b a^{\mathsf{T}} - a b^{\mathsf{T}}, \qquad P = -G^2 = a a^{\mathsf{T}} + b b^{\mathsf{T}}, \qquad P^2 = P, \qquad PG = G = GP,

R(\theta) = e^{G\theta} = I + G \sin\theta + G^2 (1 - \cos\theta) = I - P + P \cos\theta + G \sin\theta.

The formula for the exponential results from reducing the powers of G in the series expansion and identifying the respective series coefficients of G^2 and G with −cos(θ) and sin(θ) respectively. The second expression here for e^{Gθ} is the same as the expression for R(θ) in the article containing the derivation of the generator, R(θ) = e^{Gθ}.

In two dimensions, if a = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, then G = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, G^2 = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, and

R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = I \cos\theta + G \sin\theta
reduces to the standard matrix for a plane rotation.

The matrix P = −G^2 projects a vector onto the ab-plane, and the rotation only affects this part of the vector. An example illustrating this is a rotation of 30° = π/6 in the plane spanned by a and b,

a = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad b = \frac{1}{\sqrt{5}} \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix},

G = \frac{1}{\sqrt{5}} \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & 0 \\ 2 & 0 & 0 \end{bmatrix}, \qquad P = -G^2 = \frac{1}{5} \begin{bmatrix} 5 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 4 \end{bmatrix}, \qquad P \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \frac{1}{5} \begin{bmatrix} 5 \\ 8 \\ 16 \end{bmatrix} = a + \frac{8}{\sqrt{5}} b, \qquad R\left(\frac{\pi}{6}\right) = \frac{1}{10} \begin{bmatrix} 5\sqrt{3} & -\sqrt{5} & -2\sqrt{5} \\ \sqrt{5} & 8 + \sqrt{3} & -4 + 2\sqrt{3} \\ 2\sqrt{5} & -4 + 2\sqrt{3} & 2 + 4\sqrt{3} \end{bmatrix}.

Let N = I − P, so N^2 = N and its products with P and G are zero. This will allow us to evaluate powers of R.

R\left(\frac{\pi}{6}\right) = N + P \frac{\sqrt{3}}{2} + G \frac{1}{2}, \qquad R\left(\frac{\pi}{6}\right)^2 = N + P \frac{1}{2} + G \frac{\sqrt{3}}{2}, \qquad R\left(\frac{\pi}{6}\right)^3 = N + G, \qquad R\left(\frac{\pi}{6}\right)^6 = N - P, \qquad R\left(\frac{\pi}{6}\right)^{12} = N + P = I.
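
The generator construction translates directly into code; this sketch rebuilds G and P for the worked 30° example above and confirms both e^{Gθ} = R(θ) and R(π/6)^12 = I:

    import numpy as np
    from scipy.linalg import expm

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 2.0]) / np.sqrt(5.0)
    G = np.outer(b, a) - np.outer(a, b)       # generator: G = b a^T - a b^T
    P = -G @ G                                 # projection onto the a-b plane
    theta = np.pi / 6
    R = np.eye(3) - P + P * np.cos(theta) + G * np.sin(theta)
    assert np.allclose(expm(G * theta), R)
    assert np.allclose(np.linalg.matrix_power(R, 12), np.eye(3))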

Evaluation by Laurent series

By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of order n−1.

If P and Q_t are nonzero polynomials in one variable, such that P(A) = 0, and if the meromorphic function

f(z) = \frac{e^{tz} - Q_t(z)}{P(z)}

is entire, then

e^{tA} = Q_t(A).
To prove this, multiply the first of the two above equalities by P(z) and replace z by A.

Such a polynomial Q_t(z) can be found as follows; see Sylvester's formula. Letting a be a root of P, Q_{a,t}(z) is solved from the product of P by the principal part of the Laurent series of f at a: It is proportional to the relevant Frobenius covariant. Then the sum S_t of the Q_{a,t}, where a runs over all the roots of P, can be taken as a particular Q_t. All the other Q_t will be obtained by adding a multiple of P to S_t(z). In particular, S_t(z), the Lagrange–Sylvester polynomial, is the only Q_t whose degree is less than that of P.

Example: Consider the case of an arbitrary 2×2 matrix,

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.

The exponential matrix e^{tA}, by virtue of the Cayley–Hamilton theorem, must be of the form

e^{tA} = s_0(t) \, I + s_1(t) \, A.

(For any complex number z and any C-algebra B, we denote again by z the product of z by the unit of B.)

Let α and β be the roots of the characteristic polynomial of A,

P(z) = z^2 - (a + d) z + ad - bc = (z - \alpha)(z - \beta).

Then we have

S_t(z) = e^{\alpha t} \frac{z - \beta}{\alpha - \beta} + e^{\beta t} \frac{z - \alpha}{\beta - \alpha},

hence

s_0(t) = \frac{\alpha e^{\beta t} - \beta e^{\alpha t}}{\alpha - \beta}, \qquad s_1(t) = \frac{e^{\alpha t} - e^{\beta t}}{\alpha - \beta},

if α ≠ β; while, if α = β,

S_t(z) = e^{\alpha t} \left( 1 + t (z - \alpha) \right),

so that

s_0(t) = (1 - \alpha t) e^{\alpha t}, \qquad s_1(t) = t e^{\alpha t}.

Defining

s \equiv \frac{\alpha + \beta}{2} = \frac{\operatorname{tr} A}{2}, \qquad q \equiv \frac{\alpha - \beta}{2} = \pm\sqrt{-\det\left(A - sI\right)},

we have

s_0(t) = e^{st} \left( \cosh(qt) - s \frac{\sinh(qt)}{q} \right), \qquad s_1(t) = e^{st} \frac{\sinh(qt)}{q},

where sinh(qt)/q is 0 if t = 0, and t if q = 0.

Thus,

e^{tA} = e^{st} \left( \left( \cosh(qt) - s \frac{\sinh(qt)}{q} \right) I + \frac{\sinh(qt)}{q} A \right).

Thus, as indicated above, the matrix A having decomposed into the sum of two mutually commuting pieces, the traceful piece and the traceless piece,

A = sI + (A - sI),

the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of Euler's formula for Pauli spin matrices, that is rotations of the doublet representation of the group SU(2).
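
The closed form above makes a compact 2×2 routine. A sketch, with the q → 0 limit handled explicitly and complex q allowed (q^2 < 0 corresponds to a complex-conjugate eigenvalue pair); the test matrix is an arbitrary example:

    import numpy as np
    from scipy.linalg import expm

    def expm_2x2(A, t=1.0):
        # e^{tA} = e^{st} ((cosh(qt) - s sinh(qt)/q) I + (sinh(qt)/q) A)
        s = np.trace(A) / 2.0
        q = np.sqrt(complex(-np.linalg.det(A - s * np.eye(2))))
        sinch = t if abs(q * t) < 1e-12 else np.sinh(q * t) / q   # sinh(qt)/q -> t as q -> 0
        out = np.exp(s * t) * ((np.cosh(q * t) - s * sinch) * np.eye(2) + sinch * A)
        return out.real if np.isrealobj(A) else out

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    assert np.allclose(expm_2x2(A, 0.5), expm(0.5 * A))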

The polynomial S_t can also be given the following "interpolation" characterization. Define e_t(z) ≡ e^{tz}, and n ≡ deg P. Then S_t(z) is the unique degree < n polynomial which satisfies S_t^{(k)}(a) = e_t^{(k)}(a) whenever k is less than the multiplicity of a as a root of P. We assume, as we obviously can, that P is the minimal polynomial of A. We further assume that A is a diagonalizable matrix. In particular, the roots of P are simple, and the "interpolation" characterization indicates that S_t is given by the Lagrange interpolation formula, so it is the Lagrange–Sylvester polynomial.

At the other extreme, if P = (z − a)^n, then

S_t = e^{at} \sum_{k=0}^{n-1} \frac{t^k}{k!} (z - a)^k.

The simplest case not covered by the above observations is when P = (z − a)^2 (z − b) with a ≠ b, which yields

S_t = e^{at} \frac{z - b}{a - b} \left( 1 + \left( t + \frac{1}{b - a} \right) (z - a) \right) + e^{bt} \frac{(z - a)^2}{(b - a)^2}.

Evaluation by implementation of Sylvester's formula

A practical, expedited computation of the above reduces to the following rapid steps. Recall from above that an n×n matrix exp(tA) amounts to a linear combination of the first n−1 powers of A by the Cayley–Hamilton theorem. For diagonalizable matrices, as illustrated above, e.g. in the 2×2 case, Sylvester's formula yields exp(tA) = B_α e^{tα} + B_β e^{tβ}, where the Bs are the Frobenius covariants of A.

It is easiest, however, to simply solve for these Bs directly, by evaluating this expression and its first derivative at t = 0, in terms of A and I, to find the same answer as above.

But this simple procedure also works for defective matrices, in a generalization due to Buchheim.[22] This is illustrated here for a 4×4 example of a matrix which is not diagonalizable, and the Bs are not projection matrices.

Consider

A = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & -\frac{1}{8} \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{bmatrix}

with eigenvalues λ_1 = 3/4 and λ_2 = 1, each with a multiplicity of two.

Consider the exponential of each eigenvalue multiplied by t, exp(λ_i t). Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix B_i. If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of t for each repetition, to ensure linear independence.

  (If one eigenvalue had a multiplicity of three, then there would be the three terms: B_{i_1} e^{\lambda_i t} + B_{i_2} t e^{\lambda_i t} + B_{i_3} t^2 e^{\lambda_i t}. By contrast, when all eigenvalues are distinct, the Bs are just the Frobenius covariants, and solving for them as below just amounts to the inversion of the Vandermonde matrix of these 4 eigenvalues.)

Sum all such terms, here four such,

e^{At} = B_{1_1} e^{\frac{3}{4}t} + B_{1_2} t e^{\frac{3}{4}t} + B_{2_1} e^{t} + B_{2_2} t e^{t}.

To solve for all of the unknown matrices B in terms of the first three powers of A and the identity, one needs four equations, the above one providing one such at t = 0. Further, differentiate it with respect to t,

A e^{At} = \frac{3}{4} B_{1_1} e^{\frac{3}{4}t} + \left( \frac{3}{4} t + 1 \right) B_{1_2} e^{\frac{3}{4}t} + B_{2_1} e^{t} + (t + 1) B_{2_2} e^{t},

and again,

A^2 e^{At} = \left( \frac{3}{4} \right)^2 B_{1_1} e^{\frac{3}{4}t} + \left( \left( \frac{3}{4} \right)^2 t + \frac{3}{2} \right) B_{1_2} e^{\frac{3}{4}t} + B_{2_1} e^{t} + (t + 2) B_{2_2} e^{t},

and once more,

A^3 e^{At} = \left( \frac{3}{4} \right)^3 B_{1_1} e^{\frac{3}{4}t} + \left( \left( \frac{3}{4} \right)^3 t + \frac{27}{16} \right) B_{1_2} e^{\frac{3}{4}t} + B_{2_1} e^{t} + (t + 3) B_{2_2} e^{t}.

(In the general case, n−1 derivatives need be taken.)

Setting t = 0 in these four equations, the four coefficient matrices Bs may now be solved for,

I = B_{1_1} + B_{2_1},
A = \frac{3}{4} B_{1_1} + B_{1_2} + B_{2_1} + B_{2_2},
A^2 = \left( \frac{3}{4} \right)^2 B_{1_1} + \frac{3}{2} B_{1_2} + B_{2_1} + 2 B_{2_2},
A^3 = \left( \frac{3}{4} \right)^3 B_{1_1} + \frac{27}{16} B_{1_2} + B_{2_1} + 3 B_{2_2},

to yield

B_{1_1} = 128 A^3 - 336 A^2 + 288 A - 80 I,
B_{1_2} = 16 A^3 - 44 A^2 + 40 A - 12 I,
B_{2_1} = -128 A^3 + 336 A^2 - 288 A + 81 I,
B_{2_2} = 16 A^3 - 40 A^2 + 33 A - 9 I.

Substituting with the value for A yields the coefficient matrices

B_{1_1} = \begin{bmatrix} 0 & 0 & 48 & -16 \\ 0 & 0 & -8 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad B_{1_2} = \begin{bmatrix} 0 & 0 & 4 & -2 \\ 0 & 0 & -1 & \frac{1}{2} \\ 0 & 0 & \frac{1}{4} & -\frac{1}{8} \\ 0 & 0 & \frac{1}{2} & -\frac{1}{4} \end{bmatrix}, \qquad B_{2_1} = \begin{bmatrix} 1 & 0 & -48 & 16 \\ 0 & 1 & 8 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad B_{2_2} = \begin{bmatrix} 0 & 1 & 8 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},

so the final answer is

e^{tA} = \begin{bmatrix} e^t & t e^t & (8t - 48) e^t + (4t + 48) e^{\frac{3}{4}t} & (16 - 2t) e^t - (2t + 16) e^{\frac{3}{4}t} \\ 0 & e^t & 8 e^t - (t + 8) e^{\frac{3}{4}t} & -2 e^t + \frac{t + 4}{2} e^{\frac{3}{4}t} \\ 0 & 0 & \frac{t + 4}{4} e^{\frac{3}{4}t} & -\frac{t}{8} e^{\frac{3}{4}t} \\ 0 & 0 & \frac{t}{2} e^{\frac{3}{4}t} & \frac{4 - t}{4} e^{\frac{3}{4}t} \end{bmatrix}.

The procedure is much shorter than Putzer's algorithm sometimes utilized in such cases.
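
A numerical spot-check of this final answer against SciPy's expm (the t value is arbitrary):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0, 0.0,    0.0],
                  [0.0, 1.0, 1.0,    0.0],
                  [0.0, 0.0, 1.0, -0.125],
                  [0.0, 0.0, 0.5,    0.5]])
    t = 0.7
    e34, e1 = np.exp(0.75 * t), np.exp(t)
    closed = np.array([
        [e1, t * e1, (8*t - 48)*e1 + (4*t + 48)*e34, (16 - 2*t)*e1 - (2*t + 16)*e34],
        [0.0, e1,    8*e1 - (t + 8)*e34,             -2*e1 + (t + 4)/2*e34],
        [0.0, 0.0,   (t + 4)/4*e34,                  -t/8*e34],
        [0.0, 0.0,   t/2*e34,                        (4 - t)/4*e34]])
    assert np.allclose(expm(A * t), closed)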

Illustrations

Suppose that we want to compute the exponential of

B = \begin{bmatrix} 21 & 17 & 6 \\ -5 & -1 & -6 \\ 4 & 4 & 16 \end{bmatrix}.

Its Jordan form is

J = P^{-1} B P = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 16 & 1 \\ 0 & 0 & 16 \end{bmatrix},

where the matrix P is given by

P = \begin{bmatrix} -\frac{1}{4} & 2 & \frac{5}{4} \\ \frac{1}{4} & -2 & -\frac{1}{4} \\ 0 & 4 & 0 \end{bmatrix}.

Let us first calculate exp(J). We have

J = J_1(4) \oplus J_2(16).

The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so exp(J_1(4)) = [e^4]. The exponential of J_2(16) can be calculated by the formula e^{\lambda I + N} = e^{\lambda} e^N mentioned above; this yields[23]

\exp\left( \begin{bmatrix} 16 & 1 \\ 0 & 16 \end{bmatrix} \right) = e^{16} \exp\left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = e^{16} \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + \frac{1}{2!} \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \cdots \right) = \begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}.

Therefore, the exponential of the original matrix B is

\exp(B) = P \exp(J) P^{-1} = P \begin{bmatrix} e^4 & 0 & 0 \\ 0 & e^{16} & e^{16} \\ 0 & 0 & e^{16} \end{bmatrix} P^{-1} = \frac{1}{4} \begin{bmatrix} 13 e^{16} - e^4 & 13 e^{16} - 5 e^4 & 2 e^{16} - 2 e^4 \\ -9 e^{16} + e^4 & -9 e^{16} + 5 e^4 & -2 e^{16} + 2 e^4 \\ 16 e^{16} & 16 e^{16} & 4 e^{16} \end{bmatrix}.
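
The result can again be checked against a library routine:

    import numpy as np
    from scipy.linalg import expm

    B = np.array([[21.0, 17.0,  6.0],
                  [-5.0, -1.0, -6.0],
                  [ 4.0,  4.0, 16.0]])
    e4, e16 = np.exp(4.0), np.exp(16.0)
    closed = 0.25 * np.array([[13*e16 - e4, 13*e16 - 5*e4,  2*e16 - 2*e4],
                              [-9*e16 + e4, -9*e16 + 5*e4, -2*e16 + 2*e4],
                              [16*e16,      16*e16,         4*e16]])
    assert np.allclose(expm(B), closed)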

Applications

Linear differential equations

The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a homogeneous differential equation of the form

\mathbf{y}' = A \mathbf{y}

has solution e^{At} \mathbf{y}(0).

If we consider the vector

\mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ \vdots \\ y_n(t) \end{bmatrix},

we can express a system of inhomogeneous coupled linear differential equations as

\mathbf{y}'(t) = A \mathbf{y}(t) + \mathbf{b}(t).

Making an ansatz to use an integrating factor of e^{-At} and multiplying throughout, yields

e^{-At} \mathbf{y}' - e^{-At} A \mathbf{y} = e^{-At} \mathbf{b}
\Rightarrow e^{-At} \mathbf{y}' - A e^{-At} \mathbf{y} = e^{-At} \mathbf{b}
\Rightarrow \frac{d}{dt} \left( e^{-At} \mathbf{y} \right) = e^{-At} \mathbf{b}.

The second step is possible due to the fact that, if AB = BA, then e^{At} B = B e^{At}. So, calculating e^{At} leads to the solution to the system, by simply integrating the third step with respect to t.

A solution to this can be obtained by integrating and multiplying by e^{At} to eliminate the exponent in the LHS. Notice that while e^{At} is a matrix, given that it is a matrix exponential, we can say that e^{At} e^{-At} = I. In other words, exp(At) = (exp(-At))^{-1}.
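
The integrating-factor argument gives y(t) = e^{At} (y(0) + \int_0^t e^{-Au} b(u) du). A sketch of this recipe with a made-up A and forcing term b (quad_vec assumes SciPy ≥ 1.4); the result is checked against a central finite difference of y' = Ay + b:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])        # example system matrix
    y0 = np.array([1.0, 0.0])
    b = lambda t: np.array([np.exp(2 * t), 0.0])   # example forcing term

    def y(t):
        integral, _ = quad_vec(lambda u: expm(-A * u) @ b(u), 0.0, t)
        return expm(A * t) @ (y0 + integral)

    t, h = 1.0, 1e-5
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert np.allclose(dydt, A @ y(t) + b(t), atol=1e-6)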

Example (homogeneous)

Consider the system

x' = 2x - y + z
y' = 3y - z
z' = 2x + y + 3z.

The associated defective matrix is

A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}.

The matrix exponential is

e^{tA} = \frac{1}{2} \begin{bmatrix} e^{2t}\left(1 + e^{2t} - 2t\right) & -2 t e^{2t} & e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) & 2(t + 1) e^{2t} & -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right) & 2 t e^{2t} & e^{2t}\left(1 + e^{2t}\right) \end{bmatrix},

so that the general solution of the homogeneous system is

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{x(0)}{2} \begin{bmatrix} e^{2t}\left(1 + e^{2t} - 2t\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right) \end{bmatrix} + \frac{y(0)}{2} \begin{bmatrix} -2 t e^{2t} \\ 2(t + 1) e^{2t} \\ 2 t e^{2t} \end{bmatrix} + \frac{z(0)}{2} \begin{bmatrix} e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(1 + e^{2t}\right) \end{bmatrix},

amounting to

2x = x(0) e^{2t} \left(1 + e^{2t} - 2t\right) - y(0) \left(2 t e^{2t}\right) + z(0) e^{2t} \left(-1 + e^{2t}\right),
2y = -x(0) e^{2t} \left(-1 + e^{2t} - 2t\right) + y(0) \, 2(t + 1) e^{2t} - z(0) e^{2t} \left(-1 + e^{2t}\right),
2z = x(0) e^{2t} \left(-1 + e^{2t} + 2t\right) + y(0) \left(2 t e^{2t}\right) + z(0) e^{2t} \left(1 + e^{2t}\right).

Example (inhomogeneous)

Consider now the inhomogeneous system

x' = 2x - y + z + e^{2t}
y' = 3y - z
z' = 2x + y + 3z + e^{2t}.

We again have

A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix},

and

\mathbf{b}(t) = e^{2t} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.

From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution.

We have, by above,

\mathbf{y}_p(t) = e^{tA} \int_0^t e^{-uA} \begin{bmatrix} e^{2u} \\ 0 \\ e^{2u} \end{bmatrix} du + e^{tA} \mathbf{c},

where c is a constant vector determined by the initial conditions.