
Eigenvalues and eigenvectors

In linear algebra, it is often important to know which vectors have their directions unchanged by a given linear transformation. An eigenvector (/ˈaɪɡən-/ EYE-gən-) or characteristic vector is such a vector. Thus an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: T(v) = λv. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ.

Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. Its eigenvectors are those vectors that are only stretched, with no rotation or shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or squished. If the eigenvalue is negative, the eigenvector's direction is reversed.[1]

The eigenvectors and eigenvalues of a transformation serve to characterize it, and so they play important roles in all the areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same inputs (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system, after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.

Definition

Consider a matrix A and a nonzero vector v. If applying A to v (denoted by Av) simply scales v by a factor of λ, where λ is a scalar, then v is an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: Av = λv.[2]

There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space into itself, given any basis of the vector space. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices, or the language of linear transformations.[3][4]

If V is finite-dimensional, the above equation is equivalent to[5]

Au = λu

where A is the matrix representation of T and u is the coordinate vector of v.
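For readers who want to see the definition in action, here is a minimal numerical sketch in Python with NumPy (the 2-by-2 matrix is an arbitrary illustrative choice, not one discussed in the article):

```python
import numpy as np

# An arbitrary illustrative 2x2 matrix; any square matrix works the same way.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors of A

for lam, v in zip(eigenvalues, eigenvectors.T):
    # Check the defining relation A v = lambda v for each eigenpair.
    print(lam, np.allclose(A @ v, lam * v))
```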

Overview

Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'.[6][7] Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.

In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation

T(v) = λv
referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
 
In this shear mapping the red arrow changes direction, but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it does not change direction, and since its length is unchanged, its eigenvalue is 1.
 
A 2×2 real and symmetric matrix representing a stretching and shearing of the plane. The eigenvectors of the matrix (red lines) are the two special directions such that every point on them will just slide on them.

The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.

Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as

(d/dx) e^(λx) = λ e^(λx)

Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication

Av = λv
where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it.

Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:

  • The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.[8][9]
  • The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.[10]
  • If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.

History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes.[a] Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[11]

In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[12] Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.[b]

Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[13] Charles-François Sturm developed Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[12] This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.[14]

Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[12] and Alfred Clebsch found the corresponding result for skew-symmetric matrices.[14] Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.[12]

In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[15] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[16]

At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[17] He was the first to use the German word eigen, which means "own",[7] to denote eigenvalues and eigenvectors in 1904,[c] though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[18]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[19] and Vera Kublanovskaya[20] in 1961.[21][22]

Eigenvalues and eigenvectors of matrices

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4] which is especially common in numerical and computational applications.[25]

 
Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.

Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

x = (1, −3, 4)    and    y = (−20, 60, −80)

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that

x = λy

In this case, λ = −1/20.

Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,

Av = w

or

[ A11  A12  …  A1n ] [ v1 ]   [ w1 ]
[ A21  A22  …  A2n ] [ v2 ] = [ w2 ]
[  ⋮    ⋮   ⋱   ⋮  ] [ ⋮  ]   [ ⋮  ]
[ An1  An2  …  Ann ] [ vn ]   [ wn ]

where, for each row,

wi = Ai1 v1 + Ai2 v2 + ⋯ + Ain vn = Σ j=1..n Aij vj

If it occurs that v and w are scalar multiples, that is if

Av = w = λv    (1)

then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.

Equation (1) can be stated equivalently as

(A − λI)v = 0    (2)

where I is the n by n identity matrix and 0 is the zero vector.

Eigenvalues and the characteristic polynomial

Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation

det(A − λI) = 0    (3)

Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.

The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,

det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ)    (4)

where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.

As a brief example, which is described in more detail in the examples section later, consider the matrix

A = [ 2  1 ]
    [ 1  2 ]

Taking the determinant of (A − λI), the characteristic polynomial of A is

det(A − λI) = | 2 − λ    1   | = 3 − 4λ + λ²
              |   1    2 − λ |

Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation (A − λI)v = 0. In this example, the eigenvectors are any nonzero scalar multiples of

vλ=1 = [1, −1]^T    and    vλ=3 = [1, 1]^T
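A quick numerical cross-check of this brief example (a sketch using NumPy; np.poly returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

char_poly = np.poly(A)               # coefficients of det(lambda*I - A), leading term first
print(np.roots(char_poly))           # roots of the characteristic polynomial: 3 and 1
print(np.linalg.eigvals(A))          # eigenvalues computed directly: 3 and 1
```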

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers (that is, they cannot magically become transcendental numbers).

The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

Algebraic multiplicity

Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)^k divides that polynomial evenly.[10][26][27]

Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

det(A − λI) = (λ1 − λ)^μA(λ1) (λ2 − λ)^μA(λ2) ⋯ (λd − λ)^μA(λd)

If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

1 ≤ μA(λi) ≤ n,    μA = Σ i=1..d μA(λi) = n

If μA(λi) = 1, then λi is said to be a simple eigenvalue.[27] If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.

Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2),

E = { v : (A − λI)v = 0 }

On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ.[28][10] In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of ℂ^n.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as

γA(λ) = n − rank(A − λI)

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.

1 ≤ γA(λ) ≤ μA(λ) ≤ n

To prove the inequality γA(λ) ≤ μA(λ), consider how the definition of geometric multiplicity implies the existence of γA(λ) orthonormal eigenvectors v1, …, vγA(λ), such that Avk = λvk. We can therefore find a (unitary) matrix V whose first γA(λ) columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γA(λ) vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := V^T A V, we get a matrix whose top left block is the diagonal matrix λ IγA(λ). This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding −ξV on both sides, we get (A − ξI)V = V(D − ξI), since I commutes with V. In other words, A − ξI is similar to D − ξI, and det(A − ξI) = det(D − ξI). But from the definition of D, we know that det(D − ξI) contains a factor (ξ − λ)^γA(λ), which means that the algebraic multiplicity of λ must satisfy μA(λ) ≥ γA(λ).

Suppose A has d ≤ n distinct eigenvalues λ1, …, λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A,

γA = Σ i=1..d γA(λi),    d ≤ γA ≤ n,

is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA = n, then
  • The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space ℂ^n.
  • A basis of ℂ^n can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis
  • Any vector in ℂ^n can be written as a linear combination of eigenvectors of A.

Additional properties of eigenvalues

Let A be an arbitrary n × n matrix of complex numbers with eigenvalues λ1, …, λn. Each eigenvalue appears μA(λi) times in this list, where μA(λi) is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues (a few of them are checked numerically in the sketch after the list):

  • The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,[29][30][31]
     tr(A) = Σ i=1..n aii = Σ i=1..n λi = λ1 + λ2 + ⋯ + λn.
  • The determinant of A is the product of all its eigenvalues,[29][32][33]
     det(A) = Π i=1..n λi = λ1 λ2 ⋯ λn.
  • The eigenvalues of the kth power of A; i.e., the eigenvalues of A^k, for any positive integer k, are λ1^k, …, λn^k.
  • The matrix A is invertible if and only if every eigenvalue is nonzero.
  • If A is invertible, then the eigenvalues of A^−1 are 1/λ1, …, 1/λn and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
  • If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
  • If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
  • If A is unitary, every eigenvalue has absolute value |λi| = 1.
  • If A is an n × n matrix and {λ1, …, λn} are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are {λ1 + 1, …, λn + 1}. Moreover, if α is a scalar, the eigenvalues of αI + A are {λ1 + α, …, λn + α}. More generally, for a polynomial P the eigenvalues of the matrix P(A) are {P(λ1), …, P(λn)}.
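A sketch of how a few of these properties can be checked numerically (NumPy; the symmetric test matrix is an arbitrary choice so that its eigenvalues are real):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])      # arbitrary real symmetric test matrix

lam = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), lam.sum()))        # trace equals the sum of the eigenvalues
print(np.isclose(np.linalg.det(A), lam.prod()))  # determinant equals their product

# The eigenvalues of A^3 are the cubes of the eigenvalues of A.
A3_vals = np.linalg.eigvals(np.linalg.matrix_power(A, 3))
print(np.allclose(np.sort(A3_vals), np.sort(lam**3)))
```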

Left and right eigenvectors

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n matrix A in the defining equation, equation (1),

Av = λv

The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A. In this formulation, the defining equation is

uA = κu

where κ is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the transpose of this equation,

A^T u^T = κ u^T

Comparing this equation to equation (1), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of A^T, with the same eigenvalue. Furthermore, since the characteristic polynomial of A^T is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
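A numerical sketch of the two conventions (SciPy's eig can return left and right eigenvectors together; the non-symmetric test matrix is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # arbitrary non-symmetric test matrix

w, vl, vr = eig(A, left=True, right=True)

for lam, u, v in zip(w, vl.T, vr.T):
    print(np.allclose(A @ v, lam * v))                # right eigenvector: A v = lambda v
    print(np.allclose(u.conj() @ A, lam * u.conj()))  # left eigenvector (as a row): u^H A = lambda u^H
```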

Diagonalization and the eigendecomposition

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

Q = [ v1  v2  ⋯  vn ]

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

AQ = [ λ1 v1   λ2 v2   ⋯   λn vn ]

With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then

AQ = QΛ

Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1,

A = QΛQ^−1

or by instead left multiplying both sides by Q−1,

Λ = Q^−1AQ

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
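A numerical sketch of the eigendecomposition, using the 2-by-2 matrix from the earlier example (NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)    # columns of Q are linearly independent eigenvectors of A
Lambda = np.diag(eigenvalues)        # diagonal matrix of the associated eigenvalues

print(np.allclose(A, Q @ Lambda @ np.linalg.inv(Q)))   # A = Q Lambda Q^-1
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lambda))   # Q^-1 A Q = Lambda
```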

A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

Variational characterization

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form x*Hx / x*x. A value of x that realizes that maximum is an eigenvector.

Matrix examples

Two-dimensional matrix example

 
The transformation matrix A = [[2, 1], [1, 2]] preserves the direction of purple vectors parallel to vλ=1 = [1 −1]^T and blue vectors parallel to vλ=3 = [1 1]^T. The red vectors are not parallel to either eigenvector, so their directions are changed by the transformation. The lengths of the purple vectors are unchanged after the transformation (due to their eigenvalue of 1), while blue vectors are three times the length of the original (due to their eigenvalue of 3). See also: An extended version, showing all four quadrants.

Consider the matrix

A = [ 2  1 ]
    [ 1  2 ]

The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.

Taking the determinant to find characteristic polynomial of A,

det(A − λI) = | 2 − λ    1   | = (2 − λ)² − 1 = λ² − 4λ + 3
              |   1    2 − λ |

Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A.

For λ=1, equation (2) becomes,

(A − I)v = [ 1  1 ] [ v1 ] = [ 0 ]
           [ 1  1 ] [ v2 ]   [ 0 ]

Any nonzero vector with v1 = −v2 solves this equation. Therefore,

vλ=1 = [1, −1]^T
is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.

For λ=3, equation (2) becomes

(A − 3I)v = [ −1   1 ] [ v1 ] = [ 0 ]
            [  1  −1 ] [ v2 ]   [ 0 ]

Any nonzero vector with v1 = v2 solves this equation. Therefore,

vλ=3 = [1, 1]^T

is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.

Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively.

Three-dimensional matrix example

Consider the matrix

A = [ 2  0  0 ]
    [ 0  3  4 ]
    [ 0  4  9 ]

The characteristic polynomial of A is

det(A − λI) = (2 − λ)[(3 − λ)(9 − λ) − 16] = −λ³ + 14λ² − 35λ + 22

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [1, 0, 0]^T, [0, −2, 1]^T, and [0, 1, 2]^T, or any nonzero multiple thereof.

Three-dimensional matrix example with complex eigenvalues

Consider the cyclic permutation matrix

A = [ 0  1  0 ]
    [ 0  0  1 ]
    [ 1  0  0 ]

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ³, whose roots are

λ1 = 1,    λ2 = −1/2 + (√3/2) i,    λ3 = λ2* = −1/2 − (√3/2) i

where i is an imaginary unit with i² = −1.

For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,

vλ1 = [1, 1, 1]^T

For the complex conjugate pair of imaginary eigenvalues,

λ2 λ3 = 1,    λ2² = λ3,    λ3² = λ2.

Then

A · [1, λ2, λ3]^T = [λ2, λ3, 1]^T = λ2 · [1, λ2, λ3]^T
and
A · [1, λ3, λ2]^T = [λ3, λ2, 1]^T = λ3 · [1, λ3, λ2]^T

Therefore, the other two eigenvectors of A are complex and are vλ2 = [1, λ2, λ3]^T and vλ3 = [1, λ3, λ2]^T with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,

vλ2 = vλ3*.

Diagonal matrix example

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

A = [ 1  0  0 ]
    [ 0  2  0 ]
    [ 0  0  3 ]

The characteristic polynomial of A is

det(A − λI) = (1 − λ)(2 − λ)(3 − λ)

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.

Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,

vλ1 = [1, 0, 0]^T,    vλ2 = [0, 1, 0]^T,    vλ3 = [0, 0, 1]^T,

respectively, as well as scalar multiples of these vectors.

Triangular matrix example

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix,

A = [ 1  0  0 ]
    [ 1  2  0 ]
    [ 2  3  3 ]

The characteristic polynomial of A is

det(A − λI) = (1 − λ)(2 − λ)(3 − λ)

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.

These eigenvalues correspond to the eigenvectors,

vλ1 = [1, −1, 1/2]^T,    vλ2 = [0, 1, −3]^T,    vλ3 = [0, 0, 1]^T,

respectively, as well as scalar multiples of these vectors.

Matrix with repeated eigenvalues example

As in the previous example, the lower triangular matrix

A = [ 2  0  0  0 ]
    [ 1  2  0  0 ]
    [ 0  1  3  0 ]
    [ 0  0  1  3 ]

has a characteristic polynomial that is the product of its diagonal elements,

det(A − λI) = (2 − λ)²(3 − λ)²

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0, 1, −1, 1]^T and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0, 0, 0, 1]^T. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in an earlier section.
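The multiplicities in this example can be checked numerically; a sketch (NumPy), using the rank–nullity relation γA(λ) = n − rank(A − λI) from the eigenspace section:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]

for lam in (2.0, 3.0):
    algebraic = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
    geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, algebraic, geometric)   # each eigenvalue: algebraic multiplicity 2, geometric 1
```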

Eigenvector-eigenvalue identity

For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,

|vi,j|² = ( Π k=1..n−1 (λi(A) − λk(Mj)) ) / ( Π k=1..n, k≠i (λi(A) − λk(A)) )

where Mj is the submatrix formed by removing the jth row and column from the original matrix.[34][35][36] This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.[35][37]
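A direct numerical check of the identity, written with the eigenvalue differences multiplied through to avoid division (a sketch in NumPy; the symmetric test matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])      # arbitrary Hermitian (here real symmetric) test matrix
n = A.shape[0]

eigvals, eigvecs = np.linalg.eigh(A)             # columns are normalized eigenvectors
i, j = 0, 1                                      # j-th component of the i-th eigenvector

M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)   # remove the j-th row and column
minor_vals = np.linalg.eigvalsh(M_j)

lhs = abs(eigvecs[j, i])**2 * np.prod([eigvals[i] - eigvals[k] for k in range(n) if k != i])
rhs = np.prod(eigvals[i] - minor_vals)
print(np.isclose(lhs, rhs))                      # the identity holds for every i, j
```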

Eigenvalues and eigenfunctions of differential operators

The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation

D f(t) = λ f(t)

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

Derivative operator example

Consider the derivative operator d/dt with eigenvalue equation

(d/dt) f(t) = λ f(t).

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

f(t) = f(0) e^(λt),
is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
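A one-line symbolic check of this eigenfunction (a sketch with SymPy):

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
f = sp.exp(lam * t)                              # candidate eigenfunction of d/dt

print(sp.simplify(sp.diff(f, t) - lam * f))      # prints 0, i.e. d/dt e^(lambda t) = lambda e^(lambda t)
```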

The main eigenfunction article gives other examples.

General definition

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

T : V → V.

We say that a nonzero vector vV is an eigenvector of T if and only if there exists a scalar λK such that

T(v) = λv    (5)

This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[38][39]

Eigenspaces, geometric multiplicity, and the eigenbasis

Given an eigenvalue λ, consider the set

E = { v : T(v) = λv },

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.[40]

By definition of a linear transformation,

T(x + y) = T(x) + T(y),    T(αx) = αT(x)

for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then

T(u + v) = λ(u + v),    T(αv) = λ(αv).

So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[41] If that subspace has dimension 1, it is sometimes called an eigenline.[42]

The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[10][27][43] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[d]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

Spectral theory

If λ is an eigenvalue of T, then the operator (TλI) is not one-to-one, and therefore its inverse (TλI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (TλI) may not have an inverse even if λ is not an eigenvalue.

For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (TλI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

Associative algebras and representation theory

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.

The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.

Dynamic equations

The simplest difference equations have the form

xt = a1 xt−1 + a2 xt−2 + ⋯ + ak xt−k.

The solution of this equation for x in terms of t is found by using its characteristic equation

λ^k − a1 λ^(k−1) − a2 λ^(k−2) − ⋯ − a(k−1) λ − ak = 0,

which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations xt−1 = xt−1, …, xt−k+1 = xt−k+1, giving a k-dimensional system of the first order in the stacked variable vector [xt, xt−1, …, xt−k+1]^T in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ1, …, λk for use in the solution equation

xt = c1 λ1^t + c2 λ2^t + ⋯ + ck λk^t.

A similar procedure is used for solving a differential equation of the form

d^k x/dt^k + a(k−1) d^(k−1)x/dt^(k−1) + ⋯ + a1 dx/dt + a0 x = 0.
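As an illustration of the stacking idea, the sketch below (NumPy) treats the Fibonacci recurrence xt = xt−1 + xt−2, a recurrence chosen purely for illustration, as a first-order system and reads the characteristic roots off the companion matrix:

```python
import numpy as np

# Stack x_t = 1*x_{t-1} + 1*x_{t-2} as [x_t, x_{t-1}] = C [x_{t-1}, x_{t-2}].
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])

roots = np.linalg.eigvals(C)                     # characteristic roots: (1 ± sqrt(5)) / 2
print(roots)

# Fit x_t = c_1 r_1^t + c_2 r_2^t to the initial values x_0 = 0, x_1 = 1.
M = np.vander(roots, 2, increasing=True).T       # rows: [r_1^0, r_2^0] and [r_1^1, r_2^1]
c = np.linalg.solve(M, [0.0, 1.0])

x10 = sum(ci * ri**10 for ci, ri in zip(c, roots))
print(round(x10))                                # 55, the 10th term of the sequence
```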

Calculation

The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.

Classical method

The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating-point.

Eigenvalues

The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 matrices, but the difficulty increases rapidly with the size of the matrix.

In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy.[44] However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial).[44] Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n matrix is a sum of n! different products.[e]

Explicit algebraic formulas for the roots of a polynomial exist only if the degree n is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n is the characteristic polynomial of some companion matrix of order n.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.

Eigenvectors

Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix

A = [ 4  1 ]
    [ 6  3 ]

we can find its eigenvectors by solving the equation (A − 6I)v = 0, that is

[ −2   1 ] [ x ] = [ 0 ]
[  6  −3 ] [ y ]   [ 0 ]

This matrix equation is equivalent to two linear equations

−2x + y = 0  and  6x − 3y = 0,      that is      y = 2x  and  y = 2x.

Both equations reduce to the single linear equation y = 2x. Therefore, any vector of the form [a, 2a]^T, for any nonzero real number a, is an eigenvector of A with eigenvalue λ = 6.

The matrix A above has another eigenvalue λ = 1. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3x + y = 0, that is, any vector of the form [b, −3b]^T, for any nonzero real number b.
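The same computation can be reproduced symbolically; a sketch with SymPy, using the 2-by-2 matrix of this example:

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [6, 3]])

for lam in A.eigenvals():                        # {6: 1, 1: 1} — eigenvalues with multiplicities
    basis = (A - lam * sp.eye(2)).nullspace()    # eigenvectors span the nullspace of (A - lambda I)
    print(lam, [list(v) for v in basis])         # e.g. lambda = 6 gives multiples of [1/2, 1]
```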

Simple iterative methods

The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by (A − μI)^−1; this causes it to converge to an eigenvector of the eigenvalue closest to μ.

If v is (a good approximation of) an eigenvector of A, then the corresponding eigenvalue can be computed as

λ = (v* A v) / (v* v)

where v* denotes the conjugate transpose of v.
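A minimal sketch of the power iteration just described, together with the eigenvalue estimate above (NumPy; the matrix, starting vector, and iteration count are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([1.0, 0.0])             # arbitrary nonzero starting vector
for _ in range(50):
    v = A @ v
    v /= np.linalg.norm(v)           # keep the entries a reasonable size

lam = (v.conj() @ A @ v) / (v.conj() @ v)   # eigenvalue estimate from the approximate eigenvector
print(lam, v)                               # tends to the dominant eigenvalue 3 and [1, 1]/sqrt(2)
```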

Modern methods

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.[44] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[44]

Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.

Applications

Geometric transformations

Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.

Eigenvalues of geometric transformations

Scaling
  Matrix: [ k  0 ; 0  k ]
  Characteristic polynomial: (λ − k)²
  Eigenvalues: λ1 = λ2 = k
  Algebraic multiplicity: μ1 = 2
  Geometric multiplicity: γ1 = 2
  Eigenvectors: all nonzero vectors

Unequal scaling
  Matrix: [ k1  0 ; 0  k2 ]
  Characteristic polynomial: (λ − k1)(λ − k2)
  Eigenvalues: λ1 = k1, λ2 = k2
  Algebraic multiplicity: μ1 = 1, μ2 = 1
  Geometric multiplicity: γ1 = 1, γ2 = 1
  Eigenvectors: u1 = [1, 0]^T, u2 = [0, 1]^T

Rotation by θ
  Matrix: [ cos θ  −sin θ ; sin θ  cos θ ]
  Characteristic polynomial: λ² − 2λ cos θ + 1
  Eigenvalues: λ1 = cos θ + i sin θ = e^(iθ), λ2 = cos θ − i sin θ = e^(−iθ)
  Algebraic multiplicity: μ1 = 1, μ2 = 1
  Geometric multiplicity: γ1 = 1, γ2 = 1
  Eigenvectors: u1 = [1, −i]^T, u2 = [1, i]^T

Horizontal shear
  Matrix: [ 1  k ; 0  1 ]
  Characteristic polynomial: (λ − 1)²
  Eigenvalues: λ1 = λ2 = 1
  Algebraic multiplicity: μ1 = 2
  Geometric multiplicity: γ1 = 1
  Eigenvectors: u1 = [1, 0]^T

Hyperbolic rotation by φ
  Matrix: [ cosh φ  sinh φ ; sinh φ  cosh φ ]
  Characteristic polynomial: λ² − 2λ cosh φ + 1
  Eigenvalues: λ1 = e^φ, λ2 = e^(−φ)
  Algebraic multiplicity: μ1 = 1, μ2 = 1
  Geometric multiplicity: γ1 = 1, γ2 = 1
  Eigenvectors: u1 = [1, 1]^T, u2 = [1, −1]^T

The characteristic equation for a rotation is a quadratic equation with discriminant D = (2 cos θ)² − 4 = −4 sin²θ, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.

A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.

Principal component analysis

 
PCA of the multivariate Gaussian distribution centered at   with a standard deviation of 3 in roughly the   direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
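A bare-bones sketch of PCA via the eigendecomposition of the sample covariance matrix (NumPy; the random data set is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # illustrative data: 500 observations of 3 variables
X = X - X.mean(axis=0)               # center each variable

cov = np.cov(X, rowvar=False)        # sample covariance matrix (symmetric, positive semidefinite)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

order = np.argsort(eigenvalues)[::-1]          # sort by explained variance, largest first
components = eigenvectors[:, order]            # principal directions (columns)
explained_variance = eigenvalues[order]        # variance explained by each component

scores = X @ components                        # the data in the principal-component basis
print(explained_variance, scores.shape)
```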

Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or I − D^(−1/2) A D^(−1/2) (sometimes called the normalized Laplacian), where D is a diagonal matrix with Dii equal to the degree of vertex vi, and in D^(−1/2), the ith diagonal entry is 1/√(deg(vi)). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.

Markov chains

A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
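As a small illustration, the steady state of a toy two-state chain can be read off the eigenvector of the transposed transition matrix associated with eigenvalue 1; a sketch in NumPy (the transition probabilities are made up):

```python
import numpy as np

P = np.array([[0.9, 0.1],            # row-stochastic transition matrix: rows sum to one
              [0.5, 0.5]])

w, v = np.linalg.eig(P.T)            # right eigenvectors of P^T are left eigenvectors of P
k = np.argmin(np.abs(w - 1.0))       # locate the eigenvalue 1
pi = np.real(v[:, k])
pi /= pi.sum()                       # normalize to a probability distribution

print(pi)                            # stationary distribution, here [5/6, 1/6]
print(np.allclose(pi @ P, pi))       # check: pi P = pi
```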

Vibration analysis

 
Mode shape of a tuning fork at eigenfrequency 440.09 Hz

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by

m d²x/dt² + k x = 0

or

m d²x/dt² = −k x

That is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).

In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

(ω² m + k) x = 0

where ω² is the eigenvalue and ω is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by

m d²x/dt² + c dx/dt + k x = 0

leads to a so-called quadratic eigenvalue problem,

(ω² m + ω c + k) x = 0.

This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.

The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis; the approach neatly generalizes the solution of scalar-valued vibration problems.
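A sketch of the undamped problem for a toy two-degree-of-freedom system (SciPy; the mass and stiffness values are made up for illustration). Writing ω = iΩ, the equivalent symmetric problem k x = Ω² m x gives the real natural frequencies Ω and the mode shapes:

```python
import numpy as np
from scipy.linalg import eigh

m = np.diag([2.0, 1.0])              # toy mass matrix
k = np.array([[ 6.0, -2.0],          # toy stiffness matrix (symmetric positive definite)
              [-2.0,  4.0]])

omega_sq, modes = eigh(k, m)         # solves k x = (Omega^2) m x
omega = np.sqrt(omega_sq)            # natural angular frequencies Omega

print(omega)                         # eigenfrequencies
print(modes)                         # columns are the corresponding vibration mode shapes
```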

Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

Schrödinger equation

 
The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, …) and angular momentum (increasing across: s, p, d, …). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

An example of an eigenvalue equation where the transformation   is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

H ψE = E ψE

where H, the Hamiltonian, is a second-order differential operator and ψE, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.

The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |ΨE⟩. In this notation, the Schrödinger equation is:

H |ΨE⟩ = E |ΨE⟩

where |ΨE⟩ is an eigenstate of H and E represents the eigenvalue. H is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H |ΨE⟩ is understood to be the vector obtained by application of the transformation H to |ΨE⟩.

Wave transport

Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t.[45][46] The eigenvectors of the transmission operator t†t form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ, of t†t correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with τmax = 1 and τmin = 0.

eigenvalues, eigenvectors, characteristic, root, redirects, here, root, characteristic, equation, characteristic, equation, calculus, linear, algebra, often, important, know, which, vectors, have, their, directions, unchanged, given, linear, transformation, ei. Characteristic root redirects here For the root of a characteristic equation see Characteristic equation calculus In linear algebra it is often important to know which vectors have their directions unchanged by a given linear transformation An eigenvector ˈ aɪ ɡ en EYE gen or characteristic vector is such a vector Thus an eigenvector v displaystyle mathbf v of a linear transformation T displaystyle T is scaled by a constant factor l displaystyle lambda when the linear transformation is applied to it Tv lv displaystyle T mathbf v lambda mathbf v The corresponding eigenvalue characteristic value or characteristic root is the multiplying factor l displaystyle lambda Geometrically vectors are multi dimensional quantities with magnitude and direction often pictured as arrows A linear transformation rotates stretches or shears the vectors upon which it acts Its eigenvectors are those vectors that are only stretched with no rotation or shear The corresponding eigenvalue is the factor by which an eigenvector is stretched or squished If the eigenvalue is negative the eigenvector s direction is reversed 1 The eigenvectors and eigenvalues of a transformation serve to characterize it and so they play important roles in all the areas where linear algebra is applied from geology to quantum mechanics In particular it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same inputs feedback In such an application the largest eigenvalue is of particular importance because it governs the long term behavior of the system after many applications of the linear transformation and the associated eigenvector is the steady state of the system Contents 1 Definition 2 Overview 3 History 4 Eigenvalues and eigenvectors of matrices 4 1 Eigenvalues and the characteristic polynomial 4 2 Algebraic multiplicity 4 3 Eigenspaces geometric multiplicity and the eigenbasis for matrices 4 4 Additional properties of eigenvalues 4 5 Left and right eigenvectors 4 6 Diagonalization and the eigendecomposition 4 7 Variational characterization 4 8 Matrix examples 4 8 1 Two dimensional matrix example 4 8 2 Three dimensional matrix example 4 8 3 Three dimensional matrix example with complex eigenvalues 4 8 4 Diagonal matrix example 4 8 5 Triangular matrix example 4 8 6 Matrix with repeated eigenvalues example 4 9 Eigenvector eigenvalue identity 5 Eigenvalues and eigenfunctions of differential operators 5 1 Derivative operator example 6 General definition 6 1 Eigenspaces geometric multiplicity and the eigenbasis 6 2 Spectral theory 6 3 Associative algebras and representation theory 7 Dynamic equations 8 Calculation 8 1 Classical method 8 1 1 Eigenvalues 8 1 2 Eigenvectors 8 2 Simple iterative methods 8 3 Modern methods 9 Applications 9 1 Geometric transformations 9 2 Principal component analysis 9 3 Graphs 9 4 Markov chains 9 5 Vibration analysis 9 6 Tensor of moment of inertia 9 7 Stress tensor 9 8 Schrodinger equation 9 9 Wave transport 9 10 Molecular orbitals 9 11 Geology and glaciology 9 12 Basic reproduction number 9 13 Eigenfaces 10 See also 11 Notes 11 1 Citations 12 Sources 13 Further reading 14 External links 14 1 TheoryDefinition editConsider a matrix A and a nonzero vector v displaystyle mathbb v nbsp If 
applying A to v displaystyle mathbb v nbsp denoted by Av displaystyle A mathbb v nbsp simply scales v displaystyle mathbb v nbsp by a factor of l where l is a scalar then v displaystyle mathbb v nbsp is an eigenvector of A and l is the corresponding eigenvalue This relationship can be expressed as Av lv displaystyle A mathbb v lambda mathbb v nbsp 2 There is a direct correspondence between n by n square matrices and linear transformations from an n dimensional vector space into itself given any basis of the vector space Hence in a finite dimensional vector space it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations 3 4 If V is finite dimensional the above equation is equivalent to 5 Au lu displaystyle A mathbf u lambda mathbf u nbsp where A is the matrix representation of T and u is the coordinate vector of v Overview editEigenvalues and eigenvectors feature prominently in the analysis of linear transformations The prefix eigen is adopted from the German word eigen cognate with the English word own for proper characteristic own 6 7 Originally used to study principal axes of the rotational motion of rigid bodies eigenvalues and eigenvectors have a wide range of applications for example in stability analysis vibration analysis atomic orbitals facial recognition and matrix diagonalization In essence an eigenvector v of a linear transformation T is a nonzero vector that when T is applied to it does not change direction Applying T to the eigenvector only scales the eigenvector by the scalar value l called an eigenvalue This condition can be written as the equationT v lv displaystyle T mathbf v lambda mathbf v nbsp referred to as the eigenvalue equation or eigenequation In general l may be any scalar For example l may be negative in which case the eigenvector reverses direction as part of the scaling or it may be zero or complex nbsp In this shear mapping the red arrow changes direction but the blue arrow does not The blue arrow is an eigenvector of this shear mapping because it does not change direction and since its length is unchanged its eigenvalue is 1 nbsp A 2 2 real and symmetric matrix representing a stretching and shearing of the plane The eigenvectors of the matrix red lines are the two special directions such that every point on them will just slide on them The example here based on the Mona Lisa provides a simple illustration Each point on the painting can be represented as a vector pointing from the center of the painting to that point The linear transformation in this example is called a shear mapping Points in the top half are moved to the right and points in the bottom half are moved to the left proportional to how far they are from the horizontal axis that goes through the middle of the painting The vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation Points along the horizontal axis do not move at all when this transformation is applied Therefore any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation because the mapping does not change its direction Moreover these eigenvectors all have an eigenvalue equal to one because the mapping does not change their length either Linear transformations can take many different forms mapping vectors in a variety of vector spaces so the eigenvectors can also take many forms For example the linear transformation 
could be a differential operator like ddx displaystyle tfrac d dx nbsp in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator such asddxelx lelx displaystyle frac d dx e lambda x lambda e lambda x nbsp Alternatively the linear transformation could take the form of an n by n matrix in which case the eigenvectors are n by 1 matrices If the linear transformation is expressed in the form of an n by n matrix A then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication Av lv displaystyle A mathbf v lambda mathbf v nbsp where the eigenvector v is an n by 1 matrix For a matrix eigenvalues and eigenvectors can be used to decompose the matrix for example by diagonalizing it Eigenvalues and eigenvectors give rise to many closely related mathematical concepts and the prefix eigen is applied liberally when naming them The set of all eigenvectors of a linear transformation each paired with its corresponding eigenvalue is called the eigensystem of that transformation 8 9 The set of all eigenvectors of T corresponding to the same eigenvalue together with the zero vector is called an eigenspace or the characteristic space of T associated with that eigenvalue 10 If a set of eigenvectors of T forms a basis of the domain of T then this basis is called an eigenbasis History editEigenvalues are often introduced in the context of linear algebra or matrix theory Historically however they arose in the study of quadratic forms and differential equations In the 18th century Leonhard Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes a Joseph Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix 11 In the early 19th century Augustin Louis Cauchy saw how their work could be used to classify the quadric surfaces and generalized it to arbitrary dimensions 12 Cauchy also coined the term racine caracteristique characteristic root for what is now called eigenvalue his term survives in characteristic equation b Later Joseph Fourier used the work of Lagrange and Pierre Simon Laplace to solve the heat equation by separation of variables in his famous 1822 book Theorie analytique de la chaleur 13 Charles Francois Sturm developed Fourier s ideas further and brought them to the attention of Cauchy who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues 12 This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices 14 Around the same time Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle 12 and Alfred Clebsch found the corresponding result for skew symmetric matrices 14 Finally Karl Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability 12 In the meantime Joseph Liouville studied eigenvalue problems similar to those of Sturm the discipline that grew out of their work is now called Sturm Liouville theory 15 Schwarz studied the first eigenvalue of Laplace s equation on general domains towards the end of the 19th century while Poincare studied Poisson s equation a few years later 16 At the start of the 20th century David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices 17 He was the first to use the German word eigen which means own 7 to denote eigenvalues and eigenvectors in 
He was the first to use the German word eigen, which means 'own',[7] to denote eigenvalues and eigenvectors in 1904,[c] though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[18]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[19] and Vera Kublanovskaya[20] in 1961.[21][22]

Eigenvalues and eigenvectors of matrices

See also: Euclidean vector and Matrix (mathematics)

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4] which is especially common in numerical and computational applications.[25]

Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.

Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

$$\mathbf{x} = \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix} \quad \text{and} \quad \mathbf{y} = \begin{bmatrix} -20 \\ 60 \\ -80 \end{bmatrix}.$$

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that

$$\mathbf{x} = \lambda \mathbf{y}.$$

In this case, $\lambda = -\tfrac{1}{20}$.

Now consider the linear transformation of n-dimensional vectors defined by an n-by-n matrix A,

$$A \mathbf{v} = \mathbf{w},$$

or

$$\begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix},$$

where, for each row,

$$w_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n = \sum_{j=1}^{n} A_{ij} v_j.$$

If it occurs that v and w are scalar multiples, that is if

$$A \mathbf{v} = \mathbf{w} = \lambda \mathbf{v}, \qquad (1)$$

then v is an eigenvector of the linear transformation A, and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.

Equation (1) can be stated equivalently as

$$\left(A - \lambda I\right) \mathbf{v} = \mathbf{0}, \qquad (2)$$

where I is the n-by-n identity matrix and 0 is the zero vector.

Eigenvalues and the characteristic polynomial

Main article: Characteristic polynomial

Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation

$$\det(A - \lambda I) = 0. \qquad (3)$$

Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always $(-1)^n \lambda^n$. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation, or the secular equation, of A.
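To make the connection concrete, here is a short NumPy sketch (not part of the article) comparing the roots of the characteristic polynomial with the eigenvalues returned by a standard solver; np.poly returns the coefficients of det(λI − A), which has the same roots as det(A − λI):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A),
# highest degree first; its roots are the eigenvalues of A.
coeffs = np.poly(A)
roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(A))

print(coeffs)                    # [  1. -14.  35. -22.]
print(roots)                     # [ 1.  2. 11.]
print(np.allclose(roots, eigs))  # True
```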
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,

$$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda), \qquad (4)$$

where each $\lambda_i$ may be real, but in general is a complex number. The numbers $\lambda_1, \lambda_2, \ldots, \lambda_n$, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.

As a brief example, which is described in more detail in the examples section later, consider the matrix

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$

Taking the determinant of (A − λI), the characteristic polynomial of A is

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 3 - 4\lambda + \lambda^2.$$

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation (A − λI)v = 0. In this example, the eigenvectors are any nonzero scalar multiples of

$$\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \mathbf{v}_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors may therefore also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers, or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers; that is, they cannot be transcendental numbers.

The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

Algebraic multiplicity

Let $\lambda_i$ be an eigenvalue of an n-by-n matrix A. The algebraic multiplicity $\mu_A(\lambda_i)$ of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that $(\lambda - \lambda_i)^k$ divides that polynomial evenly.[10][26][27]

Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms, each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

$$\det(A - \lambda I) = (\lambda_1 - \lambda)^{\mu_A(\lambda_1)} (\lambda_2 - \lambda)^{\mu_A(\lambda_2)} \cdots (\lambda_d - \lambda)^{\mu_A(\lambda_d)}.$$

If d = n, then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

$$1 \leq \mu_A(\lambda_i) \leq n, \qquad \mu_A = \sum_{i=1}^{d} \mu_A(\lambda_i) = n.$$

If $\mu_A(\lambda_i) = 1$, then $\lambda_i$ is said to be a simple eigenvalue.[27]
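For instance (an illustrative sketch, not from the article; the matrix below is a hypothetical example with a repeated root), algebraic multiplicities can be read off numerically by counting repeated eigenvalues:

```python
import numpy as np
from collections import Counter

# Upper-triangular example chosen so the characteristic polynomial
# is (2 - lam)^2 * (5 - lam): eigenvalue 2 has algebraic multiplicity 2.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigs = np.linalg.eigvals(A)
counts = Counter(round(float(x.real), 8) for x in eigs)
print(counts)  # Counter({2.0: 2, 5.0: 1})
```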
If $\mu_A(\lambda_i)$ equals the geometric multiplicity of $\lambda_i$, $\gamma_A(\lambda_i)$, defined in the next section, then $\lambda_i$ is said to be a semisimple eigenvalue.

Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue λ of the n-by-n matrix A, define the set E to be all vectors v that satisfy equation (2),

$$E = \left\{ \mathbf{v} : \left(A - \lambda I\right) \mathbf{v} = \mathbf{0} \right\}.$$

On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ.[28][10] In general λ is a complex number and the eigenvectors are complex n-by-1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of $\mathbb{C}^n$.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or, equivalently, A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, then (αv) ∈ E or, equivalently, A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity $\gamma_A(\lambda)$. Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as

$$\gamma_A(\lambda) = n - \operatorname{rank}(A - \lambda I).$$

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one; that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n:

$$1 \leq \gamma_A(\lambda) \leq \mu_A(\lambda) \leq n.$$

To prove the inequality $\gamma_A(\lambda) \leq \mu_A(\lambda)$, consider how the definition of geometric multiplicity implies the existence of $\gamma_A(\lambda)$ orthonormal eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_{\gamma_A(\lambda)}$ such that $A\mathbf{v}_k = \lambda \mathbf{v}_k$. We can therefore find a unitary matrix V whose first $\gamma_A(\lambda)$ columns are these eigenvectors, and whose remaining columns can be any orthonormal set of $n - \gamma_A(\lambda)$ vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating $D = V^{T} A V$, we get a matrix whose top left block is the diagonal matrix $\lambda I_{\gamma_A(\lambda)}$. This can be seen by evaluating what the left-hand side does to the first column basis vectors.
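The rank formula above translates directly into code. The following sketch (my own illustration, using a standard defective matrix as the example) computes the geometric multiplicity as n − rank(A − λI):

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 2 has algebraic multiplicity 2
# but only a one-dimensional eigenspace.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

geometric_multiplicity = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geometric_multiplicity)  # 1, strictly less than the algebraic multiplicity 2
```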
By reorganizing and adding −ξV on both sides, we get (A − ξI)V = V(D − ξI), since I commutes with V. In other words, (A − ξI) is similar to (D − ξI), and det(A − ξI) = det(D − ξI). But from the definition of D we know that det(D − ξI) contains a factor $(\xi - \lambda)^{\gamma_A(\lambda)}$, which means that the algebraic multiplicity of λ must satisfy $\mu_A(\lambda) \geq \gamma_A(\lambda)$.

Suppose A has d ≤ n distinct eigenvalues $\lambda_1, \ldots, \lambda_d$, where the geometric multiplicity of $\lambda_i$ is $\gamma_A(\lambda_i)$. The total geometric multiplicity of A,

$$\gamma_A = \sum_{i=1}^{d} \gamma_A(\lambda_i), \qquad d \leq \gamma_A \leq n,$$

is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If $\gamma_A = n$, then:

The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space $\mathbb{C}^n$.
A basis of $\mathbb{C}^n$ can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis.
Any vector in $\mathbb{C}^n$ can be written as a linear combination of eigenvectors of A.

Additional properties of eigenvalues

Let A be an arbitrary n × n matrix of complex numbers with eigenvalues $\lambda_1, \ldots, \lambda_n$. Each eigenvalue appears $\mu_A(\lambda_i)$ times in this list, where $\mu_A(\lambda_i)$ is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues.

The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,[29][30][31]
$$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i = \lambda_1 + \lambda_2 + \cdots + \lambda_n.$$

The determinant of A is the product of all its eigenvalues,[29][32][33]
$$\det(A) = \prod_{i=1}^{n} \lambda_i = \lambda_1 \lambda_2 \cdots \lambda_n.$$

The eigenvalues of the kth power of A, i.e. the eigenvalues of $A^k$, for any positive integer k, are $\lambda_1^k, \ldots, \lambda_n^k$.

The matrix A is invertible if and only if every eigenvalue is nonzero.

If A is invertible, then the eigenvalues of $A^{-1}$ are $\tfrac{1}{\lambda_1}, \ldots, \tfrac{1}{\lambda_n}$, and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.

If A is equal to its conjugate transpose $A^{*}$, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.

If A is unitary, every eigenvalue has absolute value $|\lambda_i| = 1$.

If A is an n × n matrix and $\lambda_1, \ldots, \lambda_k$ are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are $\lambda_1 + 1, \ldots, \lambda_k + 1$. Moreover, if α ∈ ℂ, the eigenvalues of αI + A are $\lambda_1 + \alpha, \ldots, \lambda_k + \alpha$. More generally, for a polynomial P, the eigenvalues of the matrix P(A) are $P(\lambda_1), \ldots, P(\lambda_k)$.

Left and right eigenvectors

See also: left and right (algebra)

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right-multiplies the n × n matrix A in the defining equation, equation (1),

$$A \mathbf{v} = \lambda \mathbf{v}.$$

The eigenvalue and eigenvector problem can also be defined for row vectors that left-multiply the matrix A. In this formulation, the defining equation is

$$\mathbf{u} A = \kappa \mathbf{u},$$

where κ is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A, and κ is its associated eigenvalue. Taking the transpose of this equation,

$$A^{\mathsf{T}} \mathbf{u}^{\mathsf{T}} = \kappa \mathbf{u}^{\mathsf{T}}.$$

Comparing this equation to equation (1), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of $A^{\mathsf{T}}$, with the same eigenvalue. Furthermore, since the characteristic polynomial of $A^{\mathsf{T}}$ is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
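As a quick sketch of that correspondence (my own illustration, not from the article), left eigenvectors can be obtained from a standard right-eigenvector routine applied to the transpose:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Right eigenvectors: columns of V satisfy A v = lambda v.
right_vals, V = np.linalg.eig(A)

# Left eigenvectors: rows u with u A = kappa u are transposes of
# right eigenvectors of A^T, and the eigenvalues coincide.
left_vals, W = np.linalg.eig(A.T)
u = W[:, 0]

print(np.sort(right_vals), np.sort(left_vals))  # same eigenvalues: [2. 3.] [2. 3.]
print(np.allclose(u @ A, left_vals[0] * u))     # True: u is a left eigenvector
```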
Diagonalization and the eigendecomposition

Main article: Eigendecomposition of a matrix

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ with associated eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

$$Q = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}.$$

Since each column of Q is an eigenvector of A, right-multiplying A by Q scales each column of Q by its associated eigenvalue,

$$AQ = \begin{bmatrix} \lambda_1 \mathbf{v}_1 & \lambda_2 \mathbf{v}_2 & \cdots & \lambda_n \mathbf{v}_n \end{bmatrix}.$$

With this in mind, define a diagonal matrix Λ where each diagonal element $\Lambda_{ii}$ is the eigenvalue associated with the ith column of Q. Then

$$AQ = Q\Lambda.$$

Because the columns of Q are linearly independent, Q is invertible. Right-multiplying both sides of the equation by $Q^{-1}$,

$$A = Q\Lambda Q^{-1},$$

or by instead left-multiplying both sides by $Q^{-1}$,

$$Q^{-1} A Q = \Lambda.$$

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition, and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ, or diagonalizable. The matrix Q is the change-of-basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that $P^{-1}AP$ is some diagonal matrix D. Left-multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.

A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors, and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

Variational characterization

Main article: Min-max theorem

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form $\mathbf{x}^{\mathsf{T}} H \mathbf{x} / \mathbf{x}^{\mathsf{T}} \mathbf{x}$. A value of x that realizes that maximum is an eigenvector.
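A small numerical illustration of this characterization (a sketch of my own, not from the article): for a symmetric matrix, the Rayleigh quotient evaluated at random vectors never exceeds the largest eigenvalue, and equals it at the corresponding eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3

def rayleigh(x):
    return (x @ H @ x) / (x @ x)

samples = [rayleigh(rng.standard_normal(2)) for _ in range(1000)]
w, V = np.linalg.eigh(H)   # eigenvalues in ascending order

print(max(samples) <= w[-1] + 1e-12)          # True: quotient bounded by the largest eigenvalue
print(np.isclose(rayleigh(V[:, -1]), w[-1]))  # True: the bound is attained at its eigenvector
```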
Matrix examples

Two-dimensional matrix example

The transformation matrix $A = \left[\begin{smallmatrix} 2 & 1 \\ 1 & 2 \end{smallmatrix}\right]$ preserves the direction of purple vectors parallel to $\mathbf{v}_{\lambda=1} = [1, -1]^{\mathsf{T}}$ and blue vectors parallel to $\mathbf{v}_{\lambda=3} = [1, 1]^{\mathsf{T}}$. The red vectors are not parallel to either eigenvector, so their directions are changed by the transformation. The lengths of the purple vectors are unchanged after the transformation (due to their eigenvalue of 1), while blue vectors are three times the length of the original (due to their eigenvalue of 3). See also: An extended version, showing all four quadrants.

Consider the matrix

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$

The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A,

$$\det(A - \lambda I) = \left| \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right| = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 3 - 4\lambda + \lambda^2 = (\lambda - 3)(\lambda - 1).$$

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A.

For λ = 1, equation (2) becomes

$$(A - I)\,\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

that is, $1v_1 + 1v_2 = 0$. Any nonzero vector with $v_1 = -v_2$ solves this equation. Therefore,

$$\mathbf{v}_{\lambda=1} = \begin{bmatrix} v_1 \\ -v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.

For λ = 3, equation (2) becomes

$$(A - 3I)\,\mathbf{v}_{\lambda=3} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad -1v_1 + 1v_2 = 0, \quad 1v_1 - 1v_2 = 0.$$

Any nonzero vector with $v_1 = v_2$ solves this equation. Therefore,

$$\mathbf{v}_{\lambda=3} = \begin{bmatrix} v_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.

Thus, the vectors $\mathbf{v}_{\lambda=1}$ and $\mathbf{v}_{\lambda=3}$ are eigenvectors of A associated with the eigenvalues λ = 1 and λ = 3, respectively.
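These hand calculations can be checked with a library eigensolver; the sketch below (not part of the article) also reconstructs A from its eigendecomposition, as described in the diagonalization section above:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, Q = np.linalg.eig(A)   # eigenvalues, and eigenvectors as the columns of Q
print(w)                  # [3. 1.] (order may vary; the set is {1, 3})
print(Q)                  # columns are unit-norm multiples of [1, 1] and [1, -1]

# Eigendecomposition: A = Q diag(w) Q^{-1}.
print(np.allclose(A, Q @ np.diag(w) @ np.linalg.inv(Q)))  # True
```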
Three-dimensional matrix example

Consider the matrix

$$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 0 & 0 \\ 0 & 3 - \lambda & 4 \\ 0 & 4 & 9 - \lambda \end{vmatrix} = (2 - \lambda)\bigl[(3 - \lambda)(9 - \lambda) - 16\bigr] = -\lambda^3 + 14\lambda^2 - 35\lambda + 22.$$

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors $[1, 0, 0]^{\mathsf{T}}$, $[0, -2, 1]^{\mathsf{T}}$, and $[0, 1, 2]^{\mathsf{T}}$, or any nonzero multiple thereof.

Three-dimensional matrix example with complex eigenvalues

Consider the cyclic permutation matrix

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.$$

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is $1 - \lambda^3$, whose roots are

$$\lambda_1 = 1, \qquad \lambda_2 = -\frac{1}{2} + i\frac{\sqrt{3}}{2}, \qquad \lambda_3 = \lambda_2^{*} = -\frac{1}{2} - i\frac{\sqrt{3}}{2},$$

where i is an imaginary unit with $i^2 = -1$.

For the real eigenvalue $\lambda_1 = 1$, any vector with three equal nonzero entries is an eigenvector. For example,

$$A \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix}.$$

For the complex conjugate pair of imaginary eigenvalues,

$$\lambda_2 \lambda_3 = 1, \qquad \lambda_2^2 = \lambda_3, \qquad \lambda_3^2 = \lambda_2.$$

Then

$$A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_2 \\ \lambda_3 \\ 1 \end{bmatrix} = \lambda_2 \cdot \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix}
\qquad \text{and} \qquad
A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \lambda_3 \\ \lambda_2 \\ 1 \end{bmatrix} = \lambda_3 \cdot \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}.$$

Therefore, the other two eigenvectors of A are complex and are $\mathbf{v}_{\lambda_2} = [1, \lambda_2, \lambda_3]^{\mathsf{T}}$ and $\mathbf{v}_{\lambda_3} = [1, \lambda_3, \lambda_2]^{\mathsf{T}}$, with eigenvalues $\lambda_2$ and $\lambda_3$, respectively. The two complex eigenvectors also appear in a complex conjugate pair,

$$\mathbf{v}_{\lambda_2} = \mathbf{v}_{\lambda_3}^{*}.$$

Diagonal matrix example

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$$

which has the roots $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$. These roots are the diagonal elements as well as the eigenvalues of A.

Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors

$$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$$

respectively, as well as scalar multiples of these vectors.

Triangular matrix example

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 2 & 3 & 3 \end{bmatrix}.$$

The characteristic polynomial of A is

$$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$$

which has the roots $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors

$$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 1 \\ -1 \\ \tfrac{1}{2} \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ -3 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$$

respectively, as well as scalar multiples of these vectors.
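A brief check of the triangular-matrix example with NumPy (my own sketch): the computed eigenvalues are exactly the diagonal entries, and each reported eigenvector is a scalar multiple of the ones listed above.

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [2.0, 3.0, 3.0]])

w, V = np.linalg.eig(A)
print(np.sort(w))   # [1. 2. 3.], the diagonal entries

# Each column of V is a unit-norm eigenvector; for example, the one for
# lambda = 2 is proportional to [0, 1, -3]^T.
v2 = V[:, np.argmin(np.abs(w - 2.0))]
print(v2 / v2[1])   # approximately [ 0.  1. -3.]
```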
Matrix with repeated eigenvalues example

As in the previous example, the lower triangular matrix

$$A = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix}$$

has a characteristic polynomial that is the product of its diagonal elements,

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 0 & 0 & 0 \\ 1 & 2 - \lambda & 0 & 0 \\ 0 & 1 & 3 - \lambda & 0 \\ 0 & 0 & 1 & 3 - \lambda \end{vmatrix} = (2 - \lambda)^2 (3 - \lambda)^2.$$

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words, they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is $\mu_A = 4 = n$, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector $[0, 1, -1, 1]^{\mathsf{T}}$ and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1, because its eigenspace is spanned by just one vector $[0, 0, 0, 1]^{\mathsf{T}}$. The total geometric multiplicity $\gamma_A$ is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.

Eigenvector-eigenvalue identity

For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,

$$|v_{i,j}|^2 = \frac{\prod_{k=1}^{n-1} \bigl(\lambda_i - \lambda_k(M_j)\bigr)}{\prod_{k \neq i} (\lambda_i - \lambda_k)},$$

where $M_j$ is the submatrix formed by removing the jth row and column from the original matrix.[34][35][36] This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.[35][37]

Eigenvalues and eigenfunctions of differential operators

Main article: Eigenfunction

The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space $C^{\infty}$ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation

$$D f(t) = \lambda f(t).$$

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

Derivative operator example

Consider the derivative operator $\tfrac{d}{dt}$ with eigenvalue equation

$$\frac{d}{dt} f(t) = \lambda f(t).$$

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

$$f(t) = f(0) e^{\lambda t},$$

is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.

The main eigenfunction article gives other examples.
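A quick numerical sanity check of the derivative-operator example (my own sketch): a finite-difference derivative applied to f(t) = e^{λt} returns approximately λ·f(t).

```python
import numpy as np

lam = -0.7                           # any real eigenvalue will do
t = np.linspace(0.0, 2.0, 2001)
f = np.exp(lam * t)

df = np.gradient(f, t)               # numerical derivative of the eigenfunction
print(np.max(np.abs(df - lam * f)))  # small: only finite-difference error remains
```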
General definition

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

$$T: V \to V.$$

We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that

$$T(\mathbf{v}) = \lambda \mathbf{v}. \qquad (5)$$

This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[38][39]

Eigenspaces, geometric multiplicity, and the eigenbasis

Given an eigenvalue λ, consider the set

$$E = \left\{ \mathbf{v} : T(\mathbf{v}) = \lambda \mathbf{v} \right\},$$

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.[40]

By definition of a linear transformation,

$$T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y}), \qquad T(\alpha \mathbf{x}) = \alpha T(\mathbf{x}),$$

for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then

$$T(\mathbf{u} + \mathbf{v}) = \lambda (\mathbf{u} + \mathbf{v}), \qquad T(\alpha \mathbf{v}) = \lambda (\alpha \mathbf{v}).$$

So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[41] If that subspace has dimension 1, it is sometimes called an eigenline.[42]

The geometric multiplicity $\gamma_T(\lambda)$ of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[10][27][43] By the definition of eigenvalues and eigenvectors, $\gamma_T(\lambda) \geq 1$, because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[d]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V, called an eigenbasis, can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

Spectral theory

Main article: Spectral theory

If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse $(T - \lambda I)^{-1}$ does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.

For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T, defined as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

Associative algebras and representation theory

Main article: Weight (representation theory)

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation: an associative algebra acting on a module. The study of such actions is the field of representation theory.

The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
Dynamic equations

The simplest difference equations have the form

$$x_t = a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_k x_{t-k}.$$

The solution of this equation for x in terms of t is found by using its characteristic equation

$$\lambda^k - a_1 \lambda^{k-1} - a_2 \lambda^{k-2} - \cdots - a_{k-1} \lambda - a_k = 0,$$

which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations $x_{t-1} = x_{t-1}, \ldots, x_{t-k+1} = x_{t-k+1}$, giving a k-dimensional system of the first order in the stacked variable vector $[x_t, \ldots, x_{t-k+1}]$ in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots $\lambda_1, \ldots, \lambda_k$ for use in the solution equation

$$x_t = c_1 \lambda_1^t + \cdots + c_k \lambda_k^t.$$

A similar procedure is used for solving a differential equation of the form

$$\frac{d^k x}{dt^k} + a_{k-1} \frac{d^{k-1} x}{dt^{k-1}} + \cdots + a_1 \frac{dx}{dt} + a_0 x = 0.$$

Calculation

Main article: Eigenvalue algorithm

The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.

Classical method

The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating point.

Eigenvalues

The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 matrices, but the difficulty increases rapidly with the size of the matrix.

In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements, and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy.[44] However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients, as exemplified by Wilkinson's polynomial.[44] Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n matrix is a sum of n! different products.[e]

Explicit algebraic formulas for the roots of a polynomial exist only if the degree n is 4 or less. According to the Abel-Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. Generality matters because any polynomial with degree n is the characteristic polynomial of some companion matrix of order n. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
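The companion-matrix remark is also used in the opposite direction in practice: polynomial roots are routinely computed as eigenvalues of the companion matrix, which is what np.roots does internally. A small sketch of my own (one common companion-matrix convention):

```python
import numpy as np

# Coefficients of p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3),
# highest degree first.
p = [1.0, -6.0, 11.0, -6.0]

# A companion matrix of p: its characteristic polynomial is p.
C = np.array([[6.0, -11.0, 6.0],
              [1.0,   0.0, 0.0],
              [0.0,   1.0, 0.0]])

print(np.sort(np.linalg.eigvals(C)))  # approximately [1. 2. 3.]
print(np.sort(np.roots(p)))           # approximately [1. 2. 3.]
```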
Eigenvectors

Once the exact value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix

$$A = \begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix},$$

we can find its eigenvectors by solving the equation Av = 6v, that is,

$$\begin{bmatrix} 4 & 1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 6 \cdot \begin{bmatrix} x \\ y \end{bmatrix}.$$

This matrix equation is equivalent to two linear equations,

$$\begin{cases} 4x + y = 6x \\ 6x + 3y = 6y, \end{cases} \qquad \text{that is,} \qquad \begin{cases} -2x + y = 0 \\ 6x - 3y = 0. \end{cases}$$

Both equations reduce to the single linear equation y = 2x. Therefore, any vector of the form $[a, 2a]^{\mathsf{T}}$, for any nonzero real number a, is an eigenvector of A with eigenvalue λ = 6.

The matrix A above has another eigenvalue λ = 1. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3x + y = 0, that is, any vector of the form $[b, -3b]^{\mathsf{T}}$, for any nonzero real number b.

Simple iterative methods

Main article: Power iteration

The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix, optionally normalizing the vector to keep its elements of reasonable size; this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by $(A - \mu I)^{-1}$; this causes it to converge to an eigenvector of the eigenvalue closest to $\mu \in \mathbb{C}$.

If v is a good approximation of an eigenvector of A, then the corresponding eigenvalue can be computed as

$$\lambda = \frac{\mathbf{v}^{*} A \mathbf{v}}{\mathbf{v}^{*} \mathbf{v}},$$

where $\mathbf{v}^{*}$ denotes the conjugate transpose of v.

Modern methods

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.[44] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[44]

Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
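A minimal power-iteration sketch in NumPy (an illustration of the idea described above, not a production implementation; the iteration count and matrix are arbitrary choices):

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Return an approximate dominant eigenpair (eigenvalue, eigenvector) of A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)     # normalize to keep the entries of reasonable size
    lam = (v @ A @ v) / (v @ v)    # Rayleigh-quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(round(lam, 6))               # 3.0, the dominant eigenvalue
print(v / v[0])                    # approximately [1. 1.], the matching eigenvector
```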
Applications

Geometric transformations

Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following list presents some example transformations in the plane along with their 2 × 2 matrices, eigenvalues, and eigenvectors.

Scaling: matrix $\begin{bmatrix} k & 0 \\ 0 & k \end{bmatrix}$; characteristic polynomial $(\lambda - k)^2$; eigenvalues $\lambda_1 = \lambda_2 = k$; algebraic multiplicity $\mu_1 = 2$; geometric multiplicity $\gamma_1 = 2$; eigenvectors: all nonzero vectors.

Unequal scaling: matrix $\begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}$; characteristic polynomial $(\lambda - k_1)(\lambda - k_2)$; eigenvalues $\lambda_1 = k_1$, $\lambda_2 = k_2$; algebraic multiplicities $\mu_1 = \mu_2 = 1$; geometric multiplicities $\gamma_1 = \gamma_2 = 1$; eigenvectors $\mathbf{u}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\mathbf{u}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.

Rotation: matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$; characteristic polynomial $\lambda^2 - 2\cos(\theta)\,\lambda + 1$; eigenvalues $\lambda_1 = e^{i\theta} = \cos\theta + i\sin\theta$, $\lambda_2 = e^{-i\theta} = \cos\theta - i\sin\theta$; algebraic multiplicities $\mu_1 = \mu_2 = 1$; geometric multiplicities $\gamma_1 = \gamma_2 = 1$; eigenvectors $\mathbf{u}_1 = \begin{bmatrix} 1 \\ -i \end{bmatrix}$, $\mathbf{u}_2 = \begin{bmatrix} 1 \\ i \end{bmatrix}$.

Horizontal shear: matrix $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$; characteristic polynomial $(\lambda - 1)^2$; eigenvalues $\lambda_1 = \lambda_2 = 1$; algebraic multiplicity $\mu_1 = 2$; geometric multiplicity $\gamma_1 = 1$; eigenvector $\mathbf{u}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.

Hyperbolic rotation: matrix $\begin{bmatrix} \cosh\varphi & \sinh\varphi \\ \sinh\varphi & \cosh\varphi \end{bmatrix}$; characteristic polynomial $\lambda^2 - 2\cosh(\varphi)\,\lambda + 1$; eigenvalues $\lambda_1 = e^{\varphi} = \cosh\varphi + \sinh\varphi$, $\lambda_2 = e^{-\varphi} = \cosh\varphi - \sinh\varphi$; algebraic multiplicities $\mu_1 = \mu_2 = 1$; geometric multiplicities $\gamma_1 = \gamma_2 = 1$; eigenvectors $\mathbf{u}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $\mathbf{u}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

The characteristic equation for a rotation is a quadratic equation with discriminant $D = -4\sin^2\theta$, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, $\cos\theta \pm i\sin\theta$, and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.

A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
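A short check of the rotation case above (my own sketch): for θ = 90° the eigenvalues are ±i, and they lie on the unit circle as expected for a unitary (here, orthogonal) matrix.

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

w, V = np.linalg.eig(R)
print(np.sort_complex(w))           # approximately [-1j, +1j], i.e. e^{+-i*theta}
print(np.allclose(np.abs(w), 1.0))  # True: rotation eigenvalues lie on the unit circle
```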
Principal component analysis

Main article: Principal component analysis
See also: Positive semidefinite matrix and Factor analysis

PCA of the multivariate Gaussian distribution centered at (1, 3), with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix, in which each variable is scaled to have its sample variance equal to one. For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance, which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors. More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.

Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or $I - D^{-1/2} A D^{-1/2}$ (sometimes called the normalized Laplacian), where D is a diagonal matrix with $D_{ii}$ equal to the degree of vertex $v_i$, and in $D^{-1/2}$ the ith diagonal entry is $1/\sqrt{\deg(v_i)}$. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.

Markov chains

A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular, the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron-Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
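The steady state mentioned above is itself an eigenvector problem: for a row-stochastic transition matrix P, the stationary distribution π satisfies πP = π, so π is a left eigenvector for the eigenvalue 1. A small sketch with a made-up two-state chain (my own example):

```python
import numpy as np

# Hypothetical two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi P = pi  is equivalent to  P^T pi^T = pi^T, so take the
# eigenvector of P^T associated with the eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()                  # normalize to a probability distribution

print(pi)                       # approximately [0.833 0.167]
print(np.allclose(pi @ P, pi))  # True: pi is stationary
```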
Vibration analysis

Main article: Vibration

Mode shape of a tuning fork at eigenfrequency 440.09 Hz.

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by

$$m\ddot{x} + kx = 0,$$

or

$$m\ddot{x} = -kx.$$

That is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time). In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

$$kx = \omega^2 m x,$$

where $\omega^2$ is the eigenvalue and ω is the imaginary angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by

$$m\ddot{x} + c\dot{x} + kx = 0,$$

leads to a so-called quadratic eigenvalue problem,

$$\left(\omega^2 m + \omega c + k\right)x = 0.$$

This can be reduced to a generalized eigenvalue problem by algebraic manipulation, at the cost of solving a larger system.

The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.

Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
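As an illustration of the stress-tensor remark (a sketch with a made-up symmetric tensor, not data from the article), a symmetric eigensolver returns the principal stresses and principal directions directly:

```python
import numpy as np

# Hypothetical symmetric 3x3 stress tensor (units arbitrary).
sigma = np.array([[ 50.0,  30.0,  0.0],
                  [ 30.0, -20.0,  0.0],
                  [  0.0,   0.0, 10.0]])

principal_stresses, principal_dirs = np.linalg.eigh(sigma)
print(principal_stresses)   # principal stresses (eigenvalues), in ascending order

# In the principal basis the tensor is diagonal: no shear components remain.
S = principal_dirs.T @ sigma @ principal_dirs
print(np.allclose(S, np.diag(principal_stresses)))  # True
```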
Schrödinger equation

The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics,

$$H \psi_E = E \psi_E,$$

where H, the Hamiltonian, is a second-order differential operator and $\psi_E$, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for $\psi_E$ within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which $\psi_E$ and H can be represented as a one-dimensional array (i.e., a vector) and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.

The bra-ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by $|\Psi_E\rangle$. In this notation, the Schrödinger equation is

$$H |\Psi_E\rangle = E |\Psi_E\rangle,$$

where $|\Psi_E\rangle$ is an eigenstate of H and E represents the eigenvalue. H is an observable, self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above $H |\Psi_E\rangle$ is understood to be the vector obtained by application of the transformation H to $|\Psi_E\rangle$.

Wave transport

Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t.[45][46] The eigenvectors of the transmission operator $\mathbf{t}^{\dagger}\mathbf{t}$ form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues τ of $\mathbf{t}^{\dagger}\mathbf{t}$ correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution, with $\tau_{\max} = 1$ and $\tau_{\min} = 0$.
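As a toy numerical note (my own sketch, not from the cited work): for any transmission matrix t whose operator norm is at most 1, the operator t†t is Hermitian and its eigenvalues, the transmittances τ, are real numbers in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy complex "transmission matrix", rescaled so its largest singular value is <= 1.
t = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
t /= np.linalg.norm(t, 2)

tau = np.linalg.eigvalsh(t.conj().T @ t)   # Hermitian, so the eigenvalues are real
print(tau)
print(np.all((tau >= -1e-12) & (tau <= 1 + 1e-12)))  # True: transmittances lie in [0, 1]
```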
