
Modified Richardson iteration

Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.

We seek the solution to a set of linear equations, expressed in matrix terms as

$$Ax = b.$$

The Richardson iteration is

$$x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right),$$

where $\omega$ is a scalar parameter that has to be chosen such that the sequence $x^{(k)}$ converges.

It is easy to see that the method has the correct fixed points, because if it converges, then $x^{(k+1)} \approx x^{(k)}$ and $x^{(k)}$ has to approximate a solution of $Ax = b$.
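The update rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the article; the matrix, right-hand side, stopping rule, and the value of $\omega$ are all chosen here for demonstration.

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Modified Richardson iteration: x_{k+1} = x_k + omega * (b - A x_k)."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x                  # residual of the current iterate
        if np.linalg.norm(r) < tol:    # stop once the residual is tiny
            break
        x = x + omega * r
    return x

# Example system: a small symmetric positive definite matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.2)
print(np.allclose(A @ x, b))  # True
```

With $\omega = 0.2$ the iteration matrix $I - \omega A$ has spectral radius below one for this example, so the iterates converge to the solution of $Ax = b$.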

Convergence

Subtracting the exact solution $x$, and introducing the notation for the error $e^{(k)} = x^{(k)} - x$, we get the equality for the errors

$$e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}.$$

Thus,

$$\left\| e^{(k+1)} \right\| = \left\| (I - \omega A) e^{(k)} \right\| \leq \left\| I - \omega A \right\| \left\| e^{(k)} \right\|$$

for any vector norm and the corresponding induced matrix norm. Thus, if $\| I - \omega A \| < 1$, the method converges.

Suppose that $A$ is symmetric positive definite and that $(\lambda_j)_j$ are the eigenvalues of $A$. The error converges to $0$ if $|1 - \omega \lambda_j| < 1$ for all eigenvalues $\lambda_j$. If, e.g., all eigenvalues are positive, this can be guaranteed if $\omega$ is chosen such that $0 < \omega < \omega_\text{max}$, $\omega_\text{max} := 2/\lambda_\text{max}(A)$. The optimal choice, minimizing all $|1 - \omega \lambda_j|$, is $\omega_\text{opt} := 2/(\lambda_\text{min}(A) + \lambda_\text{max}(A))$, which gives the simplest Chebyshev iteration. This optimal choice yields a spectral radius of

$$\min_{\omega \in (0, \omega_\text{max})} \rho(I - \omega A) = \rho(I - \omega_\text{opt} A) = 1 - \frac{2}{\kappa(A) + 1},$$

where $\kappa(A)$ is the condition number.
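The relation between the optimal $\omega$, the eigenvalues, and the condition number can be checked numerically. A small sketch, with an example matrix chosen here for illustration (for a symmetric positive definite matrix, $\kappa(A) = \lambda_\text{max}/\lambda_\text{min}$):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # symmetric positive definite
lam = np.linalg.eigvalsh(A)             # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]

omega_opt = 2.0 / (lam_min + lam_max)   # optimal parameter
kappa = lam_max / lam_min               # condition number in the SPD case

# Spectral radius of the iteration matrix I - omega*A at the optimum:
rho = max(abs(1 - omega_opt * lam))

# It matches the closed form 1 - 2/(kappa + 1):
print(np.isclose(rho, 1 - 2 / (kappa + 1)))  # True
```

At $\omega_\text{opt}$ the two extreme eigenvalues of $I - \omega A$ have equal magnitude and opposite sign, which is exactly why this choice minimizes the spectral radius.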

If there are both positive and negative eigenvalues, the method will diverge for any $\omega$ if the initial error $e^{(0)}$ has nonzero components in the corresponding eigenvectors.

Equivalence to gradient descent

Consider minimizing the function $F(x) = \frac{1}{2} \| \tilde{A} x - \tilde{b} \|_2^2$. Since this is a convex function, a sufficient condition for optimality is that the gradient is zero ($\nabla F(x) = 0$), which gives rise to the equation

$$\tilde{A}^T \tilde{A} x = \tilde{A}^T \tilde{b}.$$

Define $A := \tilde{A}^T \tilde{A}$ and $b := \tilde{A}^T \tilde{b}$. Because of the form of $A$, it is a positive semi-definite matrix, so it has no negative eigenvalues.

A step of gradient descent is

$$x^{(k+1)} = x^{(k)} - t \nabla F(x^{(k)}) = x^{(k)} - t \left( A x^{(k)} - b \right),$$

which is equivalent to the Richardson iteration by making $t = \omega$.
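The equivalence can be verified directly: running gradient descent on $F$ and the Richardson iteration on the normal equations with $t = \omega$ produces the same iterates. A sketch with randomly generated data (the matrix sizes, seed, and step size are chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
At = rng.standard_normal((5, 3))      # the matrix A-tilde (rectangular)
bt = rng.standard_normal(5)           # the right-hand side b-tilde

A = At.T @ At                         # normal matrix: positive semi-definite
b = At.T @ bt

# Step size t = omega, taken inside (0, 2/lambda_max) so both methods converge
t = 1.0 / np.linalg.eigvalsh(A)[-1]

x_gd = np.zeros(3)
x_ri = np.zeros(3)
for _ in range(2000):
    x_gd = x_gd - t * (A @ x_gd - b)  # gradient descent step on F
    x_ri = x_ri + t * (b - A @ x_ri)  # Richardson step with omega = t
print(np.allclose(x_gd, x_ri))        # True: the two updates coincide
```

Both updates rearrange to $x^{(k+1)} = (I - tA)x^{(k)} + tb$, so the iterates agree at every step, not just in the limit.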

See also

  • Richardson extrapolation

References

  • Richardson, L.F. (1910). "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam". Philosophical Transactions of the Royal Society A. 210: 307–357. doi:10.1098/rsta.1911.0009. JSTOR 90994.
  • Lebedev, Vyacheslav Ivanovich (2001) [1994], "Chebyshev iteration method", in Michiel Hazewinkel (ed.), Encyclopedia of Mathematics, EMS Press, ISBN 1-4020-0609-8, retrieved 2010-05-25
