
Gauss–Legendre method

In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s.[1]

All Gauss–Legendre methods are A-stable.[2]
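
As an illustration (a sketch added here, not taken from the cited source), A-stability can be checked directly for the order-two method via its linear stability function. Applied to the scalar test equation \dot{x} = \lambda x with step size h and z = h\lambda, the implicit midpoint rule gives

x_{n+1} = x_n + h\lambda \, \frac{x_n + x_{n+1}}{2} \quad\Longrightarrow\quad x_{n+1} = R(z)\, x_n, \qquad R(z) = \frac{1 + z/2}{1 - z/2},

and |R(z)| \le 1 exactly when \mathrm{Re}\, z \le 0, which is the A-stability condition. More generally, the stability function of the s-stage Gauss–Legendre method is the (s, s) Padé approximant of e^z.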

The Gauss–Legendre method of order two is the implicit midpoint rule. Its Butcher tableau is:

1/2 | 1/2
----+----
    | 1
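
As a minimal illustration (a sketch, not taken from the reference), one step of this method can be computed with a fixed-point iteration on the single stage equation, which is assumed to converge for sufficiently small h:

% Sketch: one implicit-midpoint (order-two Gauss-Legendre) step.
% f is a handle for the right-hand side of xdot = f(x); the fixed-point
% iteration on the stage equation is assumed to converge for small h.
function x_next = midpoint_step(f, x, h)
    k = f(x);                       % explicit Euler slope as initial guess
    for iter = 1:100
        k_new = f(x + h/2 * k);     % stage equation: k = f(x + h*(1/2)*k)
        if norm(k_new - k) < 1e-12
            k = k_new;
            break
        end
        k = k_new;
    end
    x_next = x + h * k;             % update with weight b = 1
end

For stiff problems a Newton iteration, as in the practical example below, is preferable to fixed-point iteration.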

The Gauss–Legendre method of order four has Butcher tableau:

1/2 - √3/6 | 1/4          1/4 - √3/6
1/2 + √3/6 | 1/4 + √3/6   1/4
-----------+------------------------
           | 1/2          1/2

The Gauss–Legendre method of order six has Butcher tableau:

1/2 - √15/10 | 5/36            2/9 - √15/15    5/36 - √15/30
1/2          | 5/36 + √15/24   2/9             5/36 - √15/24
1/2 + √15/10 | 5/36 + √15/30   2/9 + √15/15    5/36
-------------+----------------------------------------------
             | 5/18            4/9             5/18

The computational cost of higher-order Gauss–Legendre methods is usually excessive, and thus, they are rarely used.[3]

Intuition

Gauss–Legendre Runge–Kutta (GLRK) methods solve an ordinary differential equation \dot{x} = f(x) with x(0) = x_0. The distinguishing feature of GLRK is the estimation of x(h) = x_0 + \int_0^h f(x(t))\,dt with Gaussian quadrature,

x(h) = x_0 + \frac{h}{2} \sum_{i=1}^{\ell} w_i k_i + O(h^{2\ell}),

where k_i = f(x(hc_i)) are the sampled velocities, w_i are the quadrature weights, c_i = \tfrac{1}{2}(1 + r_i) are the abscissas, and r_i are the roots P_\ell(r_i) = 0 of the Legendre polynomial of degree \ell. A further approximation is needed, because the k_i cannot be evaluated exactly: they depend on the unknown solution at the interior times hc_i. To maintain a truncation error of order O(h^{2\ell}), the k_i only need to be computed to order O(h^{2\ell - 1}). The implicit Runge–Kutta definition

k_i = f\left( x_0 + h \sum_j a_{ij} k_j \right)

is invoked to accomplish this. It is an implicit constraint that must be solved by a root-finding algorithm such as Newton's method. The values of the Runge–Kutta parameters a_{ij} can be determined from a Taylor series expansion in h.
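
For example, the coefficients of the two-stage (order-four) method can be recovered numerically from the quadrature rule alone. The following sketch is not from the reference; it assumes the standard collocation conditions \sum_j a_{ij} c_j^{k-1} = c_i^k / k and \sum_j b_j c_j^{k-1} = 1/k for k = 1, \dots, s.

% Sketch: recover the order-4 Gauss-Legendre Butcher coefficients from the
% 2-point quadrature rule via the collocation conditions (an assumption of
% this illustration, not spelled out in the article text).
s = 2;
r = sort(roots([3/2, 0, -1/2]));        % roots of P_2(r) = (3r^2 - 1)/2 on [-1, 1]
c = (1 + r) / 2;                        % abscissas mapped to [0, 1]
M = (c.^(0:s-1)).';                     % M(k, j) = c_j^(k-1)
b = M \ (1 ./ (1:s).');                 % weights: sum_j b_j c_j^(k-1) = 1/k
A = zeros(s);
for i = 1:s
    A(i, :) = (M \ (c(i).^(1:s).' ./ (1:s).')).';   % sum_j a_ij c_j^(k-1) = c_i^k / k
end
disp(c.'); disp(b.'); disp(A);          % reproduces c = 1/2 -/+ sqrt(3)/6, b = [1/2 1/2], and A as in the tableau above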

Practical example

The Gauss–Legendre methods are implicit, so in general they cannot be applied exactly. Instead one makes an educated guess of the k_i, and then uses Newton's method to converge arbitrarily close to the true solution. Below is a MATLAB script which implements the Gauss–Legendre method of order four and applies it to the Lorenz system.

% starting point
x = [10.5440; 4.1124; 35.8233];
dt = 0.01;
N = 10000;
x_series = [x];
for i = 1:N
    x = gauss_step(x, @lorenz_dynamics, dt, 1e-7, 1, 100);
    x_series = [x_series x];
end
plot3(x_series(1,:), x_series(2,:), x_series(3,:));
set(gca, 'xtick', [], 'ytick', [], 'ztick', []);
title('Lorenz Attractor');
return;

function [td, j] = lorenz_dynamics(state)
    % return a time derivative and a Jacobian of that time derivative
    x = state(1);
    y = state(2);
    z = state(3);
    sigma = 10;
    beta = 8/3;
    rho = 28;
    td = [sigma*(y-x); x*(rho-z)-y; x*y-beta*z];
    j = [-sigma, sigma,  0;
          rho-z,    -1, -x;
              y,     x, -beta];
end

function x_next = gauss_step(x, dynamics, dt, threshold, damping, max_iterations)
    [d, ~] = size(x);
    sq3 = sqrt(3);
    if damping > 1 || damping <= 0
        error('damping should be between 0 and 1.')
    end

    % Use explicit Euler steps as initial guesses for the stage velocities
    [k, ~] = dynamics(x);
    x1_guess = x + (1/2 - sq3/6)*dt*k;
    x2_guess = x + (1/2 + sq3/6)*dt*k;
    [k1, ~] = dynamics(x1_guess);
    [k2, ~] = dynamics(x2_guess);

    % Butcher coefficients of the order-four Gauss-Legendre method
    a11 = 1/4;
    a12 = 1/4 - sq3/6;
    a21 = 1/4 + sq3/6;
    a22 = 1/4;

    % Residual of the implicit stage equations k_i = f(x + dt*sum_j a_ij*k_j)
    residual = @(k1, k2) [k1 - dynamics(x + (a11*k1 + a12*k2)*dt); ...
                          k2 - dynamics(x + (a21*k1 + a22*k2)*dt)];
    er = residual(k1, k2);
    iteration = 1;
    while norm(er) > threshold && iteration < max_iterations
        fprintf('Newton iteration %d: error is %f.\n', iteration, norm(er));
        iteration = iteration + 1;

        [~, j1] = dynamics(x + (a11*k1 + a12*k2)*dt);
        [~, j2] = dynamics(x + (a21*k1 + a22*k2)*dt);

        % Jacobian of the residual with respect to [k1; k2]
        j = [eye(d) - dt*a11*j1,          -dt*a12*j1;
                     -dt*a21*j2, eye(d) - dt*a22*j2];

        k_next = [k1; k2] - damping * linsolve(j, er);
        k1 = k_next(1:d);
        k2 = k_next(d+(1:d));

        er = residual(k1, k2);
    end
    if norm(er) > threshold
        error('Newton did not converge by %d iterations.', max_iterations);
    end
    x_next = x + dt/2 * (k1 + k2);
end

This algorithm is surprisingly cheap. The error in the k_i can fall below 10^{-12} in as few as two Newton steps. The only extra work compared to explicit Runge–Kutta methods is the computation of the Jacobian.

 
[Figure: An integrated orbit near the Lorenz attractor.]

Time-symmetric variants

At the cost of adding an additional implicit relation, these methods can be adapted to have time-reversal symmetry. In these methods, the averaged position (x_f + x_i)/2 is used in computing k_i instead of just the initial position x_i as in standard Runge–Kutta methods. The method of order 2 is just an implicit midpoint method.

k_1 = f\left( \frac{x_f + x_i}{2} \right)

x_f = x_i + h k_1

The method of order 4 with 2 stages is as follows.

k_1 = f\left( \frac{x_f + x_i}{2} - \frac{\sqrt{3}}{6} h k_2 \right)

k_2 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{3}}{6} h k_1 \right)

x_f = x_i + \frac{h}{2} (k_1 + k_2)
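
A minimal sketch of one step of this two-stage variant (not from the reference) is given below. It solves the coupled relations for k_1, k_2, and x_f by fixed-point iteration, which is assumed to converge for sufficiently small h; a Newton iteration as in the practical example above would also work.

% Sketch: one step of the time-symmetric order-4 variant, solved by
% fixed-point iteration (assumed to converge for small enough h).
% f maps a column vector to a column vector; x_i is the initial state.
function x_f = symmetric_gauss4_step(f, x_i, h)
    s6 = sqrt(3)/6;
    k1 = f(x_i);                       % crude initial guesses
    k2 = k1;
    x_f = x_i + h/2*(k1 + k2);
    for iter = 1:100
        xm = (x_f + x_i)/2;            % averaged position
        k1_new = f(xm - s6*h*k2);
        k2_new = f(xm + s6*h*k1);
        x_f_new = x_i + h/2*(k1_new + k2_new);
        if norm([k1_new - k1; k2_new - k2; x_f_new - x_f]) < 1e-12
            k1 = k1_new; k2 = k2_new; x_f = x_f_new;
            break
        end
        k1 = k1_new; k2 = k2_new; x_f = x_f_new;
    end
end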

The method of order 6 with 3 stages is as follows.

k_1 = f\left( \frac{x_f + x_i}{2} - \frac{\sqrt{15}}{15} h k_2 - \frac{\sqrt{15}}{30} h k_3 \right)

k_2 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{24} h k_1 - \frac{\sqrt{15}}{24} h k_3 \right)

k_3 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{30} h k_1 + \frac{\sqrt{15}}{15} h k_2 \right)

x_f = x_i + \frac{h}{18} (5 k_1 + 8 k_2 + 5 k_3)

Notes

  1. Iserles 1996, p. 47
  2. Iserles 1996, p. 63
  3. Iserles 1996, p. 47

References

  • Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
