
Linear differential equation

In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form

$$a_0(x) y + a_1(x) y' + a_2(x) y'' + \cdots + a_n(x) y^{(n)} = b(x),$$

where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y(n) are the successive derivatives of an unknown function y of the variable x.

Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.

Types of solution

A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any.

The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual functions and special functions such as the exponential function, logarithm, sine, cosine, inverse trigonometric functions, the error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions makes it possible to carry out algorithmically, on these functions, most operations of calculus, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.

Basic terminology

The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.

A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.
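
A small illustration of this structure (our own example, not from the article): for y′ = y + 1, the constant function −1 is a particular solution, the homogeneous solutions are c eˣ, and every solution is their sum. A minimal SymPy sketch:

```python
# Illustration (our own example) of "particular + homogeneous": y' = y + 1.
import sympy as sp

x, c = sp.symbols('x c')
y_part = sp.Integer(-1)            # a particular solution of y' = y + 1
y_hom = c * sp.exp(x)              # general solution of the homogeneous y' = y
y_gen = y_part + y_hom

# y_gen satisfies y' - y - 1 = 0 for every value of c.
print(sp.simplify(sp.diff(y_gen, x) - y_gen - 1))   # 0
```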

Linear differential operator

A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted

$$\frac{d^i}{dx^i}$$

in the case of univariate functions, and

$$\frac{\partial^{i_1 + \cdots + i_n}}{\partial x_1^{i_1} \cdots \partial x_n^{i_n}}$$

in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.

A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form[1]

$$a_0(x) + a_1(x) \frac{d}{dx} + \cdots + a_n(x) \frac{d^n}{dx^n},$$

where a0(x), ..., an(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if an(x) is not the zero function).

Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.

As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.
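
As an illustration (not part of the original article), here is a minimal SymPy sketch of such an operator in the univariate case; the helper name linear_operator is ours, and the final check mirrors the linearity property just stated.

```python
# A minimal sketch of a univariate linear differential operator
# L = a_0(x) + a_1(x) d/dx + ... + a_n(x) d^n/dx^n, built with SymPy.
import sympy as sp

x = sp.symbols('x')

def linear_operator(coeffs):
    """Given [a_0(x), ..., a_n(x)], return the map f(x) -> sum a_i(x) * f^(i)(x)."""
    def L(f):
        return sum(a * sp.diff(f, x, i) for i, a in enumerate(coeffs))
    return L

# Example: L = x + d/dx (order 1), applied to sin(x) and to a linear combination.
L = linear_operator([x, sp.Integer(1)])
f, g = sp.sin(x), sp.exp(x)
print(L(f))                                            # x*sin(x) + cos(x)
# Linearity check: L(2f + 3g) - (2 L(f) + 3 L(g)) simplifies to 0.
print(sp.simplify(L(2*f + 3*g) - (2*L(f) + 3*L(g))))   # 0
```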

The language of operators allows a compact writing for differential equations: if

$$L = a_0(x) + a_1(x) \frac{d}{dx} + \cdots + a_n(x) \frac{d^n}{dx^n}$$

is a linear differential operator, then the equation

$$a_0(x) y + a_1(x) y' + a_2(x) y'' + \cdots + a_n(x) y^{(n)} = b(x)$$

may be rewritten

$$Ly = b(x).$$

There may be several variants to this notation; in particular, the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, such as Ly(x) = b(x) or Ly = b.

The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation Ly = 0.

In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form

$$S_0(x) + c_1 S_1(x) + \cdots + c_n S_n(x),$$

where c1, ..., cn are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions b, a0, ..., an are continuous in I, and there is a positive real number k such that |an(x)| > k for every x in I.

Homogeneous equation with constant coefficients

A homogeneous linear differential equation has constant coefficients if it has the form

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0,$$

where a0, ..., an are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.

The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function e^x, which is the unique solution of the equation f′ = f such that f(0) = 1. It follows that the nth derivative of e^{cx} is c^n e^{cx}, and this allows solving homogeneous linear differential equations rather easily.

Let

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0$$

be a homogeneous linear differential equation with constant coefficients (that is, a0, ..., an are real or complex numbers).

Searching solutions of this equation that have the form e^{αx} is equivalent to searching for the constants α such that

$$a_0 e^{\alpha x} + a_1 \alpha e^{\alpha x} + a_2 \alpha^2 e^{\alpha x} + \cdots + a_n \alpha^n e^{\alpha x} = 0.$$

Factoring out e^{αx} (which is never zero) shows that α must be a root of the characteristic polynomial

$$a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n$$

of the differential equation, which is the left-hand side of the characteristic equation

$$a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n = 0.$$

When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, ..., n – 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).

Example

The equation

$$y'''' - 2y''' + 2y'' - 2y' + y = 0$$

has the characteristic equation

$$z^4 - 2z^3 + 2z^2 - 2z + 1 = 0.$$

This has zeros i, −i, and 1 (multiplicity 2). The solution basis is thus

$$e^{ix},\quad e^{-ix},\quad e^x,\quad xe^x.$$

A real basis of solutions is thus

$$\cos x,\quad \sin x,\quad e^x,\quad xe^x.$$

In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solution vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form

$$x^k e^{\alpha x},$$

where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as $P(t)(t-\alpha)^m$. Thus, applying the differential operator of the equation is equivalent to applying first m times the operator $\frac{d}{dx} - \alpha$, and then the operator that has P as characteristic polynomial. By the exponential shift theorem,

$$\left(\frac{d}{dx} - \alpha\right)\left(x^k e^{\alpha x}\right) = k x^{k-1} e^{\alpha x},$$

and thus one gets zero after k + 1 applications of $\frac{d}{dx} - \alpha$.

As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions.

In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a − ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing $x^k e^{(a+ib)x}$ and $x^k e^{(a-ib)x}$ by $x^k e^{ax}\cos(bx)$ and $x^k e^{ax}\sin(bx)$.
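
As a quick numerical cross-check of the example above (our own addition, not in the source), the roots of its characteristic polynomial can be computed with NumPy; the double root 1 shows up as two nearly equal floating-point values, and each root contributes the basis functions described in this section.

```python
# Numerical check of the characteristic roots of y'''' - 2y''' + 2y'' - 2y' + y = 0.
import numpy as np

# Coefficients of z^4 - 2z^3 + 2z^2 - 2z + 1, highest degree first.
roots = np.roots([1, -2, 2, -2, 1])
print(np.sort_complex(roots))
# Expected: i, -i, and 1 with multiplicity 2 (the double root appears as two
# nearly equal values because np.roots works in floating point).
# Each simple root alpha contributes e^{alpha x}; the double root 1 contributes
# e^x and x e^x, matching the solution basis above.
```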

Second-order case

A homogeneous linear differential equation of the second order may be written

$$y'' + ay' + by = 0,$$

and its characteristic polynomial is

$$r^2 + ar + b.$$

If a and b are real, there are three cases for the solutions, depending on the discriminant D = a2 − 4b. In all three cases, the general solution depends on two arbitrary constants c1 and c2.

  • If D > 0, the characteristic polynomial has two distinct real roots α and β. In this case, the general solution is
     $$c_1 e^{\alpha x} + c_2 e^{\beta x}.$$
  • If D = 0, the characteristic polynomial has a double root −a/2, and the general solution is
     $$(c_1 + c_2 x)\, e^{-ax/2}.$$
  • If D < 0, the characteristic polynomial has two complex conjugate roots α ± βi, and the general solution is
     $$c_1 e^{(\alpha + \beta i)x} + c_2 e^{(\alpha - \beta i)x},$$
    which may be rewritten in real terms, using Euler's formula, as
     $$e^{\alpha x} \left(c_1 \cos \beta x + c_2 \sin \beta x\right).$$

Finding the solution y(x) satisfying y(0) = d1 and y′(0) = d2, one equates the values of the above general solution at 0 and its derivative there to d1 and d2, respectively. This results in a linear system of two linear equations in the two unknowns c1 and c2. Solving this system gives the solution for a so-called Cauchy problem, in which the values at 0 for the solution of the DEQ and its derivative are specified.
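
The following sketch (our own, with an assumed function name) implements the three cases above for real coefficients a and b and returns the general solution as a callable in c1 and c2.

```python
# A sketch of the three cases for y'' + a y' + b y = 0 with real a, b.
import math

def general_solution_2nd_order(a, b):
    D = a * a - 4 * b                          # discriminant of r^2 + a r + b
    if D > 0:                                  # two distinct real roots
        alpha = (-a + math.sqrt(D)) / 2
        beta = (-a - math.sqrt(D)) / 2
        return lambda x, c1, c2: c1 * math.exp(alpha * x) + c2 * math.exp(beta * x)
    if D == 0:                                 # double root -a/2 (exact float test, fine for a sketch)
        return lambda x, c1, c2: (c1 + c2 * x) * math.exp(-a * x / 2)
    alpha, beta = -a / 2, math.sqrt(-D) / 2    # complex conjugate roots alpha +/- beta i
    return lambda x, c1, c2: math.exp(alpha * x) * (c1 * math.cos(beta * x) + c2 * math.sin(beta * x))

# Example: y'' + y = 0 (a = 0, b = 1) gives c1 cos x + c2 sin x.
y = general_solution_2nd_order(0.0, 1.0)
print(y(math.pi / 2, 1.0, 0.0))   # ~ 0.0, since cos(pi/2) = 0
```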

Non-homogeneous equation with constant coefficients

A non-homogeneous equation of order n with constant coefficients may be written

$$y^{(n)}(x) + a_1 y^{(n-1)}(x) + \cdots + a_{n-1} y'(x) + a_n y(x) = f(x),$$

where a1, ..., an are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).

There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form x^n e^{ax}, x^n cos(ax), and x^n sin(ax), where n is a nonnegative integer and a is a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically when f is a holonomic function.

The most general method is the variation of constants, which is presented here.

The general solution of the associated homogeneous equation

$$y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y' + a_n y = 0$$

is

$$y = u_1 y_1 + \cdots + u_n y_n,$$

where (y1, ..., yn) is a basis of the vector space of the solutions and u1, ..., un are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u1, ..., un as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints

$$\begin{aligned}
0 &= u_1' y_1 + u_2' y_2 + \cdots + u_n' y_n \\
0 &= u_1' y_1' + u_2' y_2' + \cdots + u_n' y_n' \\
  &\;\;\vdots \\
0 &= u_1' y_1^{(n-2)} + u_2' y_2^{(n-2)} + \cdots + u_n' y_n^{(n-2)},
\end{aligned}$$

which imply (by product rule and induction)

$$y^{(i)} = u_1 y_1^{(i)} + \cdots + u_n y_n^{(i)}$$

for i = 1, ..., n – 1, and

$$y^{(n)} = u_1 y_1^{(n)} + \cdots + u_n y_n^{(n)} + u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)}.$$

Replacing in the original equation y and its derivatives by these expressions, and using the fact that y1, ..., yn are solutions of the original homogeneous equation, one gets

$$f = u_1' y_1^{(n-1)} + \cdots + u_n' y_n^{(n-1)}.$$

This equation and the above ones with 0 as left-hand side form a system of n linear equations in u1′, ..., un′ whose coefficients are known functions (f, the yi, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u1, ..., un, and then y = u1y1 + ⋯ + unyn.

As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
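
As a concrete illustration of the method (the right-hand side 1/cos(x) is our own choice, not from the article), the following SymPy sketch carries out variation of constants for y″ + y = 1/cos(x) with the homogeneous basis cos x, sin x.

```python
# A sketch of variation of constants for y'' + y = 1/cos(x), using the
# homogeneous basis y1 = cos(x), y2 = sin(x).
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)
f = 1 / sp.cos(x)

# Solve the linear system  u1'*y1 + u2'*y2 = 0,  u1'*y1' + u2'*y2' = f.
u1p, u2p = sp.symbols("u1p u2p")
sol = sp.solve([u1p * y1 + u2p * y2,
                u1p * sp.diff(y1, x) + u2p * sp.diff(y2, x) - f],
               [u1p, u2p])

u1 = sp.integrate(sol[u1p], x)      # integral of -tan(x) -> log(cos(x))
u2 = sp.integrate(sol[u2p], x)      # integral of 1       -> x
y_part = sp.simplify(u1 * y1 + u2 * y2)
print(y_part)                       # cos(x)*log(cos(x)) + x*sin(x)

# Check that y_part solves the non-homogeneous equation.
print(sp.simplify(sp.diff(y_part, x, 2) + y_part - f))   # 0
```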

First-order equation with variable coefficients

The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y′(x), is:

$$y'(x) = f(x) y(x) + g(x).$$

If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate:

$$\frac{y'}{y} = f, \qquad \log y = k + F,$$

where k is an arbitrary constant of integration and $F = \int f\,dx$ is any antiderivative of f. Thus, the general solution of the homogeneous equation is

$$y = ce^F,$$

where c = e^k is an arbitrary constant.

For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal e^{−F} of a solution of the homogeneous equation.[2] This gives

$$y' e^{-F} - y f e^{-F} = g e^{-F}.$$

As $-f e^{-F} = \frac{d}{dx}\left(e^{-F}\right)$, the product rule allows rewriting the equation as

$$\frac{d}{dx}\left(y e^{-F}\right) = g e^{-F}.$$

Thus, the general solution is

$$y = c e^{F} + e^{F} \int g e^{-F}\, dx,$$

where c is a constant of integration, and F is any antiderivative of f (a change of antiderivative amounts to a change of the constant of integration).
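
A minimal SymPy sketch of this closed-form solution follows; the helper name solve_first_order and the example right-hand side are our own.

```python
# A sketch of the closed-form solution of y'(x) = f(x) y(x) + g(x),
# namely y = c e^F + e^F * integral(g e^{-F} dx), with F any antiderivative of f.
import sympy as sp

x, c = sp.symbols('x c')

def solve_first_order(f, g):
    F = sp.integrate(f, x)                       # one antiderivative of f
    return c * sp.exp(F) + sp.exp(F) * sp.integrate(g * sp.exp(-F), x)

# Example: y' = y + x  (f = 1, g = x)  ->  y = c e^x - x - 1.
y = sp.simplify(solve_first_order(sp.Integer(1), x))
print(y)
# Check: y' - y - x should vanish.
print(sp.simplify(sp.diff(y, x) - y - x))        # 0
```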

Example

Solving the equation

$$y'(x) + \frac{y(x)}{x} = 3x.$$

The associated homogeneous equation $y'(x) + \frac{y(x)}{x} = 0$ gives

$$\frac{y'}{y} = -\frac{1}{x},$$

that is

$$y = \frac{c}{x}.$$

Dividing the original equation by one of these solutions gives

$$x y' + y = 3x^2.$$

That is

$$(xy)' = 3x^2,$$

$$xy = x^3 + c,$$

and

$$y(x) = x^2 + \frac{c}{x}.$$

For the initial condition

$$y(1) = \alpha,$$

one gets the particular solution

$$y(x) = x^2 + \frac{\alpha - 1}{x}.$$

System of linear differential equations

A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.

An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if $y', y'', \ldots, y^{(k)}$ appear in an equation, one may replace them by new unknown functions $y_1, \ldots, y_k$ that must satisfy the equations $y' = y_1$ and $y_i' = y_{i+1}$ for i = 1, ..., k – 1.
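
A small sketch of this reduction follows, restricted to the constant-coefficient case with a monic equation for simplicity; the helper name companion_system is ours.

```python
# A sketch of converting the scalar equation
#   y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = f(x)
# into the first-order system Y' = A Y + (0, ..., 0, f)^T with Y = (y, y', ..., y^(n-1)).
import numpy as np

def companion_system(coeffs):
    """coeffs = [a_0, ..., a_{n-1}] (constant, monic case, for illustration).
    Returns the matrix A of the equivalent first-order system."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # y_i' = y_{i+1}
    A[-1, :] = -np.asarray(coeffs)    # last row encodes the scalar equation
    return A

# Example: y'' + 3y' + 2y = f(x)  ->  2x2 companion matrix.
print(companion_system([2.0, 3.0]))
# [[ 0.  1.]
#  [-2. -3.]]
```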

A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If that is not the case, it is a differential-algebraic system, which belongs to a different theory. Therefore, the systems that are considered here have the form

$$\begin{aligned}
y_1'(x) &= b_1(x) + a_{1,1}(x) y_1 + \cdots + a_{1,n}(x) y_n \\
        &\;\;\vdots \\
y_n'(x) &= b_n(x) + a_{n,1}(x) y_1 + \cdots + a_{n,n}(x) y_n,
\end{aligned}$$

where the $b_i$ and the $a_{i,j}$ are functions of x. In matrix notation, this system may be written (omitting "(x)")

$$\mathbf{y}' = A \mathbf{y} + \mathbf{b}.$$

The solving method is similar to that of a single first-order linear differential equation, but with complications stemming from the noncommutativity of matrix multiplication.

Let

$$\mathbf{u}' = A \mathbf{u}$$

be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions $U(x)$, whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative $B = \int A\,dx$, then one may choose U equal to the exponential of B. In fact, in these cases, one has

$$\frac{d}{dx} \exp(B) = A \exp(B).$$

In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as the Magnus expansion.
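
When A is constant it commutes with its antiderivative B(x) = Ax, so one may take U(x) = exp(Ax). The following sketch (our own, with an illustrative matrix A) checks this closed form against a numerical integrator for the homogeneous equation with a given initial value.

```python
# Constant-coefficient homogeneous case: U(x) = expm(A x), so the solution with
# y(0) = y0 is y(x) = expm(A x) @ y0. We compare it with a numerical integrator.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])
x_end = 2.0

y_exact = expm(A * x_end) @ y0

sol = solve_ivp(lambda x, y: A @ y, (0.0, x_end), y0, rtol=1e-10, atol=1e-12)
print(y_exact, sol.y[:, -1])      # the two results agree to integrator tolerance
```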

Knowing the matrix U, the general solution of the non-homogeneous equation is

$$\mathbf{y}(x) = U(x) \mathbf{y_0} + U(x) \int U^{-1}(x) \mathbf{b}(x)\, dx,$$

where the column matrix $\mathbf{y_0}$ is an arbitrary constant of integration.

If initial conditions are given as

$$\mathbf{y}(x_0) = \mathbf{y_0},$$

the solution that satisfies these initial conditions is

$$\mathbf{y}(x) = U(x) U^{-1}(x_0) \mathbf{y_0} + U(x) \int_{x_0}^{x} U^{-1}(t) \mathbf{b}(t)\, dt.$$

Higher order with variable coefficients

A linear ordinary equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.

The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.

Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.

Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.

Cauchy–Euler equation

Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form

$$x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_0 y(x) = 0,$$

where $a_0, \ldots, a_{n-1}$ are constant coefficients.
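
As an illustration (the specific equation is our own example), the trial solution y = x^r reduces a Cauchy–Euler equation to a polynomial (indicial) equation in r; SymPy's dsolve also recognizes such equations directly.

```python
# A sketch of solving a Cauchy-Euler equation via the trial solution y = x^r.
# Example: x^2 y'' - 2 x y' + 2 y = 0. Substituting y = x^r gives the indicial
# polynomial r(r-1) - 2r + 2 = (r-1)(r-2), hence y = c1*x + c2*x^2.
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

indicial = sp.expand(r * (r - 1) - 2 * r + 2)
print(sp.roots(indicial, r))                     # {1: 1, 2: 1}

# SymPy's dsolve handles the equation directly:
ode = sp.Eq(x**2 * y(x).diff(x, 2) - 2 * x * y(x).diff(x) + 2 * y(x), 0)
print(sp.dsolve(ode, y(x)))                      # general solution c1*x + c2*x**2
```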

Holonomic functions

A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.

Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.

Holonomic functions have several closure properties; in particular, sums, products, derivatives, and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input.[3]

The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.[3]

A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is for computing the recurrence relation from the differential equation, and vice versa. [3]
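
A minimal sketch of this correspondence (our own, for the holonomic function cos x): the equation y″ + y = 0 with y(0) = 1, y′(0) = 0 translates into a two-term recurrence for the Taylor coefficients, which can then be used to evaluate the function.

```python
# Differential equation -> recurrence for the holonomic function cos(x),
# which satisfies y'' + y = 0, y(0) = 1, y'(0) = 0. Writing y = sum c_n x^n
# turns the equation into the recurrence  c_{n+2} = -c_n / ((n+1)(n+2)).
import math

def taylor_coeffs_cos(N):
    c = [0.0] * N
    c[0], c[1] = 1.0, 0.0                   # initial conditions y(0), y'(0)
    for n in range(N - 2):
        c[n + 2] = -c[n] / ((n + 1) * (n + 2))
    return c

c = taylor_coeffs_cos(20)
x = 0.7
approx = sum(cn * x**n for n, cn in enumerate(c))
print(approx, math.cos(x))                  # the two values agree closely
```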

It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivative, indefinite and definite integral, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc.[4]

See also

  • Continuous repayment mortgage
  • Fourier transform
  • Laplace transform
  • Linear difference equation
  • Variation of parameters

References

  1. ^ Gershenfeld 1999, p.9
  2. ^ Motivation: In analogy to completing the square technique we write the equation as y′ − fy = g, and try to modify the left side so it becomes a derivative. Specifically, we seek an "integrating factor" h = h(x) such that multiplying by it makes the left side equal to the derivative of hy, namely hy′ − hfy = (hy)′. This means h′ = −f, so that h = e−∫ f dx = eF, as in the text.
  3. ^ a b c Zeilberger, Doron (1990). "A holonomic systems approach to special functions identities". Journal of Computational and Applied Mathematics. 32 (3): 321–368.
  4. ^ Benoit, A.; Chyzak, F.; Darrasse, A.; Gerhold, S.; Mezzarobba, M.; Salvy, B. (2010). "The dynamic dictionary of mathematical functions (DDMF)". International Congress on Mathematical Software. Springer, Berlin, Heidelberg. pp. 35–41.
  • Birkhoff, Garrett & Rota, Gian-Carlo (1978), Ordinary Differential Equations, New York: John Wiley and Sons, Inc., ISBN 0-471-07411-X
  • Gershenfeld, Neil (1999), The Nature of Mathematical Modeling, Cambridge, UK.: Cambridge University Press, ISBN 978-0-521-57095-4
  • Robinson, James C. (2004), An Introduction to Ordinary Differential Equations, Cambridge, UK.: Cambridge University Press, ISBN 0-521-82650-0

External links

  • http://eqworld.ipmnet.ru/en/solutions/ode.htm
  • Dynamic Dictionary of Mathematical Functions: automatic and interactive study of many holonomic functions
