
Root-finding algorithms

In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to the real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).[1]

Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) − g(x). Thus root-finding algorithms allow solving any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.

Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as its limit. They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points, and for converging rapidly to these fixed points.

The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials, the study of root-finding generally belongs to computer algebra, since algebraic properties of polynomials are fundamental to the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties to certify that no root is missed and to locate the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.

Bracketing methods

Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, then a root has been found. They generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the endpoints of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at its endpoints. However, in the case of polynomials there are other methods (Descartes' rule of signs, Budan's theorem and Sturm's theorem) for getting information on the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.

Bisection method

The simplest root-finding algorithm is the bisection method. Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). Let c = (a + b)/2 be the middle of the interval (the midpoint, or the point that bisects the interval). Then either f(a) and f(c), or f(c) and f(b), have opposite signs, and the size of the interval has been halved. Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration. Therefore, the number of function evaluations required for finding an ε-approximate root is $\log_2 \frac{b-a}{\varepsilon}$. Other methods, under appropriate conditions, can gain accuracy faster.
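
A minimal Python sketch of the bisection loop follows; the function name, bracket, and tolerance are illustrative choices, not taken from the article:

```python
def bisect(f, a, b, eps=1e-12):
    """Find a root of f in [a, b] by bisection; assumes f(a), f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > eps:
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:
            return c
        if fa * fc < 0:          # sign change on [a, c]: keep the left half
            b, fb = c, fc
        else:                    # sign change on [c, b]: keep the right half
            a, fa = c, fc
    return (a + b) / 2

# Example: bisect(lambda x: x**2 - 2, 1.0, 2.0) -> ~1.41421356 (sqrt(2))
```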

False position (regula falsi)

The false position method, also called the regula falsi method, is similar to the bisection method, but instead of using the midpoint of the interval as bisection does, it uses the x-intercept of the line that connects the function values at the endpoints of the interval, that is

\[ c = \frac{a f(b) - b f(a)}{f(b) - f(a)}. \]

False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method; however, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c); typically, this may occur if the rate of variation of f is large in the neighborhood of the root.
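
A Python sketch using the formula above (names and stopping rule are illustrative); a production implementation would also guard against the stagnating-endpoint and wrong-sign behavior discussed above:

```python
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi: replace the bisection midpoint with the x-intercept of the
    secant line through (a, f(a)) and (b, f(b)), keeping a sign change in the bracket."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Example: false_position(lambda x: x**2 - 2, 1.0, 2.0) -> ~1.41421356
```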

ITP method

The ITP method is the only known method that brackets the root with the same worst-case guarantees as the bisection method while also guaranteeing superlinear convergence to the root of smooth functions, as the secant method does. It is also the only known method guaranteed to outperform the bisection method on average for any continuous distribution on the location of the root (see ITP Method#Analysis). It does so by keeping track of both the bracketing interval and the minmax interval, within which any point converges as fast as the bisection method. The construction of the queried point c follows three steps: interpolation (similar to the regula falsi), truncation (adjusting the regula falsi point, as in Regula falsi § Improvements in regula falsi), and then projection onto the minmax interval. The combination of these steps produces a simultaneously minmax-optimal method with guarantees similar to interpolation-based methods for smooth functions; in practice it will outperform both the bisection method and interpolation-based methods on both smooth and non-smooth functions.
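
A compact sketch of the three steps, following the published description of the method; the tuning parameters kappa1, kappa2, and n0 here are illustrative defaults, not prescribed values:

```python
import math

def itp(f, a, b, eps=1e-10, kappa1=0.1, kappa2=2.0, n0=1):
    """Sketch of the ITP method (interpolate, truncate, project).
    Requires kappa1 > 0 and 1 <= kappa2 < 1 + golden ratio."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    if fa > 0:  # normalize so that f(a) < 0 < f(b)
        f, fa, fb = (lambda x, f=f: -f(x)), -fa, -fb
    n_max = math.ceil(math.log2((b - a) / (2 * eps))) + n0
    j = 0
    while b - a > 2 * eps:
        x_mid = (a + b) / 2
        r = max(eps * 2 ** (n_max - j) - (b - a) / 2, 0.0)      # minmax radius
        delta = kappa1 * (b - a) ** kappa2
        x_f = (a * fb - b * fa) / (fb - fa)                     # interpolation (regula falsi)
        sigma = 1 if x_mid > x_f else -1
        x_t = x_f + sigma * delta if delta <= abs(x_mid - x_f) else x_mid   # truncation
        x_itp = x_t if abs(x_t - x_mid) <= r else x_mid - sigma * r         # projection
        y = f(x_itp)
        if y > 0:
            b, fb = x_itp, y
        elif y < 0:
            a, fa = x_itp, y
        else:
            return x_itp
        j += 1
    return (a + b) / 2

# Example: itp(lambda x: x**2 - 2, 1.0, 2.0) -> ~1.41421356
```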

Interpolation

Many root-finding processes work by interpolation. This consists of using the last computed approximate values of the root to approximate the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.

Two values allow interpolating a function by a polynomial of degree one (that is approximating the graph of the function by a line). This is the basis of the secant method. Three values define a quadratic function, which approximates the graph of the function by a parabola. This is Muller's method.
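
A sketch of Muller's method in Python (the secant method gets its own sketch in its section below); the starting points and tolerance are illustrative, and the complex square root lets the iteration leave the real line:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Muller's method: fit a parabola through the last three iterates and
    take its root closest to x2 (may step into the complex plane)."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)              # curvature of the parabola
        b = d2 + h2 * a                        # slope of the parabola at x2
        disc = cmath.sqrt(b * b - 4 * f2 * a)
        # pick the denominator of larger magnitude for numerical stability
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * f2 / denom               # root of the parabola nearest x2
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Example: muller(lambda x: x**2 + 1, 0.5, 1.0, 1.5) converges to one of the
# complex roots +/-1j, which no purely real method can reach.
```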

Regula falsi is also an interpolation method, which differs from the secant method by using, for interpolating by a line, two points that are not necessarily the last two computed points.

Iterative methods

Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root to obtain a new approximation. The iteration stops when a fixed point (up to the desired precision) of the auxiliary function is reached, that is, when the newly computed value is sufficiently close to the preceding ones.

Newton's method (and similar derivative-based methods)

Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its convergence is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are the Householder's methods. The first one after Newton's method is Halley's method, with cubic order of convergence.
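
A minimal Python sketch of the Newton iteration (function names and tolerance are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent at the current iterate down to its
    x-intercept; convergence is quadratic near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)     # Newton step: f(x) / f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a starting point closer to a root")

# Example: newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0) -> ~1.41421356
```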

Secant method

Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6, the golden ratio). A generalization of the secant method to higher dimensions is Broyden's method.
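
A Python sketch of the secant iteration (starting points and tolerance are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's method with the derivative replaced by the
    finite difference through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example: secant(lambda x: x**2 - 2, 1.0, 2.0) -> ~1.41421356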

Steffensen's method

If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence and whose behavior (both good and bad) is essentially the same as that of Newton's method, but which does not require a derivative.
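
A sketch of the standard form of Steffensen's iteration (names and tolerance are illustrative):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: estimate the slope by g(x) = (f(x + f(x)) - f(x)) / f(x);
    since the step h = f(x) shrinks to 0 near the root, g(x) -> f'(x), giving
    quadratic convergence without an explicit derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        g = (f(x + fx) - fx) / fx      # difference quotient with step h = f(x)
        step = fx / g
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a starting point closer to a root")

# Example: steffensen(lambda x: x**2 - 2, 1.5) -> ~1.41421356
```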

Fixed point iteration method

We can use fixed-point iteration to find the root of a function. Given a function $f(x)$ which we have set to zero to find the root ($f(x) = 0$), we rewrite the equation in terms of $x$ so that $f(x) = 0$ becomes $x = g(x)$ (note, there are often many $g(x)$ functions for each $f(x) = 0$ function). Next, we relabel each side of the equation as $x_{n+1} = g(x_n)$ so that we can perform the iteration. Next, we pick a value for $x_1$ and perform the iteration until it converges towards a root of the function. If the iteration converges, it will converge to a root. The iteration will only converge if $|g'(\mathrm{root})| < 1$.

As an example of converting $f(x) = 0$ to $x = g(x)$, if given the function $f(x) = x^2 - x - 1$, we can rewrite it as any of the following equations.

$x_{n+1} = \frac{1}{x_n - 1}$,
$x_{n+1} = \frac{1}{x_n} + 1$,
$x_{n+1} = x_n^2 - 1$,
$x_{n+1} = \frac{x_n^2 + 1}{2x_n - 1}$, or
$x_{n+1} = \pm\sqrt{1 + x_n}$.
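
As a minimal Python sketch (names and tolerance are illustrative), the form $g(x) = 1 + 1/x$ from the list above converges to the golden ratio, the positive root of $x^2 - x - 1$, since $|g'(x)| = 1/x^2 < 1$ near that root:

```python
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

print(fixed_point(lambda x: 1 + 1 / x, 1.0))  # ~1.618033988749895
```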

Inverse interpolation

The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
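
A sketch of the update in Python (the helper name and starting points are illustrative); the Lagrange form below interpolates x as a quadratic in y and evaluates it at y = 0, so the three function values must be pairwise distinct:

```python
def inverse_quadratic(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Inverse quadratic interpolation: fit x as a quadratic in y through the
    last three points (Lagrange form) and evaluate it at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
            + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
            + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
        if abs(x3 - x2) < tol:
            return x3
        x0, f0, x1, f1, x2, f2 = x1, f1, x2, f2, x3, f(x3)
    return x2

# Example: inverse_quadratic(lambda x: x**2 - 2, 1.0, 1.5, 2.0) -> ~1.41421356
```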

Combinations of methods

Brent's method

Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
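
Because Brent's method is the usual library default, a usage sketch is more practical than a reimplementation. SciPy's brentq implements Brent's method and requires a sign change on the bracket; the test equation here is illustrative:

```python
from scipy.optimize import brentq  # requires SciPy

# f(2) = -1 < 0 and f(3) = 16 > 0, so [2, 3] brackets a root.
root = brentq(lambda x: x**3 - 2 * x - 5, 2, 3)
print(root)  # ~2.0945514815423265
```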

Ridders' method

Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence while guaranteeing at most twice the number of iterations of the bisection method.
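
A sketch of the commonly given form of the update (names and tolerances are illustrative); the exponential fit through the endpoints and midpoint yields the formula for x below, and the bracket is then rebuilt around the new point:

```python
import math

def ridders(f, a, b, tol=1e-12, max_iter=60):
    """Ridders' method: bracketing with an exponentially interpolated query point."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)       # > 0 since fa * fb < 0
        x = m + (m - a) * (1 if fa > fb else -1) * fm / s
        fx = f(x)
        if abs(fx) < tol or b - a < tol:
            return x
        # Rebuild the bracket from whichever pair straddles the root
        if fm * fx < 0:
            if m < x:
                a, fa, b, fb = m, fm, x, fx
            else:
                a, fa, b, fb = x, fx, m, fm
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

# Example: ridders(lambda x: x**2 - 2, 1.0, 2.0) -> ~1.41421356
```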

Roots of polynomials

Finding polynomial roots is a long-standing problem that has been the object of much research throughout history. A testament to this is that, up until the 19th century, algebra meant essentially the theory of polynomial equations.

Finding roots in higher dimensions

The bisection method has been generalized to higher dimensions; these methods are called generalized bisection methods.[2][3] At each iteration, the domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root. In one dimension, the criterion for the decision is that the function has opposite signs at the endpoints. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and that guarantees the existence of a root.

The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify, since it requires evaluating the function on the entire boundary of the rectangle.
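
A heuristic sketch of what that verification involves, for f = (f1, f2) on a planar rectangle; f1 and f2 are assumed to be NumPy-vectorized, and only one sign orientation is checked. Because it samples rather than bounds the boundary, a True result is evidence, not a certificate, while a False refutes this orientation:

```python
import numpy as np

def miranda_conditions_sampled(f1, f2, rect, n=500):
    """Sample the Poincare-Miranda conditions on the boundary of
    rect = ((a1, b1), (a2, b2)): f1 <= 0 on x1 = a1 and f1 >= 0 on x1 = b1,
    and similarly for f2 on the x2 faces."""
    (a1, b1), (a2, b2) = rect
    t1 = np.linspace(a1, b1, n)   # samples along the x1 direction
    t2 = np.linspace(a2, b2, n)   # samples along the x2 direction
    return (np.all(f1(a1, t2) <= 0) and np.all(f1(b1, t2) >= 0)
        and np.all(f2(t1, a2) <= 0) and np.all(f2(t1, b2) >= 0))
```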

Another criterion is given by a theorem of Kronecker.[4] It says that, if the topological degree of a function f on a rectangle is non-zero, then the rectangle must contain at least one root of f. This criterion is the basis for several root-finding methods, such as those of Stenger[5] and Kearfott.[6] However, computing the topological degree can be time-consuming.

A third criterion is based on a characteristic polyhedron. This criterion is used by a method called Characteristic Bisection.[2]: 19  It does not require computing the topological degree; it only requires computing the signs of function values. The number of required evaluations is at least $\log_2(D/\epsilon)$, where D is the length of the longest edge of the characteristic polyhedron.[7]: 11, Lemma 4.7  Note that [7] proves a lower bound on the number of evaluations, not an upper bound.

A fourth method uses an intermediate-value theorem on simplices.[8] Again, no upper bound on the number of queries is given.

See also

Broyden's method – Quasi-Newton root-finding method for the multivariable case

References

  1. ^ Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Chapter 9. Root Finding and Nonlinear Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
  2. ^ a b Mourrain, B.; Vrahatis, M. N.; Yakoubsohn, J. C. (2002-06-01). "On the Complexity of Isolating Real Roots and Computing with Certainty the Topological Degree". Journal of Complexity. 18 (2): 612–640. doi:10.1006/jcom.2001.0636. ISSN 0885-064X.
  3. ^ Vrahatis, Michael N. (2020). Sergeyev, Yaroslav D.; Kvasov, Dmitri E. (eds.). "Generalizations of the Intermediate Value Theorem for Approximating Fixed Points and Zeros of Continuous Functions". Numerical Computations: Theory and Algorithms. Lecture Notes in Computer Science. Cham: Springer International Publishing. 11974: 223–238. doi:10.1007/978-3-030-40616-5_17. ISBN 978-3-030-40616-5. S2CID 211160947.
  4. ^ "Iterative solution of nonlinear equations in several variables". Guide books. Retrieved 2023-04-16.
  5. ^ Stenger, Frank (1975-03-01). "Computing the topological degree of a mapping in Rn". Numerische Mathematik. 25 (1): 23–38. doi:10.1007/BF01419526. ISSN 0945-3245. S2CID 122196773.
  6. ^ Kearfott, Baker (1979-06-01). "An efficient degree-computation method for a generalized method of bisection". Numerische Mathematik. 32 (2): 109–127. doi:10.1007/BF01404868. ISSN 0029-599X. S2CID 122058552.
  7. ^ a b Vrahatis, M. N.; Iordanidis, K. I. (1986-03-01). "A rapid Generalized Method of Bisection for solving Systems of Non-linear Equations". Numerische Mathematik. 49 (2): 123–138. doi:10.1007/BF01389620. ISSN 0945-3245.
  8. ^ Vrahatis, Michael N. (2020-04-15). "Intermediate value theorem for simplices for simplicial approximation of fixed points and zeros". Topology and its Applications. 275: 107036. doi:10.1016/j.topol.2019.107036. ISSN 0166-8641.

Further reading

  • J.M. McNamee: "Numerical Methods for Roots of Polynomials - Part I", Elsevier (2007).
  • J.M. McNamee and Victor Pan: "Numerical Methods for Roots of Polynomials - Part II", Elsevier (2013).

