
Regula falsi

In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.

As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15. This is solved by false position.[1] First, guess that x = 4 to obtain, on the left, 4 + 4/4 = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + 12/4 = 15, verifying that the solution is x = 12.

Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast the approximation can be found.

Two historical types

Two basic types of false position method can be distinguished historically, simple false position and double false position.

Simple false position is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine x such that

ax = b

if a and b are known. The method begins by using a test input value x′, and finding the corresponding output value b′ by multiplication: ax′ = b′. The correct answer is then found by proportional adjustment, x = (b / b′) x′.

Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that

f (x) = ax + c = 0

if it is known that

f (x1) = b1,    f (x2) = b2

Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by[2]

x = (b1x2 − b2x1) / (b1 − b2)

would be memorized and carried out by rote. Indeed, the rule as given by Robert Recorde in his Ground of Artes (c. 1542) is:[2]

Gesse at this woorke as happe doth leade.
By chaunce to truthe you may procede.
And firste woorke by the question,
Although no truthe therein be don.
Suche falsehode is so good a grounde,
That truth by it will soone be founde.
From many bate to many mo,
From to fewe take to fewe also.
With to much ioyne to fewe againe,
To to fewe adde to manye plaine.
In crossewaies multiplye contrary kinde,
All truthe by falsehode for to fynde.

For an affine linear function,

f (x) = ax + c

double false position provides the exact solution, while for a nonlinear function f it provides an approximation that can be successively improved by iteration.

History

The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.[3][1]

Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術),[4] dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:[5]

Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.[6]

Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors (Kitāb al-khaṭāʾayn). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin.[7]

Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book Liber Abaci (AD 1202) to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources.[7] In 1494, Pacioli used the term el cataym in his book Summa de arithmetica, probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556.[8] Pacioli's term nearly disappeared in the 16th century European works and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". Regula Falsi appears as the Latinized version of Rule of False as early as 1690.[2]

Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth. For instance, in 1568 Humphrey Baker says:[2]

The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode, but that by fayned numbers taken at all aduentures, it teacheth to finde out the true number that is demaunded, and this (of all the vulgar Rules which are in practise) is ye most excellence.

Numerical analysis

The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.

Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated estimate, and regula falsi provides one of these.

Given an equation, move all of its terms to one side so that it has the form, f (x) = 0, where f is some function of the unknown variable x. A value c that satisfies this equation, that is, f (c) = 0, is called a root or zero of the function f and is a solution of the original equation. If f is a continuous function and there exist two points a0 and b0 such that f (a0) and f (b0) are of opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a0, b0).

There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed and one general class of methods is the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals [ak, bk], at the kth step, such that (ak, bk) contains a root of f.

Two-point bracketing methods

These methods start with two x-values, initially found by trial-and-error, at which f (x) has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f (x) has the sign opposite that of f (c). In the improbable case that f (c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.

The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.

Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root finding methods such as Newton's method or the secant method.

The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k, the current bracketing interval is [ak, bk], then the new solution estimate ck is obtained by,

ck = (ak + bk) / 2

This ensures that ck is between ak and bk, thereby guaranteeing convergence toward the solution.

Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations, the method gains approximately a factor of 2³ = 8, i.e. roughly a decimal place, in accuracy.

The regula falsi (false position) method

 
The first two iterations of the false position method. The red curve shows the function f and the blue lines are the secants.

The convergence rate of the bisection method could possibly be improved by using a different solution estimate.

The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.[9]

More precisely, suppose that in the k-th iteration the bracketing interval is (ak, bk). Construct the line through the points (ak, f (ak)) and (bk, f (bk)), as illustrated. This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by

y − f (bk) = [(f (bk) − f (ak)) / (bk − ak)] (x − bk)

Now choose ck to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain

f (bk) + [(f (bk) − f (ak)) / (bk − ak)] (ck − bk) = 0

Solving this equation for ck gives:

ck = bk − f (bk) (bk − ak) / (f (bk) − f (ak)) = (ak f (bk) − bk f (ak)) / (f (bk) − f (ak))

This last symmetrical form has a computational advantage:

As a solution is approached, ak and bk will be very close together, and nearly always of the same sign. Subtracting two nearly equal numbers of the same sign can lose significant digits. Because f (bk) and f (ak) are always of opposite sign, the "subtraction" in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).

At iteration number k, the number ck is calculated as above and then, if f (ak) and f (ck) have the same sign, set ak + 1 = ck and bk + 1 = bk, otherwise set ak + 1 = ak and bk + 1 = ck. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge.

The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.

Analysis

Since the initial end-points a0 and b0 are chosen such that f (a0) and f (b0) are of opposite signs, at each step one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f has the same sign as f ″) will remain fixed for all subsequent iterations while the converging endpoint is updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f ) = −sign(f ″)). As a consequence, the linear approximation to f (x), which is used to pick the false position, does not improve as rapidly as possible.

One example of this phenomenon is the function

f (x) = 2x³ − 4x² + 3x

on the initial bracket [−1, 1]. The left end, −1, is never replaced (it does not change at first, and after the first three iterations, f ″ is negative on the interval) and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3).[citation needed]

For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at x = 0 for the function given by f (x) = abs(x) − x² when x ≠ 0 and by f (0) = 5, starting with the interval [−0.5, 3.0]). It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at x = ±1 in

 

The method of bisection avoids this hypothetical convergence problem.

Improvements in regula falsi

Though regula falsi always converges, usually considerably faster than bisection, there are situations that can slow its convergence – sometimes to a prohibitive degree. That problem isn't unique to regula falsi: Other than bisection, all of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method diverge instead of converging – and often do so under the same conditions that slow regula falsi's convergence.

Still, regula falsi is one of the best methods, and even in its original un-improved version it would often be the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitutions have failed to converge.

Regula falsi's failure mode is easy to detect: the same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to regula falsi have been proposed; two of them, the Illinois algorithm and the Anderson–Björck algorithm, are described below.

The Illinois algorithm

The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f (ck)) has the same sign as the previous one (f (ck − 1)), meaning that the end point of the previous step will be retained. Hence:

ck = (ak f (bk) − bk · ½ f (ak)) / (f (bk) − ½ f (ak))

or

ck = (ak · ½ f (bk) − bk f (ak)) / (½ f (bk) − f (ak))

down-weighting one of the endpoint values to force the next ck to occur on that side of the function.[10] The factor ½ used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.[11]

The above adjustment to regula falsi is called the Illinois algorithm by some scholars.[10][12] Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.[11]

Anderson–Björck algorithm

Suppose that in the k-th iteration the bracketing interval is [ak, bk] and that the functional value of the new calculated estimate ck has the same sign as f (bk). In this case, the new bracketing interval [ak + 1, bk + 1] = [ak, ck] and the left-hand endpoint has been retained. (So far, that's the same as ordinary Regula Falsi and the Illinois algorithm.)

But, whereas the Illinois algorithm would multiply f (ak) by 1/2, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:[13]

m = 1 − f (ck) / f (bk)   if that value is positive;   otherwise   m = ½

For simple roots, Anderson–Björck performs very well in practice.[14]

ITP method

Given ε > 0, κ1 ∈ (0, ∞), κ2 ∈ [1, 1 + ϕ) and n0 ∈ [0, ∞), where ϕ is the golden ratio (1 + √5)/2, in each iteration j the ITP method calculates the point x_ITP following three steps:

  1. [Interpolation Step] Calculate the bisection and the regula falsi points: x½ = (a + b)/2 and x_f = (b f (a) − a f (b)) / (f (a) − f (b));
  2. [Truncation Step] Perturb the estimator towards the center: x_t = x_f + σδ, where σ = sign(x½ − x_f) and δ = min{κ1 |b − a|^κ2, |x½ − x_f|};
  3. [Projection Step] Project the estimator to the minmax interval: x_ITP = x½ − σρ, where ρ = min{ε 2^(n½ + n0 − j) − (b − a)/2, |x_t − x½|} and n½ = ⌈log2((b − a)/(2ε))⌉ for the initial interval.

The value of the function f on this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it has been observed to outperform both bisection and interpolation-based methods under smooth and non-smooth functions.[15]

Practical considerations

When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods—when they're at their best and don't have a problem—bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration – gaining roughly a decimal place of accuracy with every 3 iterations.

For manual calculation, by calculator, one tends to want to use faster methods, which usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation, to the desired accuracy, so rapidly that there's no need to try to save time by using a less reliable method—and every method is less reliable than bisection.

An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.

Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to regula falsi, maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.

When the change in y has become very small, and x is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.

Example: Growth of a bulrush

In chapter 7 of The Nine Chapters, a root finding problem can be translated to modern language as follows:

Excess And Deficit Problem #11:

  • A bulrush grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth.
  • A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth.
  • Find the time [in fractional days] that the club-rush becomes as tall as the bulrush.

Answer: 2 6/13 days; the height is 4 11/13 units.

Explanation:

  • Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units.
  • Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units. ∎
 
Plot of function F, its exact root (point K), and the approximated root

To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3, ...) with geometric series.

 Bulrush: 3(1 + 1/2 + 1/4 + ⋯ + (1/2)^(n−1))
 Club-rush: 1 + 2 + 4 + ⋯ + 2^(n−1)

For the sake of better notation, let k = 2^n. Rewrite the plant height series in terms of k and invoke the geometric sum formula.

 Bulrush: 3 · (1 − (1/2)^n) / (1 − 1/2) = 6(1 − 1/k) = 6 − 6/k
 Club-rush: (2^n − 1) / (2 − 1) = k − 1

Now, use regula falsi to find the root of

F(n) = (k − 1) − (6 − 6/k) = 2^n + 6 · 2^(−n) − 7

Set x1 = 2 and compute b1 = F(2), which equals −1.5 (the "deficit").
Set x2 = 3 and compute b2 = F(3), which equals 1.75 (the "excess").

Estimated root (1st iteration):

x = (b1x2 − b2x1) / (b1 − b2) = ((−1.5)(3) − (1.75)(2)) / (−1.5 − 1.75) = 32/13 = 2 6/13 days

Example code

This example program, written in the C programming language, is an example of the Illinois algorithm. To find the positive number x where cos(x) = x³, the equation is transformed into a root-finding form f (x) = cos(x) − x³ = 0.

#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

/* a,b: endpoints of an interval where we search
   e: half of upper bound for relative error
   m: maximal number of iterations */
double FalsiMethod(double (*f)(double), double a, double b, double e, int m)
{
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n < m; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) < e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb > 0) {
            /* fc and fb have same sign, copy c to b */
            b = c; fb = fc;
            if (side == -1)
                fa /= 2;
            side = -1;
        } else if (fa * fc > 0) {
            /* fc and fa have same sign, copy c to a */
            a = c; fa = fc;
            if (side == +1)
                fb /= 2;
            side = +1;
        } else {
            /* fc * f_ very small (looks like zero) */
            break;
        }
    }
    return c;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(&f, 0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614.

References

  1. ^ a b Katz, Victor J. (1998), A History of Mathematics (2nd ed.), Addison Wesley Longman, p. 15, ISBN 978-0-321-01618-8
  2. ^ a b c d Smith, D. E. (1958) [1925], History of Mathematics, vol. II, Dover, pp. 437–441, ISBN 978-0-486-20430-7
  3. ^ Chabert, Jean-Luc, ed. (2012) [1999]. "3. Methods of False Position". A History of Algorithms: From the Pebble to the Microchip. Springer. pp. 86–91. ISBN 978-3-642-18192-4.
  4. ^ Needham, Joseph (1959). Mathematics and the Sciences of the Heavens and the Earth. Science and Civilisation in China. Vol. 3. Cambridge University Press. pp. 147–. ISBN 978-0-521-05801-8.
  5. ^ "Nine chapters". www-groups.dcs.st-and.ac.uk. Retrieved 2019-02-16.
  6. ^ Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 358. ISBN 978-7-03-006101-0.
  7. ^ a b Schwartz, R. K. (2004). Issues in the Origin and Development of Hisab al-Khata'ayn (Calculation by Double False Position). Eighth North African Meeting on the History of Arab Mathematics. Radès, Tunisia. Available online at: http://facstaff.uindy.edu/~oaks/Biblio/COMHISMA8paper.doc (PDF archived from the original on 2014-05-16; retrieved 2012-06-08).
  8. ^ General Trattato, vol. I, Venice, 1556, p. fol. 238, v, Regola Helcataym (vocabulo Arabo) che in nostra lingua vuol dire delle false Positioni
  9. ^ Conte, S.D.; Boor, Carl de (1965). Elementary Numerical Analysis: an algorithmic approach (2nd ed.). McGraw-Hill. p. 40. OCLC 1088854304.
  10. ^ a b Dahlquist, Germund; Björck, Åke (2003) [1974]. Numerical Methods. Dover. pp. 231–232. ISBN 978-0486428079.
  11. ^ a b Ford, J. A. (1995), Improved Algorithms of Illinois-type for the Numerical Solution of Nonlinear Equations, Technical Report, University of Essex Press, CiteSeerX 10.1.1.53.8676, CSM-257
  12. ^ Dowell, M.; Jarratt, P. (1971). "A modified regula falsi method for computing the root of an equation". BIT. 11 (2): 168–174. doi:10.1007/BF01934364. S2CID 50473598.
  13. ^ King, Richard F. (October 1983). "Anderson-Bjorck for Linear Sequences". Mathematics of Computation. 41 (164): 591–596. doi:10.2307/2007695. JSTOR 2007695.
  14. ^ Galdino, Sérgio (2011). "A family of regula falsi root-finding methods". Proceedings of 2011 World Congress on Engineering and Technology. 1. Retrieved 9 September 2016.
  15. ^ Oliveira, I. F. D.; Takahashi, R. H. C. (2020-12-06). "An Enhancement of the Bisection Method Average Performance Preserving Minmax Optimality". ACM Transactions on Mathematical Software. 47 (1): 5:1–5:24. doi:10.1145/3423597. ISSN 0098-3500. S2CID 230586635.

Further reading

  • Burden, Richard L.; Faires, J. Douglas (2000). Numerical Analysis (7th ed.). Brooks/Cole. ISBN 0-534-38216-9.
  • Sigler, L.E. (2002). Fibonacci's Liber Abaci, Leonardo Pisano's Book of Calculation. Springer-Verlag. ISBN 0-387-40737-5.
  • Roberts, A.M. (2020). "Mathematical Philology in the Treatise on Double False Position in an Arabic Manuscript at Columbia University". Philological Encounters. 5 (3–4): 3–4. doi:10.1163/24519197-BJA10007. S2CID 229538951. (On a previously unpublished treatise on Double False Position in a medieval Arabic manuscript.)

regula, falsi, mathematics, regula, falsi, method, false, position, false, position, method, very, method, solving, equation, with, unknown, this, method, modified, form, still, simple, terms, method, trial, error, technique, using, test, false, values, variab. In mathematics the regula falsi method of false position or false position method is a very old method for solving an equation with one unknown this method in modified form is still in use In simple terms the method is the trial and error technique of using test false values for the variable and then adjusting the test value according to the outcome This is sometimes also referred to as guess and check Versions of the method predate the advent of algebra and the use of equations As an example consider problem 26 in the Rhind papyrus which asks for a solution of written in modern notation the equation x x 4 15 This is solved by false position 1 First guess that x 4 to obtain on the left 4 4 4 5 This guess is a good choice since it produces an integer value However 4 is not the solution of the original equation as it gives a value which is three times too small To compensate multiply x currently set to 4 by 3 and substitute again to get 12 12 4 15 verifying that the solution is x 12 Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained and if it can how fast can the approximation be found Contents 1 Two historical types 2 History 3 Numerical analysis 3 1 Two point bracketing methods 3 2 The regula falsi false position method 4 Analysis 5 Improvements in regula falsi 5 1 The Illinois algorithm 5 2 Anderson Bjorck algorithm 5 3 ITP method 6 Practical considerations 7 Example Growth of a bulrush 8 Example code 9 See also 10 References 11 Further readingTwo historical types editTwo basic types of false position method can be distinguished historically simple false position and double 
false position Simple false position is aimed at solving problems involving direct proportion Such problems can be written algebraically in the form determine x such that a x b displaystyle ax b nbsp if a and b are known The method begins by using a test input value x and finding the corresponding output value b by multiplication ax b The correct answer is then found by proportional adjustment x b b x Double false position is aimed at solving more difficult problems that can be written algebraically in the form determine x such that f x a x c 0 displaystyle f x ax c 0 nbsp if it is known that f x 1 b 1 f x 2 b 2 displaystyle f x 1 b 1 qquad f x 2 b 2 nbsp Double false position is mathematically equivalent to linear interpolation By using a pair of test inputs and the corresponding pair of outputs the result of this algorithm given by 2 x b 1 x 2 b 2 x 1 b 1 b 2 displaystyle x frac b 1 x 2 b 2 x 1 b 1 b 2 nbsp would be memorized and carried out by rote Indeed the rule as given by Robert Recorde in his Ground of Artes c 1542 is 2 Gesse at this woorke as happe doth leade By chaunce to truthe you may procede And firste woorke by the question Although no truthe therein be don Suche falsehode is so good a grounde That truth by it will soone be founde From many bate to many mo From to fewe take to fewe also With to much ioyne to fewe againe To to fewe adde to manye plaine In crossewaies multiplye contrary kinde All truthe by falsehode for to fynde For an affine linear function f x a x c displaystyle f x ax c nbsp double false position provides the exact solution while for a nonlinear function f it provides an approximation that can be successively improved by iteration History editThe simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics and in papyri from ancient Egyptian mathematics 3 1 Double false position arose in late antiquity as a purely arithmetical algorithm In the ancient Chinese mathematical text called The Nine 
Chapters on the Mathematical Art 九章算術 4 dated from 200 BC to AD 100 most of Chapter 7 was devoted to the algorithm There the procedure was justified by concrete arithmetical arguments then applied creatively to a wide variety of story problems including one involving what we would call secant lines on a conic section A more typical example is this joint purchase problem involving an excess and deficit condition 5 Now an item is purchased jointly everyone contributes 8 coins the excess is 3 everyone contributes 7 the deficit is 4 Tell The number of people the item price what is each Answer 7 people item price 53 6 Between the 9th and 10th centuries the Egyptian mathematician Abu Kamil wrote a now lost treatise on the use of double false position known as the Book of the Two Errors Kitab al khaṭaʾayn The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa 10th century an Arab mathematician from Baalbek Lebanon He justified the technique by a formal Euclidean style geometric proof Within the tradition of medieval Muslim mathematics double false position was known as hisab al khaṭaʾayn reckoning by two errors It was used for centuries to solve practical problems such as commercial and juridical questions estate partitions according to rules of Quranic inheritance as well as purely recreational problems The algorithm was often memorized with the aid of mnemonics such as a verse attributed to Ibn al Yasamin and balance scale diagrams explained by al Hassar and Ibn al Banna all three being mathematicians of Moroccan origin 7 Leonardo of Pisa Fibonacci devoted Chapter 13 of his book Liber Abaci AD 1202 to explaining and demonstrating the uses of double false position terming the method regulis elchatayn after the al khaṭaʾayn method that he had learned from Arab sources 7 In 1494 Pacioli used the term el cataym in his book Summa de arithmetica probably taking the term from Fibonacci Other European writers would follow Pacioli 
and sometimes provided a translation into Latin or the vernacular For instance Tartaglia translates the Latinized version of Pacioli s term into the vernacular false positions in 1556 8 Pacioli s term nearly disappeared in the 16th century European works and the technique went by various names such as Rule of False Rule of Position and Rule of False Position Regula Falsi appears as the Latinized version of Rule of False as early as 1690 2 Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth For instance in 1568 Humphrey Baker says 2 The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode but that by fayned numbers taken at all aduentures it teacheth to finde out the true number that is demaunded and this of all the vulgar Rules which are in practise is ye most excellence Numerical analysis editThe method of false position provides an exact solution for linear functions but more direct algebraic techniques have supplanted its use for these functions However in numerical analysis double false position became a root finding algorithm used in iterative numerical approximation techniques Many equations including most of the more complicated ones can be solved only by iterative numerical approximation This consists of trial and error in which various values of the unknown quantity are tried That trial and error may be guided by calculating at each step of the procedure a new estimate for the solution There are many ways to arrive at a calculated estimate and regula falsi provides one of these Given an equation move all of its terms to one side so that it has the form f x 0 where f is some function of the unknown variable x A value c that satisfies this equation that is f c 0 is called a root or zero of the function f and is a solution of the original equation If f is a continuous function and there exist two points a0 and b0 such that f a0 and f b0 are of 
opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a0, b0).

There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances, and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed, and one general class of methods is the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals [a_k, b_k], at the k-th step, such that (a_k, b_k) contains a root of f.

Two-point bracketing methods

These methods start with two x-values, initially found by trial and error, at which f(x) has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values; that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f(x) has the sign opposite that of f(c). In the improbable case that f(c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.

The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.

Preserving the bracketing, and ensuring that the solution estimates lie in the interior of the bracketing intervals, guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root-finding methods such as Newton's method or the secant method.

The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k the current bracketing
interval is [a_k, b_k], then the new solution estimate c_k is obtained by

    c_k = (a_k + b_k) / 2

This ensures that c_k is between a_k and b_k, thereby guaranteeing convergence toward the solution.

Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations, the method gains approximately a factor of 2^3 (i.e. roughly a decimal place) in accuracy.

The regula falsi (false position) method

[Figure: the first two iterations of the false position method; the red curve shows the function f and the blue lines are the secants.]

The convergence rate of the bisection method could possibly be improved by using a different solution estimate. The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval, and then using the classical double false position formula on that line segment.[9]

More precisely, suppose that in the k-th iteration the bracketing interval is (a_k, b_k). Construct the line through the points (a_k, f(a_k)) and (b_k, f(b_k)), as illustrated. This line is a secant, or chord, of the graph of the function f. In point-slope form, its equation is given by

    y - f(b_k) = ((f(b_k) - f(a_k)) / (b_k - a_k)) * (x - b_k)

Now choose c_k to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain

    f(b_k) + ((f(b_k) - f(a_k)) / (b_k - a_k)) * (c_k - b_k) = 0

Solving this equation for c_k gives

    c_k = b_k - f(b_k) * (b_k - a_k) / (f(b_k) - f(a_k))
        = (a_k * f(b_k) - b_k * f(a_k)) / (f(b_k) - f(a_k))

This last, symmetrical form has a computational advantage.
As a solution is approached, a_k and b_k will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because f(b_k) and f(a_k) are always of opposite sign, the subtraction in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).

At iteration number k, the number c_k is calculated as above, and then, if f(a_k) and f(c_k) have the same sign, set a_{k+1} = c_k and b_{k+1} = b_k; otherwise, set a_{k+1} = a_k and b_{k+1} = c_k. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge.

The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.

Analysis

Since the initial end-points a0 and b0 are chosen such that f(a0) and f(b0) are of opposite signs, at each step one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f has the same sign as the second derivative) will remain fixed for all subsequent iterations, while the converging endpoint becomes updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f) = -sign(f'')). As a consequence, the linear approximation to f(x), which is used to pick the false position, does not improve as rapidly as possible.

One example of this phenomenon is the function

    f(x) = 2x^3 - 4x^2 + 3x

on the initial bracket [-1, 1]. The left end, -1, is never replaced (it does not change at first, and after the first three iterations, f'' is negative on the interval), and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear
rate; the number of accurate digits grows linearly, with a rate of convergence of 2/3.[citation needed]

For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at x = 0 for the function given by f(x) = abs(x) - x^2 when x ≠ 0 and by f(0) = 5, starting with the interval [-0.5, 3.0]). It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice, since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at x = ±1 in

    f(x) = 1/(x - 1)^2 + 1/(x + 1)^2

The method of bisection avoids this hypothetical convergence problem.

Improvements in regula falsi

Though regula falsi always converges, usually considerably faster than bisection, there are situations that can slow its convergence, sometimes to a prohibitive degree. That problem isn't unique to regula falsi: other than bisection, all of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method diverge instead of converging, and often do so under the same conditions that slow regula falsi's convergence. Still, regula falsi is one of the best methods, and even in its original, unimproved version it would often be the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitution have failed to converge.

Regula falsi's failure mode is easy to detect: the same end-point is retained twice in a row. The problem is easily
remedied by picking instead a "modified false position", chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to regula falsi have been proposed; two of them, the Illinois algorithm and the Anderson–Björck algorithm, are described below.

The Illinois algorithm

The Illinois algorithm halves the y-value of the retained end point in the next estimate computation, when the new y-value (that is, f(c_k)) has the same sign as the previous one (f(c_{k-1})), meaning that the end point of the previous step will be retained. Hence:

    c_k = ((1/2) f(b_k) a_k - f(a_k) b_k) / ((1/2) f(b_k) - f(a_k))

or

    c_k = (f(b_k) a_k - (1/2) f(a_k) b_k) / (f(b_k) - (1/2) f(a_k)),

down-weighting one of the endpoint values to force the next c_k to occur on that side of the function.[10] The factor 1/2 used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.[11]

The above adjustment to regula falsi is called the Illinois algorithm by some scholars.[10][12] Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.[11]

Anderson–Björck algorithm

Suppose that in the k-th iteration the bracketing interval is [a_k, b_k], and that the functional value of the new calculated estimate c_k has the same sign as f(b_k). In this case, the new bracketing interval [a_{k+1}, b_{k+1}] = [a_k, c_k], and the left-hand endpoint has been retained. (So far, that's the same as ordinary regula falsi and the Illinois algorithm.)

But, whereas the Illinois algorithm would multiply f(a_k) by 1/2, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:[13]

    m = 1 - f(c_k)/f(b_k)    if that quantity is positive,
    m = 1/2                  otherwise.

For simple roots, Anderson–Björck performs very well in practice.[14]

ITP method

Main article: ITP method

Given κ1 ∈ (0, ∞), κ2 ∈ [1, 1 + φ), where φ is the golden ratio (1/2)(1 + √5), n_{1/2} ≡ ⌈log2((b0 - a0)/(2ε))⌉, and n0 ∈ [0, ∞), in each iteration j = 0, 1, 2, … the ITP method calculates the point x_ITP in three steps:

    Interpolation step: calculate the bisection point x_{1/2} ≡ (a + b)/2 and the regula falsi point x_f ≡ (b f(a) - a f(b)) / (f(a) - f(b)).
    Truncation step: perturb the estimator towards the center: x_t ≡ x_f + σδ, where σ ≡ sign(x_{1/2} - x_f) and δ ≡ min(κ1 |b - a|^{κ2}, |x_{1/2} - x_f|).
    Projection step: project the estimator to the minmax interval: x_ITP ≡ x_{1/2} - σρ_k, where ρ_k ≡ min(ε 2^{n_{1/2} + n0 - j} - (b - a)/2, |x_t - x_{1/2}|).

The value of the function f(x_ITP) on this point is queried, and the interval is then reduced to bracket the root, by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it is observed to outperform both bisection- and interpolation-based methods under smooth and non-smooth functions.[15]
Practical considerations

When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods, when they're at their best and don't have a problem, bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration, gaining roughly a decimal place of accuracy with every 3 iterations.

For manual calculation, by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation to the desired accuracy so rapidly that there's no need to try to save time by using a less reliable method, and every method is less reliable than bisection.

An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.

Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to regula falsi, maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.

When the change in y has become very small, and x is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.

Example: Growth of a bulrush

In chapter 7 of The Nine Chapters, a root-finding problem can be translated to modern language as follows:

Excess And Deficit Problem #11: A bulrush grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth. A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth. Find the time [in fractional days] that the club
rush becomes as tall as the bulrush.

Answer: 2 6/13 days; the height is 4 8/10 6/130 units.

Explanation: Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units. Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units.

[Figure: plot of the function F, its exact root point K, and the approximated root.]

To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3, …) after a geometric series:

    B(n) = Σ_{i=1}^{n} 3 · (1/2)^{i-1}    (bulrush)
    C(n) = Σ_{i=1}^{n} 1 · 2^{i-1}        (club-rush)

For the sake of better notation, let k = i - 1. Rewrite the plant-height series B(n), C(n) in terms of k and invoke the sum formula:

    B(n) = Σ_{k=0}^{n-1} 3 · (1/2)^k = 3 · (1 - (1/2)^n) / (1 - 1/2) = 6 (1 - (1/2)^n)
    C(n) = Σ_{k=0}^{n-1} 2^k = (1 - 2^n) / (1 - 2) = 2^n - 1

Now use regula falsi to find the root of C(n) - B(n):

    F(n) = C(n) - B(n) = 2^n + 6/2^n - 7

Set x1 = 2 and compute F(x1) = F(2), which equals -1.5 (the deficit). Set x2 = 3 and compute F(x2) = F(3), which equals 1.75 (the excess). Estimated root, 1st iteration:

    x̂ = (x1 F(x2) - x2 F(x1)) / (F(x2) - F(x1)) = (2 × 1.75 - 3 × (-1.5)) / (1.75 - (-1.5)) ≈ 2.4615

Example code

This example program, written in the C programming language, is an example of the Illinois algorithm. To find the positive number x where cos(x) = x^3, the equation is transformed into a root-finding form f(x) = cos(x) - x^3 = 0.

#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

/* a, b:  endpoints
   of an interval where we search,
   e:     half of upper bound for relative error,
   m:     maximal number of iterations */
double FalsiMethod(double (*f)(double), double a, double b, double e, int m)
{
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n < m; n++)
    {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) < e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb > 0)
        {
            /* fc and fb have same sign, copy c to b */
            b = c; fb = fc;
            if (side == -1)
                fa /= 2;
            side = -1;
        }
        else if (fa * fc > 0)
        {
            /* fc and fa have same sign, copy c to a */
            a = c; fa = fc;
            if (side == +1)
                fb /= 2;
            side = +1;
        }
        else
        {
            /* fc * f_ very small (looks like zero) */
            break;
        }
    }
    return c;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(&f, 0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614.

See also

ITP method: a variation with guaranteed minmax and superlinear convergence.
Ridders' method: another root-finding method based on the false position method.
Brent's method

References

[1] Katz, Victor J. (1998). A History of Mathematics (2nd ed.). Addison Wesley Longman. p. 15. ISBN 978-0-321-01618-8.
[2] Smith, D. E. (1958) [1925]. History of Mathematics. Vol. II. Dover. pp. 437–441. ISBN 978-0-486-20430-7.
[3] Chabert, Jean-Luc, ed. (2012) [1999]. "3. Methods of False Position". A History of Algorithms: From the Pebble to the Microchip. Springer. pp. 86–91. ISBN 978-3-642-18192-4.
[4] Needham, Joseph (1959). Mathematics and the Sciences of the Heavens and the Earth. Science and Civilisation in China. Vol. 3. Cambridge University Press. p. 147. ISBN 978-0-521-05801-8.
[5] "Nine chapters". www-groups.dcs.st-and.ac.uk. Retrieved 2019-02-16.
[6] Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 358. ISBN 978-7-03-006101-0.
[7] Schwartz, R. K. (2004). Issues in the Origin and Development of Hisab al-Khata'ayn (Calculation by Double False Position). Eighth North African Meeting on the History of Arab Mathematics. Rades, Tunisia. Archived from the original
(PDF) on 2014-05-16. Retrieved 2012-06-08.
[8] Tartaglia, Niccolò. General Trattato. Vol. I. Venice, 1556. p. fol. 238v: "Regola Helcataym (vocabulo Arabo) che in nostra lingua vuol dire delle false Positioni" ("Rule of Helcataym, an Arabic word, which in our language means of the false positions").
[9] Conte, S. D.; de Boor, Carl (1965). Elementary Numerical Analysis: An Algorithmic Approach (2nd ed.). McGraw-Hill. p. 40. OCLC 1088854304.
[10] Dahlquist, Germund; Björck, Åke (2003) [1974]. Numerical Methods. Dover. pp. 231–232. ISBN 978-0486428079.
[11] Ford, J. A. (1995). Improved Algorithms of Illinois-type for the Numerical Solution of Nonlinear Equations. Technical Report CSM-257. University of Essex Press. CiteSeerX 10.1.1.53.8676.
[12] Dowell, M.; Jarratt, P. (1971). "A modified regula falsi method for computing the root of an equation". BIT. 11 (2): 168–174. doi:10.1007/BF01934364. S2CID 50473598.
[13] King, Richard F. (October 1983). "Anderson–Björck for Linear Sequences". Mathematics of Computation. 41 (164): 591–596. doi:10.2307/2007695. JSTOR 2007695.
[14] Galdino, Sérgio (2011). "A family of regula falsi root-finding methods". Proceedings of 2011 World Congress on Engineering and Technology. 1. Retrieved 9 September 2016.
[15] Oliveira, I. F. D.; Takahashi, R. H. C. (2020-12-06). "An Enhancement of the Bisection Method Average Performance Preserving Minmax Optimality". ACM Transactions on Mathematical Software. 47 (1): 5:1–5:24. doi:10.1145/3423597. ISSN 0098-3500. S2CID 230586635.

Further reading

Burden, Richard L.; Faires, J. Douglas (2000). Numerical Analysis (7th ed.). Brooks/Cole. ISBN 0-534-38216-9.
Sigler, L. E. (2002). Fibonacci's Liber Abaci: Leonardo Pisano's Book of Calculation. Springer-Verlag. ISBN 0-387-40737-5.
Roberts, A. M. (2020). "Mathematical Philology in the Treatise on Double False Position in an Arabic Manuscript at Columbia University". Philological Encounters. 5 (3–4). doi:10.1163/24519197-BJA10007. S2CID 229538951. (On a previously unpublished treatise on Double False Position in a medieval Arabic manuscript.)

Retrieved from https://en.wikipedia.org/w/index.php?title=Regula_falsi&oldid=1183280567