Miller–Rabin primality test

The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.

It is of historical significance in the search for a polynomial-time deterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known.

Gary L. Miller discovered the test in 1976. Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis.[1] Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980.[2][a]

Mathematical concepts

Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing.

Strong probable primes

The property is the following. For a given odd integer n > 2, let us write n − 1 as 2^s·d, where s is a positive integer and d is an odd positive integer. Consider an integer a, called a base, which is coprime to n. Then, n is said to be a strong probable prime to base a if one of these congruence relations holds:

  • a^d ≡ 1 (mod n);
  • a^(2^r·d) ≡ −1 (mod n) for some 0 ≤ r < s.

This simplifies to first checking for a^d mod n = 1 and then a^(2^r·d) mod n = n − 1 for successive values of r. For each value of r, the value of the expression may be calculated using the value obtained for the previous value of r by squaring under the modulus of n.
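For concreteness, this check can be sketched in Python as follows; the sketch and its function name are illustrative (not from the article), and the built-in pow(a, d, n) performs modular exponentiation:

def is_strong_probable_prime(n, a):
    # Decide whether n is a strong probable prime to base a.
    # Assumes n is an odd integer > 2 and 1 < a < n - 1.
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)           # a^d mod n
    if x == 1 or x == n - 1:   # a^d is congruent to 1 or -1 modulo n
        return True
    for _ in range(s - 1):     # a^(2^r * d) for r = 1, ..., s - 1
        x = pow(x, 2, n)       # square under the modulus of n
        if x == n - 1:
            return True
    return False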

The idea behind this test is that when n is an odd prime, it passes the test because of two facts:

  • by Fermat's little theorem, a^(n−1) ≡ 1 (mod n) (this property alone defines the weaker notion of probable prime to base a, on which the Fermat test is based);
  • the only square roots of 1 modulo n are 1 and −1.

Hence, by contraposition, if n is not a strong probable prime to base a, then n is definitely composite, and a is called a witness for the compositeness of n.

However, this property is not an exact characterization of prime numbers. If n is composite, it may nonetheless be a strong probable prime to base a, in which case it is called a strong pseudoprime, and a is a strong liar.

Choices of bases

Thankfully, no composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test for which Fermat pseudoprimes to all bases exist: the Carmichael numbers). However no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this (see section Miller test below).

Another solution is to pick a base at random. This yields a fast probabilistic test. When n is composite, most bases are witnesses, so the test will detect n as composite with a reasonably high probability (see section Accuracy below). We can quickly reduce the probability of a false positive to an arbitrarily small rate, by combining the outcome of as many independently chosen bases as necessary to achieve the said rate. This is the Miller–Rabin test. The number of bases to try does not depend on n. There seems to be diminishing returns in trying many bases, because if n is a pseudoprime to some base, then it seems more likely to be a pseudoprime to another base.[4]: §8 

Note that a^d ≡ 1 (mod n) holds trivially for a ≡ 1 (mod n), because the congruence relation is compatible with exponentiation. And a^d = a^(2^0·d) ≡ −1 (mod n) holds trivially for a ≡ −1 (mod n) since d is odd, for the same reason. That is why random a are usually chosen in the interval 1 < a < n − 1.

For testing arbitrarily large n, choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, ..., n − 2.[b]

However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum. This maximum is generally quite large compared to the bases. This gives very fast deterministic tests for small enough n (see section Testing against small sets of bases below).

Proofs

Here is a proof that, if n is a prime, then the only square roots of 1 modulo n are 1 and −1.

Proof

Certainly 1 and −1, when squared modulo n, always yield 1. It remains to show that there are no other square roots of 1 modulo n. This is a special case, here applied with the polynomial X^2 − 1 over the finite field Z/nZ, of the more general fact that a polynomial over some field has no more roots than its degree (this theorem follows from the existence of an Euclidean division for polynomials). Here follows a more elementary proof. Suppose that x is a square root of 1 modulo n. Then:

(x − 1)(x + 1) = x^2 − 1 ≡ 0 (mod n)

In other words, n divides the product (x − 1)(x + 1). By Euclid's lemma, since n is prime, it divides one of the factors x − 1 or x + 1, implying that x is congruent to either 1 or −1 modulo n.

Here is a proof that, if n is an odd prime, then it is a strong probable prime to base a.

Proof

If n is an odd prime and we write n − 1 = 2^s·d, where s is a positive integer and d is an odd positive integer, then by Fermat's little theorem:

a^(2^s·d) = a^(n−1) ≡ 1 (mod n)

Each term of the sequence a^(2^s·d), a^(2^(s−1)·d), …, a^(2·d), a^d is a square root of the previous term. Since the first term is congruent to 1, the second term is a square root of 1 modulo n. By the previous lemma, it is congruent to either 1 or −1 modulo n. If it is congruent to −1, we are done. Otherwise, it is congruent to 1 and we can iterate the reasoning. At the end, either one of the terms is congruent to −1, or all of them are congruent to 1, and in particular the last term, a^d, is.

Example

Suppose we wish to determine if n = 221 is prime. We write n − 1 = 220 = 2^2 × 55, so that we have s = 2 and d = 55. We randomly select a number a such that 2 ≤ a ≤ n − 2.

Say a = 174:

a^(2^0·d) mod n = 174^55 mod 221 = 47. Since 47 ≠ 1 and 47 ≠ n − 1, we continue.
a^(2^1·d) mod n = 174^110 mod 221 = 220 = n − 1.

Since 220 ≡ −1 (mod n), either 221 is prime, or 174 is a strong liar for 221. We try another random a, this time choosing a = 137:

a^(2^0·d) mod n = 137^55 mod 221 = 188. Since 188 ≠ 1 and 188 ≠ n − 1, we continue.
a^(2^1·d) mod n = 137^110 mod 221 = 205 ≠ n − 1.

Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 in a later section shows how these calculations can sometimes produce a factor of n.

For a practical guide to choosing the value of a see Testing against small sets of bases.

Miller–Rabin test

The algorithm can be written in pseudocode as follows. The parameter k determines the accuracy of the test. The greater the number of rounds, the more accurate the result.[6]

Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: “composite” if n is found to be composite, “probably prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d   # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)   # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then   # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “probably prime”
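A direct Python rendering of this pseudocode might look like the following sketch (illustrative, not part of the article; it assumes n is odd and greater than 3):

import random

def miller_rabin(n, k):
    # Probabilistic Miller–Rabin test: returns "composite" or "probably prime".
    d, s = n - 1, 0
    while d % 2 == 0:                      # factor out powers of 2 from n - 1
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)     # random base in [2, n - 2]
        x = pow(a, d, n)                   # a^d mod n
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:   # nontrivial square root of 1
                return "composite"
            x = y
        if y != 1:                         # a^(n-1) mod n != 1: fails the Fermat test
            return "composite"
    return "probably prime"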

Complexity

Using repeated squaring, the running time of this algorithm is O(k log^3 n), where n is the number tested for primality, and k is the number of rounds performed; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication (Harvey–Hoeven algorithm) can decrease the running time to O(k log^2 n · log log n) = Õ(k log^2 n).

Accuracy

The error made by the primality test is measured by the probability that a composite number is declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1/4 of the bases a are strong liars for n.[2][7] As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most 4^(−k).

This is an improvement over the Solovay–Strassen test, whose worst-case error bound is 2^(−k). Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper.

In addition, for large values of n, the probability for a composite number to be declared probably prime is often significantly smaller than 4^(−k). For instance, for most numbers n, this probability is bounded by 8^(−k); the proportion of numbers n which invalidate this upper bound vanishes as we consider larger values of n.[8] Hence the average case has a much better accuracy than 4^(−k), a fact which can be exploited for generating probable primes (see below). However, such improved error bounds should not be relied upon to verify primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test.[c] In such contexts, only the worst-case error bound of 4^(−k) can be relied upon.

The above error measure is the probability for a composite number to be declared as a strong probable prime after k rounds of testing; in mathematical words, it is the conditional probability Pr(MR_k | ¬P), where P is the event that the number being tested is prime, and MR_k is the event that it passes the Miller–Rabin test with k rounds. We are often interested instead in the inverse conditional probability Pr(¬P | MR_k): the probability that a number which has been declared as a strong probable prime is in fact composite. These two probabilities are related by Bayes' law:

Pr(¬P | MR_k) = Pr(¬P ∧ MR_k) / (Pr(¬P ∧ MR_k) + Pr(P ∧ MR_k))
             = 1 / (1 + (Pr(MR_k | P) / Pr(MR_k | ¬P)) · (Pr(P) / Pr(¬P)))
             = 1 / (1 + (1 / Pr(MR_k | ¬P)) · (Pr(P) / (1 − Pr(P))))

In the last equation, we simplified the expression using the fact that all prime numbers are correctly reported as strong probable primes (the test has no false negatives). By dropping the left part of the denominator, we derive a simple upper bound:

Pr(¬P | MR_k) < Pr(MR_k | ¬P) · (1 / Pr(P) − 1)

Hence this conditional probability is related not only to the error measure discussed above (which is bounded by 4^(−k)) but also to the probability distribution of the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary, thus unknown, so we cannot deduce much about Pr(¬P | MR_k). However, in the case when we use the Miller–Rabin test to generate primes (see below), the distribution is chosen by the generator itself, so we can exploit this result.
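As a purely hypothetical illustration (these numbers are not from the article): if the input were drawn uniformly at random among odd 1024-bit integers, then Pr(P) ≈ 2/(1024 ln 2) ≈ 1/355, so the bound above gives Pr(¬P | MR_k) < 4^(−k) × 354, which is below 3.4 × 10^(−4) for k = 10 and below 3 × 10^(−22) for k = 40.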

Deterministic variants

Miller test

The Miller–Rabin algorithm can be made deterministic by trying all possible values of a below a certain limit. Taking n as the limit would imply O(n) trials, hence the running time would be exponential with respect to the size log n of the input. To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.

If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*, which means that if we test all a from a set which generates (Z/nZ)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness of n. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than O((ln n)^2), which was already noted by Miller.[1] The constant involved in the Big O notation was reduced to 2 by Eric Bach.[10] This leads to the following primality testing algorithm, known as the Miller test, which is deterministic assuming the GRH:

Input: n > 2, an odd integer to be tested for primality
Output: “composite” if n is composite, “prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d   # by factoring out powers of 2 from n − 1
for all a in the range [2, min(n − 2, ⌊2(ln n)^2⌋)]:
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then   # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “prime”
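A Python rendering of this deterministic variant might look like the following sketch (again illustrative; its correctness for all n rests on the GRH, as explained above):

import math

def miller_test(n):
    # Deterministic Miller test (conditional on the GRH). Assumes n > 2 is odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    limit = min(n - 2, math.floor(2 * math.log(n) ** 2))
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:   # nontrivial square root of 1
                return "composite"
            x = y
        if y != 1:
            return "composite"
    return "prime"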

The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.[7]

The running time of the algorithm is, in the soft-O notation, Õ((log n)^4) (using FFT-based multiplication).

The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or the Baillie–PSW primality test gives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such as APR-CL and ECPP which give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.

Testing against small sets of bases

When the number n to be tested is small, trying all a < 2(ln n)^2 is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge and Wagstaff[4] and Jaeschke[11] have verified that

  • if n < 2,047, it is enough to test a = 2;
  • if n < 1,373,653, it is enough to test a = 2 and 3;
  • if n < 9,080,191, it is enough to test a = 31 and 73;
  • if n < 25,326,001, it is enough to test a = 2, 3, and 5;
  • if n < 3,215,031,751, it is enough to test a = 2, 3, 5, and 7;
  • if n < 4,759,123,141, it is enough to test a = 2, 7, and 61;
  • if n < 1,122,004,669,633, it is enough to test a = 2, 13, 23, and 1662803;
  • if n < 2,152,302,898,747, it is enough to test a = 2, 3, 5, 7, and 11;
  • if n < 3,474,749,660,383, it is enough to test a = 2, 3, 5, 7, 11, and 13;
  • if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.

Using the work of Feitsma and Galway enumerating all base 2 pseudoprimes in 2010, this was extended (see OEIS A014233), with the first result later shown using different methods in Jiang and Deng:[12]

  • if n < 3,825,123,056,546,413,051, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, and 23.
  • if n < 18,446,744,073,709,551,616 = 2^64, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.

Sorenson and Webster[13] verify the above and calculate precise results beyond 64 bits:

  • if n < 318,665,857,834,031,151,167,461, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.
  • if n < 3,317,044,064,679,887,385,961,981, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41.

Other criteria of this sort, often more efficient (fewer bases required) than those shown above, exist.[14][15][16][17] They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
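As an illustration of how such base sets are used, here is a Python sketch of an unconditional deterministic test for n below 3,317,044,064,679,887,385,961,981 (in particular for every 64-bit n), relying on the last result above; the function names are illustrative:

def is_strong_probable_prime(n, a):
    # Single-base strong probable prime check (n odd, n > 2).
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_prime_deterministic(n):
    # Deterministic for n < 3,317,044,064,679,887,385,961,981 (Sorenson–Webster bases).
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41)
    if n < 2:
        return False
    if n in bases:
        return True
    if n % 2 == 0:
        return False
    return all(is_strong_probable_prime(n, a) for a in bases)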

There is a small list of potential witnesses for every possible input size (at most b values for b-bit numbers). However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least (ln n)^(1/(3 ln ln ln n)).[18] They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order Θ(log n · log log n).

Variants for finding factors

By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of n instead of merely determining that n is composite. This occurs for example when n is a probable prime to base a but not a strong probable prime to base a.[19]: 1402 

If x is a nontrivial square root of 1 modulo n,

  • since x^2 ≡ 1 (mod n), we know that n divides x^2 − 1 = (x − 1)(x + 1);
  • since x ≢ ±1 (mod n), we know that n does not divide x − 1 nor x + 1.

From this we deduce that A = gcd(x − 1, n) and B = gcd(x + 1, n) are nontrivial (not necessarily prime) factors of n (in fact, since n is odd, these factors are coprime and n = AB). Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, where the step that differs from the basic algorithm returns a factor instead of merely reporting compositeness:

Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: (“multiple of”, m) if a nontrivial factor m of n is found, “composite” if n is otherwise found to be composite, “probably prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d   # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)   # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then   # nontrivial square root of 1 modulo n
            return (“multiple of”, gcd(x − 1, n))
        x ← y
    if y ≠ 1 then
        return “composite”
return “probably prime”

This is not a probabilistic factorization algorithm, because it is only able to find factors for numbers n which are pseudoprime to base a (in other words, for numbers n such that a^(n−1) ≡ 1 (mod n)). For other numbers, the algorithm only returns “composite” with no further information.

For example, consider n = 341 and a = 2. We have n − 1 = 85 × 4. Then 2^85 mod 341 = 32 and 32^2 mod 341 = 1. This tells us that n is a pseudoprime base 2, but not a strong pseudoprime base 2. By computing a gcd at this stage, we find a factor of 341: gcd(32 − 1, 341) = 31. Indeed, 341 = 11 × 31.
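A few lines of Python reproduce this computation (a sketch; math.gcd is the standard-library greatest common divisor):

import math

n, a = 341, 2
d, s = 85, 2                   # n - 1 = 340 = 2^2 * 85
x = pow(a, d, n)               # 2^85 mod 341 = 32
y = pow(x, 2, n)               # 32^2 mod 341 = 1, a nontrivial square root of 1
print(math.gcd(x - 1, n))      # 31
print(math.gcd(x + 1, n))      # 11, since 341 = 11 * 31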

In order to find factors more often, the same ideas can also be applied to the square roots of −1 (or any other number). This strategy can be implemented by exploiting knowledge from previous rounds of the Miller–Rabin test. In those rounds we may have identified a square root modulo n of −1, say R. Then, when x^2 mod n = n − 1, we can compare the value of x against R: if x is neither R nor n − R, then gcd(x − R, n) and gcd(x + R, n) are nontrivial factors of n.[14]

Generation of probable primes

The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminates almost surely (since at each iteration there is a chance to draw a prime number). The pseudocode for generating b-bit strong probable primes (with the most significant bit set) is as follows:

Input #1: b, the number of bits of the result
Input #2: k, the number of rounds of testing to perform
Output: a strong probable prime n

while True:
    pick a random odd integer n in the range [2^(b−1), 2^b − 1]
    if the Miller–Rabin test with inputs n and k returns “probably prime” then
        return n
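For concreteness, here is a Python sketch of this generator, reusing a Miller–Rabin routine like the one sketched earlier (names are illustrative):

import random

def miller_rabin_is_probable_prime(n, k):
    # True if n passes k rounds of the Miller–Rabin test (n odd, n > 3).
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_probable_prime(b, k):
    # Draw random odd b-bit integers (most significant bit set) until one passes.
    # Assumes b >= 3.
    while True:
        n = (1 << (b - 1)) | random.getrandbits(b - 1) | 1
        if miller_rabin_is_probable_prime(n, k):
            return n

For example, generate_probable_prime(1024, 40) would return a 1024-bit strong probable prime.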

Complexity

Of course the worst-case running time is infinite, since the outer loop may never terminate, but that happens with probability zero. As per the geometric distribution, the expected number of draws is 1/Pr(MR_k) (reusing notations from earlier).

As any prime number passes the test, the probability of being prime gives a coarse lower bound to the probability of passing the test. If we draw odd integers uniformly in the range [2^(b−1), 2^b − 1], then we get:

Pr(MR_k) > Pr(P) = (π(2^b) − π(2^(b−1))) / 2^(b−2)

where π is the prime-counting function. Using an asymptotic expansion of π (an extension of the prime number theorem), we can approximate this probability when b grows towards infinity. We find:

Pr(P) = (2 / ln 2) · b^(−1) + O(b^(−3))
1 / Pr(P) = (ln 2 / 2) · b + O(b^(−1))

Hence we can expect the generator to run no more Miller–Rabin tests than a number proportional to b. Taking into account the worst-case complexity of each Miller–Rabin test (see earlier), the expected running time of the generator with inputs b and k is then bounded by O(k·b^4) (or Õ(k·b^3) using FFT-based multiplication).

Accuracy

The error measure of this generator is the probability that it outputs a composite number.

Using the relation between conditional probabilities (shown in an earlier section) and the asymptotic behavior of Pr(P) (shown just before), this error measure can be given a coarse upper bound:

Pr(¬P | MR_k) < Pr(MR_k | ¬P) · (1 / Pr(P) − 1) ≤ 4^(−k) · ((ln 2 / 2) · b − 1 + O(b^(−1)))

Hence, for large enough b, this error measure is less than (ln 2 / 2) · 4^(−k) · b. However, much better bounds exist.

Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4^(−k) (see earlier), Damgård, Landrock and Pomerance derived several error bounds for the generator, with various classes of parameters b and k.[8] These error bounds allow an implementor to choose a reasonable k for a desired accuracy.

One of these error bounds is 4^(−k), which holds for all b ≥ 2 (the authors only showed it for b ≥ 51, while Ronald Burthe Jr. completed the proof with the remaining values 2 ≤ b ≤ 50[20]). Again this simple bound can be improved for large values of b. For instance, another bound derived by the same authors is:

(1/7) · b^(15/4) · 2^(−b/2) · 4^(−k)

which holds for all b ≥ 21 and k ≥ b/4. This bound is smaller than 4^(−k) as soon as b ≥ 32.
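As a hypothetical illustration of how such a bound can guide the choice of k, the following Python sketch finds the smallest k satisfying the bound above for a desired error target (the function name and default target are illustrative):

import math

def rounds_for_error(b, target_log2=-128):
    # Smallest k >= b/4 (the bound's validity condition, for b >= 21) such that
    # log2( (1/7) * b^(15/4) * 2^(-b/2) * 4^(-k) ) <= target_log2.
    k = math.ceil(b / 4)
    bound_log2 = math.log2(1 / 7) + 3.75 * math.log2(b) - b / 2 - 2 * k
    while bound_log2 > target_log2:
        k += 1
        bound_log2 -= 2          # each extra round divides the bound by 4
    return k

Because the bound is only valid for k ≥ b/4, this helper never returns fewer than ⌈b/4⌉ rounds.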

Notes

  1. ^ The Miller–Rabin test is often incorrectly said to have been discovered by M. M. Artjuhov as early as 1967; a reading of Artjuhov's paper[3] (particularly his Theorem E) shows that he actually discovered the Solovay–Strassen test.
  2. ^ For instance, in 1995, Arnault gives a 397-digit composite number for which all bases less than 307 are strong liars; this number was reported to be prime by the Maple isprime() function, because it implemented the Miller–Rabin test with the specific bases 2, 3, 5, 7 and 11.[5]
  3. ^ For instance, in 2018, Albrecht et al. were able to construct, for many cryptographic libraries such as OpenSSL and GNU GMP, composite numbers that these libraries declared prime, thus demonstrating that they were not implemented with an adversarial context in mind.[9]

References

  1. ^ a b Miller, Gary L. (1976), "Riemann's Hypothesis and Tests for Primality", Journal of Computer and System Sciences, 13 (3): 300–317, doi:10.1145/800116.803773, S2CID 10690396
  2. ^ a b Rabin, Michael O. (1980), "Probabilistic algorithm for testing primality", Journal of Number Theory, 12 (1): 128–138, doi:10.1016/0022-314X(80)90084-0
  3. ^ Artjuhov, M. M. (1966–1967), "Certain criteria for primality of numbers connected with the little Fermat theorem", Acta Arithmetica, 12: 355–364, MR 0213289
  4. ^ a b Carl Pomerance; John L. Selfridge; Samuel S. Wagstaff, Jr. (July 1980). "The pseudoprimes to 25·10^9" (PDF). Mathematics of Computation. 35 (151): 1003–1026. doi:10.1090/S0025-5718-1980-0572872-7.
  5. ^ F. Arnault (August 1995). "Constructing Carmichael Numbers Which Are Strong Pseudoprimes to Several Bases". Journal of Symbolic Computation. 20 (2): 151–161. doi:10.1006/jsco.1995.1042.
  6. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. "31". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 968–971. ISBN 0-262-03384-4.
  7. ^ a b Schoof, René (2004), "Four primality testing algorithms" (PDF), Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography, Cambridge University Press, ISBN 978-0-521-80854-5
  8. ^ a b Damgård, I.; Landrock, P. & Pomerance, C. (1993), "Average case error estimates for the strong probable prime test" (PDF), Mathematics of Computation, 61 (203): 177–194, Bibcode:1993MaCom..61..177D, doi:10.2307/2152945, JSTOR 2152945
  9. ^ Martin R. Albrecht; Jake Massimo; Kenneth G. Paterson; Juraj Somorovsky (15 October 2018). Prime and Prejudice: Primality Testing Under Adversarial Conditions (PDF). ACM SIGSAC Conference on Computer and Communications Security 2018. Toronto: Association for Computing Machinery. pp. 281–298. doi:10.1145/3243734.3243787.
  10. ^ Bach, Eric (1990), "Explicit bounds for primality testing and related problems", Mathematics of Computation, 55 (191): 355–380, Bibcode:1990MaCom..55..355B, doi:10.2307/2008811, JSTOR 2008811
  11. ^ Jaeschke, Gerhard (1993), "On strong pseudoprimes to several bases", Mathematics of Computation, 61 (204): 915–926, doi:10.2307/2153262, JSTOR 2153262
  12. ^ Jiang, Yupeng; Deng, Yingpu (2014). "Strong pseudoprimes to the first eight prime bases". Mathematics of Computation. 83 (290): 2915–2924. doi:10.1090/S0025-5718-2014-02830-5. S2CID 33599405.
  13. ^ Sorenson, Jonathan; Webster, Jonathan (2015). "Strong Pseudoprimes to Twelve Prime Bases". Mathematics of Computation. 86 (304): 985–1003. arXiv:1509.00864. Bibcode:2015arXiv150900864S. doi:10.1090/mcom/3134. S2CID 6955806.
  14. ^ a b Caldwell, Chris. "Finding primes & proving primality — 2.3: Strong probable-primality and a practical test". The Prime Pages. Retrieved February 24, 2019.
  15. ^ Zhang, Zhenxiang & Tang, Min (2003), "Finding strong pseudoprimes to several bases. II", Mathematics of Computation, 72 (244): 2085–2097, Bibcode:2003MaCom..72.2085Z, doi:10.1090/S0025-5718-03-01545-X
  16. ^ Sloane, N. J. A. (ed.). "Sequence A014233 (Smallest odd number for which Miller–Rabin primality test on bases <= n-th prime does not reveal compositeness)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
  17. ^ Izykowski, Wojciech. "Deterministic variants of the Miller–Rabin primality test". Retrieved February 24, 2019.
  18. ^ Alford, W. R.; Granville, A.; Pomerance, C. (1994), "On the difficulty of finding reliable witnesses", Algorithmic Number Theory (PDF), Lecture Notes in Computer Science, vol. 877, Springer-Verlag, pp. 1–16, doi:10.1007/3-540-58691-1_36, ISBN 978-3-540-58691-3
  19. ^ Robert Baillie; Samuel S. Wagstaff, Jr. (October 1980). "Lucas Pseudoprimes" (PDF). Mathematics of Computation. 35 (152): 1391–1417. doi:10.1090/S0025-5718-1980-0583518-6. MR 0583518.
  20. ^ Burthe Jr., Ronald J. (1996), "Further investigations with the strong probable prime test" (PDF), Mathematics of Computation, 65 (213): 373–381, Bibcode:1996MaCom..65..373B, doi:10.1090/S0025-5718-96-00695-3

External links

  • Weisstein, Eric W. "Rabin-Miller Strong Pseudoprime Test". MathWorld.
  • Interactive Online Implementation of the Deterministic Variant (stepping through the algorithm step-by-step)
  • Applet (German)
  • Miller–Rabin primality test in C#
  • Miller–Rabin primality test in JavaScript using arbitrary precision arithmetic
