Order statistic

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Figure: Probability density functions of the order statistics for a sample of size n = 5 from an exponential distribution with unit scale parameter.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 8,

the order statistics would be denoted

x_{(1)} = 3, \quad x_{(2)} = 6, \quad x_{(3)} = 8, \quad x_{(4)} = 9,

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.
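
To make the notation concrete, here is a minimal Python sketch (ours) that obtains the order statistics of the example sample simply by sorting:

```python
# Order statistics of the sample (6, 9, 3, 8) are obtained by sorting.
sample = [6, 9, 3, 8]
order_stats = sorted(sample)          # [3, 6, 8, 9]

# x_(1) = 3 is the sample minimum, x_(4) = 9 the sample maximum.
for i, x in enumerate(order_stats, start=1):
    print(f"x_({i}) = {x}")
```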

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

X_{(1)} = \min\{X_1, \ldots, X_n\},

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

X_{(n)} = \max\{X_1, \ldots, X_n\}.

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

\operatorname{Range}\{X_1, \ldots, X_n\} = X_{(n)} - X_{(1)}.

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m+1 for some integer m, then the sample median is X(m+1) and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, X(m) and X(m+1), and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
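
A small Python sketch (ours; function name is our own) of this odd/even rule:

```python
def sample_median(xs):
    """Median via order statistics: x_((m+1)) if n = 2m+1; otherwise the
    average of x_((m)) and x_((m+1)) when n = 2m (not itself an order statistic)."""
    s = sorted(xs)
    n = len(s)
    m, odd = divmod(n, 2)
    return s[m] if odd else 0.5 * (s[m - 1] + s[m])

print(sample_median([6, 9, 3, 8]))      # even n: (6 + 8) / 2 = 7.0
print(sample_median([6, 9, 3, 8, 5]))   # odd n: x_(3) = 6
```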

Probabilistic analysis

Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.

When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution F_X(x), the order statistics for that sample have cumulative distributions as follows[2] (where r specifies which order statistic):

F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [F_X(x)]^j [1 - F_X(x)]^{n-j};

the corresponding probability density function may be derived from this result, and is found to be

f_{X_{(r)}}(x) = \frac{n!}{(r-1)!(n-r)!} f_X(x) [F_X(x)]^{r-1} [1 - F_X(x)]^{n-r}.

Moreover, there are two special cases, which have CDFs that are easy to compute:

F_{X_{(n)}}(x) = \Pr(\max\{X_1, \ldots, X_n\} \le x) = [F_X(x)]^n,

F_{X_{(1)}}(x) = \Pr(\min\{X_1, \ldots, X_n\} \le x) = 1 - [1 - F_X(x)]^n.

Both can be derived by careful consideration of probabilities.
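
As a sanity check (our sketch, using NumPy/SciPy), the binomial-sum formula above can be compared with a Monte Carlo estimate for a standard normal sample; the parameters n, r and x are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm, binom

n, r, x = 5, 3, 0.2          # sample size, order statistic index, evaluation point
rng = np.random.default_rng(0)

# Empirical estimate of F_{X_(r)}(x) from many simulated samples.
samples = rng.standard_normal((100_000, n))
emp = np.mean(np.sort(samples, axis=1)[:, r - 1] <= x)

# Formula: sum_{j=r}^{n} C(n, j) F(x)^j (1 - F(x))^{n-j} = P(Binomial(n, F(x)) >= r).
p = norm.cdf(x)
exact = binom.sf(r - 1, n, p)

print(emp, exact)            # the two values should agree closely
```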

Probability distributions of order statistics

Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that X_1, X_2, \ldots, X_n is a random sample drawn from a continuous distribution with cdf F_X. Denoting U_i = F_X(X_i) we obtain the corresponding random sample U_1, \ldots, U_n from the standard uniform distribution. Note that the order statistics also satisfy U_{(i)} = F_X(X_{(i)}).

The probability density function of the order statistic U_{(k)} is equal to[3]

f_{U_{(k)}}(u) = \frac{n!}{(k-1)!(n-k)!} u^{k-1} (1-u)^{n-k},

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable:[3][4]

U_{(k)} \sim \operatorname{Beta}(k, n+1-k).

The proof of these statements is as follows. For U_{(k)} to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already O(du^2), so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals (0, u), (u, u + du) and (u + du, 1) respectively. This equals (refer to multinomial distribution for details)

\frac{n!}{(k-1)!(n-k)!} u^{k-1} \cdot du \cdot (1 - u - du)^{n-k}

and the result follows.

The mean of this distribution is k / (n + 1).
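
A quick simulation sketch (ours) illustrating that U_(k) follows Beta(k, n+1−k) with mean k/(n+1); n and k are arbitrary choices:

```python
import numpy as np
from scipy.stats import beta

n, k = 10, 3
rng = np.random.default_rng(1)

# k-th order statistic of many uniform samples of size n.
u_k = np.sort(rng.uniform(size=(200_000, n)), axis=1)[:, k - 1]

print(u_k.mean(), k / (n + 1))                              # empirical vs. exact mean
print(np.quantile(u_k, 0.9), beta.ppf(0.9, k, n + 1 - k))   # compare a quantile as well
```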

The joint distribution of the order statistics of the uniform distribution

Similarly, for i < j, the joint probability density function of the two order statistics U_{(i)} < U_{(j)} can be shown to be

f_{U_{(i)}, U_{(j)}}(u, v) = n! \, \frac{u^{i-1}}{(i-1)!} \, \frac{(v-u)^{j-i-1}}{(j-i-1)!} \, \frac{(1-v)^{n-j}}{(n-j)!},

which is (up to terms of higher order than O(du\,dv)) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0, u), (u, u + du), (u + du, v), (v, v + dv), (v + dv, 1) respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

f_{U_{(1)}, U_{(2)}, \ldots, U_{(n)}}(u_1, u_2, \ldots, u_n) = n!.

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region 0 < u_1 < \cdots < u_n < 1. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n with a sum not exceeding 0 < s < n/2 is bounded above by \sqrt{2sn}, which is thus invariant on the set of all (s, n) with constant product sn.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is, the distribution of U_{(n)} - U_{(1)}, i.e. maximum minus minimum. More generally, for n \ge k > j \ge 1, U_{(k)} - U_{(j)} also has a beta distribution:

U_{(k)} - U_{(j)} \sim \operatorname{Beta}(k - j, n - (k - j) + 1).

From these formulas we can derive the covariance between two order statistics:

\operatorname{Cov}(U_{(k)}, U_{(j)}) = \frac{j(n - k + 1)}{(n+1)^2 (n+2)}.

The formula follows from noting that

\operatorname{Var}(U_{(k)} - U_{(j)}) = \operatorname{Var}(U_{(k)}) + \operatorname{Var}(U_{(j)}) - 2\operatorname{Cov}(U_{(k)}, U_{(j)}) = \frac{k(n-k+1)}{(n+1)^2(n+2)} + \frac{j(n-j+1)}{(n+1)^2(n+2)} - 2\operatorname{Cov}(U_{(k)}, U_{(j)})

and comparing that with

\operatorname{Var}(U) = \frac{(k-j)(n - (k-j) + 1)}{(n+1)^2(n+2)},

where U \sim \operatorname{Beta}(k - j, n - (k - j) + 1), which is the actual distribution of the difference.
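
A brief numerical check (our sketch) of the covariance formula by simulation; n, j, k are arbitrary:

```python
import numpy as np

n, j, k = 8, 2, 6
rng = np.random.default_rng(2)

u = np.sort(rng.uniform(size=(500_000, n)), axis=1)
emp_cov = np.cov(u[:, k - 1], u[:, j - 1])[0, 1]

exact = j * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
print(emp_cov, exact)       # the two values should agree closely
```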

Order statistics sampled from an exponential distribution

For X_1, X_2, \ldots, X_n a random sample of size n from an exponential distribution with parameter λ, the order statistics X_{(i)} for i = 1, 2, 3, ..., n each have distribution

X_{(i)} \stackrel{d}{=} \frac{1}{\lambda} \left( \sum_{j=1}^{i} \frac{Z_j}{n - j + 1} \right),

where the Zj are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.[5][6]
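A simulation sketch (ours) of this representation: the ith exponential order statistic matches in distribution the weighted partial sum of independent standard exponentials. The values of n, i and λ are arbitrary choices.

```python
import numpy as np

n, i, lam = 6, 3, 2.0
rng = np.random.default_rng(3)
reps = 200_000

# Left-hand side: the i-th order statistic of n Exponential(lambda) draws.
x_i = np.sort(rng.exponential(scale=1 / lam, size=(reps, n)), axis=1)[:, i - 1]

# Right-hand side: (1/lambda) * sum_{j=1}^{i} Z_j / (n - j + 1), Z_j ~ Exp(1).
z = rng.exponential(size=(reps, i))
weights = 1.0 / (n - np.arange(1, i + 1) + 1)
rhs = (z * weights).sum(axis=1) / lam

print(x_i.mean(), rhs.mean())   # means should agree
print(x_i.var(), rhs.var())     # and so should variances
```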

Order statistics sampled from an Erlang distribution

The Laplace transforms of the order statistics of a sample from an Erlang distribution may be obtained via a path-counting method.[7]

The joint distribution of the order statistics of an absolutely continuous distribution

If F_X is absolutely continuous, it has a density such that dF_X(x) = f_X(x)\,dx, and we can use the substitutions

u = F_X(x)

and

du = f_X(x)\,dx

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

f_{X_{(k)}}(x) = \frac{n!}{(k-1)!(n-k)!} [F_X(x)]^{k-1} [1 - F_X(x)]^{n-k} f_X(x)

f_{X_{(j)}, X_{(k)}}(x, y) = \frac{n!}{(j-1)!(k-j-1)!(n-k)!} [F_X(x)]^{j-1} [F_X(y) - F_X(x)]^{k-1-j} [1 - F_X(y)]^{n-k} f_X(x) f_X(y), where x \le y

f_{X_{(1)}, \ldots, X_{(n)}}(x_1, \ldots, x_n) = n! \, f_X(x_1) \cdots f_X(x_n), where x_1 \le x_2 \le \cdots \le x_n

Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that this interval contains the population median exactly when 3 of the 6 observations fall below it, which happens with probability

\binom{6}{3} \left(\frac{1}{2}\right)^{6} = \frac{5}{16} \approx 31\%.

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median (when 2, 3 or 4 of the 6 observations fall below it) with probability

\left[\binom{6}{2} + \binom{6}{3} + \binom{6}{4}\right] \left(\frac{1}{2}\right)^{6} = \frac{25}{32} \approx 78\%.

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
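
These coverage probabilities are binomial tail sums; a small sketch (ours; the helper function is our own) computing them for n = 6:

```python
from math import comb

def coverage(n, lo, hi):
    """P(X_(lo) < population median < X_(hi)) for a continuous distribution:
    the interval covers the median iff between lo and hi-1 of the n
    observations fall below it, each independently with probability 1/2."""
    return sum(comb(n, j) for j in range(lo, hi)) / 2 ** n

print(coverage(6, 3, 4))   # (X_(3), X_(4)): 5/16  ~ 31%
print(coverage(6, 2, 5))   # (X_(2), X_(5)): 25/32 ~ 78%
print(coverage(6, 1, 6))   # (X_(1), X_(6)): 31/32 ~ 97%
```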

Large sample sizes

For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

U_{(\lceil np \rceil)} \sim AN\left( p, \frac{p(1-p)}{n} \right).

For a general distribution F with a continuous non-zero density at F^{-1}(p), a similar asymptotic normality applies:

X_{(\lceil np \rceil)} \sim AN\left( F^{-1}(p), \frac{p(1-p)}{n \, [f(F^{-1}(p))]^2} \right),

where f is the density function, and F^{-1} is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal 1946 paper.[8] Further research in the 1960s led to the Bahadur representation, which provides information about the error bounds. The convergence to the normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence.[9]
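
A simulation sketch (ours) comparing the finite-sample variance of a sample quantile with the asymptotic value p(1−p)/(n [f(F^{-1}(p))]^2) for a standard normal population; n and p are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm

n, p = 1_000, 0.75
rng = np.random.default_rng(4)

# ceil(np)-th order statistic of many standard normal samples of size n.
q_hat = np.sort(rng.standard_normal((20_000, n)), axis=1)[:, int(np.ceil(n * p)) - 1]

asymptotic_var = p * (1 - p) / (n * norm.pdf(norm.ppf(p)) ** 2)
print(q_hat.var(), asymptotic_var)   # empirical vs. asymptotic variance
print(q_hat.mean(), norm.ppf(p))     # both centered near the true quantile
```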

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.

Proof

It can be shown that

B(k, n+1-k) \stackrel{d}{=} \frac{X}{X+Y},

where

X = \sum_{i=1}^{k} Z_i, \quad Y = \sum_{i=k+1}^{n+1} Z_i,

with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.

Application: Non-parametric density estimation

Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator.[10] Suppose we want to estimate the density f_X at the point x. Consider the random variables Y_i = |X_i - x|, which are i.i.d. with distribution function g_Y(y) = f_X(y + x) + f_X(x - y). In particular, f_X(x) = \frac{g_Y(0)}{2}.

The expected value of the first order statistic Y_{(1)} given a sample of N total observations is

E(Y_{(1)}) = \frac{1}{(N+1)\, g(0)} + \frac{1}{(N+1)(N+2)} \int_0^1 Q''(z)\, \delta_{N+1}(z)\, dz,

where Q is the quantile function associated with the distribution g_Y, and \delta_N(z) = (N+1)(1-z)^N. This equation in combination with a jackknifing technique becomes the basis for the following density estimation algorithm:

Input: A sample of N observations; points of density evaluation x_\ell, \ell = 1, \ldots, M; tuning parameter a \in (0, 1) (usually 1/3).
Output: \hat f_\ell, \ell = 1, \ldots, M: estimated density at the points of evaluation.

1: Set m_N = round(N^{1-a})
2: Set s_N = N / m_N
3: Create an s_N \times m_N matrix M_{ij} which holds m_N subsets with s_N observations each.
4: Create a vector \hat f to hold the density evaluations.
5: for \ell = 1 to M do
6:   for k = 1 to m_N do
7:     Find the nearest distance d_{\ell k} to the current point x_\ell within the kth subset
8:   end for
9:   Compute the subset average of distances to x_\ell: d_\ell = \sum_{k=1}^{m_N} d_{\ell k} / m_N
10:  Compute the density estimate at x_\ell: \hat f_\ell = \frac{1}{2 (1 + s_N) d_\ell}
11: end for
12: return \hat f
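
A direct Python transcription of the algorithm above (our sketch; variable names follow the pseudocode, and the handling of the leftover observations when N is not a multiple of the subset size is our own choice):

```python
import numpy as np

def order_stat_density(sample, eval_points, a=1/3, rng=None):
    """Order-statistic (minimum local distance) density estimate at each evaluation point."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.permutation(np.asarray(sample, dtype=float))
    N = x.size
    m_N = int(round(N ** (1 - a)))          # number of subsets
    s_N = N // m_N                          # observations per subset (remainder dropped)
    subsets = x[: s_N * m_N].reshape(m_N, s_N)   # m_N subsets of s_N observations each

    f_hat = np.empty(len(eval_points))
    for l, pt in enumerate(eval_points):
        # Nearest distance to pt within each subset, then averaged over subsets.
        d_lk = np.abs(subsets - pt).min(axis=1)
        d_l = d_lk.mean()
        f_hat[l] = 1.0 / (2.0 * (1.0 + s_N) * d_l)
    return f_hat

# Usage: estimate a standard Cauchy density at a few points.
rng = np.random.default_rng(5)
data = rng.standard_cauchy(10_000)
pts = np.array([-2.0, 0.0, 2.0])
print(order_stat_density(data, pts, rng=rng))
print(1 / (np.pi * (1 + pts ** 2)))         # true Cauchy density for comparison
```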

In contrast to the bandwidth/length-based tuning parameters for histogram and kernel-based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel-based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR-based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.[11]

Dealing with discrete variables

Suppose X_1, X_2, \ldots, X_n are i.i.d. random variables from a discrete distribution with cumulative distribution function F(x) and probability mass function f(x). To find the probabilities of the kth order statistics, three values are first needed, namely

p_1 = P(X < x) = F(x) - f(x), \quad p_2 = P(X = x) = f(x), \quad p_3 = P(X > x) = 1 - F(x).

The cumulative distribution function of the kth order statistic can be computed by noting that

P(X_{(k)} \le x) = P(\text{there are at least } k \text{ observations less than or equal to } x) = P(\text{there are at most } n-k \text{ observations greater than } x) = \sum_{j=0}^{n-k} \binom{n}{j} p_3^j (p_1 + p_2)^{n-j}.

Similarly, P(X_{(k)} < x) is given by

P(X_{(k)} < x) = P(\text{there are at least } k \text{ observations less than } x) = P(\text{there are at most } n-k \text{ observations greater than or equal to } x) = \sum_{j=0}^{n-k} \binom{n}{j} (p_2 + p_3)^j p_1^{n-j}.

Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say

P(X_{(k)} = x) = P(X_{(k)} \le x) - P(X_{(k)} < x) = \sum_{j=0}^{n-k} \binom{n}{j} \left( p_3^j (p_1 + p_2)^{n-j} - (p_2 + p_3)^j p_1^{n-j} \right) = \sum_{j=0}^{n-k} \binom{n}{j} \left( (1 - F(x))^j F(x)^{n-j} - (1 - F(x) + f(x))^j (F(x) - f(x))^{n-j} \right).

Computing order statistics

Main articles: Selection algorithm and Sampling in order

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
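
For instance, a single order statistic can be obtained without fully sorting, e.g. with NumPy's partial sort (a sketch, ours):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.standard_normal(1_000_000)
k = 10  # we want the 10th smallest value, i.e. the 10th order statistic

# np.partition places the element at index k-1 in its sorted position
# in expected linear time, without sorting the whole array.
kth_smallest = np.partition(data, k - 1)[k - 1]

print(kth_smallest, np.sort(data)[k - 1])   # same value; the second uses a full O(n log n) sort
```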

See also

Rankit
Box plot
BRS-inequality
Concomitant (statistics)
Fisher–Tippett distribution
Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables
Bernstein polynomial
L-estimator – linear combinations of order statistics
Rank-size distribution
Selection algorithm

Examples of order statistics

Sample maximum and minimum
Quantile
Percentile
Decile
Quartile
Median

References

  1. ^ David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN 9780471722168.
  2. ^ Casella, George; Berger, Roger (2002). Statistical Inference (2nd ed.). Cengage Learning. p. 229. ISBN 9788131503942.
  3. ^ a b Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444.
  4. ^ Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m 'th order statistic from a random sample of size n from the uniform distribution (on (0,1)).
  5. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 2. Basic Distribution Theory", Order Statistics, Wiley Series in Probability and Statistics, p. 9, doi:10.1002/0471722162.ch2, ISBN 9780471722168
  6. ^ Rényi, Alfréd (1953). "On the theory of order statistics". Acta Mathematica Hungarica. 4 (3): 191–231. doi:10.1007/BF02127580.
  7. ^ Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
  8. ^ Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics. 17 (4): 377–408. doi:10.1214/aoms/1177730881. Retrieved February 26, 2015.
  9. ^ M. Cardone, A. Dytso and C. Rush, "Entropic Central Limit Theorem for Order Statistics," in IEEE Transactions on Information Theory, vol. 69, no. 4, pp. 2193-2205, April 2023, doi: 10.1109/TIT.2022.3219344.
  10. ^ Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics - Theory and Methods. 46 (1): 148–164. arXiv:1412.2851. doi:10.1080/03610926.2014.988260. S2CID 14334678.
  11. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 3. Expected Values and Moments", Order Statistics, Wiley Series in Probability and Statistics, p. 34, doi:10.1002/0471722162.ch3, ISBN 9780471722168

External links

Order statistics at PlanetMath. Retrieved Feb 02, 2005.
Weisstein, Eric W. "Order Statistic". MathWorld. Retrieved Feb 02, 2005.
C source: Dynamic Order Statistics
