Wishart distribution

In statistics, the Wishart distribution is a generalization to multiple dimensions of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928.[1]

Wishart
Notation: X ~ W_p(V, n)
Parameters: n > p − 1 degrees of freedom (real); V > 0 scale matrix (p × p, positive definite)
Support: X (p × p) positive definite matrix
PDF: f_X(x) = \frac{|x|^{(n-p-1)/2}\, e^{-\operatorname{tr}(V^{-1}x)/2}}{2^{np/2}\, |V|^{n/2}\, \Gamma_p(n/2)}
Mean: E[X] = nV
Mode: (n − p − 1)V for n ≥ p + 1
Variance: \operatorname{Var}(X_{ij}) = n\left(v_{ij}^2 + v_{ii} v_{jj}\right)
Entropy: see below
CF: \Theta \mapsto \left|I - 2i\Theta V\right|^{-n/2}

It is a family of probability distributions defined over symmetric, nonnegative-definite random matrices (i.e. matrix-valued random variables). In random matrix theory, the space of Wishart matrices is called the Wishart ensemble.

These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance-matrix of a multivariate-normal random-vector.[2]

Definition

Suppose G is a p × n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:

G_i = (g_i^1, \dots, g_i^p)^T \sim \mathcal{N}_p(0, V)

Then the Wishart distribution is the probability distribution of the p × p random matrix [3]

S = G G^T = \sum_{i=1}^{n} G_i G_i^T

known as the scatter matrix. One indicates that S has that probability distribution by writing

S \sim W_p(V, n)

The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.

If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
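
As a concrete illustration of this construction (a sketch in Python with NumPy; the variable names are chosen here for illustration), the following draws the n columns of G from N_p(0, V) and forms the scatter matrix S = GGᵀ, a single draw from W_p(V, n). With p = 1 and V = 1 the same construction reduces to a sum of n squared standard normals, i.e. a χ²_n draw, consistent with the remark above.

import numpy as np

rng = np.random.default_rng(0)

p, n = 3, 10                                  # dimension and degrees of freedom
V = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])               # scale matrix (positive definite)

# G is p x n: each column is an independent draw from N_p(0, V)
G = rng.multivariate_normal(np.zeros(p), V, size=n).T

# Scatter matrix S = G G^T = sum_i G_i G_i^T, one sample from W_p(V, n)
S = G @ G.T
print(S)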

Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices[citation needed] and in multidimensional Bayesian analysis.[4] It is also encountered in wireless communications, while analyzing the performance of Rayleigh fading MIMO wireless channels.[5]

Probability density function

The Wishart distribution can be characterized by its probability density function as follows:

Let X be a p × p symmetric matrix of random variables that is positive semi-definite. Let V be a (fixed) symmetric positive definite matrix of size p × p.

Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has the probability density function

f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{2^{np/2}\,\left|\mathbf{V}\right|^{n/2}\,\Gamma_p\left(\frac{n}{2}\right)} \left|\mathbf{x}\right|^{(n-p-1)/2} e^{-\frac{1}{2}\operatorname{tr}\left(\mathbf{V}^{-1}\mathbf{x}\right)}

where |x| is the determinant of x and Γ_p is the multivariate gamma function defined as

\Gamma_p\left(\frac{n}{2}\right) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(\frac{n}{2} - \frac{j-1}{2}\right)

The density above is not the joint density of all the p² elements of the random matrix X (such a p²-dimensional density does not exist because of the symmetry constraints X_{ij} = X_{ji}); it is rather the joint density of the p(p + 1)/2 elements X_{ij} for i ≤ j ([1], page 38). Also, the density formula above applies only to positive definite matrices x; for other matrices the density is equal to zero.
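
As a numerical sanity check on this density, the following sketch (assuming SciPy is available; the helper name wishart_logpdf is ours, not a library function) evaluates the log-density directly from the formula above, using scipy.special.multigammaln for ln Γ_p, and compares it with scipy.stats.wishart.logpdf.

import numpy as np
from scipy.special import multigammaln
from scipy.stats import wishart

def wishart_logpdf(x, V, n):
    """Log-density of W_p(V, n) at a positive definite matrix x, per the formula above."""
    p = V.shape[0]
    logdet_x = np.linalg.slogdet(x)[1]
    logdet_V = np.linalg.slogdet(V)[1]
    quad = np.trace(np.linalg.solve(V, x))            # tr(V^{-1} x)
    return (0.5 * (n - p - 1) * logdet_x
            - 0.5 * quad
            - 0.5 * n * p * np.log(2)
            - 0.5 * n * logdet_V
            - multigammaln(0.5 * n, p))

V = np.array([[2.0, 0.3],
              [0.3, 1.0]])
x = np.array([[5.0, 1.0],
              [1.0, 4.0]])
n = 7

print(wishart_logpdf(x, V, n))
print(wishart.logpdf(x, df=n, scale=V))               # should agree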

The joint-eigenvalue density for the eigenvalues λ_1, ..., λ_p ≥ 0 of a random matrix X ~ W_p(I, n) is[6][7]

c_{n,p}\, e^{-\frac{1}{2}\sum_i \lambda_i} \prod_{i} \lambda_i^{(n-p-1)/2} \prod_{i<j} \left(\lambda_i - \lambda_j\right)

where c_{n,p} is a constant.

In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices.[8]

Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ−1, where Σ is the covariance matrix.[9]: 135 

Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p.[citation needed]

The prior mean of W_p(V, n) is nV, suggesting that a reasonable choice for V would be n^{-1}Σ_0^{-1}, where Σ_0 is some prior guess for the covariance matrix.
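
A minimal sketch of the conjugate update described in this section, assuming zero-mean data and the W_p(V, n) parameterization used above (the function and variable names are illustrative, not from any particular library): with prior Ω ~ W_p(V, n) on the precision matrix and observations x_1, ..., x_N drawn from N_p(0, Ω^{-1}), the posterior is Ω ~ W_p((V^{-1} + Σ_i x_i x_iᵀ)^{-1}, n + N).

import numpy as np

def wishart_posterior(V, n, X):
    """Conjugate update for the precision matrix of a zero-mean multivariate normal.

    Prior:      Omega ~ W_p(V, n)
    Data:       rows of X are independent draws from N_p(0, Omega^{-1})
    Posterior:  Omega ~ W_p((V^{-1} + X^T X)^{-1}, n + N)
    """
    N = X.shape[0]
    scatter = X.T @ X                                  # sum_i x_i x_i^T
    V_post = np.linalg.inv(np.linalg.inv(V) + scatter)
    return V_post, n + N

# Example with the least-informative proper prior n = p and V = n^{-1} Sigma0^{-1}
rng = np.random.default_rng(1)
p = 2
Sigma0 = np.eye(p)                                     # prior guess for the covariance
V, n = np.linalg.inv(Sigma0) / p, p
X = rng.multivariate_normal(np.zeros(p), Sigma0, size=50)
V_post, n_post = wishart_posterior(V, n, X)
print(n_post * V_post)                                 # posterior mean of Omega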

Properties

Log-expectation

The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution:[9]: 693 

\operatorname{E}\left[\ln\left|\mathbf{X}\right|\right] = \psi_p\left(\frac{n}{2}\right) + p\ln 2 + \ln\left|\mathbf{V}\right|

where ψ_p is the multivariate digamma function (the derivative of the log of the multivariate gamma function).

Log-variance

The following variance computation could be of help in Bayesian statistics:

\operatorname{Var}\left[\ln\left|\mathbf{X}\right|\right] = \sum_{i=1}^{p} \psi_1\left(\frac{n+1-i}{2}\right)

where ψ_1 is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.
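
Both identities are easy to check numerically. The sketch below (assuming SciPy; multi_digamma is a hypothetical helper implementing ψ_p(a) = Σ_{j=1}^p ψ(a + (1 − j)/2)) compares the two closed forms with Monte Carlo estimates over Wishart samples.

import numpy as np
from scipy.special import psi, polygamma
from scipy.stats import wishart

def multi_digamma(a, p):
    # multivariate digamma: psi_p(a) = sum_{j=1}^p psi(a + (1 - j)/2)
    return sum(psi(a + (1 - j) / 2) for j in range(1, p + 1))

p, n = 3, 8
V = np.diag([2.0, 1.0, 0.5])

# Closed forms from this section
mean_logdet = multi_digamma(n / 2, p) + p * np.log(2) + np.linalg.slogdet(V)[1]
var_logdet = sum(polygamma(1, (n + 1 - i) / 2) for i in range(1, p + 1))

# Monte Carlo check (illustrative, not exact)
X = wishart.rvs(df=n, scale=V, size=200_000, random_state=0)
logdets = np.linalg.slogdet(X)[1]
print(mean_logdet, logdets.mean())
print(var_logdet, logdets.var())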

Entropy

The information entropy of the distribution has the following formula:[9]: 693 

\operatorname{H}\left[\mathbf{X}\right] = -\ln B(\mathbf{V}, n) - \frac{n-p-1}{2}\operatorname{E}\left[\ln\left|\mathbf{X}\right|\right] + \frac{np}{2}

where B(V, n) is the normalizing constant of the distribution:

B(\mathbf{V}, n) = \frac{1}{\left|\mathbf{V}\right|^{n/2}\, 2^{np/2}\, \Gamma_p\left(\frac{n}{2}\right)}

This can be expanded as follows:

\begin{aligned}
\operatorname{H}\left[\mathbf{X}\right] &= \frac{n}{2}\ln\left|\mathbf{V}\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\operatorname{E}\left[\ln\left|\mathbf{X}\right|\right] + \frac{np}{2} \\
&= \frac{n}{2}\ln\left|\mathbf{V}\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\left(\psi_p\left(\frac{n}{2}\right) + p\ln 2 + \ln\left|\mathbf{V}\right|\right) + \frac{np}{2} \\
&= \frac{n}{2}\ln\left|\mathbf{V}\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\psi_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\left(p\ln 2 + \ln\left|\mathbf{V}\right|\right) + \frac{np}{2} \\
&= \frac{p+1}{2}\ln\left|\mathbf{V}\right| + \frac{1}{2}p(p+1)\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\psi_p\left(\frac{n}{2}\right) + \frac{np}{2}
\end{aligned}
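
The expanded expression can be checked against a Monte Carlo estimate of −E[log f(X)]. The sketch below (assuming SciPy; wishart_entropy and multi_digamma are illustrative helpers, not library functions) implements the last line above.

import numpy as np
from scipy.special import multigammaln, psi
from scipy.stats import wishart

def multi_digamma(a, p):
    # multivariate digamma: psi_p(a) = sum_{j=1}^p psi(a + (1 - j)/2)
    return sum(psi(a + (1 - j) / 2) for j in range(1, p + 1))

def wishart_entropy(V, n):
    """Differential entropy of W_p(V, n), from the expanded formula above."""
    p = V.shape[0]
    logdet_V = np.linalg.slogdet(V)[1]
    return ((p + 1) / 2 * logdet_V
            + p * (p + 1) / 2 * np.log(2)
            + multigammaln(n / 2, p)
            - (n - p - 1) / 2 * multi_digamma(n / 2, p)
            + n * p / 2)

p, n = 2, 6
V = np.array([[1.0, 0.4],
              [0.4, 2.0]])

# Monte Carlo estimate of -E[log f(X)] for comparison
X = wishart.rvs(df=n, scale=V, size=20_000, random_state=1)
mc = -np.mean([wishart.logpdf(x, df=n, scale=V) for x in X])
print(wishart_entropy(V, n), mc)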

Cross-entropy

The cross-entropy of two Wishart distributions p_0 with parameters n_0, V_0 and p_1 with parameters n_1, V_1 is

\begin{aligned}
H(p_0, p_1) &= \operatorname{E}_{p_0}\left[-\log p_1\right] \\
&= \operatorname{E}_{p_0}\left[-\log \frac{\left|\mathbf{X}\right|^{(n_1-p_1-1)/2} e^{-\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{X}\right)/2}}{2^{n_1 p_1/2} \left|\mathbf{V}_1\right|^{n_1/2} \Gamma_{p_1}\left(\tfrac{n_1}{2}\right)}\right] \\
&= \tfrac{n_1 p_1}{2}\log 2 + \tfrac{n_1}{2}\log\left|\mathbf{V}_1\right| + \log\Gamma_{p_1}\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p_1-1}{2}\operatorname{E}_{p_0}\left[\log\left|\mathbf{X}\right|\right] + \tfrac{1}{2}\operatorname{E}_{p_0}\left[\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{X}\right)\right] \\
&= \tfrac{n_1 p_1}{2}\log 2 + \tfrac{n_1}{2}\log\left|\mathbf{V}_1\right| + \log\Gamma_{p_1}\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p_1-1}{2}\left(\psi_{p_0}\left(\tfrac{n_0}{2}\right) + p_0\log 2 + \log\left|\mathbf{V}_0\right|\right) + \tfrac{1}{2}\operatorname{tr}\left(\mathbf{V}_1^{-1} n_0 \mathbf{V}_0\right) \\
&= -\tfrac{n_1}{2}\log\left|\mathbf{V}_1^{-1}\mathbf{V}_0\right| + \tfrac{p_1+1}{2}\log\left|\mathbf{V}_0\right| + \tfrac{n_0}{2}\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{V}_0\right) + \log\Gamma_{p_1}\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p_1-1}{2}\psi_{p_0}\left(\tfrac{n_0}{2}\right) + \tfrac{n_1(p_1-p_0) + p_0(p_1+1)}{2}\log 2
\end{aligned}

Note that when p_0 = p_1, n_0 = n_1, and V_0 = V_1, we recover the entropy.

KL-divergence

The Kullback–Leibler divergence of p_1 from p_0 is

\begin{aligned}
D_{KL}(p_0 \| p_1) &= H(p_0, p_1) - H(p_0) \\
&= -\frac{n_1}{2}\log\left|\mathbf{V}_1^{-1}\mathbf{V}_0\right| + \frac{n_0}{2}\left(\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{V}_0\right) - p\right) + \log\frac{\Gamma_p\left(\frac{n_1}{2}\right)}{\Gamma_p\left(\frac{n_0}{2}\right)} + \frac{n_0 - n_1}{2}\psi_p\left(\frac{n_0}{2}\right)
\end{aligned}
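
The closed form translates directly into code. In the sketch below (assuming SciPy; wishart_kl and multi_digamma are illustrative helper names) the divergence is zero when the two parameter sets coincide, as expected.

import numpy as np
from scipy.special import multigammaln, psi

def multi_digamma(a, p):
    # multivariate digamma: psi_p(a) = sum_{j=1}^p psi(a + (1 - j)/2)
    return sum(psi(a + (1 - j) / 2) for j in range(1, p + 1))

def wishart_kl(V0, n0, V1, n1):
    """D_KL( W_p(V0, n0) || W_p(V1, n1) ), per the expression above."""
    p = V0.shape[0]
    M = np.linalg.solve(V1, V0)                       # V1^{-1} V0
    logdet_M = np.linalg.slogdet(M)[1]
    return (-n1 / 2 * logdet_M
            + n0 / 2 * (np.trace(M) - p)
            + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
            + (n0 - n1) / 2 * multi_digamma(n0 / 2, p))

V0 = np.array([[1.0, 0.2], [0.2, 0.8]])
V1 = np.array([[1.5, 0.0], [0.0, 1.0]])
print(wishart_kl(V0, 5, V1, 8))                       # positive
print(wishart_kl(V0, 5, V0, 5))                       # 0 (identical distributions)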

Characteristic function

The characteristic function of the Wishart distribution is

\Theta \mapsto \operatorname{E}\left[\exp\left(i\operatorname{tr}\left(\mathbf{X}\boldsymbol{\Theta}\right)\right)\right] = \left|\mathbf{1} - 2i\boldsymbol{\Theta}\mathbf{V}\right|^{-n/2}

where E[⋅] denotes expectation. (Here Θ is any matrix with the same dimensions as V, 1 indicates the identity matrix, and i is a square root of −1).[7] Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation.[10]
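
A quick Monte Carlo sanity check of this formula (an illustrative sketch assuming SciPy; Θ is chosen small enough here that the principal branch of the complex power is the correct one, so the branch subtlety mentioned above does not arise):

import numpy as np
from scipy.stats import wishart

p, n = 2, 5
V = np.array([[1.0, 0.2],
              [0.2, 0.5]])
Theta = np.array([[0.3, 0.1],
                  [0.1, 0.2]])                        # a small symmetric matrix

# Monte Carlo estimate of E[exp(i tr(X Theta))]
X = wishart.rvs(df=n, scale=V, size=200_000, random_state=0)
mc = np.exp(1j * np.einsum('kij,ji->k', X, Theta)).mean()

# Closed form |I - 2i Theta V|^{-n/2} (principal branch)
cf = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)
print(mc, cf)                                         # should be close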

Theorem

If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V (write X ~ W_p(V, m)), and C is a q × p matrix of rank q, then[11]

\mathbf{C}\mathbf{X}\mathbf{C}^T \sim W_q\left(\mathbf{C}\mathbf{V}\mathbf{C}^T, m\right)

Corollary 1

If z is a nonzero p × 1 constant vector, then:[11]

\sigma_z^{-2}\, \mathbf{z}^T\mathbf{X}\mathbf{z} \sim \chi_m^2

In this case, χ_m² is the chi-squared distribution and σ_z² = zᵀVz (note that σ_z² is a constant; it is positive because V is positive definite).
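
Corollary 1 is straightforward to verify by simulation. The sketch below (assuming SciPy; variable names are illustrative) compares the first two moments of zᵀXz / (zᵀVz) with those of the χ²_m distribution.

import numpy as np
from scipy.stats import wishart

p, m = 3, 9
V = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.0],
              [0.1, 0.0, 0.5]])
z = np.array([1.0, -2.0, 0.5])

X = wishart.rvs(df=m, scale=V, size=100_000, random_state=0)
q = np.einsum('kij,i,j->k', X, z, z) / (z @ V @ z)    # z^T X z / (z^T V z)

print(q.mean(), m)                                    # both ≈ m, the chi^2_m mean
print(q.var(), 2 * m)                                 # both ≈ 2m, the chi^2_m variance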

Corollary 2

Consider the case where zT = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that

\sigma_{jj}^{-1}\, w_{jj} \sim \chi_m^2

gives the marginal distribution of each of the elements on the matrix's diagonal.

George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.[12]

Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.[13] A derivation of the MLE uses the spectral theorem.
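
A short simulation illustrating this statement (a sketch with illustrative names): for a sample of size N from N_p(μ, Σ), the scatter about the sample mean, which is N times the MLE of Σ, follows W_p(Σ, N − 1), so its average over many replications is close to (N − 1)Σ.

import numpy as np

rng = np.random.default_rng(0)
p, N, reps = 2, 20, 20_000
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
mu = np.zeros(p)

# N * Sigma_hat_MLE = sum_i (x_i - xbar)(x_i - xbar)^T  ~  W_p(Sigma, N - 1)
scatters = np.empty((reps, p, p))
for r in range(reps):
    x = rng.multivariate_normal(mu, Sigma, size=N)
    xc = x - x.mean(axis=0)
    scatters[r] = xc.T @ xc

print(scatters.mean(axis=0))                          # ≈ (N - 1) * Sigma, the Wishart mean
print((N - 1) * Sigma)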

Bartlett decomposition

The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization:

\mathbf{X} = \mathbf{L}\mathbf{A}\mathbf{A}^T\mathbf{L}^T

where L is the Cholesky factor of V, and:

\mathbf{A} = \begin{pmatrix}
c_1 & 0 & 0 & \cdots & 0 \\
n_{21} & c_2 & 0 & \cdots & 0 \\
n_{31} & n_{32} & c_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
n_{p1} & n_{p2} & n_{p3} & \cdots & c_p
\end{pmatrix}

where c_i² ~ χ²_{n−i+1} and n_{ij} ~ N(0, 1) independently.[14] This provides a useful method for obtaining random samples from a Wishart distribution.[15]
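
A sketch of a Wishart sampler based on this decomposition (plain NumPy; the function name is ours): the diagonal of A holds c_i with c_i² ~ χ²_{n−i+1}, and the strict lower triangle holds independent standard normal entries.

import numpy as np

def wishart_bartlett(V, n, rng):
    """Draw one sample from W_p(V, n) via the Bartlett decomposition X = L A A^T L^T."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)                         # lower Cholesky factor of V
    A = np.zeros((p, p))
    # diagonal: c_i with c_i^2 ~ chi^2_{n-i+1}, i = 1, ..., p
    A[np.diag_indices(p)] = np.sqrt(rng.chisquare(n - np.arange(p)))
    # strict lower triangle: independent N(0, 1) entries
    A[np.tril_indices(p, k=-1)] = rng.standard_normal(p * (p - 1) // 2)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(0)
V = np.array([[2.0, 0.3],
              [0.3, 1.0]])
n = 7
samples = [wishart_bartlett(V, n, rng) for _ in range(50_000)]
print(np.mean(samples, axis=0))                       # ≈ n * V, the Wishart mean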

Marginal distribution of matrix elements

Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:

\mathbf{V} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad
\mathbf{L} = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sqrt{1-\rho^2}\,\sigma_2 \end{pmatrix}

Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is

\mathbf{X} = \begin{pmatrix}
\sigma_1^2 c_1^2 & \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) \\
\sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) & \sigma_2^2\left(\left(1-\rho^2\right) c_2^2 + \left(\sqrt{1-\rho^2}\, n_{21} + \rho c_1\right)^2\right)
\end{pmatrix}

The diagonal elements, most evidently in the first element, follow the χ2 distribution with n degrees of freedom (scaled by σ2) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution

f(x_{12}) = \frac{\left|x_{12}\right|^{\frac{n-1}{2}}}{\Gamma\left(\frac{n}{2}\right)\sqrt{2^{n-1}\pi\left(1-\rho^2\right)\left(\sigma_1\sigma_2\right)^{n+1}}}
\cdot K_{\frac{n-1}{2}}\left(\frac{\left|x_{12}\right|}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)
\exp\left(\frac{\rho x_{12}}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)

where Kν(z) is the modified Bessel function of the second kind.[16] Similar results may be found for higher dimensions, but the interdependence of the off-diagonal correlations becomes increasingly complicated. It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of Craig (1936)[17] equation 10) although the probability density becomes an infinite sum of Bessel functions.
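
The variance-gamma form can be checked against simulation. The sketch below (assuming SciPy; offdiag_density is a direct transcription of the formula above, with illustrative names) compares it with a histogram of the off-diagonal element of simulated 2 × 2 Wishart matrices.

import numpy as np
from scipy.special import kv, gamma
from scipy.stats import wishart

def offdiag_density(x12, n, rho, s1, s2):
    """Marginal density of the off-diagonal element of a 2x2 W_2(V, n) sample."""
    c = s1 * s2 * (1 - rho**2)
    norm = gamma(n / 2) * np.sqrt(2**(n - 1) * np.pi * (1 - rho**2) * (s1 * s2)**(n + 1))
    return (np.abs(x12)**((n - 1) / 2) / norm
            * kv((n - 1) / 2, np.abs(x12) / c)
            * np.exp(rho * x12 / c))

n, rho, s1, s2 = 6, 0.4, 1.0, 1.5
V = np.array([[s1**2, rho * s1 * s2],
              [rho * s1 * s2, s2**2]])

X = wishart.rvs(df=n, scale=V, size=200_000, random_state=0)
x12 = X[:, 0, 1]
hist, edges = np.histogram(x12, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for g in (-2.0, 1.0, 4.0):                            # a few test points
    i = np.argmin(np.abs(centers - g))
    print(g, round(hist[i], 4), round(float(offdiag_density(g, n, rho, s1, s2)), 4))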

The range of the shape parameter

It can be shown [18] that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set

\Lambda_p := \{0, \ldots, p-1\} \cup \left(p-1, \infty\right)

This set is named after Gindikin, who introduced it[19] in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,

\{0, \ldots, p-1\}

the corresponding Wishart distribution has no Lebesgue density.

Relationships to other distributions

The Wishart distribution is related to the inverse-Wishart distribution, denoted by W_p^{-1}, as follows: if X ~ W_p(V, n) and we make the change of variables C = X^{-1}, then C ~ W_p^{-1}(V^{-1}, n). This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is |C|^{p+1}; see for example equation (15.15) in [20].

In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution when the mean parameter is known.[9]

A generalization is the multivariate gamma distribution.

A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.

See also

  • Chi-squared distribution
  • Complex Wishart distribution
  • F-distribution
  • Gamma distribution
  • Hotelling's T-squared distribution
  • Inverse-Wishart distribution
  • Multivariate gamma distribution
  • Student's t-distribution
  • Wilks' lambda distribution

References

  1. ^ a b Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika. 20A (1–2): 32–52. doi:10.1093/biomet/20A.1-2.32. JFM 54.0565.02. JSTOR 2331939.
  2. ^ Koop, Gary; Korobilis, Dimitris (2010). "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics". Foundations and Trends in Econometrics. 3 (4): 267–358. doi:10.1561/0800000013.
  3. ^ Gupta, A. K.; Nagar, D. K. (2000). Matrix Variate Distributions. Chapman & Hall /CRC. ISBN 1584880465.
  4. ^ Gelman, Andrew (2003). Bayesian Data Analysis (2nd ed.). Boca Raton, Fla.: Chapman & Hall. p. 582. ISBN 158488388X. Retrieved 3 June 2015.
  5. ^ Zanella, A.; Chiani, M.; Win, M.Z. (April 2009). "On the marginal distribution of the eigenvalues of wishart matrices" (PDF). IEEE Transactions on Communications. 57 (4): 1050–1060. doi:10.1109/TCOMM.2009.04.070143. hdl:1721.1/66900. S2CID 12437386.
  6. ^ Muirhead, Robb J. (2005). Aspects of Multivariate Statistical Theory (2nd ed.). Wiley Interscience. ISBN 0471769851.
  7. ^ a b Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 259. ISBN 0-471-36091-0.
  8. ^ Uhlig, H. (1994). "On Singular Wishart and Singular Multivariate Beta Distributions". The Annals of Statistics. 22: 395–405. doi:10.1214/aos/1176325375.
  9. ^ a b c d Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
  10. ^ Mayerhofer, Eberhard (2019-01-27). "Reforming the Wishart characteristic function". arXiv:1901.09347 [math.PR].
  11. ^ a b Rao, C. R. (1965). Linear Statistical Inference and its Applications. Wiley. p. 535.
  12. ^ Seber, George A. F. (2004). Multivariate Observations. Wiley. ISBN 978-0471691211.
  13. ^ Chatfield, C.; Collins, A. J. (1980). Introduction to Multivariate Analysis. London: Chapman and Hall. pp. 103–108. ISBN 0-412-16030-7.
  14. ^ Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 257. ISBN 0-471-36091-0.
  15. ^ Smith, W. B.; Hocking, R. R. (1972). "Algorithm AS 53: Wishart Variate Generator". Journal of the Royal Statistical Society, Series C. 21 (3): 341–345. JSTOR 2346290.
  16. ^ Pearson, Karl; Jeffery, G. B.; Elderton, Ethel M. (December 1929). "On the Distribution of the First Product Moment-Coefficient, in Samples Drawn from an Indefinitely Large Normal Population". Biometrika. Biometrika Trust. 21 (1/4): 164–201. doi:10.2307/2332556. JSTOR 2332556.
  17. ^ Craig, Cecil C. (1936). "On the Frequency Function of xy". Ann. Math. Statist. 7: 1–15. doi:10.1214/aoms/1177732541.
  18. ^ Peddada and Richards, Shyamal Das; Richards, Donald St. P. (1991). "Proof of a Conjecture of M. L. Eaton on the Characteristic Function of the Wishart Distribution". Annals of Probability. 19 (2): 868–874. doi:10.1214/aop/1176990455.
  19. ^ Gindikin, S.G. (1975). "Invariant generalized functions in homogeneous domains". Funct. Anal. Appl. 9 (1): 50–52. doi:10.1007/BF01078179. S2CID 123288172.
  20. ^ Dwyer, Paul S. (1967). "Some Applications of Matrix Derivatives in Multivariate Analysis". J. Amer. Statist. Assoc. 62 (318): 607–625. doi:10.1080/01621459.1967.10482934. JSTOR 2283988.

External links

  • A C++ library for random matrix generation
