
Dirichlet distribution

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted $\operatorname{Dir}(\boldsymbol\alpha)$, is a family of continuous multivariate probability distributions parameterized by a vector $\boldsymbol\alpha$ of positive reals. It is a multivariate generalization of the beta distribution,[1] hence its alternative name of multivariate beta distribution (MBD).[2] Dirichlet distributions are commonly used as prior distributions in Bayesian statistics, and in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution.

Dirichlet distribution
Probability density function

Parameters: $K \geq 2$, the number of categories (integer); $\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K)$, concentration parameters, where $\alpha_i > 0$
Support: $(x_1, \ldots, x_K)$ where $x_i \in (0,1)$ and $\sum_{i=1}^K x_i = 1$
PDF: $\frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}$, where $\mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma(\alpha_0)}$ and $\alpha_0 = \sum_{i=1}^K \alpha_i$
Mean: $\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}$; $\operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0)$ (where $\psi$ is the digamma function)
Mode: $x_i = \frac{\alpha_i - 1}{\alpha_0 - K}$, $\alpha_i > 1$
Variance: $\operatorname{Var}[X_i] = \frac{\tilde\alpha_i(1 - \tilde\alpha_i)}{\alpha_0 + 1}$, $\operatorname{Cov}[X_i, X_j] = \frac{\delta_{ij}\,\tilde\alpha_i - \tilde\alpha_i \tilde\alpha_j}{\alpha_0 + 1}$, where $\tilde\alpha_i = \frac{\alpha_i}{\alpha_0}$ and $\delta_{ij}$ is the Kronecker delta
Entropy: $H(X) = \log \mathrm{B}(\boldsymbol\alpha) + (\alpha_0 - K)\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j - 1)\psi(\alpha_j)$, with $\alpha_0$ defined as for variance, above, and $\psi$ the digamma function
Method of moments: $\alpha_i = E[X_i]\left(\frac{E[X_j](1 - E[X_j])}{V[X_j]} - 1\right)$, where $j$ is any index, possibly $i$ itself

The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

Definitions

Probability density function

 
Illustration of how the log of the density function changes when K = 3 as we change the vector α from α = (0.3, 0.3, 0.3) to (2.0, 2.0, 2.0), keeping all the individual $\alpha_i$'s equal to each other.

The Dirichlet distribution of order K ≥ 2 with parameters $\alpha_1, \ldots, \alpha_K > 0$ has a probability density function with respect to Lebesgue measure on the Euclidean space $\mathbb{R}^{K-1}$ given by

f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}

where $(x_k)_{k=1}^{K}$ belong to the standard $(K-1)$-simplex, or in other words:

\sum_{i=1}^{K} x_i = 1 \quad \text{and} \quad x_i \in [0, 1] \text{ for all } i \in \{1, \ldots, K\}

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

\mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}, \qquad \boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K)
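For concreteness, here is a minimal Python sketch of this density evaluated in log space (the function name is illustrative; only the standard library is assumed):

import math

def dirichlet_logpdf(x, alpha):
    # log B(alpha) = sum_i log Gamma(alpha_i) - log Gamma(sum_i alpha_i)
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    # log f(x; alpha) = sum_i (alpha_i - 1) log x_i - log B(alpha)
    return sum((a - 1) * math.log(xi) for a, xi in zip(alpha, x)) - log_beta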

Support

The support of the Dirichlet distribution is the set of K-dimensional vectors $\boldsymbol{x}$ whose entries are real numbers in the interval [0,1] such that $\|\boldsymbol{x}\|_1 = 1$, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a K-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of K-dimensional discrete distributions. The technical term for the set of points in the support of a K-dimensional Dirichlet distribution is the open standard (K − 1)-simplex,[3] which is a generalization of a triangle, embedded in the next-higher dimension. For example, with K = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.

Special cases

A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector $\boldsymbol\alpha$ have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value α, called the concentration parameter. In terms of α, the density function has the form

f(x_1, \ldots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^{K} x_i^{\alpha - 1}

When α = 1[a], the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse variates, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of them.
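A quick illustration of this effect (a sketch using NumPy's Dirichlet sampler; the particular values of α and K are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
print(rng.dirichlet([0.1] * 5))   # alpha << 1: most mass piles onto a few components
print(rng.dirichlet([10.0] * 5))  # alpha >> 1: components come out nearly equal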

More generally, the parameter vector is sometimes written as the product $\alpha \boldsymbol{n}$ of a (scalar) concentration parameter α and a (vector) base measure $\boldsymbol{n} = (n_1, \ldots, n_K)$ where $\boldsymbol{n}$ lies within the (K − 1)-simplex (i.e.: its coordinates $n_i$ sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

[a] If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter K, the dimension of the distribution, is the uniform distribution on the (K − 1)-simplex.

Properties

Moments

Let $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol\alpha)$.

Let

\alpha_0 = \sum_{i=1}^{K} \alpha_i

Then[4][5]

\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}

\operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0 + 1)}

Furthermore, if $i \neq j$,

\operatorname{Cov}[X_i, X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0 + 1)}

The covariance matrix is thus singular: because the $X_i$ sum to 1, each row of the matrix sums to zero.

More generally, moments of Dirichlet-distributed random variables can be expressed as[6]

\operatorname{E}\left[\prod_{i=1}^{K} X_i^{\beta_i}\right] = \frac{\mathrm{B}(\boldsymbol\alpha + \boldsymbol\beta)}{\mathrm{B}(\boldsymbol\alpha)} = \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} (\alpha_i + \beta_i)\right)} \times \prod_{i=1}^{K} \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)}
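This mixed-moment formula is easy to check against a Monte Carlo estimate; the sketch below (helper name and test values are illustrative) evaluates the beta-function ratio via log-gamma:

import math
import numpy as np

def dirichlet_mixed_moment(alpha, beta):
    # E[prod_i X_i^{beta_i}] = B(alpha + beta) / B(alpha)
    def log_b(a):
        return sum(math.lgamma(ai) for ai in a) - math.lgamma(sum(a))
    return math.exp(log_b([a + b for a, b in zip(alpha, beta)]) - log_b(alpha))

alpha, beta = [2.0, 3.0, 4.0], [1.0, 2.0, 0.0]
samples = np.random.default_rng(0).dirichlet(alpha, 100_000)
print(dirichlet_mixed_moment(alpha, beta))                 # exact value
print(np.prod(samples ** np.array(beta), axis=1).mean())   # Monte Carlo estimate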

Mode

The mode of the distribution is[7] the vector $(x_1, \ldots, x_K)$ with

x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1

Marginal distributions

The marginal distributions are beta distributions:[8]

X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i)
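The beta marginal can be verified empirically (a sketch; the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
alpha = [1.0, 2.0, 3.0]
x1 = rng.dirichlet(alpha, 200_000)[:, 0]                # first coordinate of Dir(alpha)
b = rng.beta(alpha[0], sum(alpha) - alpha[0], 200_000)  # Beta(alpha_1, alpha_0 - alpha_1)
print(x1.mean(), b.mean())  # matching means
print(x1.var(), b.var())    # matching variances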

Conjugate to categorical or multinomial

The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

\begin{aligned} \boldsymbol\alpha &= (\alpha_1, \ldots, \alpha_K) && \text{concentration hyperparameter} \\ \mathbf{p} \mid \boldsymbol\alpha &= (p_1, \ldots, p_K) \sim \operatorname{Dir}_K(\boldsymbol\alpha) \\ \mathbb{X} \mid \mathbf{p} &= (\mathbf{x}_1, \ldots, \mathbf{x}_N) \sim \operatorname{Cat}_K(\mathbf{p}) \end{aligned}

then the following holds:

\begin{aligned} \mathbf{c} &= (c_1, \ldots, c_K) && \text{number of occurrences of category } i \\ \mathbf{p} \mid \mathbb{X}, \boldsymbol\alpha &\sim \operatorname{Dir}_K(\mathbf{c} + \boldsymbol\alpha) = \operatorname{Dir}_K(c_1 + \alpha_1, \ldots, c_K + \alpha_K) \end{aligned}

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
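In code, the update is just elementwise addition of observed counts to the pseudocounts (a minimal sketch; the function name is illustrative):

from collections import Counter

def dirichlet_posterior(alpha, observations):
    # posterior concentration: alpha_k + (number of observations in category k)
    counts = Counter(observations)
    return [a + counts.get(k, 0) for k, a in enumerate(alpha)]

print(dirichlet_posterior([1.0, 1.0, 1.0], [0, 2, 2, 1, 2]))  # -> [2.0, 2.0, 4.0]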

In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.

Relation to Dirichlet-multinomial distribution

In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.

Entropy

If X is a $\operatorname{Dir}(\boldsymbol\alpha)$ random variable, the differential entropy of X (in nat units) is[9]

h(\boldsymbol{X}) = \operatorname{E}[-\ln f(\boldsymbol{X})] = \ln \mathrm{B}(\boldsymbol\alpha) + (\alpha_0 - K)\psi(\alpha_0) - \sum_{j=1}^{K} (\alpha_j - 1)\psi(\alpha_j)

where $\psi$ is the digamma function.
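A direct transcription of this closed form (a sketch relying on SciPy's gammaln and digamma; the function name is illustrative):

from scipy.special import digamma, gammaln

def dirichlet_entropy(alpha):
    # h(X) = ln B(alpha) + (alpha_0 - K) psi(alpha_0) - sum_j (alpha_j - 1) psi(alpha_j)
    a0, K = sum(alpha), len(alpha)
    log_beta = sum(gammaln(a) for a in alpha) - gammaln(a0)
    return log_beta + (a0 - K) * digamma(a0) - sum((a - 1) * digamma(a) for a in alpha)

print(dirichlet_entropy([2.0, 3.0, 4.0]))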

The following formula for $\operatorname{E}[\ln X_i]$ can be used to derive the differential entropy above. Since the functions $\ln X_i$ are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of $\ln X_i$ (see equation (2.62) in [10]) and its associated covariance matrix:

\operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0)

and

\operatorname{Cov}[\ln X_i, \ln X_j] = \psi'(\alpha_i)\,\delta_{ij} - \psi'(\alpha_0)

where $\psi$ is the digamma function, $\psi'$ is the trigamma function, and $\delta_{ij}$ is the Kronecker delta.

The spectrum of Rényi information for values other than $\lambda = 1$ is given by[11]

F_R(\lambda) = (1 - \lambda)^{-1} \left( -\lambda \log \mathrm{B}(\boldsymbol\alpha) + \sum_{i=1}^{K} \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda(\alpha_0 - K) + K) \right)

and the information entropy is the limit as $\lambda$ goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector $\boldsymbol{Z}$ with probability-mass distribution $\boldsymbol{X}$, i.e., $P(Z_i = 1, Z_{j \neq i} = 0 \mid \boldsymbol{X}) = X_i$. The conditional information entropy of $\boldsymbol{Z}$, given $\boldsymbol{X}$, is

S(\boldsymbol{X}) = H(\boldsymbol{Z} \mid \boldsymbol{X}) = \operatorname{E}_{\boldsymbol{Z}}[-\log P(\boldsymbol{Z} \mid \boldsymbol{X})] = -\sum_{i=1}^{K} X_i \log X_i

This function of $\boldsymbol{X}$ is a scalar random variable. If $\boldsymbol{X}$ has a symmetric Dirichlet distribution with all $\alpha_i = \alpha$, the expected value of the entropy (in nat units) is[12]

\operatorname{E}[S(\boldsymbol{X})] = -\sum_{i=1}^{K} \operatorname{E}[X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1)
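This expectation is easy to verify by simulation (a sketch; the concentration and dimension chosen here are arbitrary):

import numpy as np
from scipy.special import digamma

alpha, K = 2.0, 4
xs = np.random.default_rng(0).dirichlet([alpha] * K, 200_000)
print(-(xs * np.log(xs)).sum(axis=1).mean())        # Monte Carlo E[S(X)]
print(digamma(K * alpha + 1) - digamma(alpha + 1))  # closed form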

Aggregation

If

X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)

then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

X' = (X_1, \ldots, X_i + X_j, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K)

This aggregation property may be used to derive the marginal distribution of $X_i$ mentioned above.
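A quick empirical demonstration of the aggregation property (a sketch; the parameters are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.dirichlet([1.0, 2.0, 3.0], 200_000)
merged = np.column_stack([x[:, 0], x[:, 1] + x[:, 2]])  # merge the last two components
y = rng.dirichlet([1.0, 5.0], 200_000)                  # Dir(alpha_1, alpha_2 + alpha_3)
print(merged.mean(axis=0), y.mean(axis=0))  # means agree
print(merged.var(axis=0), y.var(axis=0))    # variances agree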

Neutrality

If $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol\alpha)$, then the vector X is said to be neutral[13] in the sense that $X_K$ is independent of $X^{(-K)}$[3] where

X^{(-K)} = \left( \frac{X_1}{1 - X_K}, \frac{X_2}{1 - X_K}, \ldots, \frac{X_{K-1}}{1 - X_K} \right),

and similarly for removing any of $X_2, \ldots, X_{K-1}$. Observe that any permutation of X is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).[14]

Combining this with the property of aggregation it follows that $X_j + \cdots + X_K$ is independent of $\left( \frac{X_1}{X_1 + \cdots + X_{j-1}}, \frac{X_2}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}} \right)$. In fact it is true, further, for the Dirichlet distribution, that for $3 \leq j \leq K-1$, the pair $\left( X_1 + \cdots + X_{j-1}, X_j + \cdots + X_K \right)$, and the two vectors $\left( \frac{X_1}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}} \right)$ and $\left( \frac{X_j}{X_j + \cdots + X_K}, \ldots, \frac{X_K}{X_j + \cdots + X_K} \right)$, viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for the partition of the indices {1, 2, ..., K} into any other pair of non-singleton subsets.
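Neutrality can be probed numerically; the check below is only a necessary condition (zero correlation does not imply independence), and it looks at the correlation between $X_K$ and one rescaled remaining coordinate:

import numpy as np

x = np.random.default_rng(0).dirichlet([1.0, 2.0, 3.0], 200_000)
ratio = x[:, 0] / (1 - x[:, 2])           # X_1 / (1 - X_K), a coordinate of X^(-K)
print(np.corrcoef(x[:, 2], ratio)[0, 1])  # near 0, consistent with independence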

Characteristic function

The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as[15]

CF(s_1, \ldots, s_{K-1}) = \operatorname{E}\left[ e^{i(s_1 X_1 + \cdots + s_{K-1} X_{K-1})} \right] = \Psi^{(K-1)}(\alpha_1, \ldots, \alpha_{K-1}; \alpha_0; is_1, \ldots, is_{K-1})

where

\Psi^{(m)}(a_1, \ldots, a_m; c; z_1, \ldots, z_m) = \sum \frac{(a_1)_{k_1} \cdots (a_m)_{k_m} \, z_1^{k_1} \cdots z_m^{k_m}}{(c)_k \, k_1! \cdots k_m!}

The sum is over non-negative integers $k_1, \ldots, k_m$ and $k = k_1 + \cdots + k_m$. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

\Psi^{(m)} = \frac{\Gamma(c)}{2\pi i} \int_L e^t \, t^{a_1 + \cdots + a_m - c} \prod_{j=1}^{m} (t - z_j)^{-a_j} \, dt

where L denotes any path in the complex plane originating at $-\infty$, encircling in the positive direction all the singularities of the integrand and returning to $-\infty$.

Inequality

The probability density function $f(x_1, \ldots, x_{K-1}; \alpha_1, \ldots, \alpha_K)$ plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.[16]

Related distributions

For K independently distributed Gamma distributions:

Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)

we have:[17]: 402

V = \sum_{i=1}^{K} Y_i \sim \operatorname{Gamma}(\alpha_0, \theta)

X = (X_1, \ldots, X_K) = \left( \frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)

Although the $X_i$ are not independent from one another, they can be seen to be generated from a set of K independent gamma random variables.[17]: 594  Unfortunately, since the sum V is lost in forming X (in fact it can be shown that V is stochastically independent of X), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.

Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution it has a conjugate prior. The conjugate prior is of the form:[18]

\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v}, \eta) \propto \left( \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \right)^{\eta} \exp\left( -\sum_k v_k \alpha_k \right)

Here $\boldsymbol{v}$ is a K-dimensional real vector and $\eta$ is a scalar parameter. The domain of $(\boldsymbol{v}, \eta)$ is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:[19]

\forall k \; v_k > 0 \quad \text{and} \quad \eta > -1 \quad \text{and} \quad \left( \eta \leq 0 \quad \text{or} \quad \sum_k \exp\left(-\frac{v_k}{\eta}\right) < 1 \right)

The conjugation property can be expressed as

if [prior: $\boldsymbol\alpha \sim \operatorname{CD}(\cdot \mid \boldsymbol{v}, \eta)$] and [observation: $\boldsymbol{x} \mid \boldsymbol\alpha \sim \operatorname{Dirichlet}(\cdot \mid \boldsymbol\alpha)$] then [posterior: $\boldsymbol\alpha \mid \boldsymbol{x} \sim \operatorname{CD}(\cdot \mid \boldsymbol{v} - \log \boldsymbol{x}, \eta + 1)$].

In the published literature there is no practical algorithm to efficiently generate samples from $\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v}, \eta)$.
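The unnormalized log-density itself is nonetheless straightforward to evaluate (a sketch; the function name is illustrative and no normalizer is computed):

import math

def log_cd_unnormalized(alpha, v, eta):
    # log of (1/B(alpha))^eta * exp(-sum_k v_k alpha_k)
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return -eta * log_beta - sum(vk * ak for vk, ak in zip(v, alpha))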

Occurrence and applications

Bayesian models

Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters

The concentration parameter

Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value α to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample is likely to be: with a value much less than 1, the mass will be highly concentrated in a few components and all the rest will have almost no mass, while with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.

String cutting

One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that $\alpha_0 = \sum_{i=1}^K \alpha_i$. The $\alpha_i / \alpha_0$ values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with $\alpha_0$.

 
Example of Dirichlet(1/2,1/3,1/6) distribution
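A sketch of the pictured example: the mean piece lengths are 1/2, 1/3 and 1/6, and scaling all the $\alpha_i$ by a common factor tightens the pieces around those means.

import numpy as np

rng = np.random.default_rng(0)
print(rng.dirichlet([1/2, 1/3, 1/6]))                  # loose cuts around (1/2, 1/3, 1/6)
print(rng.dirichlet(np.array([1/2, 1/3, 1/6]) * 100))  # much tighter around the same means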

Pólya's urn

Consider an urn containing balls of K different colors. Initially, the urn contains α1 balls of color 1, α2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir(α1,...,αK).[20]

For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]K-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.

Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
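The urn is simple to simulate (a sketch; polya_urn is an illustrative helper, and a finite number of draws only approximates the limit):

import random

def polya_urn(alpha, draws, seed=0):
    # counts[i] is the (possibly fractional) number of balls of color i
    rng = random.Random(seed)
    counts = list(alpha)
    for _ in range(draws):
        color = rng.choices(range(len(counts)), weights=counts)[0]
        counts[color] += 1  # replace the drawn ball plus one more of its color
    total = sum(counts)
    return [c / total for c in counts]

print(polya_urn([1.0, 2.0, 3.0], draws=10_000))  # approximately one draw from Dir(1, 2, 3)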


Random variate generation

From gamma distribution

With a source of Gamma-distributed random variates, one can easily sample a random vector $x = (x_1, \ldots, x_K)$ from the K-dimensional Dirichlet distribution with parameters $(\alpha_1, \ldots, \alpha_K)$. First, draw K independent random samples $y_1, \ldots, y_K$ from Gamma distributions each with density

\operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i - 1} e^{-y_i}}{\Gamma(\alpha_i)},

and then set

x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}
Proof:

The joint distribution of the independently sampled gamma variates, $\{y_i\}$, is given by the product:

e^{-\sum_{i=1}^{K} y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i - 1}}{\Gamma(\alpha_i)}

Next, one uses a change of variables, parametrising $\{y_i\}$ in terms of $y_1, y_2, \ldots, y_{K-1}$ and $\sum_{i=1}^{K} y_i$, and performs a change of variables from $y \to x$ such that $\bar{x} = \sum_{i=1}^{K} y_i$, $x_1 = \frac{y_1}{\bar{x}}$, $x_2 = \frac{y_2}{\bar{x}}$, ..., $x_{K-1} = \frac{y_{K-1}}{\bar{x}}$. Each of the variables satisfies $0 \leq x_1, x_2, \ldots, x_{K-1} \leq 1$ and likewise $0 \leq \sum_{i=1}^{K-1} x_i \leq 1$. One must then use the change of variables formula, $P(x) = P(y(x)) \left| \frac{\partial y}{\partial x} \right|$, in which $\left| \frac{\partial y}{\partial x} \right|$ is the transformation Jacobian. Writing y explicitly as a function of x, one obtains $y_1 = \bar{x} x_1, \ldots, y_{K-1} = \bar{x} x_{K-1}, \; y_K = \bar{x}\left(1 - \sum_{i=1}^{K-1} x_i\right)$. The Jacobian now looks like

\begin{vmatrix} \bar{x} & 0 & \ldots & x_1 \\ 0 & \bar{x} & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -\bar{x} & -\bar{x} & \ldots & 1 - \sum_{i=1}^{K-1} x_i \end{vmatrix}

The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K − 1 rows to the bottom row to obtain

\begin{vmatrix} \bar{x} & 0 & \ldots & x_1 \\ 0 & \bar{x} & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{vmatrix}

which can be expanded about the bottom row to obtain the determinant value $\bar{x}^{K-1}$. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:

\begin{aligned} &\frac{\left[ \prod_{i=1}^{K-1} (\bar{x} x_i)^{\alpha_i - 1} \right] \left[ \bar{x} \left( 1 - \sum_{i=1}^{K-1} x_i \right) \right]^{\alpha_K - 1}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \, \bar{x}^{K-1} e^{-\bar{x}} \\ ={}& \frac{\Gamma(\bar\alpha) \left[ \prod_{i=1}^{K-1} x_i^{\alpha_i - 1} \right] \left( 1 - \sum_{i=1}^{K-1} x_i \right)^{\alpha_K - 1}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \times \frac{\bar{x}^{\bar\alpha - 1} e^{-\bar{x}}}{\Gamma(\bar\alpha)} \end{aligned}

where $\bar\alpha = \sum_{i=1}^{K} \alpha_i$. The right-hand side can be recognized as the product of a Dirichlet pdf for the $x_i$ and a gamma pdf for $\bar{x}$. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:

x_1, x_2, \ldots, x_{K-1} \sim \frac{\left( 1 - \sum_{i=1}^{K-1} x_i \right)^{\alpha_K - 1} \prod_{i=1}^{K-1} x_i^{\alpha_i - 1}}{\mathrm{B}(\boldsymbol\alpha)}

which is equivalent to

\frac{\prod_{i=1}^{K} x_i^{\alpha_i - 1}}{\mathrm{B}(\boldsymbol\alpha)} \quad \text{with support} \quad \sum_{i=1}^{K} x_i = 1

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
sample = [v / sum(sample) for v in sample]

This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.

From marginal beta distributions

A less efficient algorithm[21] relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate $x_1$ from

\operatorname{Beta}\left( \alpha_1, \sum_{i=2}^{K} \alpha_i \right)

Then simulate $x_2, \ldots, x_{K-1}$ in order, as follows. For $j = 2, \ldots, K-1$, simulate $\phi_j$ from

\operatorname{Beta}\left( \alpha_j, \sum_{i=j+1}^{K} \alpha_i \right),

and let

x_j = \left( 1 - \sum_{i=1}^{j-1} x_i \right) \phi_j

Finally, set

x_K = 1 - \sum_{i=1}^{K-1} x_i

This iterative procedure corresponds closely to the "string cutting" intuition described above.

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1:]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))

See also

  • Generalized Dirichlet distribution
  • Grouped Dirichlet distribution
  • Inverted Dirichlet distribution
  • Latent Dirichlet allocation
  • Dirichlet process
  • Matrix variate Dirichlet distribution

References

  1. ^ S. Kotz; N. Balakrishnan; N. L. Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley. ISBN 978-0-471-18387-7. (Chapter 49: Dirichlet and Inverted Dirichlet Distributions)
  2. ^ Olkin, Ingram; Rubin, Herman (1964). "Multivariate Beta Distributions and Independence Properties of the Wishart Distribution". The Annals of Mathematical Statistics. 35 (1): 261–269. doi:10.1214/aoms/1177703748. JSTOR 2238036.
  3. ^ a b Bela A. Frigyik; Amol Kapila; Maya R. Gupta (2010). "Introduction to the Dirichlet Distribution and Related Processes" (PDF). University of Washington Department of Electrical Engineering. Archived from the original (Technical Report UWEETR-2010-006) on 2015-02-19.
  4. ^ Eq. (49.9) on page 488 of Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.
  5. ^ Balakrishnan, V. B. (2005). "Chapter 27. Dirichlet Distribution". A Primer on Statistical Distributions. Hoboken, NJ: John Wiley & Sons, Inc. p. 274. ISBN 978-0-471-42798-8.
  6. ^ Hoffmann, Till. "Moments of the Dirichlet distribution". Retrieved 14 February 2016.
  7. ^ Christopher M. Bishop (17 August 2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2.
  8. ^ Farrow, Malcolm. "MAS3301 Bayesian Statistics" (PDF). Newcastle University. Retrieved 10 April 2013.
  9. ^ Lin, Jiayu (2016). On The Dirichlet Distribution (PDF). Kingston, Canada: Queen's University. pp. § 2.4.9.
  10. ^ Nguyen, Duy. "AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE". Retrieved 15 August 2023.
  11. ^ Song, Kai-Sheng (2001). "Rényi information, loglikelihood, and an intrinsic distribution measure". Journal of Statistical Planning and Inference. Elsevier. 93 (325): 51–69. doi:10.1016/S0378-3758(00)00169-5.
  12. ^ Nemenman, Ilya; Shafee, Fariel; Bialek, William (2002). Entropy and Inference, revisited (PDF). NIPS 14., eq. 8
  13. ^ Connor, Robert J.; Mosimann, James E (1969). "Concepts of Independence for Proportions with a Generalization of the Dirichlet Distribution". Journal of the American Statistical Association. American Statistical Association. 64 (325): 194–206. doi:10.2307/2283728. JSTOR 2283728.
  14. ^ See Kotz, Balakrishnan & Johnson (2000), Section 8.5, "Connor and Mosimann's Generalization", pp. 519–521.
  15. ^ Phillips, P. C. B. (1988). "The characteristic function of the Dirichlet and multivariate F distribution" (PDF). Cowles Foundation Discussion Paper 865.
  16. ^ Grinshpan, A. Z. (2017). "An inequality for multiple convolutions with respect to Dirichlet probability measure". Advances in Applied Mathematics. 82 (1): 102–119. doi:10.1016/j.aam.2016.08.001.
  17. ^ a b Devroye, Luc (1986). Non-Uniform Random Variate Generation. Springer-Verlag. ISBN 0-387-96305-7.
  18. ^ Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George (2009). "Bayesian Inference on Multiscale Models for Poisson Intensity Estimation: Applications to Photon-Limited Image Denoising". IEEE Transactions on Image Processing. 18 (8): 1724–1741. Bibcode:2009ITIP...18.1724L. doi:10.1109/TIP.2009.2022008. PMID 19414285. S2CID 859561.
  19. ^ Andreoli, Jean-Marc (2018). "A conjugate prior for the Dirichlet distribution". arXiv:1811.05266 [cs.LG].
  20. ^ Blackwell, David; MacQueen, James B. (1973). "Ferguson distributions via Polya urn schemes". Ann. Stat. 1 (2): 353–355. doi:10.1214/aos/1176342372.
  21. ^ A. Gelman; J. B. Carlin; H. S. Stern; D. B. Rubin (2003). Bayesian Data Analysis (2nd ed.). pp. 582. ISBN 1-58488-388-X.

External links

  • "Dirichlet distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  • Dirichlet Distribution
  • How to estimate the parameters of the compound Dirichlet distribution (Pólya distribution) using expectation-maximization (EM)
  • Luc Devroye. "Non-Uniform Random Variate Generation". Retrieved 19 October 2019.
  • Dirichlet Random Measures, Method of Construction via Compound Poisson Random Variables, and Exchangeability Properties of the resulting Gamma Distribution
  • SciencesPo: R package that contains functions for simulating parameters of the Dirichlet distribution.
